The linear Ising model and its analytic continuation, random walk
NASA Astrophysics Data System (ADS)
Lavenda, B. H.
2004-02-01
A generalization of Gauss's principle is used to derive the error laws corresponding to Types II and VII distributions in Pearson's classification scheme. Student's r-p.d.f. (Type II) governs the distribution of the internal energy of a uniform, linear chain, Ising model, while the analytic continuation of the uniform exchange energy converts it into a Student t-density (Type VII) for the position of a random walk in a single spatial dimension. Higher-dimensional spaces, corresponding to larger degrees of freedom and generalizations to multidimensional Student r- and t-densities, are obtained by considering independent and identically distributed random variables, having rotationally invariant densities, whose entropies are additive and generating functions are multiplicative.
NOTE: Estimation of renal scintigraphy parameters using a linear piecewise-continuous model
NASA Astrophysics Data System (ADS)
Zhang, Jeff L.; Zhang, L.; Koh, T. S.; Shuter, B.
2003-06-01
Instead of performing a numerical deconvolution, we propose to use a linear piecewise-continuous model of the renal impulse response function for parametric fitting of renal scintigraphy data, to obtain clinically useful renal parameters. The strengths of the present model are its simplicity and speed of computation, while not compromising on accuracy. Preliminary patient case studies show that the estimated parameters are in good agreement with a more elaborate model.
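The core idea above (parametric fitting with a continuous piecewise-linear function instead of numerical deconvolution) can be sketched with ordinary least squares on a hinge basis. This is an illustrative sketch on synthetic data with an assumed, known breakpoint; it does not reproduce the paper's renal impulse-response model or its parameters.

```python
import numpy as np

def piecewise_linear_design(t, knots):
    """Design matrix for a continuous piecewise-linear fit:
    intercept, global slope, and one hinge term max(t - k, 0) per knot."""
    cols = [np.ones_like(t), t] + [np.maximum(t - k, 0.0) for k in knots]
    return np.column_stack(cols)

# Synthetic "time-activity" curve: linear uptake, then linear washout.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 20.0, 200)
truth = np.where(t < 5.0, 2.0 * t, 10.0 - 0.5 * (t - 5.0))
y = truth + rng.normal(0.0, 0.2, t.size)

X = piecewise_linear_design(t, knots=[5.0])  # breakpoint assumed known here
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
# beta[1] estimates the uptake slope; beta[1] + beta[2] the washout slope.
```

Because the model is linear in its coefficients, the fit is a single least-squares solve, which is the source of the speed advantage the abstract mentions.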
Tu, Yu-Kang
2015-02-01
Analysing continuous outcomes for network meta-analysis by means of linear mixed models is a great challenge, as it requires statistical software packages to specify special patterns of model error variance and covariance structure. This article demonstrates a non-Bayesian approach to network meta-analysis for continuous outcomes in periodontal research with a special focus on the adjustment of data dependency. Seventeen studies on guided tissue regeneration were used to illustrate how the proposed linear mixed models can be applied to network meta-analysis of continuous outcomes. Arm-based network meta-analysis uses treatment arms from each study as the unit of analysis; when patients are randomly assigned to each arm, data are deemed independent and therefore no adjustment is required for multi-arm trials. Trial-based network meta-analysis uses treatment contrasts as the unit of analysis, and therefore treatment contrasts within a multi-arm trial are not independent. This data dependency also occurs in split-mouth studies, and adjustments for data dependency are therefore required. Arm-based analysis is the preferred approach to network meta-analysis when all included studies use the parallel group design and some compare more than two treatment arms. When included studies used designs that yield dependent data, the trial-based analysis is the preferred approach. © 2015 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
Bayesian recursive mixed linear model for gene expression analyses with continuous covariates.
Casellas, J; Ibáñez-Escriche, N
2012-01-01
The analysis of microarray gene expression data has experienced a remarkable growth in scientific research over the last few years and is helping to decipher the genetic background of several productive traits. Nevertheless, most analytical approaches have relied on the comparison of 2 (or a few) well-defined groups of biological conditions where the continuous covariates have no sense (e.g., healthy vs. cancerous cells). Continuous effects could be of special interest when analyzing gene expression in animal production-oriented studies (e.g., birth weight), although very few studies address this peculiarity in the animal science framework. Within this context, we have developed a recursive linear mixed model where not only are linear covariates accounted for during gene expression analyses but also hierarchized and the effects of their genetic, environmental, and residual components on differential gene expression inferred independently. This parameterization allows a step forward in the inference of differential gene expression linked to a given quantitative trait such as birth weight. The statistical performance of this recursive model was exemplified under simulation by accounting for different sample sizes (n), heritabilities for the quantitative trait (h(2)), and magnitudes of differential gene expression (λ). It is important to highlight that statistical power increased with n, h(2), and λ, and the recursive model exceeded the standard linear mixed model with linear (nonrecursive) covariates in the majority of scenarios. This new parameterization would provide new insights about gene expression in the animal science framework, opening a new research scenario where within-covariate sources of differential gene expression could be individualized and estimated. The source code of the program accommodating these analytical developments and additional information about practical aspects on running the program are freely available by request to the corresponding
NASA Astrophysics Data System (ADS)
BILLINGS, S. A.; LI, L. M.
2000-06-01
A new kernel invariance algorithm (KIA) is introduced both to determine the significant model terms and to estimate the unknown parameters in non-linear continuous-time differential equation models of unknown systems.
ERIC Educational Resources Information Center
Ferrando, Pere J.
2004-01-01
This study used kernel-smoothing procedures to estimate the item characteristic functions (ICFs) of a set of continuous personality items. The nonparametric ICFs were compared with the ICFs estimated (a) by the linear model and (b) by Samejima's continuous-response model. The study was based on a conditioned approach and used an error-in-variables…
Quantum Kramers model: Corrections to the linear response theory for continuous bath spectrum
NASA Astrophysics Data System (ADS)
Rips, Ilya
2017-01-01
Decay of the metastable state is analyzed within the quantum Kramers model in the weak-to-intermediate dissipation regime. The decay kinetics in this regime is determined by energy exchange between the unstable mode and the stable modes of thermal bath. In our previous paper [Phys. Rev. A 42, 4427 (1990), 10.1103/PhysRevA.42.4427], Grabert's perturbative approach to well dynamics in the case of the discrete bath [Phys. Rev. Lett. 61, 1683 (1988), 10.1103/PhysRevLett.61.1683] has been extended to account for the second order terms in the classical equations of motion (EOM) for the stable modes. Account of the secular terms reduces EOM for the stable modes to those of the forced oscillator with the time-dependent frequency (TDF oscillator). Analytic expression for the characteristic function of energy loss of the unstable mode has been derived in terms of the generating function of the transition probabilities for the quantum forced TDF oscillator. In this paper, the approach is further developed and applied to the case of the continuous frequency spectrum of the bath. The spectral density functions of the bath of stable modes are expressed in terms of the dissipative properties (the friction function) of the original bath. They simplify considerably for the one-dimensional systems, when the density of phonon states is constant. Explicit expressions for the fourth order corrections to the linear response theory result for the characteristic function of the energy loss and its cumulants are obtained for the particular case of the cubic potential with Ohmic (Markovian) dissipation. The range of validity of the perturbative approach in this case is determined (γ/ω_b < 0.26), which includes the turnover region. The dominant correction to the linear response theory result is associated with the "work function" and leads to reduction of the average energy loss and its dispersion. This reduction increases with the increasing dissipation strength (up to ~10%) within the
Development of a continuous linear model of a d-c to d-c flyback converter.
NASA Technical Reports Server (NTRS)
Wells, B. A.
1972-01-01
The analytical design of the feedback circuit for a d-c flyback converter requires the formulation of a model defining the static and dynamic performance of the forward loop. This paper describes the steps which were taken to develop a linear continuous model of a typical flyback circuit. Although the method uses several approximations to simplify the work, the resulting model was found to duplicate the performance of the actual circuit very closely. The model makes it possible to design the feedback circuit using well known linear feedback techniques. The method is an extension of prior work in the modeling of pulse-width controlled circuits.
NASA Astrophysics Data System (ADS)
Farag, Mohammed; Fleckenstein, Matthias; Habibi, Saeid
2017-02-01
Model-order reduction and minimization of the CPU run-time while maintaining the model accuracy are critical requirements for real-time implementation of lithium-ion electrochemical battery models. In this paper, an isothermal, continuous, piecewise-linear, electrode-average model is developed by using an optimal knot placement technique. The proposed model reduces the univariate nonlinear function of the electrode's open circuit potential dependence on the state of charge to continuous piecewise regions. The parameterization experiments were chosen to provide a trade-off between extensive experimental characterization techniques and purely identifying all parameters using optimization techniques. The model is then parameterized in each continuous, piecewise-linear, region. Applying the proposed technique cuts down the CPU run-time by around 20%, compared to the reduced-order, electrode-average model. Finally, the model validation against real-time driving profiles (FTP-72, WLTP) demonstrates the ability of the model to predict the cell voltage accurately with less than 2% error.
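The central reduction in the abstract above, replacing a nonlinear open-circuit-potential curve with a continuous piecewise-linear one, can be illustrated with a toy surrogate. The OCV curve below is hypothetical (not from the paper), and uniform knots are used where the paper optimizes knot placement.

```python
import numpy as np

def ocv(soc):
    # Hypothetical smooth open-circuit-voltage curve vs. state of charge.
    return 3.2 + 0.8 * soc + 0.05 * np.sin(3 * np.pi * soc)

# Continuous piecewise-linear surrogate on 9 uniform knots
# (the paper instead places knots optimally).
knots = np.linspace(0.0, 1.0, 9)
soc = np.linspace(0.0, 1.0, 1001)
pw = np.interp(soc, knots, ocv(knots))
max_err = np.max(np.abs(pw - ocv(soc)))
```

Evaluating the surrogate is a table lookup plus one multiply-add per query, which is where the run-time saving over evaluating the full nonlinear function comes from.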
Valeri, Linda; Lin, Xihong; VanderWeele, Tyler J
2014-12-10
Mediation analysis is a popular approach to examine the extent to which the effect of an exposure on an outcome is through an intermediate variable (mediator) and the extent to which the effect is direct. When the mediator is mis-measured, the validity of mediation analysis can be severely undermined. In this paper, we first study the bias of classical, non-differential measurement error on a continuous mediator in the estimation of direct and indirect causal effects in generalized linear models when the outcome is either continuous or discrete and exposure-mediator interaction may be present. Our theoretical results as well as a numerical study demonstrate that in the presence of non-linearities, the bias of naive estimators for direct and indirect effects that ignore measurement error can take unintuitive directions. We then develop methods to correct for measurement error. Three correction approaches using method of moments, regression calibration, and SIMEX are compared. We apply the proposed method to the Massachusetts General Hospital lung cancer study to evaluate the effect of genetic variants mediated through smoking on lung cancer risk.
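One of the three corrections compared above, regression calibration, can be demonstrated in a toy simulation with a continuous outcome and no exposure-mediator interaction. All model coefficients and the known error variance below are hypothetical choices for the sketch, and the calibration uses oracle values of the mediator model for clarity; the paper's estimators are more general.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000
a = rng.binomial(1, 0.5, n).astype(float)   # exposure
m = 0.5 * a + rng.normal(0.0, 1.0, n)       # true (unobserved) mediator
y = 1.0 + 0.3 * a + 0.8 * m + rng.normal(0.0, 1.0, n)
w = m + rng.normal(0.0, 1.0, n)             # mediator with classical error

def ols(cols, y):
    X = np.column_stack([np.ones(len(y))] + cols)
    return np.linalg.lstsq(X, y, rcond=None)[0]

naive = ols([a, w], y)[2]       # attenuated toward 0.8 * 1/(1+1) = 0.4

# Regression calibration: replace w by E[m | a, w]. Here the error
# variance (1.0) and E[m | a] = 0.5a are treated as known (oracle values).
lam = 1.0 / (1.0 + 1.0)         # Var(m | a) / Var(w | a)
m_hat = 0.5 * a + lam * (w - 0.5 * a)
calibrated = ols([a, m_hat], y)[2]   # recovers ~0.8, the true effect
```

In practice the calibration quantities are estimated from replicates or validation data rather than assumed known.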
Hernández-Lloreda, María Victoria; Colmenares, Fernando; Martínez-Arias, Rosario
2004-09-01
In behavioral science, developmental discontinuities are thought to arise when the association between an outcome measure and the underlying process changes over time. Sudden changes in behavior across time are often taken to indicate that a reorganization in the outcome-process relationship may have occurred. In this article, the authors proposed the use of piecewise hierarchical linear growth modeling as a statistical methodology to search for discontinuities in behavioral development and illustrated its possibilities by applying 2-piece hierarchical linear models to the study of developmental trajectories of baboon (Papio hamadryas) mothers' behavior during their infants' 1st year of life. The authors provided empirical evidence that piecewise growth modeling can be used to determine whether abrupt changes in developmental trajectories are tied to changes in the underlying process. ((c) 2004 APA, all rights reserved).
Raabe, Joshua K.; Gardner, Beth; Hightower, Joseph E.
2013-01-01
We developed a spatial capture–recapture model to evaluate survival and activity centres (i.e., mean locations) of tagged individuals detected along a linear array. Our spatially explicit version of the Cormack–Jolly–Seber model, analyzed using a Bayesian framework, correlates movement between periods and can incorporate environmental or other covariates. We demonstrate the model using 2010 data for anadromous American shad (Alosa sapidissima) tagged with passive integrated transponders (PIT) at a weir near the mouth of a North Carolina river and passively monitored with an upstream array of PIT antennas. The river channel constrained migrations, resulting in linear, one-dimensional encounter histories that included both weir captures and antenna detections. Individual activity centres in a given time period were a function of the individual’s previous estimated location and the river conditions (i.e., gage height). Model results indicate high within-river spawning mortality (mean weekly survival = 0.80) and more extensive movements during elevated river conditions. This model is applicable for any linear array (e.g., rivers, shorelines, and corridors), opening new opportunities to study demographic parameters, movement or migration, and habitat use.
Tan, Ziwen; Qin, Guoyou; Zhou, Haibo
2016-10-01
Outcome-dependent sampling (ODS) designs have been well recognized as a cost-effective way to enhance study efficiency in both statistical literature and biomedical and epidemiologic studies. A partially linear additive model (PLAM) is widely applied in real problems because it allows for a flexible specification of the dependence of the response on some covariates in a linear fashion and other covariates in a nonlinear non-parametric fashion. Motivated by an epidemiological study investigating the effect of prenatal polychlorinated biphenyls exposure on children's intelligence quotient (IQ) at age 7 years, we propose a PLAM in this article to investigate a more flexible non-parametric inference on the relationships among the response and covariates under the ODS scheme. We propose the estimation method and establish the asymptotic properties of the proposed estimator. Simulation studies are conducted to show the improved efficiency of the proposed ODS estimator for PLAM compared with that from a traditional simple random sampling design with the same sample size. The data of the above-mentioned study is analyzed to illustrate the proposed method. © The Author 2016. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
Gonçalves, Nuno R; Whelan, Robert; Foxe, John J; Lalor, Edmund C
2014-08-15
Noninvasive investigation of human sensory processing with high temporal resolution typically involves repeatedly presenting discrete stimuli and extracting an average event-related response from scalp recorded neuroelectric or neuromagnetic signals. While this approach is and has been extremely useful, it suffers from two drawbacks: a lack of naturalness in terms of the stimulus and a lack of precision in terms of the cortical response generators. Here we show that a linear modeling approach that exploits functional specialization in sensory systems can be used to rapidly obtain spatiotemporally precise responses to complex sensory stimuli using electroencephalography (EEG). We demonstrate the method by example through the controlled modulation of the contrast and coherent motion of visual stimuli. Regressing the data against these modulation signals produces spatially focal, highly temporally resolved response measures that are suggestive of specific activation of visual areas V1 and V6, respectively, based on their onset latency, their topographic distribution and the estimated location of their sources. We discuss our approach by comparing it with fMRI/MRI informed source analysis methods and, in doing so, we provide novel information on the timing of coherent motion processing in human V6. Generalizing such an approach has the potential to facilitate the rapid, inexpensive spatiotemporal localization of higher perceptual functions in behaving humans.
Linear models: permutation methods
Cade, B.S.; Everitt, B.S.; Howell, D.C.
2005-01-01
Permutation tests (see Permutation Based Inference) for the linear model have applications in behavioral studies when traditional parametric assumptions about the error term in a linear model are not tenable. Improved validity of Type I error rates can be achieved with properly constructed permutation tests. Perhaps more importantly, increased statistical power, improved robustness to effects of outliers, and detection of alternative distributional differences can be achieved by coupling permutation inference with alternative linear model estimators. For example, it is well known that estimates of the mean in a linear model are extremely sensitive to even a single outlying value of the dependent variable compared to estimates of the median [7, 19]. Traditionally, linear modeling focused on estimating changes in the center of distributions (means or medians). However, quantile regression allows distributional changes to be estimated in all or any selected part of a distribution of responses, providing a more complete statistical picture that has relevance to many biological questions [6]...
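A minimal permutation test for a linear-model slope looks like the following. This is a generic sketch (permuting the response under a null of no association), not the specific constructions discussed in the entry above.

```python
import numpy as np

def perm_test_slope(x, y, n_perm=2000, seed=0):
    """Permutation p-value for the slope of y ~ x: permute y, which is
    exchangeable under the null hypothesis of no association."""
    rng = np.random.default_rng(seed)

    def slope(xv, yv):
        xc = xv - xv.mean()
        return xc @ (yv - yv.mean()) / (xc @ xc)

    obs = slope(x, y)
    hits = sum(abs(slope(x, rng.permutation(y))) >= abs(obs)
               for _ in range(n_perm))
    return (hits + 1) / (n_perm + 1)   # add-one correction

rng = np.random.default_rng(42)
x = rng.normal(size=60)
y_null = rng.normal(size=60)              # no association
y_alt = 0.8 * x + rng.normal(size=60)     # genuine slope

p_null = perm_test_slope(x, y_null)
p_alt = perm_test_slope(x, y_alt)
```

Swapping the least-squares `slope` for a robust or quantile estimator is exactly the coupling of permutation inference with alternative estimators that the entry advocates.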
Zou, Kelly H.; O’Malley, A. James
2005-01-01
Receiver operating characteristic (ROC) analysis is a useful evaluative method of diagnostic accuracy. A Bayesian hierarchical nonlinear regression model for ROC analysis was developed. A validation analysis of diagnostic accuracy was conducted using prospective multi-center clinical trial prostate cancer biopsy data collected from three participating centers. The gold standard was based on radical prostatectomy to determine local and advanced disease. To evaluate the diagnostic performance of PSA level at fixed levels of Gleason score, a normality transformation was applied to the outcome data. A hierarchical regression analysis incorporating the effects of cluster (clinical center) and cancer risk (low, intermediate, and high) was performed, and the area under the ROC curve (AUC) was estimated. PMID:16161801
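The AUC estimated in the study above can be illustrated, in its simplest nonhierarchical form, by the empirical Mann-Whitney estimator on binormal toy data. The shift of 1 SD below is an arbitrary choice for the sketch, not a value from the trial.

```python
import numpy as np

def auc_mann_whitney(scores_pos, scores_neg):
    """Empirical AUC = P(pos score > neg score) + 0.5 * P(tie),
    i.e., the Mann-Whitney U statistic rescaled to [0, 1]."""
    sp = np.asarray(scores_pos)[:, None]
    sn = np.asarray(scores_neg)[None, :]
    return (sp > sn).mean() + 0.5 * (sp == sn).mean()

rng = np.random.default_rng(7)
pos = rng.normal(1.0, 1.0, 4000)   # "diseased" scores, shifted up 1 SD
neg = rng.normal(0.0, 1.0, 4000)   # "non-diseased" scores
auc = auc_mann_whitney(pos, neg)
# Binormal theory predicts AUC = Phi(1/sqrt(2)) ~ 0.76 for this shift.
```

A hierarchical Bayesian analysis, as in the paper, would additionally model center and risk-group effects on the binormal parameters rather than pooling all scores.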
Memory in linear recurrent neural networks in continuous time.
Hermans, Michiel; Schrauwen, Benjamin
2010-04-01
Reservoir Computing is a novel technique which employs recurrent neural networks while circumventing difficult training algorithms. A very recent trend in Reservoir Computing is the use of real physical dynamical systems as implementation platforms, rather than the customary digital emulations. Physical systems operate in continuous time, creating a fundamental difference with the classic discrete-time definitions of Reservoir Computing. The specific goal of this paper is to study the memory properties of such systems, where we limit ourselves to linear dynamics. We develop an analytical model which allows the calculation of the memory function for continuous-time linear dynamical systems, which can be considered as networks of linear leaky-integrator neurons. We then use this model to investigate memory properties for different types of reservoir. We start with random connection matrices with a shifted eigenvalue spectrum, which perform very poorly. Next, we transform two specific reservoir types, which are known to give good performance in discrete time, to the continuous-time domain. Reservoirs based on uniform spreading of connection-matrix eigenvalues on the unit disk in discrete time give much better memory properties than reservoirs with random connection matrices, while reservoirs based on orthogonal connection matrices in discrete time are very robust against noise and their memory properties can be tuned. The overall results found in this work yield important insights into how to design networks for continuous time.
Continuous-mode operation of a noiseless linear amplifier
NASA Astrophysics Data System (ADS)
Li, Yi; Carvalho, André R. R.; James, Matthew R.
2016-05-01
We develop a dynamical model to describe the operation of the nondeterministic noiseless linear amplifier (NLA) in the regime of continuous-mode inputs. We analyze the dynamics conditioned on the detection of photons and show that the amplification gain depends on detection times and on the temporal profile of the input state and the auxiliary single-photon state required by the NLA. We also show that the output amplified state inherits the pulse shape of the ancilla photon.
Villante, F. L.; Ricci, B.
2010-05-01
We present a new approach to studying the properties of the Sun. We consider small variations of the physical and chemical properties of the Sun with respect to standard solar model predictions and we linearize the structure equations to relate them to the properties of the solar plasma. By assuming that the (variation of) present solar composition can be estimated from the (variation of) nuclear reaction rates and elemental diffusion efficiency in the present Sun, we obtain a linear system of ordinary differential equations which can be used to calculate the response of the Sun to an arbitrary modification of the input parameters (opacity, cross sections, etc.). This new approach is intended to be a complement to the traditional methods for solar model (SM) calculation and allows us to investigate in a more efficient and transparent way the role of parameters and assumptions in SM construction. We verify that these linear solar models recover the predictions of the traditional SMs with a high level of accuracy.
NASA Astrophysics Data System (ADS)
Tariqul Islam, Md.; Sturkell, Erik; Sigmundsson, Freysteinn; Drouin, Vincent Jean Paul B.; Ófeigsson, Benedikt G.
2014-05-01
Iceland is located on the Mid-Atlantic Ridge, where the spreading rate is nearly 2 cm/yr. The high rate of magmatism in Iceland is caused by the interaction between the Iceland hotspot and the divergent mid-Atlantic plate boundary. Iceland hosts about 35 active volcanoes or volcanic systems, most of which are aligned along the plate boundary. The best-studied central volcanoes (e.g., Askja, Krafla, Grímsvötn, Katla) have shallow magma chambers (< 5 km) that have been modelled successfully with a Mogi source in an elastic and/or elastic-viscoelastic half-space, with Maxwell (Newtonian) viscosity mainly assumed for the viscoelastic part; the rheology may therefore be oversimplified. Our aim is to study deformation of the Askja volcano together with plate spreading in Iceland using a temperature-dependent non-linear rheology, which allows the rheology to vary continuously, laterally and vertically, away from the rift axis and the surface. To implement this, we consider thermo-mechanically coupled models in which the rheology follows a dislocation flow law under dry conditions, based on a prescribed temperature distribution. Continuous deflation of the Askja volcanic system is associated with solidification of magma in the magma chamber and post-eruption relaxation; a long time series of levelling data shows that its subsidence decays exponentially. In our preliminary models, a magma chamber at 2.8 km depth with a 0.5 km radius is introduced at the ridge axis as a Mogi source, while far-field stretching of 18.4 mm/yr across the rift axis (measured during 2007 to 2013) is applied to reproduce plate spreading. The predicted surface deformation caused by the combined effect of tectonic and volcanic activity is evaluated against GPS data from 2003-2009 and RADARSAT InSAR data from 2000 to 2010. During 2003-2009, the GPS site OLAF (close to the centre of subsidence) shows an average subsidence rate of 19±1 mm/yr relative to the ITRF2005 reference frame. The MASK (Mid ASKJA) site is
NASA Technical Reports Server (NTRS)
Cellier, Francois E.
1991-01-01
A comprehensive and systematic introduction is presented for the concepts associated with 'modeling', involving the transition from a physical system down to an abstract description of that system in the form of a set of differential and/or difference equations, and basing its treatment of modeling on the mathematics of dynamical systems. Attention is given to the principles of passive electrical circuit modeling, planar mechanical systems modeling, hierarchical modular modeling of continuous systems, and bond-graph modeling. Also discussed are modeling in equilibrium thermodynamics, population dynamics, and system dynamics, inductive reasoning, artificial neural networks, and automated model synthesis.
Discrete-time filtering of linear continuous-time processes
NASA Astrophysics Data System (ADS)
Shats, Samuel
1989-06-01
Continuous-time measurements are prefiltered before sampling, to remove additive white noise. The discrete-time optimal filter comprises a digital algorithm which is applied to the prefiltered, sampled measurements; the algorithm is based on the discrete-time equivalent model of the overall system. For the case of an integrate-and-dump analog prefilter, a discrete-time equivalent model was developed and the corresponding optimal filter was found for the general case, where the continuous-time measurement and process noise signals are correlated. A commonly used approximate discrete-time model was analyzed by defining and evaluating the true error covariance matrix of the estimate and comparing it with the supposed error covariance matrix. It was shown that there is a class of unstable processes for which the former error covariance matrix attains unbounded norm, even though the latter remains bounded. The main part of the thesis concerns the problem of finding an optimal prefilter. The steps in obtaining the optimal prefilter comprise: deriving a discrete-time equivalent model of the overall system; finding the equation satisfied by the error covariance matrix; deriving the expressions satisfied by the first coefficients of the Maclaurin expansions of the error covariance matrix in the small parameter T; and obtaining the optimal prefilter by matrix optimization. The results obtained indicate that the optimal prefilter may be implemented through systems of different orders; the minimum order required is discussed, which is of great practical importance because it yields the simplest possible prefilter. In discussing the problem of discrete-time quadratic regulation of linear continuous-time processes, the case of practical interest, where a zero-order hold is part of the digital-to-analog converter, is considered. It is shown that the duality between the regulation and filtering problems is not conserved after
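The discrete-time equivalent model that this thesis builds on is the standard zero-order-hold discretization of a continuous-time linear system. A minimal sketch, using the augmented-matrix (Van Loan style) trick and a double-integrator example with a known closed form:

```python
import numpy as np
from scipy.linalg import expm

def discretize(A, B, T):
    """Zero-order-hold discrete-time equivalent of dx/dt = A x + B u:
    x[k+1] = Ad x[k] + Bd u[k]. Ad and Bd are read off from the matrix
    exponential of an augmented block matrix."""
    n, m = B.shape
    M = np.zeros((n + m, n + m))
    M[:n, :n] = A
    M[:n, n:] = B
    E = expm(M * T)
    return E[:n, :n], E[:n, n:]

# Double integrator, sample period T = 0.1 s.
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
Ad, Bd = discretize(A, B, 0.1)
# Known closed form: Ad = [[1, T], [0, 1]], Bd = [[T**2/2], [T]]
```

The thesis goes further by also discretizing the prefilter and noise statistics, which this sketch omits.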
Disformal invariance of continuous media with linear equation of state
NASA Astrophysics Data System (ADS)
Celoria, Marco; Matarrese, Sabino; Pilo, Luigi
2017-02-01
We show that the effective theory describing single component continuous media with a linear and constant equation of state of the form p=wρ is invariant under a 1-parameter family of continuous disformal transformations. In the special case of w=1/3 (ultrarelativistic gas), such a family reduces to conformal transformations. As examples, perfect fluids, irrotational dust (mimetic matter) and homogeneous and isotropic solids are discussed.
Linear optimal control of continuous time chaotic systems.
Merat, Kaveh; Abbaszadeh Chekan, Jafar; Salarieh, Hassan; Alasty, Aria
2014-07-01
In this research study, chaos control of continuous-time systems has been performed by using the dynamic programming technique. In the first step, the continuous-time system is converted to a discrete-time one by intersecting the response orbits with a selected Poincare section and subsequently applying the linear regression method. Then, by solving the Riccati equation, a sub-optimal algorithm has been devised for the obtained discrete chaotic systems. In the next step, by implementing the acquired algorithm on the quantized continuous-time system, the chaos has been suppressed in the Rossler and AFM systems as case studies.
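The Riccati step above can be sketched as a plain discrete-time LQR design. The 2x2 map below is a hypothetical unstable linearized Poincare map, not the identified Rossler or AFM map, and the fixed-point iteration stands in for whatever Riccati solver the authors used.

```python
import numpy as np

def dlqr(A, B, Q, R, iters=500):
    """Discrete-time LQR gain via fixed-point iteration of the
    discrete algebraic Riccati equation:
    P <- Q + A'P(A - BK), with K = (R + B'PB)^{-1} B'PA."""
    P = Q.copy()
    for _ in range(iters):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ (A - B @ K)
    return K, P

# Hypothetical identified Poincare map: unstable (eigenvalue 1.2).
A = np.array([[1.2, 0.1], [0.0, 0.5]])
B = np.array([[0.0], [1.0]])
K, _ = dlqr(A, B, np.eye(2), np.eye(1))
closed = np.abs(np.linalg.eigvals(A - B @ K)).max()   # spectral radius < 1
```

Applying the gain `K` only when the orbit crosses the Poincare section is what turns this discrete design into a controller for the continuous-time chaotic system.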
Equivalent Linear Logistic Test Models.
ERIC Educational Resources Information Center
Bechger, Timo M.; Verstralen, Huub H. F. M.; Verhelst, Norma D.
2002-01-01
Discusses the Linear Logistic Test Model (LLTM) and demonstrates that there are many equivalent ways to specify a model. Analyzed a real data set (300 responses to 5 analogies) using a Lagrange multiplier test for the specification of the model, and demonstrated that there may be many ways to change the specification of an LLTM and achieve the…
Some estimation formulae for continuous time-invariant linear systems
NASA Technical Reports Server (NTRS)
Bierman, G. J.; Sidhu, G. S.
1975-01-01
In this brief paper we examine a Riccati equation decomposition due to Reid and Lainiotis and apply the result to the continuous time-invariant linear filtering problem. Exploitation of the time-invariant structure leads to integration-free covariance recursions which are of use in covariance analyses and in filter implementations. A super-linearly convergent iterative solution to the algebraic Riccati equation (ARE) is developed. The resulting algorithm, arranged in a square-root form, is thought to be numerically stable and competitive with other ARE solution methods. Certain covariance relations that are relevant to the fixed-point and fixed-lag smoothing problems are also discussed.
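A super-linearly convergent iteration for the algebraic Riccati equation, of the general kind the abstract describes, is the classical Newton-Kleinman scheme. The sketch below is generic (not the Bierman-Sidhu square-root algorithm) and uses a stable `A` so the zero gain is a valid starting point.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

def newton_kleinman(A, B, Q, R, K0, iters=10):
    """Newton-Kleinman iteration for the continuous-time ARE
    A'P + PA - P B R^{-1} B' P + Q = 0. Each step solves one Lyapunov
    equation; convergence is quadratic from a stabilizing gain K0."""
    Rinv = np.linalg.inv(R)
    K = K0
    for _ in range(iters):
        Ac = A - B @ K
        P = solve_continuous_lyapunov(Ac.T, -(Q + K.T @ R @ K))
        K = Rinv @ B.T @ P
    return P

A = np.array([[0.0, 1.0], [-2.0, -3.0]])   # stable, so K0 = 0 is stabilizing
B = np.array([[0.0], [1.0]])
Q, R = np.eye(2), np.eye(1)
P = newton_kleinman(A, B, Q, R, np.zeros((1, 2)))
resid = A.T @ P + P @ A - P @ B @ np.linalg.inv(R) @ B.T @ P + Q
```

Each Newton step reduces the ARE residual quadratically, which is the "super-linear" convergence property emphasized in the paper.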
Parameterized Linear Longitudinal Airship Model
NASA Technical Reports Server (NTRS)
Kulczycki, Eric; Elfes, Alberto; Bayard, David; Quadrelli, Marco; Johnson, Joseph
2010-01-01
A parameterized linear mathematical model of the longitudinal dynamics of an airship is undergoing development. This model is intended to be used in designing control systems for future airships that would operate in the atmospheres of Earth and remote planets. Heretofore, the development of linearized models of the longitudinal dynamics of airships has been costly in that it has been necessary to perform extensive flight testing and to use system-identification techniques to construct models that fit the flight-test data. The present model is a generic one that can be relatively easily specialized to approximate the dynamics of specific airships at specific operating points, without need for further system identification, and with significantly less flight testing. The approach taken in the present development is to merge the linearized dynamical equations of an airship with techniques for estimation of aircraft stability derivatives, and to thereby make it possible to construct a linearized dynamical model of the longitudinal dynamics of a specific airship from geometric and aerodynamic data pertaining to that airship. (It is also planned to develop a model of the lateral dynamics by use of the same methods.) All of the aerodynamic data needed to construct the model of a specific airship can be obtained from wind-tunnel testing and computational fluid dynamics
LINEAR - DERIVATION AND DEFINITION OF A LINEAR AIRCRAFT MODEL
NASA Technical Reports Server (NTRS)
Duke, E. L.
1994-01-01
The Derivation and Definition of a Linear Model program, LINEAR, provides the user with a powerful and flexible tool for the linearization of aircraft aerodynamic models. LINEAR was developed to provide a standard, documented, and verified tool to derive linear models for aircraft stability analysis and control law design. Linear system models define the aircraft system in the neighborhood of an analysis point and are determined by the linearization of the nonlinear equations defining vehicle dynamics and sensors. LINEAR numerically determines a linear system model using nonlinear equations of motion and a user supplied linear or nonlinear aerodynamic model. The nonlinear equations of motion used are six-degree-of-freedom equations with stationary atmosphere and flat, nonrotating earth assumptions. LINEAR is capable of extracting both linearized engine effects, such as net thrust, torque, and gyroscopic effects and including these effects in the linear system model. The point at which this linear model is defined is determined either by completely specifying the state and control variables, or by specifying an analysis point on a trajectory and directing the program to determine the control variables and the remaining state variables. The system model determined by LINEAR consists of matrices for both the state and observation equations. The program has been designed to provide easy selection of state, control, and observation variables to be used in a particular model. Thus, the order of the system model is completely under user control. Further, the program provides the flexibility of allowing alternate formulations of both the state and observation equations. Data describing the aircraft and the test case is input to the program through a terminal or formatted data files. All data can be modified interactively from case to case. The aerodynamic model can be defined in two ways: a set of nondimensional stability and control derivatives for the flight point of
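The core operation such a tool performs, numerically linearizing nonlinear equations of motion about an analysis point, can be sketched with central differences. The toy dynamics below are hypothetical and stand in for a real aerodynamic model:

```python
import numpy as np

def linearize(f, x0, u0, eps=1e-6):
    """Central-difference Jacobians A = df/dx, B = df/du at an analysis point."""
    n, m = len(x0), len(u0)
    A = np.zeros((n, n))
    B = np.zeros((n, m))
    for i in range(n):
        dx = np.zeros(n); dx[i] = eps
        A[:, i] = (f(x0 + dx, u0) - f(x0 - dx, u0)) / (2 * eps)
    for j in range(m):
        du = np.zeros(m); du[j] = eps
        B[:, j] = (f(x0, u0 + du) - f(x0, u0 - du)) / (2 * eps)
    return A, B

# Toy point-mass "longitudinal" dynamics (hypothetical, for illustration only):
# state x = (speed v, flight-path angle gamma), control u = (thrust,)
def f(x, u):
    v, gamma = x
    return np.array([-0.1 * v**2 + u[0],
                     0.5 * v - 9.81 * np.sin(gamma)])

A, B = linearize(f, np.array([10.0, 0.0]), np.array([10.0]))
print(np.round(A, 3))  # analytic Jacobian here is [[-2, 0], [0.5, -9.81]]
```

The state and observation matrices of the linear model then follow by evaluating such Jacobians for the full equations of motion and sensor equations at the chosen trim point.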
General quantum constraints on detector noise in continuous linear measurements
NASA Astrophysics Data System (ADS)
Miao, Haixing
2017-01-01
In quantum sensing and metrology, an important class of measurement is the continuous linear measurement, in which the detector is coupled to the system of interest linearly and continuously in time. One key aspect involved is the quantum noise of the detector, arising from quantum fluctuations in the detector input and output. It determines how fast we acquire information about the system and also influences the system evolution in terms of measurement backaction. We therefore often categorize it as the so-called imprecision noise and quantum backaction noise. There is a general Heisenberg-like uncertainty relation that constrains the magnitude of and the correlation between these two types of quantum noise. The main result of this paper is to show that, when the detector becomes ideal, i.e., at the quantum limit with minimum uncertainty, not only does the uncertainty relation takes the equal sign as expected, but also there are two new equalities. This general result is illustrated by using the typical cavity QED setup with the system being either a qubit or a mechanical oscillator. Particularly, the dispersive readout of a qubit state, and the measurement of mechanical motional sideband asymmetry are considered.
Foster, Guy M.; Graham, Jennifer L.
2016-04-06
The Kansas River is a primary source of drinking water for about 800,000 people in northeastern Kansas. Source-water supplies are treated by a combination of chemical and physical processes to remove contaminants before distribution. Advanced notification of changing water-quality conditions and cyanobacteria and associated toxin and taste-and-odor compounds provides drinking-water treatment facilities time to develop and implement adequate treatment strategies. The U.S. Geological Survey (USGS), in cooperation with the Kansas Water Office (funded in part through the Kansas State Water Plan Fund), and the City of Lawrence, the City of Topeka, the City of Olathe, and Johnson County Water One, began a study in July 2012 to develop statistical models at two Kansas River sites located upstream from drinking-water intakes. Continuous water-quality monitors have been operated and discrete water-quality samples have been collected on the Kansas River at Wamego (USGS site number 06887500) and De Soto (USGS site number 06892350) since July 2012. Continuous and discrete water-quality data collected during July 2012 through June 2015 were used to develop statistical models for constituents of interest at the Wamego and De Soto sites. Logistic models to continuously estimate the probability of occurrence above selected thresholds were developed for cyanobacteria, microcystin, and geosmin. Linear regression models to continuously estimate constituent concentrations were developed for major ions, dissolved solids, alkalinity, nutrients (nitrogen and phosphorus species), suspended sediment, indicator bacteria (Escherichia coli, fecal coliform, and enterococci), and actinomycetes bacteria. These models will be used to provide real-time estimates of the probability that cyanobacteria and associated compounds exceed thresholds and of the concentrations of other water-quality constituents in the Kansas River. The models documented in this report are useful for characterizing changes
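A logistic exceedance model of the kind described can be sketched on synthetic data. The single predictor, coefficients, and threshold below are hypothetical stand-ins, not the USGS surrogates:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic surrogate data (hypothetical): a continuous predictor x
# (think log-turbidity) versus a binary exceedance indicator y
x = rng.uniform(0.0, 4.0, 200)
p_true = 1.0 / (1.0 + np.exp(-(1.5 * x - 3.0)))
y = (rng.uniform(size=200) < p_true).astype(float)

# Fit P(exceed) = sigmoid(b0 + b1 * x) by gradient ascent on the log-likelihood
b0, b1 = 0.0, 0.0
lr = 0.1
for _ in range(20000):
    p = 1.0 / (1.0 + np.exp(-(b0 + b1 * x)))
    b0 += lr * np.mean(y - p)
    b1 += lr * np.mean((y - p) * x)

# The fitted slope recovers a clearly positive association
print(b1 > 0.5)
```

In operation, such a model is evaluated on each new continuous-monitor reading to report the real-time probability that the constituent exceeds its threshold.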
Wealth redistribution in conservative linear kinetic models
NASA Astrophysics Data System (ADS)
Toscani, G.
2009-10-01
We introduce and discuss kinetic models for wealth distribution which include both taxation and uniform redistribution. The evolution of the continuous density of wealth obeys a linear Boltzmann equation where the background density represents the action of an external subject on the taxation mechanism. The case in which the mean wealth is conserved is analyzed in full detail by recovering the analytical form of the steady states. These states are probability distributions of convergent random series of a special structure, called perpetuities. Among others, the Gibbs distribution appears as a steady state in the case of total taxation and uniform redistribution.
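A minimal Monte Carlo sketch of such a conservative taxation-and-redistribution rule illustrates the mean-conserving behavior. The mean-one uniform multiplicative shock and the parameter values are assumptions for illustration, not the paper's kernel:

```python
import numpy as np

rng = np.random.default_rng(2)
n_agents, delta, steps = 10000, 0.2, 500
w = np.ones(n_agents)  # initial wealth, mean 1

for _ in range(steps):
    # mean-one multiplicative shock (assumed uniform; E[lam] = 1)
    lam = rng.uniform(0.5, 1.5, n_agents)
    # tax every agent at rate delta, redistribute the proceeds uniformly
    w = (1.0 - delta) * lam * w + delta * w.mean()

# Mean wealth is conserved in expectation; the steady state is non-degenerate
print(abs(w.mean() - 1.0) < 0.3, w.std() > 0.05)
```

The steady-state wealth of an agent under such a rule is exactly a perpetuity: a convergent random series of discounted redistribution payments.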
On high-continuity transfinite element formulations for linear-nonlinear transient thermal problems
NASA Technical Reports Server (NTRS)
Tamma, Kumar K.; Railkar, Sudhir B.
1987-01-01
This paper describes recent developments in the applicability of a hybrid transfinite element methodology with emphasis on high-continuity formulations for linear/nonlinear transient thermal problems. The proposed concepts furnish accurate temperature distributions and temperature gradients making use of a relatively smaller number of degrees of freedom; and the methodology is applicable to linear/nonlinear thermal problems. Characteristic features of the formulations are described in technical detail as the proposed hybrid approach combines the major advantages and modeling features of high-continuity thermal finite elements in conjunction with transform methods and classical Galerkin schemes. Several numerical test problems are evaluated and the results obtained validate the proposed concepts for linear/nonlinear thermal problems.
Nonlinear Modeling by Assembling Piecewise Linear Models
NASA Technical Reports Server (NTRS)
Yao, Weigang; Liou, Meng-Sing
2013-01-01
To preserve the nonlinearity of a full-order system over a parameter range of interest, we propose a simple modeling approach by assembling a set of piecewise local solutions, including the first-order Taylor series terms expanded about some sampling states. The work by Rewienski and White inspired our use of piecewise linear local solutions. The assembly of these local approximations is accomplished by assigning nonlinear weights, through radial basis functions in this study. The efficacy of the proposed procedure is validated for a two-dimensional airfoil moving at different Mach numbers and pitching motions, under which the flow exhibits prominent nonlinear behaviors. All results confirm that our nonlinear model is accurate and stable for predicting not only aerodynamic forces but also detailed flowfields. Moreover, the model remains robust and accurate for inputs considerably different from the base trajectory in form and magnitude. This modeling preserves the nonlinearity of the problems considered in a rather simple and accurate manner.
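The assembly idea, local first-order Taylor models blended by radial basis function weights, can be sketched in one dimension. The target function, centers, and shape parameter are illustrative choices, not the paper's aerodynamic setup:

```python
import numpy as np

# Nonlinear "full-order" response and its derivative (for local Taylor terms);
# sin is a stand-in for the sampled solution states
f, df = np.sin, np.cos

centers = np.linspace(0.0, np.pi, 9)  # sampling states (illustrative)
shape = 4.0                           # Gaussian RBF shape parameter (assumed)

def blended(x):
    """Assemble first-order local models with normalized Gaussian RBF weights."""
    w = np.exp(-(shape * (x - centers)) ** 2)
    w /= w.sum()
    local = f(centers) + df(centers) * (x - centers)  # piecewise linear pieces
    return np.sum(w * local)

# The blend tracks the nonlinear function between sampling states
print(abs(blended(1.0) - np.sin(1.0)) < 0.05)  # → True
```

Because the weights vary smoothly with the state, the assembled model interpolates between the local linear pieces rather than switching between them abruptly.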
Linear systems, and ARMA- and Fliess models
NASA Astrophysics Data System (ADS)
Lomadze, Vakhtang; Khurram Zafar, M.
2010-10-01
Linear (dynamical) systems are central objects of study in linear system theory, and ARMA- and Fliess models are two very important classes of models used to represent them. This article is concerned with the relation between them in the case of higher dimensions. It is shown that the category of linear systems, the 'weak' category of ARMA-models, and the category of Fliess models are equivalent to each other.
Linear Logistic Test Modeling with R
ERIC Educational Resources Information Center
Baghaei, Purya; Kubinger, Klaus D.
2015-01-01
The present paper gives a general introduction to the linear logistic test model (Fischer, 1973), an extension of the Rasch model with linear constraints on item parameters, along with eRm (an R package to estimate different types of Rasch models; Mair, Hatzinger, & Mair, 2014) functions to estimate the model and interpret its parameters. The…
Continuous Quantitative Measurements on a Linear Air Track
ERIC Educational Resources Information Center
Vogel, Eric
1973-01-01
Describes the construction and operational procedures of a spark-timing apparatus which is designed to record the back and forth motion of one or two carts on linear air tracks. Applications to measurements of velocity, acceleration, simple harmonic motion, and collision problems are illustrated. (CC)
Composite Linear Models | Division of Cancer Prevention
By Stuart G. Baker. The composite linear models software is a matrix approach to compute maximum likelihood estimates and asymptotic standard errors for models for incomplete multinomial data. It implements the method described in Baker SG. Composite linear models for incomplete multinomial data. Statistics in Medicine 1994;13:609-622. The software includes a library of thirty examples from the literature.
Continuous-variable entanglement distillation with noiseless linear amplification
NASA Astrophysics Data System (ADS)
Yang, Song; Zhang, ShengLi; Zou, XuBo; Bi, SiWen; Lin, XuLing
2012-12-01
Quantum entanglement distillation is a probabilistic process which protects entanglement from environment-induced decoherence. In this paper, we investigate the distillation of a continuous-variable optical entangled state with noiseless linear amplification (NLA). NLA schemes perform better than the conventional photon-subtraction-based distillation scheme, particularly in distributing entanglement over extremely low efficiency quantum channels. Finally, a comparison between the NLA-based scheme and the local squeezing-enhanced photon subtraction scheme is also presented.
Arc-Tangent Circuit for Continuous Linear Output
NASA Technical Reports Server (NTRS)
Alhorn, Dean C. (Inventor); Howard, David E. (Inventor); Smith, Dennis A. (Inventor)
2000-01-01
A device suitable for determining arc-tangent of an angle theta is provided. Circuitry generates a first square wave at a frequency omega(t) and a second square wave at the frequency omega(t) but shifted by a phase difference equal to the angle theta. A pulse width modulation signal generator processes the first and second square waves to generate a pulse width modulation signal having a frequency of omega(t) and having a pulse width that is a function of the phase difference theta. The pulse width modulation signal is converted to a DC voltage that is a linear representation of the phase difference theta.
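The signal chain can be sketched numerically: XORing the two square waves yields a PWM signal whose duty cycle is theta/pi, so its average is linear in the phase difference. The sampling density and supply voltage below are assumptions:

```python
import numpy as np

def phase_to_dc(theta, v_supply=5.0, n_samples=100000):
    """XOR two square waves offset by theta (0 <= theta <= pi) and average.

    The XOR output is a PWM wave with duty cycle theta/pi, so the averaged
    (low-pass filtered) level is linear in the phase difference.
    """
    t = np.linspace(0.0, 2.0 * np.pi, n_samples, endpoint=False)
    sq1 = np.sin(t) >= 0.0
    sq2 = np.sin(t - theta) >= 0.0
    pwm = np.logical_xor(sq1, sq2)
    return v_supply * pwm.mean()  # ideal low-pass filter -> DC voltage

# A 90-degree phase difference gives half the supply rail
print(round(phase_to_dc(np.pi / 2), 2))  # → 2.5
```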
Generalization of continuous-variable quantum cloning with linear optics
Zhai Zehui; Guo Juan; Gao Jiangrui
2006-05-15
We propose an asymmetric quantum cloning scheme. Based on the proposal and experiment by Andersen et al. [Phys. Rev. Lett. 94, 240503 (2005)], we generalize it to two asymmetric cases: quantum cloning with asymmetry between output clones and between quadrature variables. These optical implementations also employ linear elements and homodyne detection only. Finally, we also compare the utility of symmetric and asymmetric cloning in an analysis of a squeezed-state quantum key distribution protocol and find that the asymmetric one is more advantageous.
Quantifying and visualizing variations in sets of images using continuous linear optimal transport
NASA Astrophysics Data System (ADS)
Kolouri, Soheil; Rohde, Gustavo K.
2014-03-01
Modern advancements in imaging devices have enabled us to explore the subcellular structure of living organisms and extract vast amounts of information. However, interpreting the biological information mined in the captured images is not a trivial task. Utilizing predetermined numerical features is usually the only hope for quantifying this information. Nonetheless, direct visual or biological interpretation of results obtained from these selected features is non-intuitive and difficult. In this paper, we describe an automatic method for modeling visual variations in a set of images, which allows for direct visual interpretation of the most significant differences, without the need for predefined features. The method is based on a linearized version of the continuous optimal transport (OT) metric, which provides a natural linear embedding for the image data set, in which a linear combination of images leads to a visually meaningful image. This enables us to apply linear geometric data analysis techniques such as principal component analysis and linear discriminant analysis in the linearly embedded space and visualize the most prominent modes, as well as the most discriminant modes, of variation in the dataset. Using the continuous OT framework, we are able to analyze variations in shape and texture in a set of images utilizing each image at full resolution, which is not possible with existing methods. The proposed method is applied to a set of nuclei images segmented from Feulgen-stained liver tissues in order to investigate the major visual differences in chromatin distribution of Fetal-Type Hepatoblastoma (FHB) cells compared to the normal cells.
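In one dimension the linear OT embedding reduces to quantile matching against a reference distribution; the sketch below illustrates that Euclidean distance in the embedded space approximates the Wasserstein distance, which is what makes linear data analysis meaningful there. The distributions and sample sizes are illustrative:

```python
import numpy as np

def lot_embed(samples, reference, n_quantiles=99):
    """1-D linear optimal transport embedding by quantile matching.

    Represents a sample set by the displacement of the Monge map that
    pushes the reference distribution onto it.
    """
    q = np.linspace(0.01, 0.99, n_quantiles)
    return np.quantile(samples, q) - np.quantile(reference, q)

rng = np.random.default_rng(3)
reference = rng.normal(0.0, 1.0, 5000)
a = lot_embed(rng.normal(2.0, 1.0, 5000), reference)  # shifted by 2
b = lot_embed(rng.normal(0.0, 1.0, 5000), reference)  # same as reference

# Norms in the embedding approximate Wasserstein-2 distances to the reference:
# the shifted set sits about 2 away, the matched set about 0 away
rms = lambda v: np.sqrt(np.mean(v ** 2))
print(rms(a) > 1.8, rms(b) < 0.3)
```

PCA or LDA applied to such embedding vectors then yields modes of variation that can be mapped back to visually meaningful images, which is the core of the method described above.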
Continuous-wave electron linear accelerators for industrial applications
NASA Astrophysics Data System (ADS)
Yurov, D. S.; Alimov, A. S.; Ishkhanov, B. S.; Shvedunov, V. I.
2017-04-01
Based on Skobeltsyn Institute of Nuclear Physics (SINP) Moscow State University (MSU) experience in developing continuous-wave (cw) normal conducting electron linacs, we propose a design for such accelerators with beam energy of up to 10 MeV and average beam power of up to several hundred kW. An example of such design is the 1 MeV industrial cw linac with maximum beam power of 25 kW achievable with 50 kW klystron, which was recently commissioned at SINP MSU.
Continuous time random walk with linear force applied to hydrated proteins
NASA Astrophysics Data System (ADS)
Fa, Kwok Sau
2013-08-01
An integro-differential diffusion equation with linear force, based on the continuous time random walk model, is considered. The equation generalizes the ordinary and fractional diffusion equations. Analytical expressions for transition probability density, mean square displacement, and intermediate scattering function are presented. The mean square displacement and intermediate scattering function can fit well the simulation data of the temperature-dependent translational dynamics of nitrogen atoms of elastin for a wide range of temperatures and various scattering vectors. Moreover, the numerical results are also compared with those of a fractional diffusion equation.
Spaghetti Bridges: Modeling Linear Relationships
ERIC Educational Resources Information Center
Kroon, Cindy D.
2016-01-01
Mathematics and science are natural partners. One of many examples of this partnership occurs when scientific observations are made, thus providing data that can be used for mathematical modeling. Developing mathematical relationships elucidates such scientific principles. This activity describes a data-collection activity in which students employ…
Convex set and linear mixing model
NASA Technical Reports Server (NTRS)
Xu, P.; Greeley, R.
1993-01-01
A major goal of optical remote sensing is to determine surface compositions of the earth and other planetary objects. For assessment of composition, single pixels in multi-spectral images usually record a mixture of the signals from various materials within the corresponding surface area. In this report, we introduce a closed and bounded convex set as a mathematical model for linear mixing. This model has a clear geometric implication because the closed and bounded convex set is a natural generalization of a triangle in n-space. The endmembers are extreme points of the convex set. Every point in the convex closure of the endmembers is a linear mixture of those endmembers, which is exactly how linear mixing is defined. With this model, some general criteria for selecting endmembers could be described. This model can lead to a better understanding of linear mixing models.
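Unmixing a pixel under this convex model can be sketched as a nonnegative least-squares problem over the endmembers. The endmember spectra below are made-up values for illustration:

```python
import numpy as np
from scipy.optimize import nnls

# Hypothetical endmember spectra over 4 bands (columns = 3 endmembers)
E = np.array([[0.9, 0.1, 0.3],
              [0.8, 0.2, 0.4],
              [0.2, 0.9, 0.5],
              [0.1, 0.8, 0.6]])

# A mixed pixel is a convex combination of the endmembers
abundances_true = np.array([0.5, 0.3, 0.2])
pixel = E @ abundances_true

# Recover abundances with nonnegative least squares
abundances, residual = nnls(E, pixel)
print(np.round(abundances, 3))  # → [0.5 0.3 0.2]
```

Because the true pixel lies in the convex hull of the endmembers, the nonnegative solution recovers the abundances exactly; noisy pixels land near, rather than inside, the hull.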
Extended Generalized Linear Latent and Mixed Model
ERIC Educational Resources Information Center
Segawa, Eisuke; Emery, Sherry; Curry, Susan J.
2008-01-01
The generalized linear latent and mixed modeling (GLLAMM framework) includes many models such as hierarchical and structural equation models. However, GLLAMM cannot currently accommodate some models because it does not allow some parameters to be random. GLLAMM is extended to overcome the limitation by adding a submodel that specifies a…
Classical Testing in Functional Linear Models
Kong, Dehan; Staicu, Ana-Maria; Maity, Arnab
2016-01-01
We extend four tests common in classical regression - Wald, score, likelihood ratio and F tests - to functional linear regression, for testing the null hypothesis that there is no association between a scalar response and a functional covariate. Using functional principal component analysis, we re-express the functional linear model as a standard linear model, where the effect of the functional covariate can be approximated by a finite linear combination of the functional principal component scores. In this setting, we consider application of the four traditional tests. The proposed testing procedures are investigated theoretically for densely observed functional covariates when the number of principal components diverges. Using the theoretical distribution of the tests under the alternative hypothesis, we develop a procedure for sample size calculation in the context of functional linear regression. The four tests are further compared numerically for both densely and sparsely observed noisy functional data in simulation experiments and using two real data applications. PMID:28955155
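The reduction to a standard linear model via principal component scores, followed by an F test of no association, can be sketched on synthetic curves. The basis functions, coefficient function, and noise level are illustrative assumptions:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n, t = 100, 50
grid = np.linspace(0.0, 1.0, t)

# Synthetic functional covariates: random combinations of three basis curves
basis = np.vstack([np.sin(np.pi * grid), np.cos(np.pi * grid), grid])
X = rng.normal(size=(n, 3)) @ basis

# Scalar response with a genuine functional effect plus noise
beta = np.sin(2.0 * np.pi * grid)
y = X @ beta / t + rng.normal(scale=0.1, size=n)

# FPCA via SVD of the centered curves; keep k leading component scores
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
k = 3
scores = U[:, :k] * s[:k]

# F test of the null "no association" in the reduced linear model
Z = np.column_stack([np.ones(n), scores])
coef, *_ = np.linalg.lstsq(Z, y, rcond=None)
rss1 = np.sum((y - Z @ coef) ** 2)
rss0 = np.sum((y - y.mean()) ** 2)
F = ((rss0 - rss1) / k) / (rss1 / (n - k - 1))
p_value = stats.f.sf(F, k, n - k - 1)
print(p_value < 0.05)  # the simulated association is detected
```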
Reasons for Hierarchical Linear Modeling: A Reminder.
ERIC Educational Resources Information Center
Wang, Jianjun
1999-01-01
Uses examples of hierarchical linear modeling (HLM) at local and national levels to illustrate proper applications of HLM and dummy variable regression. Raises cautions about the circumstances under which hierarchical data do not need HLM. (SLD)
Aircraft engine mathematical model - linear system approach
NASA Astrophysics Data System (ADS)
Rotaru, Constantin; Roateşi, Simona; Cîrciu, Ionicǎ
2016-06-01
This paper examines a simplified mathematical model of the aircraft engine, based on the theory of linear and nonlinear systems. The dynamics of the engine was represented by a linear, time variant model, near a nominal operating point within a finite time interval. The linearized equations were expressed in a matrix form, suitable for the incorporation in the MAPLE program solver. The behavior of the engine was included in terms of variation of the rotational speed following a deflection of the throttle. The engine inlet parameters can cover a wide range of altitude and Mach numbers.
Impulsive Control for Continuous-Time Markov Decision Processes: A Linear Programming Approach
Dufour, F.; Piunovskiy, A. B.
2016-08-15
In this paper, we investigate an optimization problem for continuous-time Markov decision processes with both impulsive and continuous controls. We consider the so-called constrained problem where the objective of the controller is to minimize a total expected discounted optimality criterion associated with a cost rate function while keeping other performance criteria of the same form, but associated with different cost rate functions, below some given bounds. Our model allows multiple impulses at the same time moment. The main objective of this work is to study the associated linear program defined on a space of measures including the occupation measures of the controlled process and to provide sufficient conditions to ensure the existence of an optimal control.
A Vernacular for Linear Latent Growth Models
ERIC Educational Resources Information Center
Hancock, Gregory R.; Choi, Jaehwa
2006-01-01
In its most basic form, latent growth modeling (latent curve analysis) allows an assessment of individuals' change in a measured variable X over time. For simple linear models, as with other growth models, parameter estimates associated with the a construct (amount of X at a chosen temporal reference point) and b construct (growth in X per unit…
Mathematical Simulation of the Crystallization Process in a Continuous Linear Crystallizer
NASA Astrophysics Data System (ADS)
Veselov, S. N.; Volk, V. I.; Kashcheev, V. A.; Podymova, T. V.; Posenitskiy, E. A.
2017-01-01
A mathematical model of the crystallization of uranium in a continuous linear crystallizer, designed for the crystallization separation of desired products in the processing of an irradiated nuclear fuel, is proposed. This model defines the dynamics of growth/dissolution of uranyl nitrate hexahydrate crystals in a nitric acid solution of uranyl nitrate. Results of a numerical simulation of the indicated process, pointing to the existence of stationary conditions in the working space of the crystallizer, are presented. On the basis of these results, the characteristic time of establishment of the stationary regime at different parameters of the process was estimated. The mathematical model proposed was validated on the basis of a comparison of the results of calculations carried out within its framework with experimental data.
Dissipative Continuous Spontaneous Localization (CSL) model
NASA Astrophysics Data System (ADS)
Smirne, Andrea; Bassi, Angelo
2015-08-01
Collapse models explain the absence of quantum superpositions at the macroscopic scale, while giving practically the same predictions as quantum mechanics for microscopic systems. The Continuous Spontaneous Localization (CSL) model is the most refined and studied among collapse models. A well-known problem of this model, and of similar ones, is the steady and unlimited increase of the energy induced by the collapse noise. Here we present the dissipative version of the CSL model, which guarantees a finite energy during the entire system’s evolution, thus making a crucial step toward a realistic energy-conserving collapse model. This is achieved by introducing a non-linear stochastic modification of the Schrödinger equation, which represents the action of a dissipative finite-temperature collapse noise. The possibility to introduce dissipation within collapse models in a consistent way will have relevant impact on the experimental investigations of the CSL model, and therefore also on the testability of the quantum superposition principle.
Theoretical and Empirical Comparisons between Two Models for Continuous Item Responses.
ERIC Educational Resources Information Center
Ferrando, Pere J.
2002-01-01
Analyzed the relations between two continuous response models intended for typical response items: the linear congeneric model and Samejima's continuous response model (CRM). Illustrated the relations described using an empirical example and assessed the relations through a simulation study. (SLD)
Continuing evaluation of bipolar linear devices for total dose bias dependency and ELDRS effects
NASA Technical Reports Server (NTRS)
McClure, S. S.; Gorelick, J. J.; Yui, C. C.; Rax, B. G.; Wiedeman, M. D.
2003-01-01
We present results of continuing efforts to evaluate total dose bias dependency and ELDRS effects in bipolar linear microcircuits. Several devices were evaluated, each exhibiting moderate to significant bias and/or dose rate dependency.
Semi-Parametric Generalized Linear Models.
1985-08-01
is nonsingular, upper triangular, and of full rank r. It is known (Dongarra et al., 1979) that G^(-1) F^T is the Moore-Penrose inverse of L. Mathematics Research Center, University of Wisconsin-Madison, August 1985.
Congeneric Models and Levine's Linear Equating Procedures.
ERIC Educational Resources Information Center
Brennan, Robert L.
In 1955, R. Levine introduced two linear equating procedures for the common-item non-equivalent populations design. His procedures make the same assumptions about true scores; they differ in terms of the nature of the equating function used. In this paper, two parameterizations of a classical congeneric model are introduced to model the variables…
ERIC Educational Resources Information Center
Ker, H. W.
2014-01-01
Multilevel data are very common in educational research. Hierarchical linear models/linear mixed-effects models (HLMs/LMEs) are now often utilized to analyze multilevel data. This paper discusses the problems of utilizing ordinary regressions for modeling multilevel educational data and compares the data analytic results from three regression…
NASA Astrophysics Data System (ADS)
Brake, M. R.
2011-06-01
The analysis of continuous systems with piecewise-linear constraints in their domains has previously been limited to either numerical approaches, or analytical methods that are constrained in the parameter space, boundary conditions, or order of the system. The present analysis develops a robust method for studying continuous systems with arbitrary boundary conditions and discrete piecewise-linear constraints. A superposition method is used to generate homogeneous boundary conditions, and modal analysis is used to find the displacement of the system in each state of the piecewise-linear constraint. In order to develop a mapping across each slope discontinuity in the piecewise-linear force-deflection profile, a variational calculus approach is taken that minimizes the L2 energy norm between the previous and current states. An approach for calculating the finite-time Lyapunov exponents is presented in order to determine chaotic regimes. To illustrate this method, two examples are presented: a pinned-pinned beam with a deadband constraint, and a leaf spring coupled with a connector pin immersed in a viscous fluid. The pinned-pinned beam example illustrates the method for a non-operator based analysis. Results are used to show that the present method does not require a large number of basis functions to adequately map the displacement and velocity of the system across states. In the second example, the leaf spring is modeled as a clamped-free beam. The interaction between the beam and the connector pin is modeled with a preload and a penalty stiffness. Several experiments are conducted in order to validate aspects of the leaf spring model. From the results of the convergence and parameter studies, a high correlation between the finite-time Lyapunov exponents and the contact time per period of the excitation is observed. The parameter studies also indicate that when the system's parameters are changed in order to reduce the magnitude of the impact
Managing clustered data using hierarchical linear modeling.
Warne, Russell T; Li, Yan; McKyer, E Lisako J; Condie, Rachel; Diep, Cassandra S; Murano, Peter S
2012-01-01
Researchers in nutrition research often use cluster or multistage sampling to gather participants for their studies. These sampling methods often produce violations of the assumption of data independence that most traditional statistics share. Hierarchical linear modeling is a statistical method that can overcome violations of the independence assumption and lead to correct analysis of data, yet it is rarely used in nutrition research. The purpose of this viewpoint is to illustrate the benefits of hierarchical linear modeling within a nutrition research context. Copyright © 2012 Society for Nutrition Education and Behavior. Published by Elsevier Inc. All rights reserved.
Are All Linear Paired Comparison Models Equivalent?
1990-09-01
Previous authors (Jackson and Fleckenstein 1957, Mosteller 1958, Noether 1960) have found that different models of paired comparisons data lead to simi...ponential distribution with a location parameter (Mosteller 1958, Noether 1960). Formal statements describing the limiting behavior of the gamma...that are not convolution-type linear models (the uniform model considered by Smith (1956), Mosteller (1958), Noether (1960)) and other convolution
Tutorial on Biostatistics: Linear Regression Analysis of Continuous Correlated Eye Data.
Ying, Gui-Shuang; Maguire, Maureen G; Glynn, Robert; Rosner, Bernard
2017-04-01
To describe and demonstrate appropriate linear regression methods for analyzing correlated continuous eye data. We describe several approaches to regression analysis involving both eyes, including mixed effects and marginal models under various covariance structures to account for inter-eye correlation. We demonstrate, with SAS statistical software, applications in a study comparing baseline refractive error between one eye with choroidal neovascularization (CNV) and the unaffected fellow eye, and in a study determining factors associated with visual field in the elderly. When refractive error from both eyes was analyzed with standard linear regression without accounting for inter-eye correlation (adjusting for demographic and ocular covariates), the difference between eyes with CNV and fellow eyes was 0.15 diopters (D; 95% confidence interval, CI -0.03 to 0.32D, p = 0.10). Using a mixed effects model or a marginal model, the estimated difference was the same but with a narrower 95% CI (0.01 to 0.28D, p = 0.03). Standard regression for visual field data from both eyes provided biased estimates of standard error (generally underestimated) and smaller p-values, while analysis of the worse eye provided larger p-values than mixed effects models and marginal models. In research involving both eyes, ignoring inter-eye correlation can lead to invalid inferences. Analysis using only right or left eyes is valid, but decreases power. Worse-eye analysis can provide less power and biased estimates of effect. Mixed effects or marginal models using the eye as the unit of analysis should be used to appropriately account for inter-eye correlation and maximize power and precision.
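The abstract's central point, that treating correlated eyes as independent observations underestimates standard errors, can be illustrated with a minimal simulation. All values below (sample size, correlation, the use of a shared subject effect) are illustrative assumptions, not parameters from the study:

```python
import numpy as np

rng = np.random.default_rng(0)
n_subjects = 500
rho = 0.7  # assumed inter-eye correlation, illustrative only

# Each subject contributes a shared effect plus per-eye noise, so the
# two eyes of one subject are correlated with correlation rho.
subject = rng.normal(0.0, np.sqrt(rho), n_subjects)
eyes = subject[:, None] + rng.normal(0.0, np.sqrt(1.0 - rho), (n_subjects, 2))
y = eyes.ravel()  # 2 * n_subjects "eye" measurements

# Naive SE of the mean treats all eyes as independent...
se_naive = y.std(ddof=1) / np.sqrt(len(y))

# ...but the correct unit of independence is the subject.
subject_means = eyes.mean(axis=1)
se_cluster = subject_means.std(ddof=1) / np.sqrt(n_subjects)
print(se_naive, se_cluster)
```

With positive inter-eye correlation the cluster-aware standard error is larger, which is exactly why the naive analysis yields overly small p-values.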
Modelling female fertility traits in beef cattle using linear and non-linear models.
Naya, H; Peñagaricano, F; Urioste, J I
2017-06-01
Female fertility traits are key components of the profitability of beef cattle production. However, these traits are difficult and expensive to measure, particularly under extensive pastoral conditions, and consequently, fertility records are in general scarce and somewhat incomplete. Moreover, fertility traits are usually dominated by the effects of herd-year environment, and it is generally assumed that relatively small margins are left for genetic improvement. New ways of modelling genetic variation in these traits are needed. Inspired by the methodological developments made by Prof. Daniel Gianola and co-workers, we assayed linear (Gaussian), Poisson, probit (threshold), censored Poisson and censored Gaussian models for three different kinds of endpoints, namely calving success (CS), number of days from first calving (CD) and number of failed oestrus (FE). For models involving FE and CS, non-linear models outperformed their linear counterparts. For models derived from CD, linear versions displayed better adjustment than the non-linear counterparts. Non-linear models showed consistently higher estimates of heritability and repeatability in all cases (h² < 0.08 and r < 0.13 for linear models; h² > 0.23 and r > 0.24 for non-linear models). While additive and permanent environment effects showed highly favourable correlations between all models (>0.789), consistency in selecting the 10% best sires showed important differences, mainly amongst the considered endpoints (FE, CS and CD). Consequently, the endpoints should be considered as modelling different underlying genetic effects, with linear models more appropriate for describing CD and non-linear models better for FE and CS. © 2017 Blackwell Verlag GmbH.
Piecewise-continuous observers for linear systems with sampled and delayed output
NASA Astrophysics Data System (ADS)
Wang, H. P.; Tian, Y.; Christov, N.
2016-06-01
The paper presents a new class of state observers for linear systems with sampled and delayed output measurements. These observers are derived using the theory of a particular class of hybrid systems called piecewise-continuous systems, and can be easily implemented. The performances of the piecewise-continuous observers are compared with those of state observers designed using Lyapunov-Krasovskii techniques. A piecewise-continuous observer is designed and implemented on an experimental visual servoing platform.
NASA Astrophysics Data System (ADS)
Song, Il Young; Shin, Vladimir
2010-12-01
A new distributed receding horizon filtering algorithm for mixed continuous-discrete linear systems with different types of observations is proposed. The distributed fusion filter is formed by summation of the local receding horizon Kalman filters (LRHKFs) with matrix weights depending only on time instants. The proposed distributed filter has a parallel structure and allows parallel processing of measurements; thereby, it is more reliable than the centralized version if some sensors become faulty. Also, the selection of the receding horizon strategy makes the proposed distributed filter robust against dynamic model uncertainties. The key contribution of this paper is the derivation of the error cross-covariance equations between the LRHKFs in order to compute the optimal matrix weights. High accuracy and efficiency of the proposed distributed filter are demonstrated on the damped harmonic oscillator motion and the water tank mixing system.
Managing Clustered Data Using Hierarchical Linear Modeling
ERIC Educational Resources Information Center
Warne, Russell T.; Li, Yan; McKyer, E. Lisako J.; Condie, Rachel; Diep, Cassandra S.; Murano, Peter S.
2012-01-01
Researchers in nutrition research often use cluster or multistage sampling to gather participants for their studies. These sampling methods often produce violations of the assumption of data independence that most traditional statistics share. Hierarchical linear modeling is a statistical method that can overcome violations of the independence…
Bayesian Methods for High Dimensional Linear Models
Mallick, Himel; Yi, Nengjun
2013-01-01
In this article, we present a selective overview of some recent developments in Bayesian model and variable selection methods for high dimensional linear models. While most of the reviews in the literature are based on conventional methods, we focus on recently developed methods, which have proven to be successful in dealing with high dimensional variable selection. First, we give a brief overview of the traditional model selection methods (viz. Mallows' Cp, AIC, BIC, DIC), followed by a discussion on some recently developed methods (viz. EBIC, regularization), which have occupied the minds of many statisticians. Then, we review high dimensional Bayesian methods with a particular emphasis on Bayesian regularization methods, which have been used extensively in recent years. We conclude by briefly addressing the asymptotic behaviors of Bayesian variable selection methods for high dimensional linear models under different regularity conditions. PMID:24511433
Oliver-Rodríguez, B; Zafra-Gómez, A; Reis, M S; Duarte, B P M; Verge, C; de Ferrer, J A; Pérez-Pascual, M; Vílchez, J L
2015-07-01
The behaviour of Linear Alkylbenzene Sulfonate (LAS) in agricultural soil is investigated in the laboratory using continuous-flow soil column studies, in order to simultaneously analyze the three main underlying phenomena (adsorption/desorption, degradation and transport). The continuous-flow soil column experiments generated the breakthrough curves for each LAS homologue (C10, C11, C12 and C13) and, by summation, for total LAS, from which the relevant retention, degradation and transport parameters could be estimated once adequate models were proposed. Several transport equations were considered, including the degradation of the sorbate in solution and its retention by soil, under equilibrium and non-equilibrium conditions between the sorbent and the sorbate. In general, the estimates of those parameters common to the various models studied (such as the isotherm slope, the first-order degradation rate coefficient and the hydrodynamic dispersion coefficient) were rather consistent, meaning that mass transfer limitations are not playing a major role in the experiments. These three parameters increase with the length of the LAS homologue chain. The study provides the underlying conceptual framework and fundamental parameters to understand, simulate and predict the environmental behaviour of LAS compounds in agricultural soils.
Linear algebraic theory of partial coherence: continuous fields and measures of partial coherence.
Ozaktas, Haldun M; Gulcu, Talha Cihad; Alper Kutay, M
2016-11-01
This work presents a linear algebraic theory of partial coherence for optical fields of continuous variables. This approach facilitates use of linear algebraic techniques and makes it possible to precisely define the concepts of incoherence and coherence in a mathematical way. We have proposed five scalar measures for the degree of partial coherence. These measures are zero for incoherent fields, unity for fully coherent fields, and between zero and one for partially coherent fields.
Noiseless Linear Amplifiers in Entanglement-Based Continuous-Variable Quantum Key Distribution
NASA Astrophysics Data System (ADS)
Zhang, Yichen; Li, Zhengyu; Weedbrook, Christian; Marshall, Kevin; Pirandola, Stefano; Yu, Song; Guo, Hong
2015-06-01
We propose a method to improve the performance of two entanglement-based continuous-variable quantum key distribution protocols using noiseless linear amplifiers. The two entanglement-based schemes consist of an entanglement distribution protocol with an untrusted source and an entanglement swapping protocol with an untrusted relay. Simulation results show that the noiseless linear amplifiers can improve the performance of these two protocols, in terms of maximal transmission distances, when we consider small amounts of entanglement, as typical in realistic setups.
NASA Astrophysics Data System (ADS)
Lima, Maurício Firmino Silva; Pessoa, Claudio; Pereira, Weber F.
We study a class of planar continuous piecewise linear vector fields with three zones. Using the Poincaré map and some techniques for proving the existence of limit cycles for smooth differential systems, we prove that this class admits at least two limit cycles that appear by perturbations of a period annulus. Moreover, we describe the bifurcation of the limit cycles for this class through two examples of two-parameter families of piecewise linear vector fields with three zones.
Switched linear model predictive controllers for periodic exogenous signals
NASA Astrophysics Data System (ADS)
Wang, Liuping; Gawthrop, Peter; Owens, David. H.; Rogers, Eric
2010-04-01
This article develops switched linear controllers for periodic exogenous signals within the framework of continuous-time model predictive control. In this framework, the control signal is generated by an algorithm that uses the receding horizon control principle with an on-line optimisation scheme that permits the inclusion of operational constraints. Unlike traditional repetitive controllers, applying this method in the form of switched linear controllers ensures bumpless transfer from one controller to another. Simulation studies are included to demonstrate the efficacy of the design with or without hard constraints.
Non-linear memristor switching model
NASA Astrophysics Data System (ADS)
Chernov, A. A.; Islamov, D. R.; Pik'nik, A. A.
2016-10-01
We introduce a thermodynamic model of filament growth during a current pulse through a memristor. The model is a boundary value problem comprising the nonstationary heat conduction equation with a non-linear Joule heat source, the Poisson equation, and Shockley-Read-Hall equations that take into account strong electron-phonon interactions in trap ionization and charge transport processes. The charge current, which defines the heating in the model, depends on the rate of oxygen vacancy generation, which in turn depends on the local temperature. The solution of the introduced problem allows one to describe the kinetics of the switching process and the final filament morphology.
[From clinical judgment to linear regression model].
Palacios-Cruz, Lino; Pérez, Marcela; Rivas-Ruiz, Rodolfo; Talavera, Juan O
2013-01-01
When we think about mathematical models, such as the linear regression model, we think that these terms are only used by those engaged in research, a notion that is far from the truth. Legendre described the first mathematical model in 1805, and Galton introduced the formal term in 1886. Linear regression is one of the most commonly used regression models in clinical practice. It is useful to predict or show the relationship between two or more variables as long as the dependent variable is quantitative and has a normal distribution. Stated another way, regression is used to predict a measure based on the knowledge of at least one other variable. The first objective of linear regression is to determine the slope or inclination of the regression line: Y = a + bx, where "a" is the intercept or regression constant, equivalent to the value of "Y" when "X" equals 0, and "b" (also called the slope) indicates the increase or decrease that occurs when the variable "x" increases or decreases by one unit. In the regression line, "b" is called the regression coefficient. The coefficient of determination (R²) indicates the importance of the independent variables in the outcome.
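The regression line Y = a + bx described in the abstract can be fitted by ordinary least squares in a few lines. The data below are purely illustrative:

```python
import numpy as np

# Illustrative data; x is the predictor, y the quantitative outcome.
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 4.0, 6.2, 7.9, 10.1])

# Least-squares estimates: "b" is the slope (change in Y per unit x),
# "a" is the intercept (value of Y when x = 0).
b = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean()) ** 2)
a = y.mean() - b * x.mean()

# Coefficient of determination R^2: share of the variance in Y
# explained by the regression on x.
y_hat = a + b * x
r2 = 1.0 - np.sum((y - y_hat) ** 2) / np.sum((y - y.mean()) ** 2)
print(a, b, r2)
```

For these data the fitted line is approximately Y = 0.09 + 1.99x, with R² close to 1 since the points lie almost on a line.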
Synaptic dynamics: linear model and adaptation algorithm.
Yousefi, Ali; Dibazar, Alireza A; Berger, Theodore W
2014-08-01
In this research, temporal processing in brain neural circuitries is addressed by a dynamic model of synaptic connections in which the synapse model accounts for both pre- and post-synaptic processes determining its temporal dynamics and strength. Neurons, which are excited by the post-synaptic potentials of hundreds of synapses, build the computational engine capable of processing dynamic neural stimuli. Temporal dynamics in neural models with dynamic synapses are analyzed, and learning algorithms for synaptic adaptation of neural networks with hundreds of synaptic connections are proposed. The paper starts by introducing a linear approximate model for the temporal dynamics of synaptic transmission. The proposed linear model substantially simplifies the analysis and training of spiking neural networks. Furthermore, it is capable of replicating the synaptic response of the non-linear facilitation-depression model with an accuracy better than 92.5%. In the second part of the paper, a supervised spike-in-spike-out learning rule for synaptic adaptation in dynamic synapse neural networks (DSNN) is proposed. The proposed learning rule is a biologically plausible process, and it is capable of simultaneously adjusting both pre- and post-synaptic components of individual synapses. The last section of the paper starts by presenting a rigorous analysis of the learning algorithm in a system identification task with hundreds of synaptic connections, which confirms the learning algorithm's accuracy, repeatability and scalability. The DSNN is utilized to predict the spiking activity of cortical neurons and in pattern recognition tasks. The DSNN model is demonstrated to be a generative model capable of producing different cortical neuron spiking patterns and CA1 pyramidal neuron recordings. A single-layer DSNN classifier on a benchmark pattern recognition task outperforms a 2-Layer Neural Network and GMM classifiers while having fewer free parameters and
Ahlgren, André; Wirestam, Ronnie; Lind, Emelie; Ståhlberg, Freddy; Knutsson, Linda
2017-06-01
The partial volume effect (PVE) is an important source of bias in brain perfusion measurements. The impact of tissue PVEs in perfusion measurements with dynamic susceptibility contrast MRI (DSC-MRI) has not yet been well established. The purpose of this study was to suggest a partial volume correction (PVC) approach for DSC-MRI and to study how PVC affects DSC-MRI perfusion results. A linear mixed perfusion model for DSC-MRI was derived and evaluated by way of simulations. Twenty healthy volunteers were scanned twice, including DSC-MRI, arterial spin labeling (ASL), and partial volume measurements. Two different algorithms for PVC were employed and assessed. Simulations showed that the derived model had a tendency to overestimate perfusion values in voxels with high fractions of cerebrospinal fluid. PVC reduced the tissue volume dependence of DSC-MRI perfusion values from 44.4% to 4.2% in gray matter and from 55.3% to 14.2% in white matter. One PVC method significantly improved the voxel-wise repeatability, but PVC did not improve the spatial agreement between DSC-MRI and ASL perfusion maps. Significant PVEs were found for DSC-MRI perfusion estimates, and PVC successfully reduced those effects. The findings suggest that PVC might be an important consideration for DSC-MRI applications. Magn Reson Med 77:2203-2214, 2017. © 2016 International Society for Magnetic Resonance in Medicine.
User's manual for LINEAR, a FORTRAN program to derive linear aircraft models
NASA Technical Reports Server (NTRS)
Duke, Eugene L.; Patterson, Brian P.; Antoniewicz, Robert F.
1987-01-01
This report documents a FORTRAN program that provides a powerful and flexible tool for the linearization of aircraft models. The program LINEAR numerically determines a linear system model using nonlinear equations of motion and a user-supplied nonlinear aerodynamic model. The system model determined by LINEAR consists of matrices for both state and observation equations. The program has been designed to allow easy selection and definition of the state, control, and observation variables to be used in a particular model.
Modeling Continuous IED Supply Chains
2014-03-27
Nonlinear damping and quasi-linear modelling.
Elliott, S J; Ghandchi Tehrani, M; Langley, R S
2015-09-28
The mechanism of energy dissipation in mechanical systems is often nonlinear. Even though there may be other forms of nonlinearity in the dynamics, nonlinear damping is the dominant source of nonlinearity in a number of practical systems. The analysis of such systems is simplified by the fact that they show no jump or bifurcation behaviour, and indeed can often be well represented by an equivalent linear system, whose damping parameters depend on the form and amplitude of the excitation, in a 'quasi-linear' model. The diverse sources of nonlinear damping are first reviewed in this paper, before some example systems are analysed, initially for sinusoidal and then for random excitation. For simplicity, it is assumed that the system is stable and that the nonlinear damping force depends on the nth power of the velocity. For sinusoidal excitation, it is shown that the response is often also almost sinusoidal, and methods for calculating the amplitude are described based on the harmonic balance method, which is closely related to the describing function method used in control engineering. For random excitation, several methods of analysis are shown to be equivalent. In general, iterative methods need to be used to calculate the equivalent linear damper, since its value depends on the system's response, which itself depends on the value of the equivalent linear damper. The power dissipation of the equivalent linear damper, for both sinusoidal and random cases, matches that dissipated by the nonlinear damper, providing both a firm theoretical basis for this modelling approach and clear physical insight. Finally, practical examples of nonlinear damping are discussed: in microspeakers, vibration isolation, energy harvesting and the mechanical response of the cochlea.
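The iterative calculation described in the abstract, in which the equivalent linear damper depends on the response, which in turn depends on the damper, can be sketched numerically. The sketch below assumes (these assumptions are not from the abstract) a single-degree-of-freedom oscillator with cubic damping under white-noise excitation, Gaussian velocity response, and standard statistical linearization; all parameter values are illustrative:

```python
import math

# Assumed system: m*x'' + c*x' + c3*x'^3 + k*x = w(t), with w(t) white
# noise of two-sided spectral density S0. For a Gaussian velocity of
# variance sigma_v^2, statistical linearization gives the equivalent
# damper c_eq = E[c3*v^3 * v] / E[v^2] = 3*c3*sigma_v^2, while the
# linear SDOF response gives sigma_v^2 = pi*S0 / (m*(c + c_eq)).
m, c, k, c3, S0 = 1.0, 0.1, 1.0, 0.5, 0.01

c_eq = 0.0
for _ in range(200):  # fixed-point iteration, as described in the text
    sigma_v2 = math.pi * S0 / (m * (c + c_eq))
    c_eq_new = 3.0 * c3 * sigma_v2
    if abs(c_eq_new - c_eq) < 1e-12:
        break
    c_eq = c_eq_new

print(c_eq, sigma_v2)
```

At convergence the equivalent damper dissipates the same mean power as the nonlinear one, which is the physical matching condition the paper emphasizes.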
Continuous Time Dynamic Topic Models
2008-06-20
A set of latent components, called topics, can be used to explain the observed collection. LDA is a probabilistic extension of latent semantic indexing (LSI) [5] and probabilistic latent semantic indexing (pLSI) [11]. Owing to its formal generative semantics, LDA has been extended and applied to authorship [19], email [15]…
From spiking neuron models to linear-nonlinear models.
Ostojic, Srdjan; Brunel, Nicolas
2011-01-20
Neurons transform time-varying inputs into action potentials emitted stochastically at a time-dependent rate. The mapping from current input to output firing rate is often represented with the help of phenomenological models such as the linear-nonlinear (LN) cascade, in which the output firing rate is estimated by applying to the input successively a linear temporal filter and a static non-linear transformation. These simplified models leave out the biophysical details of action potential generation. It is not a priori clear to what extent the input-output mapping of biophysically more realistic, spiking neuron models can be reduced to a simple linear-nonlinear cascade. Here we investigate this question for the leaky integrate-and-fire (LIF), exponential integrate-and-fire (EIF) and conductance-based Wang-Buzsáki models in the presence of background synaptic activity. We exploit available analytic results for these models to determine the corresponding linear filter and static non-linearity in a parameter-free form. We show that the obtained functions are identical to the linear filter and static non-linearity determined using standard reverse correlation analysis. We then quantitatively compare the output of the corresponding linear-nonlinear cascade with numerical simulations of spiking neurons, systematically varying the parameters of input signal and background noise. We find that the LN cascade provides accurate estimates of the firing rates of spiking neurons in most of the parameter space. For the EIF and Wang-Buzsáki models, we show that the LN cascade can be reduced to a firing rate model, the timescale of which we determine analytically. Finally, we introduce an adaptive timescale rate model in which the timescale of the linear filter depends on the instantaneous firing rate. This model leads to highly accurate estimates of instantaneous firing rates.
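The LN cascade structure itself (linear temporal filter followed by a static nonlinearity) is simple to sketch. The exponential filter, rectifying nonlinearity, and all parameter values below are illustrative choices, not the parameter-free forms derived in the paper:

```python
import numpy as np

dt = 0.001                                  # time step, s
t = np.arange(0.0, 1.0, dt)
stim = np.sin(2 * np.pi * 5 * t)            # illustrative 5 Hz input current

# Stage 1: linear temporal filter (here an assumed exponential kernel).
tau = 0.02                                  # filter time constant, s
h = np.exp(-t / tau) / tau
filtered = np.convolve(stim, h)[: len(t)] * dt

# Stage 2: static nonlinearity mapping filtered input to a firing rate.
def static_nonlinearity(u, gain=20.0, threshold=0.0):
    """Rectifying nonlinearity: rate in Hz, zero below threshold."""
    return gain * np.maximum(u - threshold, 0.0)

rate = static_nonlinearity(filtered)
print(rate.max())
```

The filter sets the temporal smoothing and phase lag of the response; the static nonlinearity enforces that firing rates are non-negative, which is why the output is zero over half of each stimulus cycle here.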
Neural Network Hydrological Modelling: Linear Output Activation Functions?
NASA Astrophysics Data System (ADS)
Abrahart, R. J.; Dawson, C. W.
2005-12-01
The power to represent non-linear hydrological processes is of paramount importance in neural network hydrological modelling operations. The accepted wisdom requires non-polynomial activation functions to be incorporated in the hidden units such that a single tier of hidden units can thereafter be used to provide a 'universal approximation' to whatever particular hydrological mechanism or function is of interest to the modeller. The user can select from a set of default activation functions, or in certain software packages, is able to define their own function - the most popular options being logistic, sigmoid and hyperbolic tangent. If a unit does not transform its inputs it is said to possess a 'linear activation function' and a combination of linear activation functions will produce a linear solution; whereas the use of non-linear activation functions will produce non-linear solutions in which the principle of superposition does not hold. For hidden units, speed of learning and network complexities are important issues. For the output units, it is desirable to select an activation function that is suited to the distribution of the target values: e.g. binary targets (logistic); categorical targets (softmax); continuous-valued targets with a bounded range (logistic / tanh); positive target values with no known upper bound (exponential; but beware of overflow); continuous-valued targets with no known bounds (linear). It is also standard practice in most hydrological applications to use the default software settings and to insert a set of identical non-linear activation functions in the hidden layer and output layer processing units. Mixed combinations have nevertheless been reported in several hydrological modelling papers and the full ramifications of such activities requires further investigation and assessment i.e. non-linear activation functions in the hidden units connected to linear or clipped-linear activation functions in the output unit. There are two
B-737 Linear Autoland Simulink Model
NASA Technical Reports Server (NTRS)
Belcastro, Celeste (Technical Monitor); Hogge, Edward F.
2004-01-01
The Linear Autoland Simulink model was created to be a modular test environment for testing of control system components in commercial aircraft. The input variables, physical laws, and referenced frames used are summarized. The state space theory underlying the model is surveyed and the location of the control actuators described. The equations used to realize the Dryden gust model to simulate winds and gusts are derived. A description of the pseudo-random number generation method used in the wind gust model is included. The longitudinal autopilot, lateral autopilot, automatic throttle autopilot, engine model and automatic trim devices are considered as subsystems. The experience in converting the Airlabs FORTRAN aircraft control system simulation to a graphical simulation tool (Matlab/Simulink) is described.
Comparing the Discrete and Continuous Logistic Models
ERIC Educational Resources Information Center
Gordon, Sheldon P.
2008-01-01
The solutions of the discrete logistic growth model based on a difference equation and the continuous logistic growth model based on a differential equation are compared and contrasted. The investigation is conducted using a dynamic interactive spreadsheet. (Contains 5 figures.)
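The comparison described in the abstract can be reproduced numerically. The two standard model forms and the parameter values below are illustrative, not taken from the article:

```python
import math

# Discrete model: P_{n+1} = P_n + r*P_n*(1 - P_n/K)   (difference equation)
# Continuous model: P(t) = K / (1 + ((K - P0)/P0)*exp(-r*t))  (ODE solution)
r, K, P0 = 0.5, 100.0, 10.0

def discrete_logistic(n):
    """Iterate the logistic difference equation for n steps."""
    p = P0
    for _ in range(n):
        p = p + r * p * (1.0 - p / K)
    return p

def continuous_logistic(t):
    """Closed-form solution of the logistic differential equation."""
    return K / (1.0 + ((K - P0) / P0) * math.exp(-r * t))

# Both start at P0 and approach the carrying capacity K,
# but along different paths:
for n in (0, 5, 10, 20):
    print(n, discrete_logistic(n), continuous_logistic(n))
```

For small r the two models track each other closely; for larger r the discrete model can oscillate or behave chaotically while the continuous solution remains monotone, which is the heart of the contrast the article explores.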
Log-Linear Models for Gene Association
Hu, Jianhua; Joshi, Adarsh; Johnson, Valen E.
2009-01-01
We describe a class of log-linear models for the detection of interactions in high-dimensional genomic data. This class of models leads to a Bayesian model selection algorithm that can be applied to data that have been reduced to contingency tables using ranks of observations within subjects, and discretization of these ranks within gene/network components. Many normalization issues associated with the analysis of genomic data are thereby avoided. A prior density based on Ewens’ sampling distribution is used to restrict the number of interacting components assigned high posterior probability, and the calculation of posterior model probabilities is expedited by approximations based on the likelihood ratio statistic. Simulation studies are used to evaluate the efficiency of the resulting algorithm for known interaction structures. Finally, the algorithm is validated in a microarray study for which it was possible to obtain biological confirmation of detected interactions. PMID:19655032
Modeling pan evaporation for Kuwait by multiple linear regression.
Almedeij, Jaber
2012-01-01
Evaporation is an important parameter for many projects related to hydrology and water resources systems. This paper constitutes the first study conducted in Kuwait to obtain empirical relations for estimating daily and monthly pan evaporation as functions of the available meteorological data of temperature, relative humidity, and wind speed. The data used for the modeling are daily measurements with substantial continuity of coverage over a period of 17 years, between January 1993 and December 2009, which can be considered representative of the desert climate of the urban zone of the country. The multiple linear regression technique is used with a variable-selection procedure to fit the best model forms. The correlations of evaporation with temperature and relative humidity are also transformed, using power and exponential functions respectively, to linearize the existing curvilinear patterns in the data. The evaporation models suggested with the best variable combinations were shown to produce results in reasonable agreement with observed values.
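The linearization strategy described above can be sketched as follows. This Python example fits log-evaporation against log-temperature (power transform) and raw relative humidity (exponential transform) by ordinary least squares; all coefficients and data are synthetic stand-ins, not the Kuwaiti measurements:

```python
import math, random

def lstsq(X, y):
    """Solve the normal equations (X^T X) beta = X^T y by Gaussian elimination."""
    k = len(X[0])
    A = [[sum(row[a] * row[b] for row in X) for b in range(k)] for a in range(k)]
    rhs = [sum(row[a] * yi for row, yi in zip(X, y)) for a in range(k)]
    for col in range(k):                          # forward elimination, partial pivoting
        piv = max(range(col, k), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        rhs[col], rhs[piv] = rhs[piv], rhs[col]
        for r in range(col + 1, k):
            f = A[r][col] / A[col][col]
            for c in range(col, k):
                A[r][c] -= f * A[col][c]
            rhs[r] -= f * rhs[col]
    beta = [0.0] * k                              # back substitution
    for r in range(k - 1, -1, -1):
        beta[r] = (rhs[r] - sum(A[r][c] * beta[c] for c in range(r + 1, k))) / A[r][r]
    return beta

random.seed(0)
# Hypothetical generating model: E = 0.5 * T^1.8 * exp(-0.02 * RH), plus noise.
rows, ys = [], []
for _ in range(200):
    T = random.uniform(15, 45)       # temperature, deg C
    RH = random.uniform(10, 90)      # relative humidity, percent
    E = 0.5 * T ** 1.8 * math.exp(-0.02 * RH) * math.exp(random.gauss(0, 0.02))
    rows.append([1.0, math.log(T), RH])          # log E is linear in these
    ys.append(math.log(E))
b0, b1, b2 = lstsq(rows, ys)
```

Exponentiating `b0` recovers the multiplicative constant; `b1` and `b2` are the power and exponential-decay coefficients, which is exactly what the log transform buys.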
ATOPS B-737 inner-loop control system linear model construction and verification
NASA Technical Reports Server (NTRS)
Broussard, J. R.
1983-01-01
Nonlinear models and block diagrams of an inner-loop control system for the ATOPS B-737 Research Aircraft are presented. Continuous time linear model representations of the nonlinear inner-loop control systems are derived. Closed-loop aircraft simulations comparing nonlinear and linear dynamic responses to step inputs are used to verify the inner-loop control system models.
User's manual for interactive LINEAR: A FORTRAN program to derive linear aircraft models
NASA Technical Reports Server (NTRS)
Antoniewicz, Robert F.; Duke, Eugene L.; Patterson, Brian P.
1988-01-01
An interactive FORTRAN program that provides the user with a powerful and flexible tool for the linearization of aircraft aerodynamic models is documented in this report. The program LINEAR numerically determines a linear system model using nonlinear equations of motion and a user-supplied linear or nonlinear aerodynamic model. The nonlinear equations of motion used are six-degree-of-freedom equations with stationary atmosphere and flat, nonrotating earth assumptions. The system model determined by LINEAR consists of matrices for both the state and observation equations. The program has been designed to allow easy selection and definition of the state, control, and observation variables to be used in a particular model.
Running vacuum cosmological models: linear scalar perturbations
NASA Astrophysics Data System (ADS)
Perico, E. L. D.; Tamayo, D. A.
2017-08-01
In cosmology, phenomenologically motivated expressions for the running vacuum are commonly parameterized as linear functions, typically denoted by Λ(H²) or Λ(R). Such models assume an equation of state for the vacuum given by P̄_Λ = −ρ̄_Λ, relating its background pressure P̄_Λ to its mean energy density ρ̄_Λ ≡ Λ/8πG. This equation of state suggests that the vacuum dynamics is due to an interaction with the matter content of the universe. Most approaches studying the observational impact of these models consider only the interaction between the vacuum and the transient dominant matter component of the universe. We extend such models by assuming that the running vacuum is the sum of independent contributions, namely ρ̄_Λ = Σ_i ρ̄_Λi. Each vacuum component Λ_i is associated and interacting with one of the matter components i at both the background and perturbation levels. We derive the evolution equations for the linear scalar vacuum and matter perturbations in these two scenarios, and identify the running-vacuum imprints on the cosmic microwave background anisotropies as well as on the matter power spectrum. In the Λ(H²) scenario the vacuum is coupled with every matter component, whereas the Λ(R) description leads only to a coupling between vacuum and non-relativistic matter, producing different effects on the matter power spectrum.
Estimating population trends with a linear model
Bart, J.; Collins, B.; Morrison, R.I.G.
2003-01-01
We describe a simple and robust method for estimating trends in population size. The method may be used with Breeding Bird Survey data, aerial surveys, point counts, or any other program of repeated surveys at permanent locations. Surveys need not be made at each location during each survey period. The method differs from most existing methods in being design based, rather than model based. The only assumptions are that the nominal sampling plan is followed and that sample size is large enough for use of the t-distribution. Simulations based on two bird data sets from natural populations showed that the point estimate produced by the linear model was essentially unbiased even when counts varied substantially and 25% of the complete data set was missing. The estimating-equation approach, often used to analyze Breeding Bird Survey data, performed similarly on one data set but had substantial bias on the second data set, in which counts were highly variable. The advantages of the linear model are its simplicity, flexibility, and that it is self-weighting. A user-friendly computer program to carry out the calculations is available from the senior author.
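A minimal sketch of a design-based trend estimate in this spirit (synthetic counts, not the bird data from the study): fit an ordinary least-squares slope at each survey location, tolerate missing visits, and average the slopes across locations.

```python
import random

def slope(xs, ys):
    """Ordinary least-squares slope for one survey location."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))

random.seed(1)
years = list(range(10))
true_trend = -2.0                 # hypothetical: two birds lost per year
site_slopes = []
for site in range(30):
    base = random.uniform(20, 80)                       # site-specific abundance
    obs = [(t, base + true_trend * t + random.gauss(0, 3)) for t in years]
    obs = [o for o in obs if random.random() > 0.25]    # ~25% of visits missing
    if len(obs) >= 3:                                   # need enough years to fit
        xs, ys = zip(*obs)
        site_slopes.append(slope(xs, ys))
trend = sum(site_slopes) / len(site_slopes)             # unweighted mean over sites
```

A confidence interval would then come from the t-distribution applied to the spread of `site_slopes`, which is the "sample size large enough for use of the t-distribution" assumption in the abstract.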
The Piecewise Linear Reactive Flow Rate Model
Vitello, P; Souers, P C
2005-07-22
Conclusions are: (1) Early calibrations of the Piecewise Linear reactive flow model have shown that it allows very accurate agreement with data over a broad range of detonation wave strengths. (2) The ability to vary the rate at specific pressures has shown that corner turning involves competition between the strong wave, which travels roughly in a straight line, and the growth at low pressure of a new wave that turns corners sharply. (3) The inclusion of a low-pressure desensitization rate is essential to preserving the dead zone at large times, as is observed.
The Piece Wise Linear Reactive Flow Model
Vitello, P; Souers, P C
2005-08-18
For non-ideal explosives, a wide range of behavior is observed in experiments with differing sizes and geometries. A predictive detonation model must be able to reproduce many phenomena, including such effects as: variations in the detonation velocity with the radial diameter of rate sticks; slowing of the detonation velocity around gentle corners; production of dead zones for abrupt corner turning; failure of small-diameter rate sticks; and failure of rate sticks with sufficiently wide cracks. Most models have been developed to explain one effect at a time. Often, changes are made in the input parameters used to fit each succeeding case, with the implication that this is sufficient for the model to be valid over differing regimes. We feel that it is important to develop a model that is able to fit experiments with one set of parameters. To address this, we are creating a new generation of models that produce better fits to individual data sets than prior models and simultaneously fit distinctly different regimes of experiments. Presented here are details of our new Piece Wise Linear reactive flow model applied to LX-17.
Solving linear integer programming problems by a novel neural model.
Cavalieri, S
1999-02-01
The paper deals with integer linear programming problems. As is well known, these are extremely complex problems, even when the number of integer variables is quite low. The literature provides examples of various methods to solve such problems, some of which are of a heuristic nature. This paper proposes an alternative strategy based on the Hopfield neural network. The advantage of the strategy essentially lies in the fact that a hardware implementation of the neural model allows the time required to obtain a solution to be independent of the size of the problem to be solved. The paper presents a particular class of integer linear programming problems, including well-known problems such as the Travelling Salesman Problem and the Set Covering Problem. After a brief description of this class of problems, it is demonstrated that the original Hopfield model is incapable of supplying valid solutions. This is attributed to the presence of constant bias currents in the dynamics of the neural model. A demonstration of this is given, and then a novel neural model is presented which continues to be based on the same architecture as the Hopfield model but introduces modifications thanks to which the integer linear programming problems presented can be solved. Some numerical examples and concluding remarks highlight the solving capacity of the novel neural model.
Model Selection with the Linear Mixed Model for Longitudinal Data
ERIC Educational Resources Information Center
Ryoo, Ji Hoon
2011-01-01
Model building or model selection with linear mixed models (LMMs) is complicated by the presence of both fixed effects and random effects. The fixed effects structure and random effects structure are codependent, so selection of one influences the other. Most presentations of LMM in psychology and education are based on a multilevel or…
Ira Remsen, saccharin, and the linear model.
Warner, Deborah J
2008-03-01
While working in the chemistry laboratory at Johns Hopkins University, Constantin Fahlberg oxidized the 'ortho-sulfamide of benzoic acid' and, by chance, found the result to be incredibly sweet. Several years later, now working on his own, he termed this stuff saccharin, developed methods of making it in quantity, obtained patents on these methods, and went into production. As the industrial and scientific value of saccharin became apparent, Ira Remsen pointed out that the initial work had been done in his laboratory and at his suggestion. The ensuing argument, carried out in the courts of law and public opinion, illustrates the importance of the linear model to scientists who staked their identities on the model of disinterested research but who also craved credit for important practical results.
NASA Astrophysics Data System (ADS)
Zhang, Yichen; Yu, Song; Guo, Hong
2015-11-01
We propose a modified no-switching continuous-variable quantum key distribution protocol that employs a practical noiseless linear amplifier at the receiver to increase the maximal transmission distance and tolerable excess noise. A security analysis is presented to derive the secure bound of the protocol in the presence of a noisy, lossy Gaussian channel. Simulation results show that the modified protocol can not only transmit over a longer distance and tolerate more channel excess noise than the original protocol, but also distribute more secure keys in the enhanced region, where we define a critical point separating the enhanced and degenerative regions. This critical point specifies the condition for using a practical noiseless linear amplifier in no-switching continuous-variable quantum cryptography, which is meaningful and instructive for implementing a practical experiment.
Determining the continuous family of quantum Fisher information from linear-response theory
NASA Astrophysics Data System (ADS)
Shitara, Tomohiro; Ueda, Masahito
2016-12-01
The quantum Fisher information represents a continuous family of metrics on the space of quantum states and places the fundamental limit on the accuracy of quantum state estimation. We show that the entire family of quantum Fisher information can be determined from linear-response theory through generalized covariances. We derive the generalized fluctuation-dissipation theorem that relates linear-response functions to generalized covariances and hence allows us to determine the quantum Fisher information from linear-response functions, which are experimentally measurable quantities. As an application, we examine the skew information, which is a quantum Fisher information, of a harmonic oscillator in thermal equilibrium, and show that the equality of the skew-information-based uncertainty relation holds.
Monitoring acute effects on athletic performance with mixed linear modeling.
Vandenbogaerde, Tom J; Hopkins, Will G
2010-07-01
There is a need for a sophisticated approach to tracking athletic performance and quantifying the factors affecting it in practical settings. Our aim was to demonstrate the application of mixed linear modeling for monitoring athletic performance. Elite sprint and middle-distance swimmers (three females and six males; aged 21-26 yr) performed 6-13 time trials in training and competition in the 9 wk before and including Olympic-qualifying trials, all in their specialty event. We included a double-blind, randomized, diet-controlled crossover intervention, in which the swimmers consumed caffeine (5 mg per kg body mass) or placebo. The swimmers also knowingly consumed varying doses of caffeine in some time trials. We used mixed linear modeling of log-transformed swim time to quantify effects on performance in training versus competition, in morning versus evening swims, and with use of caffeine. Predictor variables were coded as 0 or 1 to represent the absence or presence, respectively, of each condition and were included as fixed effects. The date of each performance test was included as a continuous linear fixed effect and interacted with the random effect for the athlete to represent individual differences in linear trends in performance. Most effects were clear, owing to the high reliability of performance times in training and competition (typical errors of 0.9% and 0.8%, respectively). Performance time improved linearly by 0.8% per 4 wk. The swimmers performed substantially better in evenings versus mornings and in competition versus training. A 100-mg dose of caffeine enhanced performance in training and competition by approximately 1.3%. There were substantial but unclear individual responses to training and caffeine (SDs of 0.3% and 0.8%, respectively). Mixed linear modeling can be applied successfully to monitor factors affecting performance in a squad of elite athletes.
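The payoff of modeling log-transformed times is that fixed-effect coefficients become percentage effects. A toy crossover sketch (all numbers invented, and far more simulated athletes than the study's nine, purely to stabilize the illustration; a real analysis would fit a full mixed model with random athlete effects and trend terms): each simulated swimmer performs a placebo and a caffeine trial, and the within-swimmer difference in log time recovers the multiplicative effect because the athlete's baseline cancels.

```python
import math, random

random.seed(2)
caffeine_effect = -0.013     # hypothetical: ~1.3% faster, on the log-time scale
n_athletes = 100             # inflated sample, for a stable illustration only
diffs = []
for _ in range(n_athletes):
    ability = random.gauss(math.log(120.0), 0.05)   # baseline log swim time (s)
    placebo = ability + random.gauss(0, 0.009)      # ~0.9% typical error
    caffeine = ability + caffeine_effect + random.gauss(0, 0.009)
    diffs.append(caffeine - placebo)                # athlete effect cancels out

est = sum(diffs) / len(diffs)
improvement_pct = 100.0 * (1.0 - math.exp(est))     # coefficient -> percent faster
```

The conversion `100*(1 - exp(b))` is the generic way to read any coefficient of a log-transformed outcome as a percentage change.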
Modeling patterns in data using linear and related models
Engelhardt, M.E.
1996-06-01
This report considers the use of linear models for analyzing data related to reliability and safety issues of the type usually associated with nuclear power plants. The report discusses some of the general results of linear regression analysis, such as the model assumptions and properties of the estimators of the parameters. The results are motivated with examples of operational data. Results about the important case of a linear regression model with one covariate are covered in detail. This case includes analysis of time trends. The analysis is applied with two different sets of time trend data. Diagnostic procedures and tests for the adequacy of the model are discussed. Some related methods such as weighted regression and nonlinear models are also considered. A discussion of the general linear model is also included. Appendix A gives some basic SAS programs and outputs for some of the analyses discussed in the body of the report. Appendix B is a review of some of the matrix theoretic results which are useful in the development of linear models.
A study on the fabrication of main scale of linear encoder using continuous roller imprint method
NASA Astrophysics Data System (ADS)
Fan, Shanjin; Shi, Yongsheng; Yin, Lei; Feng, Long; Liu, Hongzhong
2013-10-01
A linear encoder, composed of main and index scales, has extensive application in the field of modern precision measurement. The main scale, as the measuring basis, is the key component of a linear encoder. In this article, continuous roller imprint technology is applied to the manufacture of the main scale; this method enables high-efficiency, low-cost manufacturing of ultra-long main scales. By means of the plastic deformation of a soft metal film substrate, the grating microstructure on the surface of the cylinder mold is replicated directly onto the substrate. Through high-precision control of the continuous rotational motion of the mold, an ultra-long, high-precision grating microstructure is obtained. This paper mainly discusses the manufacturing process of the high-precision cylinder mold and the effects of the roller imprint pressure and roller rotation speed on replication quality. These process parameters were optimized to manufacture a high-quality main scale. Finally, a reading test of a linear encoder containing a main scale made by this method was conducted to evaluate its measurement accuracy; the result demonstrated the feasibility of the continuous roller imprint method.
A continuous linear optimal transport approach for pattern analysis in image datasets
Kolouri, Soheil; Tosun, Akif B.; Ozolek, John A.; Rohde, Gustavo K.
2015-01-01
We present a new approach to facilitate the application of the optimal transport metric to pattern recognition on image databases. The method is based on a linearized version of the optimal transport metric, which provides a linear embedding for the images. Hence, it enables shape and appearance modeling using linear geometric analysis techniques in the embedded space. In contrast to previous work, we use Monge's formulation of the optimal transport problem, which allows for reasonably fast computation of the linearized optimal transport embedding for large images. We demonstrate the application of the method to recover and visualize meaningful variations in a supervised-learning setting on several image datasets, including chromatin distribution in the nuclei of cells, galaxy morphologies, facial expressions, and bird species identification. We show that the new approach allows for high-resolution construction of modes of variations and discrimination and can enhance classification accuracy in a variety of image discrimination problems. PMID:26858466
Numerical linearized MHD model of flapping oscillations
NASA Astrophysics Data System (ADS)
Korovinskiy, D. B.; Ivanov, I. B.; Semenov, V. S.; Erkaev, N. V.; Kiehas, S. A.
2016-06-01
Kink-like magnetotail flapping oscillations in a Harris-like current sheet with earthward growing normal magnetic field component Bz are studied by means of time-dependent 2D linearized MHD numerical simulations. The dispersion relation and two-dimensional eigenfunctions are obtained. The results are compared with analytical estimates of the double-gradient model, which are found to be reliable for configurations with small Bz up to values ˜ 0.05 of the lobe magnetic field. Coupled with previous results, present simulations confirm that the earthward/tailward growth direction of the Bz component acts as a switch between stable/unstable regimes of the flapping mode, while the mode dispersion curve is the same in both cases. It is confirmed that flapping oscillations may be triggered by a simple Gaussian initial perturbation of the Vz velocity.
On Discontinuous Piecewise Linear Models for Memristor Oscillators
NASA Astrophysics Data System (ADS)
Amador, Andrés; Freire, Emilio; Ponce, Enrique; Ros, Javier
2017-06-01
In this paper, we provide for the first time rigorous mathematical results regarding the rich dynamics of piecewise linear memristor oscillators. In particular, for each nonlinear oscillator given in [Itoh & Chua, 2008], we show the existence of an infinite family of invariant manifolds and show that the dynamics on such manifolds can be modeled without resorting to discontinuous models. Our approach provides topologically equivalent continuous models with one dimension less but with one extra parameter associated with the initial conditions. It is possible to justify the periodic behavior exhibited by three-dimensional memristor oscillators by taking advantage of known results for planar continuous piecewise linear systems. The analysis developed not only confirms the numerical results contained in previous works [Messias et al., 2010; Scarabello & Messias, 2014] but also goes much further by showing the existence of closed surfaces in the state space which are foliated by periodic orbits. The important role of the initial conditions in justifying the infinite number of periodic orbits exhibited by these models is stressed. The possibility of unsuspected bistable regimes under specific configurations of parameters is also emphasized.
Linear programming models for cost reimbursement.
Diehr, G; Tamura, H
1989-01-01
Tamura, Lauer, and Sanborn (1985) reported a multiple regression approach to the problem of determining a cost reimbursement (rate-setting) formula for facilities providing long-term care (nursing homes). In this article we propose an alternative approach to this problem, using an absolute-error criterion instead of the least-squares criterion used in regression, with a variety of side constraints incorporated in the derivation of the formula. The mathematical tool for implementation of this approach is linear programming (LP). The article begins with a discussion of the desirable characteristics of a rate-setting formula. The development of a formula with these properties can be easily achieved, in terms of modeling as well as computation, using LP. Specifically, LP provides an efficient computational algorithm to minimize absolute error deviation, thus protecting rates from the effects of unusual observations in the data base. LP also offers modeling flexibility to impose a variety of policy controls. These features are not readily available if a least-squares criterion is used. Examples based on actual data are used to illustrate alternative LP models for rate setting. PMID:2759871
Evaluating the double Poisson generalized linear model.
Zou, Yaotian; Geedipally, Srinivas Reddy; Lord, Dominique
2013-10-01
The objectives of this study are to: (1) examine the applicability of the double Poisson (DP) generalized linear model (GLM) for analyzing motor vehicle crash data characterized by over- and under-dispersion and (2) compare the performance of the DP GLM with the Conway-Maxwell-Poisson (COM-Poisson) GLM in terms of goodness-of-fit and theoretical soundness. The DP distribution has seldom been investigated and applied since its first introduction two decades ago. The hurdle for applying the DP is related to its normalizing constant (or multiplicative constant) which is not available in closed form. This study proposed a new method to approximate the normalizing constant of the DP with high accuracy and reliability. The DP GLM and COM-Poisson GLM were developed using two observed over-dispersed datasets and one observed under-dispersed dataset. The modeling results indicate that the DP GLM with its normalizing constant approximated by the new method can handle crash data characterized by over- and under-dispersion. Its performance is comparable to the COM-Poisson GLM in terms of goodness-of-fit (GOF), although COM-Poisson GLM provides a slightly better fit. For the over-dispersed data, the DP GLM performs similar to the NB GLM. Considering the fact that the DP GLM can be easily estimated with inexpensive computation and that it is simpler to interpret coefficients, it offers a flexible and efficient alternative for researchers to model count data.
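The normalizing-constant issue can be made concrete. In Efron's double Poisson density the unnormalized kernel sums to approximately, but not exactly, one, so the constant must be approximated. The check below (in log space, to avoid overflow for large counts) compares a brute-force truncated sum against Efron's classical closed-form approximation; the paper's new approximation method is not detailed in the abstract, so it is not reproduced here:

```python
import math

def dp_log_kernel(y, mu, theta):
    """Log of the unnormalized double Poisson kernel (Efron, 1986)."""
    if y == 0:
        return 0.5 * math.log(theta) - theta * mu
    return (0.5 * math.log(theta) - theta * mu
            - y + y * math.log(y) - math.lgamma(y + 1)       # e^-y y^y / y!
            + theta * y * (1.0 + math.log(mu) - math.log(y)))  # (e*mu/y)^(theta*y)

mu, theta = 5.0, 0.8          # theta < 1 corresponds to over-dispersion
numeric = sum(math.exp(dp_log_kernel(y, mu, theta)) for y in range(200))

# Efron's approximation to the reciprocal normalizing constant 1/c:
approx = 1.0 + (1.0 - theta) / (12.0 * mu * theta) * (1.0 + 1.0 / (mu * theta))
```

At `theta = 1` the kernel reduces to the Poisson pmf and sums to one exactly, which is a useful sanity check on the implementation.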
Parallel Dynamics of Continuous Hopfield Model Revisited
NASA Astrophysics Data System (ADS)
Mimura, Kazushi
2009-03-01
We have applied the generating functional analysis (GFA) to the continuous Hopfield model. We have also confirmed that the GFA predictions in some typical cases exhibit good consistency with computer simulation results. When a retarded self-interaction term is omitted, the GFA result becomes identical to that obtained using the statistical neurodynamics as well as the case of the sequential binary Hopfield model.
Linear functional minimization for inverse modeling
Barajas-Solano, David A.; Wohlberg, Brendt Egon; Vesselinov, Velimir Valentinov; Tartakovsky, Daniel M.
2015-06-01
In this paper, we present a novel inverse modeling strategy to estimate spatially distributed parameters of nonlinear models. The maximum a posteriori (MAP) estimators of these parameters are based on a likelihood functional, which contains spatially discrete measurements of the system parameters and spatiotemporally discrete measurements of the transient system states. The piecewise continuity prior for the parameters is expressed via Total Variation (TV) regularization. The MAP estimator is computed by minimizing a nonquadratic objective equipped with the TV operator. We apply this inversion algorithm to estimate hydraulic conductivity of a synthetic confined aquifer from measurements of conductivity and hydraulic head. The synthetic conductivity field is composed of a low-conductivity heterogeneous intrusion into a high-conductivity heterogeneous medium. Our algorithm accurately reconstructs the location, orientation, and extent of the intrusion from the steady-state data only. Finally, addition of transient measurements of hydraulic head improves the parameter estimation, accurately reconstructing the conductivity field in the vicinity of observation locations.
Piecewise Linear-Linear Latent Growth Mixture Models with Unknown Knots
ERIC Educational Resources Information Center
Kohli, Nidhi; Harring, Jeffrey R.; Hancock, Gregory R.
2013-01-01
Latent growth curve models with piecewise functions are flexible and useful analytic models for investigating individual behaviors that exhibit distinct phases of development in observed variables. As an extension of this framework, this study considers a piecewise linear-linear latent growth mixture model (LGMM) for describing segmented change of…
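The deterministic core of such a model, setting aside the mixture and latent-variable machinery, is a linear-linear spline with an unknown knot. A sketch on synthetic data (all parameter values invented): profile the knot over a grid, fitting the remaining coefficients by ordinary least squares at each candidate and keeping the knot with the smallest residual sum of squares.

```python
import random

def solve3(A, b):
    """Gaussian elimination for a 3x3 linear system (enough for this sketch)."""
    n = 3
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))   # partial pivoting
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def fit_knot(ts, ys, candidates):
    """Profile the knot: OLS for each candidate, return the SSE-minimizing knot."""
    best = (float("inf"), None)
    for knot in candidates:
        X = [[1.0, min(t, knot), max(0.0, t - knot)] for t in ts]
        XtX = [[sum(r[i] * r[j] for r in X) for j in range(3)] for i in range(3)]
        Xty = [sum(r[i] * y for r, y in zip(X, ys)) for i in range(3)]
        beta = solve3(XtX, Xty)
        sse = sum((y - sum(b * v for b, v in zip(beta, r))) ** 2
                  for r, y in zip(X, ys))
        best = min(best, (sse, knot))
    return best[1]

random.seed(3)
b0, b1, b2, knot = 5.0, 2.0, 0.5, 4.0        # hypothetical intercept, slopes, knot
ts = [i * 0.5 for i in range(21)]            # observation times 0..10
ys = [b0 + b1 * min(t, knot) + b2 * max(0.0, t - knot) + random.gauss(0, 0.1)
      for t in ts]
est_knot = fit_knot(ts, ys, [1.0 + 0.25 * i for i in range(33)])  # grid over 1..9
```

A mixture extension would fit one such spline per latent class; the profiling idea for the unknown knot carries over unchanged.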
From linear to generalized linear mixed models: A case study in repeated measures
USDA-ARS?s Scientific Manuscript database
Compared to traditional linear mixed models, generalized linear mixed models (GLMMs) can offer better correspondence between response variables and explanatory models, yielding more efficient estimates and tests in the analysis of data from designed experiments. Using proportion data from a designed...
Continuous Certification Within Residency: An Educational Model.
Rachlin, Susan; Schonberger, Alison; Nocera, Nicole; Acharya, Jay; Shah, Nidhi; Henkel, Jacqueline
2015-10-01
Given that compliance with Maintenance of Certification is necessary to maintain licensure to practice as a radiologist and to provide quality patient care, it is important for radiology residents to practice fulfilling each part of the program during their training, not only to prepare for success after graduation but also to learn best practices from the beginning of their professional careers. This article discusses ways to implement continuous certification (called Continuous Residency Certification) as an educational model within the residency training program.
Montoye, Alexander H K; Begum, Munni; Henning, Zachary; Pfeiffer, Karin A
2017-02-01
This study had three purposes, all related to evaluating energy expenditure (EE) prediction accuracy from body-worn accelerometers: (1) compare linear regression to linear mixed models, (2) compare linear models to artificial neural network models, and (3) compare accuracy of accelerometers placed on the hip, thigh, and wrists. Forty individuals performed 13 activities in a 90-min semi-structured, laboratory-based protocol. Participants wore accelerometers on the right hip, right thigh, and both wrists and a portable metabolic analyzer (EE criterion). Four EE prediction models were developed for each accelerometer: linear regression, linear mixed, and two ANN models. EE prediction accuracy was assessed using correlations, root mean square error (RMSE), and bias and was compared across models and accelerometers using repeated-measures analysis of variance. For all accelerometer placements, there were no significant differences for correlations or RMSE between linear regression and linear mixed models (correlations: r = 0.71-0.88, RMSE: 1.11-1.61 METs; p > 0.05). For the thigh-worn accelerometer, there were no differences in correlations or RMSE between linear and ANN models (ANN-correlations: r = 0.89, RMSE: 1.07-1.08 METs. Linear models-correlations: r = 0.88, RMSE: 1.10-1.11 METs; p > 0.05). Conversely, one ANN had higher correlations and lower RMSE than both linear models for the hip (ANN-correlation: r = 0.88, RMSE: 1.12 METs. Linear models-correlations: r = 0.86, RMSE: 1.18-1.19 METs; p < 0.05), and both ANNs had higher correlations and lower RMSE than both linear models for the wrist-worn accelerometers (ANN-correlations: r = 0.82-0.84, RMSE: 1.26-1.32 METs. Linear models-correlations: r = 0.71-0.73, RMSE: 1.55-1.61 METs; p < 0.01). For studies using wrist-worn accelerometers, machine learning models offer a significant improvement in EE prediction.
Linear theory for filtering nonlinear multiscale systems with model error
Berry, Tyrus; Harlim, John
2014-01-01
In this paper, we study filtering of multiscale dynamical systems with model error arising from limitations in resolving the smaller scale processes. In particular, the analysis assumes the availability of continuous-time noisy observations of all components of the slow variables. Mathematically, this paper presents new results on higher order asymptotic expansion of the first two moments of a conditional measure. In particular, we are interested in the application of filtering multiscale problems in which the conditional distribution is defined over the slow variables, given noisy observation of the slow variables alone. From the mathematical analysis, we learn that for a continuous time linear model with Gaussian noise, there exists a unique choice of parameters in a linear reduced model for the slow variables which gives the optimal filtering when only the slow variables are observed. Moreover, these parameters simultaneously give the optimal equilibrium statistical estimates of the underlying system, and as a consequence they can be estimated offline from the equilibrium statistics of the true signal. By examining a nonlinear test model, we show that the linear theory extends in this non-Gaussian, nonlinear configuration as long as we know the optimal stochastic parametrization and the correct observation model. However, when the stochastic parametrization model is inappropriate, parameters chosen for good filter performance may give poor equilibrium statistical estimates and vice versa; this finding is based on analytical and numerical results on our nonlinear test model and the two-layer Lorenz-96 model. Finally, even when the correct stochastic ansatz is given, it is imperative to estimate the parameters simultaneously and to account for the nonlinear feedback of the stochastic parameters into the reduced filter estimates. In numerical experiments on the two-layer Lorenz-96 model, we find that the parameters estimated online, as part of a filtering procedure
NASA Technical Reports Server (NTRS)
Yu, Xiaolong; Lewis, Edwin R.
1989-01-01
It is shown that noise can be an important element in the translation of neuronal generator potentials (summed inputs) to neuronal spike trains (outputs), creating or expanding a range of amplitudes over which the spike rate is proportional to the generator potential amplitude. Noise converts the basically nonlinear operation of a spike initiator into a nearly linear modulation process. This linearization effect of noise is examined in a simple intuitive model of a static threshold and in a more realistic computer simulation of spike initiator based on the Hodgkin-Huxley (HH) model. The results are qualitatively similar; in each case larger noise amplitude results in a larger range of nearly linear modulation. The computer simulation of the HH model with noise shows linear and nonlinear features that were earlier observed in spike data obtained from the VIIIth nerve of the bullfrog. This suggests that these features can be explained in terms of spike initiator properties, and it also suggests that the HH model may be useful for representing basic spike initiator properties in vertebrates.
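The static-threshold intuition in this abstract can be sketched numerically: with zero-mean Gaussian noise added to the generator potential, the per-trial firing probability is the Gaussian CDF of the distance to threshold, and larger noise widens the nearly linear operating range. A minimal illustration (threshold and noise values are arbitrary choices, not taken from the paper):

```python
import math

def firing_rate(g, theta=1.0, sigma=0.5):
    """Expected firing probability per trial for a static threshold theta
    driven by generator potential g plus zero-mean Gaussian noise (std sigma)."""
    z = (g - theta) / (sigma * math.sqrt(2.0))
    return 0.5 * (1.0 + math.erf(z))  # Gaussian CDF

# The width of the nearly linear region grows with the noise amplitude:
# the slope at threshold is 1/(sigma*sqrt(2*pi)), so larger sigma gives a
# shallower, wider transition between silence and saturation.
for sigma in (0.1, 0.5):
    rates = [firing_rate(g, sigma=sigma) for g in (0.5, 1.0, 1.5)]
    print(sigma, [round(r, 3) for r in rates])
```

With `sigma=0.1` the response is nearly all-or-none around threshold; with `sigma=0.5` it is graded over a much wider input range, which is the linearization effect the abstract describes.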
Modelling hillslope evolution: linear and nonlinear transport relations
NASA Astrophysics Data System (ADS)
Martin, Yvonne
2000-08-01
Many recent models of landscape evolution have used a diffusion relation to simulate hillslope transport. In this study, a linear diffusion equation for slow, quasi-continuous mass movement (e.g., creep), which is based on a large data compilation, is adopted in the hillslope model. Transport relations for rapid, episodic mass movements are based on an extensive data set covering a 40-yr period from the Queen Charlotte Islands, British Columbia. A hyperbolic tangent relation, in which transport increases nonlinearly with gradient above some threshold gradient, provided the best fit to the data. Model runs were undertaken for typical hillslope profiles found in small drainage basins in the Queen Charlotte Islands. Results, based on linear diffusivity values defined in the present study, are compared to results based on diffusivities used in earlier studies. Linear diffusivities, adopted in several earlier studies, generally did not provide adequate approximations of hillslope evolution. The nonlinear transport relation was tested and found to provide acceptable simulations of hillslope evolution. Weathering is introduced into the final set of model runs. The incorporation of weathering into the model decreases the rate of hillslope change when theoretical rates of sediment transport exceed sediment supply. The incorporation of weathering into the model is essential to ensuring that transport rates at high gradients obtained in the model reasonably replicate conditions observed in real landscapes. An outline of landscape progression is proposed based on model results. Hillslope change initially occurs at a rapid rate following events that result in oversteepened gradients (e.g., tectonic forcing, glaciation, fluvial undercutting). Steep gradients are eventually eliminated and hillslope transport is reduced significantly.
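The two transport regimes described above can be sketched as a single flux law: linear diffusion for slow creep, plus a hyperbolic-tangent term that switches on above a threshold gradient. The functional form and all constants below are hypothetical stand-ins, since the abstract does not give the fitted expression:

```python
import math

def hillslope_flux(S, k_lin=0.005, S_c=0.7, Q_max=0.5, c=4.0):
    """Illustrative hillslope sediment flux combining linear diffusion
    for slow, quasi-continuous creep with a hyperbolic-tangent term for
    rapid, episodic mass movement above a threshold gradient S_c.
    All constants are hypothetical, not values from the study."""
    q_creep = k_lin * S  # linear diffusion: flux proportional to gradient
    q_rapid = Q_max * math.tanh(c * (S - S_c)) if S > S_c else 0.0
    return q_creep + q_rapid

# Flux grows slowly below the threshold gradient, then steeply but
# boundedly above it, as the tanh saturates toward Q_max.
print([round(hillslope_flux(S), 4) for S in (0.3, 0.7, 0.9, 1.5)])
```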
The average rate of change for continuous time models.
Kelley, Ken
2009-05-01
The average rate of change (ARC) is a concept that has been misunderstood in the applied longitudinal data analysis literature, where the slope from the straight-line change model is often thought of as though it were the ARC. The present article clarifies the concept of ARC and shows unequivocally the mathematical definition and meaning of ARC when measurement is continuous across time. It is shown that the slope from the straight-line change model generally is not equal to the ARC. General equations are presented for two measures of discrepancy when the slope from the straight-line change model is used to estimate the ARC in the case of continuous time for any model linear in its parameters, and for three useful models nonlinear in their parameters.
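The distinction Kelley draws can be checked numerically: the ARC of f over [a, b] is (f(b) - f(a))/(b - a), while the continuous-time least-squares slope is the ratio of the covariance and variance integrals of t and f(t). A short sketch, using an exponential model as our illustrative nonlinear-in-time example:

```python
import math

def arc(f, a, b):
    """Average rate of change of f over [a, b]: the mathematical definition."""
    return (f(b) - f(a)) / (b - a)

def continuous_ols_slope(f, a, b, n=100000):
    """Slope of the least-squares straight line fitted to f over [a, b],
    approximating the continuous-time integrals on a fine midpoint grid."""
    h = (b - a) / n
    ts = [a + (i + 0.5) * h for i in range(n)]
    t_bar = sum(ts) / n
    f_bar = sum(f(t) for t in ts) / n
    cov = sum((t - t_bar) * (f(t) - f_bar) for t in ts) / n
    var = sum((t - t_bar) ** 2 for t in ts) / n
    return cov / var

# For a model nonlinear in time the two quantities disagree:
print(round(arc(math.exp, 0.0, 1.0), 4))                   # e - 1 = 1.7183
print(round(continuous_ols_slope(math.exp, 0.0, 1.0), 4))  # 18 - 6e = 1.6903
```

For f(t) = e^t on [0, 1] the straight-line slope (18 - 6e, about 1.690) underestimates the ARC (e - 1, about 1.718), illustrating the discrepancy the article quantifies.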
Downward continuation methods for gravimetric geoid modelling
NASA Astrophysics Data System (ADS)
Huang, J.; Véronneau, M.
2003-04-01
The determination of a gravimetric geoid model based on the Stokes integral requires that gravity anomalies must be on the geoid and that the anomalous potential must be harmonic above the geoid. To fulfill these requirements, gravity observations (or gravity anomalies) collected on the surface of the Earth need to be reduced to the geoid by a) removing the masses above the geoid and compensating for this removal; and b) continuing the gravity anomalies downward to the geoid. A well-known case is the determination of the Helmert gravity anomalies on the geoid in terms of Helmert's 2nd condensation method (e.g. Martinec et al. 1993). There are two procedures to follow for the evaluation of the Helmert gravity anomalies on the geoid: a) the Helmert gravity anomalies are evaluated on the irregular Earth surface, then are continued downward to the geoid, i.e., masses above the geoid are removed and restored as a condensed layer on the geoid prior to the downward continuation; b) alternatively the refined Bouguer anomalies on the surface of the Earth are downward-continued to the geoid, then the Helmert gravity anomalies are evaluated on the geoid, i.e., the condensed masses are restored after the downward continuation. Both procedures are theoretically equivalent (Huang et al. 2002a). In theory, the inclusion of the downward continuation should improve the geoid regardless of the approach chosen. However, different authors arrive at contradictory conclusions pertaining to its applications (e.g. Pavlis 1998; Ardalan 2000; Jekeli and Serpas 2002; Véronneau and Huang 2002). This raises an open question as to how researchers should evaluate the downward continuation in order to improve geoid modeling. In this research, the two procedures described above are used to determine the geoid in Canada. The downward continuations are evaluated using the Poisson and Moritz methods (e.g. Moritz 1980; Sideris 1988; Vaníček et al. 1996; Martinec 1996; Sjöberg 1998; Nahavandchi 2000
NASA Astrophysics Data System (ADS)
Guo, Ying; Lv, Geli; Zeng, Guihua
2015-11-01
We show that the tolerable excess noise can be dynamically balanced in source preparation while inserting a tunable linear optics cloning machine (LOCM) for balancing the secret key rate and the maximal transmission distance of continuous-variable quantum key distribution (CVQKD). The intensities of source noise are sensitive to the tunable LOCM and can be stabilized to the suitable values to eliminate the impact of channel noise and defeat the potential attacks even in the case of the degenerated linear optics amplifier (LOA). The LOCM-additional noise can be elegantly employed by the reference partner of reconciliation to regulate the secret key rate and the transmission distance. Simulation results show that there is a considerable improvement in the secret key rate of the LOCM-based CVQKD while providing a tunable LOCM for source preparation with the specified parameters in suitable ranges.
NASA Astrophysics Data System (ADS)
Yang, Fangli; Shi, Ronghua; Guo, Ying; Shi, JinJing; Zeng, Guihua
2015-08-01
An improved continuous-variable quantum key distribution (CVQKD) protocol is proposed to improve the performance of a CVQKD system under the local oscillator intensity attack by using a suitable noiseless linear amplifier (NLA) at the destination. This method can enhance the efficiency of the CVQKD scheme in terms of the maximum transmission distance, regardless of whether direct or reverse reconciliation is used. Simulation results show that there is a considerable increase in the transmission distance for the NLA-based CVQKD by adjusting the values of the parameters.
Controlling Continuous-Variable Quantum Key Distribution with Tuned Linear Optics Cloning Machines
NASA Astrophysics Data System (ADS)
Guo, Ying; Qiu, Deli; Huang, Peng; Zeng, Guihua
2015-09-01
We show that the tolerable excess noise can be elegantly controlled by inserting a tunable linear optics cloning machine (LOCM) for continuous-variable quantum key distribution (CVQKD). The LOCM-tuned noise can be stabilized to an optimal value by the reference partner of reconciliation to guarantee a high secret key rate. Simulation results show a considerable improvement in the performance of the LOCM-based CVQKD protocol in terms of the secret key rate, while striking a fine balance between the secret key rate and the transmission distance with dynamically tuned parameters in suitable ranges.
A FORTRAN program for the analysis of linear continuous and sample-data systems
NASA Technical Reports Server (NTRS)
Edwards, J. W.
1976-01-01
A FORTRAN digital computer program which performs the general analysis of linearized control systems is described. State variable techniques are used to analyze continuous, discrete, and sampled data systems. Analysis options include the calculation of system eigenvalues, transfer functions, root loci, root contours, frequency responses, power spectra, and transient responses for open- and closed-loop systems. A flexible data input format allows the user to define systems in a variety of representations. Data may be entered by inputing explicit data matrices or matrices constructed in user written subroutines, by specifying transfer function block diagrams, or by using a combination of these methods.
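Two of the analysis options listed (system eigenvalues and transfer functions) translate directly to the state-variable setting x' = Ax + Bu, y = Cx. A modern Python sketch of those computations, with an arbitrary example system rather than anything from the report:

```python
import numpy as np

# Arbitrary two-state example system x' = A x + B u, y = C x.
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])

# System eigenvalues (the open-loop poles):
poles = np.linalg.eigvals(A)

def transfer(s):
    """Transfer function G(s) = C (sI - A)^(-1) B, evaluated pointwise."""
    return (C @ np.linalg.solve(s * np.eye(2) - A, B))[0, 0]

print(sorted(p.real for p in poles))  # -2.0 and -1.0
print(transfer(0.0))                  # DC gain G(0) = 0.5
```

Evaluating `transfer` along the imaginary axis s = jw would give the frequency response, another of the program's options.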
Continuously rotating chiral liquid crystal droplets in a linearly polarized laser trap.
Yang, Y; Brimicombe, P D; Roberts, N W; Dickinson, M R; Osipov, M; Gleeson, H F
2008-05-12
The transfer of optical angular momentum to birefringent particles via circularly polarized light is common. We report here on the unexpected, continuous rotation of chiral nematic liquid crystal droplets in a linearly polarized optical trap. The rotation is non-uniform, occurs over a timescale of seconds, and is observed only for very specific droplet sizes. Synchronized vertical motion of the droplet occurs during the rotation. The motion is the result of photo-induced molecular reorganization, providing a micron sized opto-mechanical transducer that twists and translates.
Linearized Functional Minimization for Inverse Modeling
Wohlberg, Brendt; Tartakovsky, Daniel M.; Dentz, Marco
2012-06-21
Heterogeneous aquifers typically consist of multiple lithofacies, whose spatial arrangement significantly affects flow and transport. The estimation of these lithofacies is complicated by the scarcity of data and by the lack of a clear correlation between identifiable geologic indicators and attributes. We introduce a new inverse-modeling approach to estimate both the spatial extent of hydrofacies and their properties from sparse measurements of hydraulic conductivity and hydraulic head. Our approach is to minimize a functional defined on the vectors of values of hydraulic conductivity and hydraulic head fields defined on regular grids at a user-determined resolution. This functional is constructed to (i) enforce the relationship between conductivity and heads provided by the groundwater flow equation, (ii) penalize deviations of the reconstructed fields from measurements where they are available, and (iii) penalize reconstructed fields that are not piece-wise smooth. We develop an iterative solver for this functional that exploits a local linearization of the mapping from conductivity to head. This approach provides a computationally efficient algorithm that rapidly converges to a solution. A series of numerical experiments demonstrates the robustness of our approach.
The effect of non-linear human visual system components on linear model observers
NASA Astrophysics Data System (ADS)
Zhang, Yani; Pham, Binh T.; Eckstein, Miguel P.
2004-05-01
Linear model observers have been used successfully to predict human performance in clinically relevant visual tasks for a variety of backgrounds. On the other hand, there has been another family of models used to predict human visual detection of signals superimposed on one of two identical backgrounds (masks). These masking models usually include a number of non-linear components in the channels that reflect properties of the firing of cells in the primary visual cortex (V1). The relationship between these two traditions of models has not been extensively investigated in the context of detection in noise. In this paper, we evaluated the effect of including some of these non-linear components in a linear channelized Hotelling observer (CHO), and the associated practical implications for medical image quality evaluation. In particular, we evaluated whether the rank order evaluation of two compression algorithms (JPEG vs. JPEG 2000) is changed by inclusion of the non-linear components. The results show that: a) the simpler linear CHO model observer outperforms the CHO model with the non-linear components investigated; and b) the rank order of model observer performance for the compression algorithms did not vary when the non-linear components were included. For the present task, the results suggest that adding the physiologically based channel non-linearities to a channelized Hotelling observer might add complexity to the model observers without great impact on medical image quality evaluation.
Continuous-time Q-learning for infinite-horizon discounted cost linear quadratic regulator problems.
Palanisamy, Muthukumar; Modares, Hamidreza; Lewis, Frank L; Aurangzeb, Muhammad
2015-02-01
This paper presents a method of Q-learning to solve the discounted linear quadratic regulator (LQR) problem for continuous-time (CT) continuous-state systems. Most available methods in the existing literature for CT systems to solve the LQR problem generally need partial or complete knowledge of the system dynamics. Q-learning is effective for unknown dynamical systems, but has generally been well understood only for discrete-time systems. The contribution of this paper is to present a Q-learning methodology for CT systems which solves the LQR problem without any knowledge of the system dynamics. A natural and rigorously justified parameterization of the Q-function is given in terms of the state, the control input, and its derivatives. This parameterization allows the implementation of an online Q-learning algorithm for CT systems. Simulation results supporting the theoretical development are also presented.
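For orientation, the model-based answer that such a Q-learning scheme must recover can be written in closed form in the simplest case. The sketch below solves the scalar, undiscounted CT LQR (our simplification; the paper treats the discounted, multivariable problem, and its point is to find K without knowing a and b):

```python
import math

# Scalar continuous-time LQR: dx/dt = a x + b u, cost = integral of
# (q x^2 + r u^2) dt. The algebraic Riccati equation
#   2 a P - b^2 P^2 / r + q = 0
# has a unique positive root P, and the optimal gain is K = b P / r.

def lqr_scalar(a, b, q, r):
    A2 = b * b / r  # coefficient of P^2 in the quadratic form of the ARE
    P = (2 * a + math.sqrt(4 * a * a + 4 * A2 * q)) / (2 * A2)
    return P, b * P / r  # value "matrix" P and feedback gain K

P, K = lqr_scalar(a=1.0, b=1.0, q=1.0, r=1.0)
print(round(P, 3), round(K, 3))  # P = K = 1 + sqrt(2) = 2.414
print(1.0 - K < 0)               # closed-loop pole a - b K is stable: True
```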
On a q-extension of the linear harmonic oscillator with the continuous orthogonality property on ℝ
NASA Astrophysics Data System (ADS)
Alvarez-Nodarse, R.; Atakishiyeva, M. K.; Atakishiyev, N. M.
2005-11-01
We discuss a q-analogue of the linear harmonic oscillator in quantum mechanics based on a q-extension of the classical Hermite polynomials H n ( x) recently introduced by us in R. Alvarez-Nodarse et al.: Boletin de la Sociedad Matematica Mexicana (3) 8 (2002) 127. The wave functions in this q-model of the quantum harmonic oscillator possess the continuous orthogonality property on the whole real line ℝ with respect to a positive weight function. A detailed description of the corresponding q-system is carried out.
Linear dynamic models for classification of single-trial EEG.
Samdin, S Balqis; Ting, Chee-Ming; Salleh, Sh-Hussain; Ariff, A K; Mohd Noor, A B
2013-01-01
This paper investigates the use of linear dynamic models (LDMs) to improve classification of single-trial EEG signals. Existing dynamic classification of EEG uses discrete-state hidden Markov models (HMMs) based on a piecewise-stationary assumption, which is inadequate for modeling the highly non-stationary dynamics underlying EEG. The continuous hidden states of LDMs can better describe this continuously changing characteristic of EEG, and thus improve classification performance. We consider two examples of LDM: a simple local level model (LLM) and a time-varying autoregressive (TVAR) state-space model. AR parameters and band power are used as features. Parameter estimation of the LDMs is performed using the expectation-maximization (EM) algorithm. We also investigate different covariance modeling of Gaussian noises in LDMs for EEG classification. The experimental results on two-class motor-imagery classification show that both types of LDMs outperform the HMM baseline, with the best relative accuracy improvement of 14.8% achieved by the LLM with full covariance for the Gaussian noises. This may be because LDMs offer more flexibility in fitting the underlying dynamics of EEG.
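The local level model mentioned above is the simplest LDM: a random-walk hidden state observed in noise, filtered by the scalar Kalman recursions. A minimal sketch (the noise variances here are illustrative constants, not EM estimates as in the paper):

```python
# Local level model:
#   state:       x_t = x_{t-1} + w_t,  w_t ~ N(0, q)
#   observation: y_t = x_t + v_t,      v_t ~ N(0, r)

def local_level_filter(ys, q=0.1, r=1.0, x0=0.0, p0=10.0):
    """Scalar Kalman filter for the local level model; returns the
    sequence of filtered state estimates."""
    x, p = x0, p0
    estimates = []
    for y in ys:
        p = p + q            # predict: variance grows by the state noise
        k = p / (p + r)      # Kalman gain
        x = x + k * (y - x)  # update with the innovation y - x
        p = (1.0 - k) * p
        estimates.append(x)
    return estimates

ys = [1.0, 1.2, 0.9, 1.1]
print([round(x, 3) for x in local_level_filter(ys)])
```

The filtered estimate tracks the slowly drifting level while smoothing the observation noise; in the paper this continuous hidden state is what replaces the HMM's discrete states.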
Thriving in Partnership: Models for Continuing Education
ERIC Educational Resources Information Center
Moroney, Peter; Boeck, Deena
2012-01-01
This article, based on a presentation at the University Professional and Continuing Education Association Annual Conference, March 29, 2012, provides concepts, terminology, and financial models for establishing and maintaining successful institutional partnerships. The authors offer it as a contribution to developing a wider understanding of the…
Models for Continuing Nursing Education in Gerontology.
ERIC Educational Resources Information Center
Beckingham, Ann C.
1995-01-01
Gerontological faculty should extend teaching into the clinical field to provide continuing education for nurses. Two models are Train the Trainer and a workplace learning package consisting of videotape, self-study modules, and self-directed, problem-based group study. (SK)
Linear functional minimization for inverse modeling
Barajas-Solano, David A.; Wohlberg, Brendt Egon; Vesselinov, Velimir Valentinov; ...
2015-06-01
In this paper, we present a novel inverse modeling strategy to estimate spatially distributed parameters of nonlinear models. The maximum a posteriori (MAP) estimators of these parameters are based on a likelihood functional, which contains spatially discrete measurements of the system parameters and spatiotemporally discrete measurements of the transient system states. The piecewise continuity prior for the parameters is expressed via Total Variation (TV) regularization. The MAP estimator is computed by minimizing a nonquadratic objective equipped with the TV operator. We apply this inversion algorithm to estimate hydraulic conductivity of a synthetic confined aquifer from measurements of conductivity and hydraulic head. The synthetic conductivity field is composed of a low-conductivity heterogeneous intrusion into a high-conductivity heterogeneous medium. Our algorithm accurately reconstructs the location, orientation, and extent of the intrusion from the steady-state data only. Finally, addition of transient measurements of hydraulic head improves the parameter estimation, accurately reconstructing the conductivity field in the vicinity of observation locations.
One step linear reconstruction method for continuous wave diffuse optical tomography
NASA Astrophysics Data System (ADS)
Ukhrowiyah, N.; Yasin, M.
2017-09-01
The method one step linear reconstruction method for continuous wave diffuse optical tomography is proposed and demonstrated for polyvinyl chloride based material and breast phantom. Approximation which used in this method is selecting regulation coefficient and evaluating the difference between two states that corresponding to the data acquired without and with a change in optical properties. This method is used to recovery of optical parameters from measured boundary data of light propagation in the object. The research is demonstrated by simulation and experimental data. Numerical object is used to produce simulation data. Chloride based material and breast phantom sample is used to produce experimental data. Comparisons of results between experiment and simulation data are conducted to validate the proposed method. The results of the reconstruction image which is produced by the one step linear reconstruction method show that the image reconstruction almost same as the original object. This approach provides a means of imaging that is sensitive to changes in optical properties, which may be particularly useful for functional imaging used continuous wave diffuse optical tomography of early diagnosis of breast cancer.
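The "one step" is the single regularized linear solve relating the change in boundary data to the change in optical properties. A hedged numerical sketch: the Jacobian and regularization value are toy stand-ins, and Tikhonov-regularized normal equations are our assumed realization of the regularization-coefficient selection the abstract mentions:

```python
import numpy as np

def one_step_reconstruction(J, dy, lam=1e-2):
    """One-step linear reconstruction: recover the change in optical
    properties dx from the change in boundary data dy = J dx, via
    Tikhonov-regularized normal equations with coefficient lam."""
    JtJ = J.T @ J
    return np.linalg.solve(JtJ + lam * np.eye(JtJ.shape[0]), J.T @ dy)

rng = np.random.default_rng(0)
J = rng.standard_normal((20, 5))  # toy Jacobian: 20 measurements, 5 unknowns
dx_true = np.array([0.0, 1.0, 0.0, -0.5, 0.0])
dy = J @ dx_true                  # "with change" minus "without change" data
dx = one_step_reconstruction(J, dy)
print(np.round(dx, 2))
```

In a real CW-DOT setting dy would come from two measurement acquisitions and J from a photon-propagation (diffusion) model; the algebra of the single solve is unchanged.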
Continuous utility factor in segregation models.
Roy, Parna; Sen, Parongama
2016-02-01
We consider the constrained Schelling model of social segregation in which the utility factor of agents strictly increases and nonlocal jumps of the agents are allowed. In the present study, the utility factor u is defined in a way such that it can take continuous values and depends on the tolerance threshold as well as the fraction of unlike neighbors. Two models are proposed: in model A the jump probability is determined by the sign of u only, which makes it equivalent to the discrete model. In model B the actual values of u are considered. Model A and model B are shown to differ drastically as far as segregation behavior and phase transitions are concerned. In model A, although segregation can be achieved, the cluster sizes are rather small. Also, a frozen state is obtained in which steady states comprise many unsatisfied agents. In model B, segregated states with much larger cluster sizes are obtained. The correlation function is calculated to show quantitatively that larger clusters occur in model B. Moreover for model B, no frozen states exist even for very low dilution and small tolerance parameter. This is in contrast to the unconstrained discrete model considered earlier where agents can move even when utility remains the same. In addition, we also consider a few other dynamical aspects which have not been studied in segregation models earlier.
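The difference between models A and B reduces to how the utility change of a proposed jump is used. The sketch below assumes a simple linear utility u = (T - f)/T in the fraction f of unlike neighbors and tolerance T; this functional form is our illustration, since the abstract only requires u to be continuous in both:

```python
def utility(f_unlike, T=0.5):
    """Continuous utility: decreases linearly as the fraction of unlike
    neighbors grows, crossing zero at the tolerance threshold T.
    (Hypothetical form; the paper defines u continuously in f and T.)"""
    return (T - f_unlike) / T

# Two candidate moves with the same sign of utility change but different sizes:
u_here = utility(0.6)             # negative: the agent is unsatisfied
gains = [utility(0.4) - u_here,   # modest improvement
         utility(0.1) - u_here]   # large improvement

# Model A keys on sign(gain) only, so both moves look identical;
# model B uses the actual value of u, so it can rank them.
print([g > 0 for g in gains])  # model A's view: [True, True]
print(gains[1] > gains[0])     # model B distinguishes them: True
```

This is the mechanism behind the paper's finding that model B, which sees utility magnitudes, reaches larger segregated clusters and avoids frozen states.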
Reduction techniques and model analysis for linear models
Amhemad, A.; Lucas, C.A.
1994-12-31
Techniques for reducing the complexity of linear programs are well known. By suitable analysis, many model redundancies can be removed and inconsistencies detected before an attempt is made to optimise a linear programming model. In carrying out such analysis, a structured approach is presented whereby an efficient amount of bound analysis is carried out under a row-ranking scheme. When new lower bounds are detected for variables, these can be included in a starting basis by making such columns free variables. Quite often, introducing new upper bounds results in the model being more difficult to solve. We include our investigations into a strategy for deciding which new upper bounds should be passed to the optimiser. Finally, most model reduction is carried out on models created by a modeling language. To aid the teaching of modeling analysis, we show how such a procedure can be embedded in a modeling language and how the analysis can be presented to the modeller. We also discuss how the solution to a preprocessed problem is post-processed to present the solution in terms of the original problem.
Stochastic string models with continuous semimartingales
NASA Astrophysics Data System (ADS)
Bueno-Guerrero, Alberto; Moreno, Manuel; Navas, Javier F.
2015-09-01
This paper reformulates the stochastic string model of Santa-Clara and Sornette using stochastic calculus with continuous semimartingales. We present some new results, such as: (a) the dynamics of the short-term interest rate, (b) the PDE that must be satisfied by the bond price, and (c) an analytic expression for the price of a European bond call option. Additionally, we clarify some important features of the stochastic string model and show its relevance to price derivatives and the equivalence with an infinite dimensional HJM model to price European options.
Approximately Integrable Linear Statistical Models in Non-Parametric Estimation
1990-08-01
Approximately Integrable Linear Statistical Models in Non-Parametric Estimation, by B. Ya. Levit, University of Maryland. Summary: The notion of approximately integrable linear statistical models relates to the study of the "next" order optimality in non-parametric estimation. It appears consistent to keep the exposition at present at the
NASA Astrophysics Data System (ADS)
Batt, Gregory S.; Gibert, James M.; Daqaq, Mohammed
2015-08-01
In this paper, the free and forced vibration response of a linearized, distributed-parameter model of a viscoelastic rod with an applied tip-mass is investigated. A nonlinear model is developed from constitutive relations and is linearized about a static equilibrium position for analysis. A classical Maxwell-Weichert model, represented via a Prony series, is used to model the viscoelastic system. The exact solution to both the free and forced vibration problem is derived and used to study the behavior of an idealized packaging system containing Nova Chemicals' Arcel® foam. It is observed that, although three Prony series terms are deemed sufficient to fit the static test data, convergence of the dynamic response and study of the storage and loss moduli necessitate the use of additional Prony series terms. It is also shown that the model is able to predict the modal frequencies and the primary resonance response at low acceleration excitation, both with reasonable accuracy given the non-homogeneity and density variation observed in the specimens. Higher acceleration inputs result in softening nonlinear responses, highlighting the need for a nonlinear elastic model that extends beyond the scope of this work. Solution analysis and experimental data indicate little material vibration energy dissipation close to the first modal frequency of the mass/rod system.
Equivalent linear damping characterization in linear and nonlinear force-stiffness muscle models.
Ovesy, Marzieh; Nazari, Mohammad Ali; Mahdavian, Mohammad
2016-02-01
In the current research, the muscle equivalent linear damping coefficient, which is introduced as the force-velocity relation in a muscle model, and the corresponding time constant are investigated. To reach this goal, a 1D skeletal muscle model was used. Two characterizations of this model, using a linear force-stiffness relationship (Hill-type model) and a nonlinear one, have been implemented. The OpenSim platform was used for verification of the model. The isometric activation was used for the simulation. The equivalent linear damping and the time constant of each model were extracted from the simulation results. The results provide a better insight into the characteristics of each model. It is found that the nonlinear models had a response rate closer to reality than the Hill-type models.
Continuous-time discrete-space models for animal movement
Hanks, Ephraim M.; Hooten, Mevin B.; Alldredge, Mat W.
2015-01-01
The processes influencing animal movement and resource selection are complex and varied. Past efforts to model behavioral changes over time used Bayesian statistical models with variable parameter space, such as reversible-jump Markov chain Monte Carlo approaches, which are computationally demanding and inaccessible to many practitioners. We present a continuous-time discrete-space (CTDS) model of animal movement that can be fit using standard generalized linear modeling (GLM) methods. This CTDS approach allows for the joint modeling of location-based as well as directional drivers of movement. Changing behavior over time is modeled using a varying-coefficient framework which maintains the computational simplicity of a GLM approach, and variable selection is accomplished using a group lasso penalty. We apply our approach to a study of two mountain lions (Puma concolor) in Colorado, USA.
Modeling Pan Evaporation for Kuwait by Multiple Linear Regression
Almedeij, Jaber
2012-01-01
Evaporation is an important parameter for many projects related to hydrology and water resources systems. This paper constitutes the first study conducted in Kuwait to obtain empirical relations for the estimation of daily and monthly pan evaporation as functions of available meteorological data of temperature, relative humidity, and wind speed. The data used here for the modeling are daily measurements of substantial continuity coverage, within a period of 17 years between January 1993 and December 2009, which can be considered representative of the desert climate of the urban zone of the country. Multiple linear regression technique is used with a procedure of variable selection for fitting the best model forms. The correlations of evaporation with temperature and relative humidity are also transformed in order to linearize the existing curvilinear patterns of the data by using power and exponential functions, respectively. The evaporation models suggested with the best variable combinations were shown to produce results that are in a reasonable agreement with observation values. PMID:23226984
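The fitting procedure described above, multiple linear regression on transformed predictors, can be sketched as follows. The data, the power exponent for temperature, and the exponential transform for relative humidity are all synthetic stand-ins, not the Kuwait measurements or the paper's fitted forms:

```python
import numpy as np

# Synthetic meteorological data (illustrative ranges, not Kuwait records).
rng = np.random.default_rng(1)
n = 200
T = rng.uniform(15, 45, n)   # temperature, deg C
RH = rng.uniform(5, 95, n)   # relative humidity, %
W = rng.uniform(0, 10, n)    # wind speed, m/s

# Linearizing transforms of the curvilinear predictors (assumed forms):
# a power function of temperature, an exponential of relative humidity.
X = np.column_stack([np.ones(n), T**1.5, np.exp(-RH / 50.0), W])
beta_true = np.array([1.0, 0.05, 4.0, 0.3])
E = X @ beta_true + rng.normal(0.0, 0.2, n)  # pan evaporation, mm/day

# Ordinary least squares fit of the multiple linear regression:
beta, *_ = np.linalg.lstsq(X, E, rcond=None)
print(np.round(beta, 2))
```

Variable selection, as in the paper, would compare such fits across candidate transform/predictor combinations rather than fixing one design matrix up front.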
Linear control theory for gene network modeling.
Shin, Yong-Jun; Bleris, Leonidas
2010-09-16
Systems biology is an interdisciplinary field that aims at understanding complex interactions in cells. Here we demonstrate that linear control theory can provide valuable insight and practical tools for the characterization of complex biological networks. We provide the foundation for such analyses through the study of several case studies including cascade and parallel forms, feedback and feedforward loops. We reproduce experimental results and provide rational analysis of the observed behavior. We demonstrate that methods such as the transfer function (frequency domain) and linear state-space (time domain) can be used to predict reliably the properties and transient behavior of complex network topologies and point to specific design strategies for synthetic networks.
Odefy -- From discrete to continuous models
2010-01-01
Background Phenomenological information about regulatory interactions is frequently available and can be readily converted to Boolean models. Fully quantitative models, on the other hand, provide detailed insights into the precise dynamics of the underlying system. In order to connect discrete and continuous modeling approaches, methods for the conversion of Boolean systems into systems of ordinary differential equations have been developed recently. As biological interaction networks have steadily grown in size and complexity, a fully automated framework for the conversion process is desirable. Results We present Odefy, a MATLAB- and Octave-compatible toolbox for the automated transformation of Boolean models into systems of ordinary differential equations. Models can be created from sets of Boolean equations or graph representations of Boolean networks. Alternatively, the user can import Boolean models from the CellNetAnalyzer toolbox, GINSim and the PBN toolbox. The Boolean models are transformed to systems of ordinary differential equations by multivariate polynomial interpolation and optional application of sigmoidal Hill functions. Our toolbox contains basic simulation and visualization functionalities for both the Boolean and the continuous models. For further analyses, models can be exported to SQUAD, GNA, MATLAB script files, the SB toolbox, SBML and R script files. Odefy contains a user-friendly graphical user interface for convenient access to the simulation and exporting functionalities. We illustrate the validity of our transformation approach as well as the usage and benefit of the Odefy toolbox for two biological systems: a mutual inhibitory switch known from stem cell differentiation and a regulatory network giving rise to a specific spatial expression pattern at the mid-hindbrain boundary. Conclusions Odefy provides an easy-to-use toolbox for the automatic conversion of Boolean models to systems of ordinary differential equations. It can be
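The conversion idea the toolbox automates can be sketched for a single gene. The Boolean rule x* = A AND (NOT B) interpolates multilinearly to B(a, b) = a(1 - b), and passing each input through a sigmoidal Hill function before interpolation gives a smooth ODE (a simplified, hand-written analogue of the automated transformation; constants are illustrative):

```python
def hill(x, k=0.5, n=4):
    """Sigmoidal Hill function mapping a continuous level in [0, 1]
    toward Boolean-like on/off behavior."""
    return x**n / (x**n + k**n)

def dxdt(x, a, b, tau=1.0):
    """ODE for the target species: relax toward the Hill-transformed
    multilinear interpolation of the Boolean rule A AND (NOT B)."""
    target = hill(a) * (1.0 - hill(b))
    return (target - x) / tau

# Forward-Euler integration from x = 0 with activator on, inhibitor off:
x, dt = 0.0, 0.01
for _ in range(2000):
    x += dt * dxdt(x, a=1.0, b=0.0)
print(round(x, 3))  # approaches hill(1.0), about 0.941
```

With the inhibitor switched on (b = 1.0) the same ODE relaxes toward a value near zero, mirroring the Boolean rule's output.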
Recent Updates to the GEOS-5 Linear Model
NASA Technical Reports Server (NTRS)
Holdaway, Dan; Kim, Jong G.; Errico, Ron; Gelaro, Ronald; Mahajan, Rahul
2014-01-01
The Global Modeling and Assimilation Office (GMAO) is close to having a working 4DVAR system and has developed a linearized version of GEOS-5. This talk outlines a series of improvements made to the linearized dynamics, physics and trajectory. Of particular interest is the development of linearized cloud microphysics, which provides the framework for 'all-sky' data assimilation.
Tried and True: Springing into Linear Models
ERIC Educational Resources Information Center
Darling, Gerald
2012-01-01
In eighth grade, students usually learn about forces in science class and linear relationships in math class, crucial topics that form the foundation for further study in science and engineering. An activity that links these two fundamental concepts involves measuring the distance a spring stretches as a function of how much weight is suspended…
Three-Dimensional Modeling in Linear Regression.
ERIC Educational Resources Information Center
Herman, James D.
Linear regression examines the relationship between one or more independent (predictor) variables and a dependent variable. By using a particular formula, regression determines the weights needed to minimize the error term for a given set of predictors. With one predictor variable, the relationship between the predictor and the dependent variable…
Valuation of financial models with non-linear state spaces
NASA Astrophysics Data System (ADS)
Webber, Nick
2001-02-01
A common assumption in valuation models for derivative securities is that the underlying state variables take values in a linear state space. We discuss numerical implementation issues in an interest rate model with a simple non-linear state space, formulating and comparing Monte Carlo, finite difference and lattice numerical solution methods. We conclude that, at least in low dimensional spaces, non-linear interest rate models may be viable.
Percolation model with continuously varying exponents
NASA Astrophysics Data System (ADS)
Andrade, R. F. S.; Herrmann, H. J.
2013-10-01
This work analyzes a percolation model on the diamond hierarchical lattice (DHL), where the percolation transition is retarded by the inclusion of a probability of erasing specific connected structures. It is inspired by the recent interest in the existence of other universality classes of percolation models. The exact scale invariance and renormalization properties of the DHL lead to recurrence maps, from which analytical expressions for the critical exponents and precise numerical results in the limit of very large lattices can be derived. The critical exponents ν and β of the investigated model vary continuously as the erasing probability changes. An adequate choice of the erasing probability leads to the result ν=∞, as in some phase transitions involving vortex formation. The percolation transition is continuous, with β>0, but β can be as small as desired. The modified percolation model turns out to be equivalent to the Q→1 limit of a Potts model with specific long-range interactions on the same lattice.
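For the unmodified bond-percolation problem on the DHL (without the erasing probability studied in the abstract), the recurrence map and the exponent ν can be written down in a few lines. This baseline calculation is a standard renormalization exercise, not the authors' modified model:

```python
from math import log, sqrt

def renorm(p):
    """One DHL renormalization step: a bond is replaced by two
    parallel branches of two bonds in series."""
    return 1.0 - (1.0 - p * p) ** 2

# Unstable fixed point p* solves p = 2p^2 - p^4, i.e. (p-1)(p^2+p-1) = 0.
p_star = (sqrt(5.0) - 1.0) / 2.0          # golden-ratio value, ≈ 0.618
assert abs(renorm(p_star) - p_star) < 1e-12

# Correlation-length exponent from the linearized map: nu = ln b / ln lambda,
# with length rescaling b = 2 and lambda = dp'/dp evaluated at p*.
lam = 4.0 * p_star * (1.0 - p_star ** 2)
nu = log(2.0) / log(lam)
print(f"p* = {p_star:.4f}, nu = {nu:.4f}")  # p* = 0.6180, nu = 1.6353
```

The paper's erasing probability deforms this map, which is how the exponents acquire their continuous dependence on the model parameter.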
Fault diagnosis based on continuous simulation models
NASA Technical Reports Server (NTRS)
Feyock, Stefan
1987-01-01
The results of an investigation of techniques for using continuous simulation models as a basis for reasoning about physical systems are described, with emphasis on the diagnosis of system faults. It is assumed that a continuous simulation model of the properly operating system is available. Malfunctions are diagnosed by posing the question: how can we make the model behave like the observed faulty system? The adjustments that must be made to the model to produce the observed behavior usually provide definitive clues to the nature of the malfunction. A novel application of Dijkstra's weakest-precondition predicate transformer is used to derive the preconditions for producing the required model behavior. To minimize the size of the search space, an envisionment generator based on interval mathematics was developed. In addition to its intended application, the ability to generate qualitative state spaces automatically from quantitative simulations proved to be a fruitful avenue of investigation in its own right. Implementations of the Dijkstra transform and the envisionment generator are reproduced in the Appendix.
Event-Based Consensus for Linear Multiagent Systems Without Continuous Communication.
Xing, Lantao; Wen, Changyun; Guo, Fanghong; Liu, Zhitao; Su, Hongye
2016-10-04
In this paper, we propose a new distributed event-triggered consensus protocol for linear multiagent systems with external disturbances. Two consensus problems are considered: one is a leader-follower case and the other is a leaderless case. Different from existing results, our proposed scheme enables each agent to decide when to transmit its state signals to its neighbors, so that continuous communication between neighboring agents is avoided. This greatly reduces the communication burden of the whole network. Besides, since the control signal for each agent is discontinuous because of the event-triggering mechanism, the existence of a solution of the closed-loop system in the classical sense may not be guaranteed. To solve this problem, we employ nonsmooth analysis techniques including differential inclusions and the Filippov solution. Through nonsmooth Lyapunov analysis, it is shown that uniformly bounded consensus is achieved and that the bound of the consensus error is adjustable by choosing suitable design parameters.
Cramér-Rao bound for time-continuous measurements in linear Gaussian quantum systems
NASA Astrophysics Data System (ADS)
Genoni, Marco G.
2017-01-01
We describe a compact and reliable method to calculate the Fisher information for the estimation of a dynamical parameter in a continuously measured linear Gaussian quantum system. Unlike previous methods in the literature, which involve the numerical integration of a stochastic master equation for the corresponding density operator in a Hilbert space of infinite dimension, the formulas here derived depend only on the evolution of first and second moments of the quantum states and thus can be easily evaluated without the need of any approximation. We also present some basic but physically meaningful examples where this result is exploited, calculating analytical and numerical bounds on the estimation of the squeezing parameter for a quantum parametric amplifier and of a constant force acting on a mechanical oscillator in a standard optomechanical scenario.
NASA Astrophysics Data System (ADS)
Zhou, Jun; Lu, Xinbiao; Qian, Huimin
2016-09-01
The paper reports interesting but previously unnoticed facts about irreducibility (resp., reducibility) of Floquet factorisations and their harmonic implications in terms of controllability in finite-dimensional linear continuous-time periodic (FDLCP) systems. Reducibility and irreducibility are attributed to the matrix logarithm algorithms used when computing Floquet factorisations in FDLCP systems; they are a pair of essential features that have so far remained unnoticed in the Floquet theory. The study reveals that reducible Floquet factorisations may introduce harmonic-wave variance into the Fourier analysis of FDLCP systems, which in turn may alter our interpretation of controllability when the Floquet factors are used separately during controllability testing; namely, controllability interpretation discrepancy (or simply, controllability discrepancy) may occur and must be examined whenever reducible Floquet factorisations are involved. On the contrary, when irreducible Floquet factorisations are employed, controllability interpretation discrepancy can be avoided. Examples are included to illustrate such observations.
Linear and Nonlinear Models of Agenda Setting in Television.
ERIC Educational Resources Information Center
Brosius, Hans-Bernd; Kepplinger, Hans Mathias
1992-01-01
A content analysis of major German television news shows and 53 weekly surveys on 16 issues were used to compare linear and nonlinear models as ways to describe the relationship between media coverage and the public agenda. Results indicate that nonlinear models are in some cases superior to linear models in terms of explained variance. (34…
Determining Predictor Importance in Hierarchical Linear Models Using Dominance Analysis
ERIC Educational Resources Information Center
Luo, Wen; Azen, Razia
2013-01-01
Dominance analysis (DA) is a method used to evaluate the relative importance of predictors that was originally proposed for linear regression models. This article proposes an extension of DA that allows researchers to determine the relative importance of predictors in hierarchical linear models (HLM). Commonly used measures of model adequacy in…
Mathematical Models of Continuous Flow Electrophoresis
NASA Technical Reports Server (NTRS)
Saville, D. A.; Snyder, R. S.
1985-01-01
Development of high-resolution continuous flow electrophoresis devices ultimately requires comprehensive understanding of the ways various phenomena and processes facilitate or hinder separation. A comprehensive model of the actual three-dimensional flow, temperature and electric fields was developed to provide guidance in the design of electrophoresis chambers for specific tasks and a means of interpreting test data on a given chamber. Part of the process of model development includes experimental and theoretical studies of hydrodynamic stability. This is necessary to understand the origin of mixing flows observed with wide-gap gravitational effects. To ensure that the model accurately reflects the flow field and particle motion requires extensive experimental work. Another part of the investigation is concerned with the behavior of concentrated sample suspensions with regard to sample stream stability and particle-particle interactions which might affect separation in an electric field, especially at high field strengths. Mathematical models will be developed and tested to establish the roles of the various interactions.
Liu, Jian; Miller, William H.
2008-08-01
The maximum entropy analytic continuation (MEAC) method is used to extend the range of accuracy of the linearized semiclassical initial value representation (LSC-IVR)/classical Wigner approximation for real-time correlation functions. The LSC-IVR provides a very effective 'prior' for the MEAC procedure since it is very good at short times, exact for all times and temperatures for harmonic potentials (even for correlation functions of nonlinear operators), and becomes exact in the classical high-temperature limit. This combined MEAC+LSC/IVR approach is applied here to two highly nonlinear dynamical systems: a pure quartic potential in one dimension, and liquid para-hydrogen at two thermal state points (25 K and 14 K, under nearly zero external pressure). The former example shows the MEAC procedure to be a very significant enhancement of the LSC-IVR, for correlation functions of both linear and nonlinear operators, and especially at low temperature where semiclassical approximations are least accurate. For liquid para-hydrogen, the LSC-IVR is seen already to be excellent at T = 25 K, but the MEAC procedure produces a significant correction at the lower temperature (T = 14 K). Comparisons are also made of how the MEAC procedure is able to provide corrections for other trajectory-based dynamical approximations when used as priors.
Linear motor drive system for continuous-path closed-loop position control of an object
Barkman, William E.
1980-01-01
A precision numerical controlled servo-positioning system is provided for continuous closed-loop position control of a machine slide or platform driven by a linear-induction motor. The system utilizes filtered velocity feedback to provide system stability required to operate with a system gain of 100 inches/minute/0.001 inch of following error. The filtered velocity feedback signal is derived from the position output signals of a laser interferometer utilized to monitor the movement of the slide. Air-bearing slides mounted to a stable support are utilized to minimize friction and small irregularities in the slideway which would tend to introduce positioning errors. A microprocessor is programmed to read command and feedback information and converts this information into the system following error signal. This error signal is summed with the negative filtered velocity feedback signal at the input of a servo amplifier whose output serves as the drive power signal to the linear motor position control coil.
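The control structure described above, proportional action on the following error summed with negative filtered velocity feedback, can be mimicked in a toy simulation. The sketch below assumes a unit-mass slide and illustrative gains and filter constant; none of the numbers come from the source:

```python
def servo_step_response(kp=40.0, kv=8.0, alpha=0.05, dt=1e-3, steps=5000):
    """Toy position loop in the spirit of the abstract: proportional action on
    the following error plus *filtered* velocity feedback for damping.
    The unit mass, gains and filter constant are illustrative assumptions."""
    x, v, vf = 0.0, 0.0, 0.0          # position, velocity, filtered velocity
    target = 1.0                      # commanded position (step input)
    for _ in range(steps):
        err = target - x              # following error
        force = kp * err - kv * vf    # error summed with negative velocity feedback
        v += dt * force               # unit mass: acceleration = force
        x += dt * v
        vf += alpha * (v - vf)        # first-order low-pass filter on velocity
        # (in the source, vf is derived from interferometer position output)
    return x

print(round(servo_step_response(), 3))  # → 1.0
```

Filtering the velocity before feeding it back is what supplies damping without amplifying measurement noise, which is the point the abstract makes about system stability at high gain.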
Rajeswaran, Jeevanantham; Blackstone, Eugene H
2017-02-01
In medical sciences, we often encounter longitudinal temporal relationships that are non-linear in nature. The influence of risk factors may also change across longitudinal follow-up. A system of multiphase non-linear mixed effects models is presented to model temporal patterns of longitudinal continuous measurements, with temporal decomposition to identify the phases and risk factors within each phase. Application of this model is illustrated using spirometry data after lung transplantation using readily available statistical software. This application illustrates the usefulness of our flexible model when dealing with complex non-linear patterns and time-varying coefficients.
A Multiphase Non-Linear Mixed Effects Model: An Application to Spirometry after Lung Transplantation
Rajeswaran, Jeevanantham; Blackstone, Eugene H.
2014-01-01
In medical sciences, we often encounter longitudinal temporal relationships that are non-linear in nature. The influence of risk factors may also change across longitudinal follow-up. A system of multiphase non-linear mixed effects models is presented to model temporal patterns of longitudinal continuous measurements, with temporal decomposition to identify the phases and risk factors within each phase. Application of this model is illustrated using spirometry data after lung transplantation using readily available statistical software. This application illustrates the usefulness of our flexible model when dealing with complex non-linear patterns and time-varying coefficients. PMID:24919830
Vuori, Kaarina; Strandén, Ismo; Sevón-Aimonen, Marja-Liisa; Mäntysaari, Esa A
2006-01-01
A method based on Taylor series expansion for estimation of location parameters and variance components of non-linear mixed effects models was considered. An attractive property of the method is that it yields an easily implemented algorithm. Estimation of non-linear mixed effects models can be done by common methods for linear mixed effects models, and thus existing programs can be used after small modifications. The applicability of this algorithm in animal breeding was studied with simulation using a Gompertz function growth model in pigs. Two growth data sets were analyzed: a full set containing observations from the entire growing period, and a truncated time trajectory set containing animals slaughtered prematurely, which is common in pig breeding. The results from the 50 simulation replicates with the full data set indicate that the linearization approach was capable of estimating the original parameters satisfactorily. However, estimation of the parameters related to adult weight becomes unstable in the case of a truncated data set.
A continuous growth model for plant tissue
NASA Astrophysics Data System (ADS)
Bozorg, Behruz; Krupinski, Pawel; Jönsson, Henrik
2016-12-01
Morphogenesis in plants and animals involves large irreversible deformations. In plants, the response of the cell wall material to internal and external forces is determined by its mechanical properties. An appropriate model for plant tissue growth must include key features such as anisotropic and heterogeneous elasticity and cell dependent evaluation of mechanical variables such as turgor pressure, stress and strain. In addition, a growth model needs to cope with cell divisions as a necessary part of the growth process. Here we develop such a growth model, which is capable of employing not only mechanical signals but also morphogen signals for regulating growth. The model is based on a continuous equation for updating the resting configuration of the tissue. Simultaneously, material properties can be updated at a different time scale. We test the stability of our model by measuring convergence of growth results for a tissue under the same mechanical and material conditions but with different spatial discretization. The model is able to maintain a strain field in the tissue during re-meshing, which is of particular importance for modeling cell division. We confirm the accuracy of our estimations in two and three-dimensional simulations, and show that residual stresses are less prominent if strain or stress is included as input signal to growth. The approach results in a model implementation that can be used to compare different growth hypotheses, while keeping residual stresses and other mechanical variables updated and available for feeding back to the growth and material properties.
NASA Astrophysics Data System (ADS)
Orsolini, Y.; Leovy, C. B.
1993-12-01
A quasi-geostrophic midlatitude beta-plane linear model is here used to study whether the decay with height and meridional circulations of near-steady jets in the tropospheric circulation of Jupiter arise as a means of stabilizing a deep zonal flow that extends into the upper troposphere. The model results obtained are analogous to the stabilizing effect of meridional shear on baroclinic instabilities. In the second part of this work, a quasi-linear model is used to investigate how an initially barotropically unstable flow develops a quasi-steady shear zone in the lower scale heights of the model domain, due to the action of the eddy fluxes.
Modeling of linear time-varying systems by linear time-invariant systems of lower order.
NASA Technical Reports Server (NTRS)
Nosrati, H.; Meadows, H. E.
1973-01-01
A method for modeling linear time-varying differential systems by linear time-invariant systems of lower order is proposed, extending the results obtained by Bierman (1972) by resolving such issues as model stability, the various possible models of differing dimensions, and the uniqueness or nonuniqueness of the model coefficient matrix. In addition to the advantages cited by Heffes and Sarachik (1969) and Bierman, modeling a subsystem of a larger system often makes it possible to analyze the overall system behavior more easily, with resulting savings in computation time.
Estimation of the linear mixed integrated Ornstein-Uhlenbeck model.
Hughes, Rachael A; Kenward, Michael G; Sterne, Jonathan A C; Tilling, Kate
2017-05-24
The linear mixed model with an added integrated Ornstein-Uhlenbeck (IOU) process (linear mixed IOU model) allows for serial correlation and estimation of the degree of derivative tracking. It is rarely used, partly due to the lack of available software. We implemented the linear mixed IOU model in Stata and using simulations we assessed the feasibility of fitting the model by restricted maximum likelihood when applied to balanced and unbalanced data. We compared different (1) optimization algorithms, (2) parameterizations of the IOU process, (3) data structures and (4) random-effects structures. Fitting the model was practical and feasible when applied to large and moderately sized balanced datasets (20,000 and 500 observations), and large unbalanced datasets with (non-informative) dropout and intermittent missingness. Analysis of a real dataset showed that the linear mixed IOU model was a better fit to the data than the standard linear mixed model (i.e. independent within-subject errors with constant variance).
Symbol recognition produced by points of tactile stimulation: the illusion of linear continuity.
Gonzales, G R
1996-11-01
To determine whether tactile receptive communication is possible through the use of a mechanical device that produces the phi phenomenon on the body surface. Twenty-six subjects (11 blind and 15 sighted participants) were tested with use of a tactile communication device (TCD) that produces an illusion of linear continuity forming numbers on the dorsal aspect of the wrist. Recognition of a number or number set was the goal. A TCD with protruding and vibrating solenoids produced sequentially delivered points of cutaneous stimulation along a pattern resembling numbers and created the illusion of dragging a vibrating stylet to form numbers, similar to what might be felt by testing for graphesthesia. Blind subjects recognized numbers with fewer trials than did sighted subjects, although all subjects were able to recognize all the numbers produced by the TCD. Subjects who had been blind since birth and had no prior tactile exposure to numbers were able to draw the numbers after experiencing them delivered by the TCD even though they did not recognize their meaning. The phi phenomenon is probably responsible for the illusion of continuous lines in the shape of numbers as produced by the TCD. This tactile illusion could potentially be used for more complex tactile communications such as letters and words.
Miniature amperometric self-powered continuous glucose sensor with linear response.
Liu, Zenghe; Cho, Brian; Ouyang, Tianmei; Feldman, Ben
2012-04-03
Continuous glucose measurement has improved the treatment of type 1 diabetes and is typically provided by externally powered transcutaneous amperometric sensors. Self-powered glucose sensors (SPGSs) could provide an improvement over these conventionally powered devices, especially for fully implanted long-term applications where implanted power sources are problematic. Toward this end, we describe a robust SPGS that may be built from four simple components: (1) a low-potential, wired glucose oxidase anode; (2) a Pt/C cathode; (3) an overlying glucose flux-limiting membrane; and (4) a resistor bridging the anode and cathode. In vitro evaluation showed that the sensor output is linear over physiologic glucose concentrations (2-30 mM), even at low O2 concentrations. Output was independent of the connecting resistor values over the range from 0 to 10 MΩ. The output was also stable over 60 days of continuous in vitro operation at 37 °C in 30 mM glucose. A 5-day trial in a volunteer demonstrated that the performance of the device was virtually identical to that of a conventional amperometric sensor. Thus, this SPGS is an attractive alternative to conventionally powered devices, especially for fully implanted long-term applications.
An analytically linearized helicopter model with improved modeling accuracy
NASA Technical Reports Server (NTRS)
Jensen, Patrick T.; Curtiss, H. C., Jr.; Mckillip, Robert M., Jr.
1991-01-01
A recently developed, analytically linearized model for helicopter flight response, including rotor blade dynamics and dynamic inflow, was studied with the objective of increasing the understanding, ease of use, and accuracy of the model. The mathematical model is described along with a description of the UH-60A Black Hawk helicopter and the flight test used to validate the model. To aid in utilization of the model for sensitivity analysis, a new, faster, and more efficient implementation of the model was developed. It is shown that several errors in the mathematical modeling of the system caused a reduction in accuracy. These errors in rotor force resolution, trim force and moment calculation, and rotor inertia terms were corrected, along with improvements to the programming style and documentation. Use of a trim input file to drive the model is examined. Trim file errors in blade twist, control input phase angle, coning and lag angles, main and tail rotor pitch, and uniform induced velocity were corrected. Finally, through direct comparison of the original and corrected model responses to flight test data, the effect of the corrections on overall model output is shown.
Development of a Linear Stirling Model with Varying Heat Inputs
NASA Technical Reports Server (NTRS)
Regan, Timothy F.; Lewandowski, Edward J.
2007-01-01
The linear model of the Stirling system developed by NASA Glenn Research Center (GRC) has been extended to include a user-specified heat input. Previously developed linear models were limited to the Stirling convertor and electrical load. They represented the thermodynamic cycle with pressure factors that remained constant. The numerical values of the pressure factors were generated by linearizing GRC's non-linear System Dynamic Model (SDM) of the convertor at a chosen operating point. The pressure factors were fixed for that operating point, thus, the model lost accuracy if a transition to a different operating point were simulated. Although the previous linear model was used in developing controllers that manipulated current, voltage, and piston position, it could not be used in the development of control algorithms that regulated hot-end temperature. This basic model was extended to include the thermal dynamics associated with a hot-end temperature that varies over time in response to external changes as well as to changes in the Stirling cycle. The linear model described herein includes not only dynamics of the piston, displacer, gas, and electrical circuit, but also the transient effects of the heater head thermal inertia. The linear version algebraically couples two separate linear dynamic models, one model of the Stirling convertor and one model of the thermal system, through the pressure factors. The thermal system model includes heat flow of heat transfer fluid, insulation loss, and temperature drops from the heat source to the Stirling convertor expansion space. The linear model was compared to a nonlinear model, and performance was very similar. The resulting linear model can be implemented in a variety of computing environments, and is suitable for analysis with classical and state space controls analysis techniques.
Descriptive Linear modeling of steady-state visual evoked response
NASA Technical Reports Server (NTRS)
Levison, W. H.; Junker, A. M.; Kenner, K.
1986-01-01
A study is being conducted to explore use of the steady-state visual-evoked electrocortical response as an indicator of cognitive task loading. Application of linear descriptive modeling to steady-state Visual Evoked Response (VER) data is summarized. Two aspects of linear modeling are reviewed: (1) unwrapping the phase-shift portion of the frequency response, and (2) parsimonious characterization of task-loading effects in terms of changes in model parameters. Model-based phase unwrapping appears to be most reliable in applications, such as manual control, where theoretical models are available. Linear descriptive modeling of the VER has not yet been shown to provide consistent and readily interpretable results.
Heat treatment modelling using strongly continuous semigroups.
Malek, Alaeddin; Abbasi, Ghasem
2015-07-01
In this paper, mathematical simulation of the bioheat transfer phenomenon within living tissue is studied using the thermal wave model. Three different sources that have therapeutic applications in laser surgery, cornea laser heating and cancer hyperthermia are used. Spatial and transient heating sources, on the skin surface and inside the biological body, are considered by using step, sinusoidal and constant heating. The mathematical simulations describe a non-Fourier process. An exact solution for the corresponding non-Fourier bioheat transfer model, which has a time lag in its heat flux, is proposed using strongly continuous semigroup theory in conjunction with variational methods. The abstract differential equation, infinitesimal generator and corresponding strongly continuous semigroup are proposed. It is proved that the related semigroup is a contraction semigroup and is exponentially stable. Mathematical simulations are carried out for skin burning and thermal therapy in 10 different models and the related solutions are depicted. Unlike numerical solutions, which can suffer from physically uncertain results, the proposed analytical solutions do not have unwanted numerical oscillations. Copyright © 2015 Elsevier Ltd. All rights reserved.
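The semigroup argument sketched in the abstract follows a standard pattern: the model is recast as an abstract Cauchy problem and the Lumer-Phillips theorem supplies the contraction property. The symbols below are generic, not the paper's notation:

```latex
\frac{du}{dt} = \mathcal{A}\,u(t), \quad u(0) = u_0,
\qquad\Longrightarrow\qquad
u(t) = T(t)\,u_0, \quad T(t) = e^{t\mathcal{A}} .
```

If $\mathcal{A}$ is densely defined and dissipative, i.e. $\mathrm{Re}\,\langle \mathcal{A}u, u\rangle \le 0$, and $I-\mathcal{A}$ is surjective, then $\mathcal{A}$ generates a strongly continuous contraction semigroup with $\|T(t)\|\le 1$; exponential stability is the stronger bound $\|T(t)\|\le M e^{-\omega t}$ for some $M\ge 1$ and $\omega>0$.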
ERIC Educational Resources Information Center
Wang, Tianyou
2008-01-01
Von Davier, Holland, and Thayer (2004) laid out a five-step framework of test equating that can be applied to various data collection designs and equating methods. In the continuization step, they presented an adjusted Gaussian kernel method that preserves the first two moments. This article proposes an alternative continuization method that…
Graphical Tools for Linear Structural Equation Modeling
2014-06-01
equally valuable in their bias-reduction potential (Pearl and Paz, 2010). This problem pertains to prediction tasks as well. A researcher wishing to predict...in the regression Y = αX + β1Z1 + ... + βnZn + ε, or equivalently, when does βYX.Z = βYX.W? Here we adapt Theorem 3 in (Pearl and Paz, 2010) for linear SEMs...Identification of Causal Mediation," (R-389). Pearl, J. and Paz, A. (2010). Confounding equivalence in causal inference. In Proceedings of the Twenty
Neural network models for Linear Programming
Culioli, J.C.; Protopopescu, V.; Britton, C.; Ericson, N.
1989-01-01
The purpose of this paper is to present a neural network that solves the general Linear Programming (LP) problem. In the first part, we recall Hopfield and Tank's circuit for LP and show that although it converges to stable states, it does not, in general, yield admissible solutions. This is due to the penalization treatment of the constraints. In the second part, we propose an approach based on Lagrange multipliers that converges to primal and dual admissible solutions. We also show that the duality gap (measuring the optimality) can be rendered, in principle, as small as needed. 11 refs.
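A minimal continuous-time analogue of the Lagrange-multiplier idea is projected primal-dual gradient flow (Arrow-Hurwicz dynamics). The sketch below is an illustration under stated assumptions, not the circuit proposed in the paper; the quadratic augmentation term (rho) is added here purely to damp the primal-dual oscillation:

```python
import numpy as np

def saddle_lp(c, A, b, rho=1.0, dt=0.01, steps=20000):
    """Projected primal-dual gradient flow on the augmented Lagrangian
    L = c.x + lam.(Ax - b) + (rho/2)|Ax - b|^2  for  min c.x, Ax = b, x >= 0.
    Multipliers replace the penalty-only treatment whose constraint
    violations make penalty circuits yield inadmissible states."""
    x = np.zeros(A.shape[1])
    lam = np.zeros(A.shape[0])
    for _ in range(steps):
        r = A @ x - b                                       # constraint residual
        x = np.maximum(0.0, x - dt * (c + A.T @ (lam + rho * r)))  # primal descent
        lam = lam + dt * r                                  # dual ascent
    return x, lam

# min x1 + 2*x2  s.t.  x1 + x2 = 1, x >= 0  ->  x* = (1, 0), lam* = -1
c = np.array([1.0, 2.0])
A = np.array([[1.0, 1.0]])
b = np.array([1.0])
x, lam = saddle_lp(c, A, b)
print(np.round(x, 3), np.round(lam, 3))  # → [1. 0.] [-1.]
```

At convergence the multiplier equals the dual solution, so the duality gap c·x + λ·b vanishes, mirroring the paper's claim that the gap can be made as small as needed.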
Gradient-Stable Linear Time Steps for Phase Field Models
NASA Astrophysics Data System (ADS)
Vollmayr-Lee, Benjamin
2013-03-01
Phase field models, which are nonlinear partial differential equations, are widely used for modeling the dynamics and equilibrium properties of materials. Unfortunately, time marching the equations of motion by explicit methods is usually numerically unstable unless the size of the time step is kept below a lattice-dependent threshold. Consequently, the amount of numerical computation is determined by avoidance of the instability rather than by the natural time scale of the dynamics. This can be a severe overhead. In contrast, a gradient-stable method ensures a decreasing free energy, consistent with the relaxational dynamics of the continuous-time model. Eyre's theorem proved that gradient-stable schemes are possible, and Eyre presented a framework for constructing gradient-stable, semi-implicit time steps for a given phase-field model. Here I present a new theorem that provides a broader class of gradient-stable steps, in particular ones in which the implicit part of the equation is linear. This enables use of fast Fourier transforms to solve for the updated field, providing a considerable advantage in speed and simplicity. Examples will be presented for the Allen-Cahn and Cahn-Hilliard equations, an Ehrlich-Schwoebel-type interface growth model, and block copolymers.
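A linearly implicit step of the kind described can be sketched for the 1-D Allen-Cahn equation: the linear part is inverted exactly with FFTs while the nonlinearity is treated explicitly with a stabilizing shift. The constant S = 2 and all discretization choices below are illustrative assumptions, not the scheme from the abstract:

```python
import numpy as np

def allen_cahn_step(phi, dt, eps=0.05, S=2.0):
    """One linearly stabilized semi-implicit pseudo-spectral step for
    phi_t = eps^2 phi_xx - (phi^3 - phi) on a periodic unit interval.
    The stiff linear part (Laplacian plus stabilizing shift S) is inverted
    exactly in Fourier space; S = 2 is an illustrative choice, not Eyre's
    sharp bound."""
    n = phi.size
    k = 2.0 * np.pi * np.fft.fftfreq(n, d=1.0 / n)      # angular wavenumbers
    nonlinear = (S + 1.0) * phi - phi ** 3              # explicit at time t_n
    phi_hat = (np.fft.fft(phi) + dt * np.fft.fft(nonlinear)) \
              / (1.0 + dt * (eps ** 2 * k ** 2 + S))
    return np.fft.ifft(phi_hat).real

# A smooth low-amplitude profile sharpens into +/-1 domains even with a
# time step roughly two orders of magnitude above the explicit limit.
x = np.arange(128) / 128.0
phi = 0.3 * np.cos(2.0 * np.pi * x)
for _ in range(200):
    phi = allen_cahn_step(phi, dt=0.5)
print(np.abs(phi).max() < 1.2, np.abs(phi).mean() > 0.5)  # → True True
```

Because the implicit part is linear and diagonal in Fourier space, each step costs two FFTs regardless of the time step, which is exactly the speed advantage the abstract emphasizes.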
Applications of the Linear Logistic Test Model in Psychometric Research
ERIC Educational Resources Information Center
Kubinger, Klaus D.
2009-01-01
The linear logistic test model (LLTM) breaks down the item parameter of the Rasch model as a linear combination of some hypothesized elementary parameters. Although the original purpose of applying the LLTM was primarily to generate test items with specified item difficulty, there are still many other potential applications, which may be of use…
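The LLTM decomposition described above can be sketched in a few lines; the weight matrix Q and basic parameters eta below are illustrative, not from the article.

```python
import numpy as np

# LLTM: each Rasch item difficulty is a linear combination of hypothesized
# elementary ("basic") parameters, b_i = sum_k q_ik * eta_k.
Q = np.array([[1, 0, 2],            # item 1 uses operation 1 once, operation 3 twice
              [0, 1, 1],
              [1, 1, 0]])
eta = np.array([0.5, -0.3, 0.8])    # difficulty contributed by each operation
b = Q @ eta                         # item difficulties under the LLTM

def p_correct(theta, b):
    """Rasch probability of a correct response for ability theta."""
    return 1.0 / (1.0 + np.exp(-(theta - b)))

print(b)                            # [2.1, 0.5, 0.2]
print(p_correct(1.0, b))
```

This is how specified item difficulties can be generated by design, the original purpose of the LLTM the abstract mentions: choose the operations an item requires (a row of Q) and its difficulty follows from eta.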
A Model for Quadratic Outliers in Linear Regression.
ERIC Educational Resources Information Center
Elashoff, Janet Dixon; Elashoff, Robert M.
This paper introduces a model for describing outliers (observations which are extreme in some sense or violate the apparent pattern of other observations) in linear regression which can be viewed as a mixture of a quadratic and a linear regression. The maximum likelihood estimators of the parameters in the model are derived and their asymptotic…
Modeling Non-Linear Material Properties in Composite Materials
2016-06-28
Technical Report ARWSB-TR-16013, Modeling Non-Linear Material Properties in Composite Materials. Michael F. Macri; Andrew G... Many systems are increasingly incorporating composite materials into their design, and many of these systems subject the composites to environmental conditions
Modeling local item dependence with the hierarchical generalized linear model.
Jiao, Hong; Wang, Shudong; Kamata, Akihito
2005-01-01
Local item dependence (LID) can emerge when the test items are nested within common stimuli or item groups. This study proposes a three-level hierarchical generalized linear model (HGLM) to model LID when LID is due to such contextual effects. The proposed three-level HGLM was examined by analyzing simulated data sets and was compared with the Rasch-equivalent two-level HGLM that ignores such a nested structure of test items. The results demonstrated that the proposed model could capture LID and estimate its magnitude. Also, the two-level HGLM resulted in larger mean absolute differences between the true and the estimated item difficulties than those from the proposed three-level HGLM. Furthermore, it was demonstrated that the proposed three-level HGLM estimated the ability distribution variance unaffected by the LID magnitude, while the two-level HGLM with no LID consideration increasingly underestimated the ability variance as the LID magnitude increased.
Employment of CB models for non-linear dynamic analysis
NASA Technical Reports Server (NTRS)
Klein, M. R. M.; Deloo, P.; Fournier-Sicre, A.
1990-01-01
The non-linear dynamic analysis of large structures is always very time-, effort- and CPU-consuming. Whenever possible, reducing the size of the mathematical model involved is of main importance to speed up the computational procedures. Such reduction can be performed for the parts of the structure which behave linearly. Most of the time, the classical Guyan reduction process is used. For non-linear dynamic processes where the non-linearity is present at interfaces between different structures, Craig-Bampton models can provide very rich information and allow easy selection of the relevant modes with respect to the phenomenon driving the non-linearity. The paper presents the employment of Craig-Bampton models combined with Newmark direct integration for solving non-linear friction problems appearing at the interface between the Hubble Space Telescope and its solar arrays during in-orbit maneuvers. Theory, implementation in the FEM code ASKA, and practical results are shown.
1993-09-01
Naval Postgraduate School, Monterey, California. Thesis: Mine Avoidance and Localization for Underwater Vehicles Using Continuous Curvature Path Generation and Non-Linear Tracking Control. Subject terms: mine avoidance and localization; autonomous underwater vehicles (AUV); autopilot and guidance of AUV II; sliding mode control.
Latent log-linear models for handwritten digit classification.
Deselaers, Thomas; Gass, Tobias; Heigold, Georg; Ney, Hermann
2012-06-01
We present latent log-linear models, an extension of log-linear models incorporating latent variables, and we propose two applications thereof: log-linear mixture models and image deformation-aware log-linear models. The resulting models are fully discriminative, can be trained efficiently, and the model complexity can be controlled. Log-linear mixture models offer additional flexibility within the log-linear modeling framework. Unlike previous approaches, the image deformation-aware model directly considers image deformations and allows for a discriminative training of the deformation parameters. Both are trained using alternating optimization. For certain variants, convergence to a stationary point is guaranteed and, in practice, even variants without this guarantee converge and find models that perform well. We tune the methods on the USPS data set and evaluate on the MNIST data set, demonstrating the generalization capabilities of our proposed models. Our models, although using significantly fewer parameters, are able to obtain competitive results with models proposed in the literature.
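A hedged sketch of the plain log-linear (multinomial logistic) base model that the latent extensions above build on; the two-class toy data below are made up, not the USPS/MNIST sets used in the paper.

```python
import numpy as np

# Plain log-linear model: p(c|x) proportional to exp(w_c . x), trained
# discriminatively by gradient ascent on the log-likelihood.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-1, 0.5, (50, 2)),   # class 0 cluster
               rng.normal(1, 0.5, (50, 2))])   # class 1 cluster
y = np.repeat([0, 1], 50)
W = np.zeros((2, 2))                           # one weight vector per class

for _ in range(500):
    logits = X @ W
    p = np.exp(logits - logits.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)          # softmax class posteriors
    onehot = np.eye(2)[y]
    W -= 0.1 * X.T @ (p - onehot) / len(X)     # negative log-likelihood gradient

acc = np.mean((X @ W).argmax(axis=1) == y)
print(acc)                                     # near-perfect on separable toy data
```

The mixture and deformation-aware variants in the abstract replace the single score w_c . x with a max or sum over latent alternatives, but the training loop keeps this same discriminative gradient form.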
Linear No-Threshold Model vs. Radiation Hormesis
Doss, Mohan
2013-01-01
The atomic bomb survivor cancer mortality data have been used in the past to justify the use of the linear no-threshold (LNT) model for estimating the carcinogenic effects of low dose radiation. An analysis of the recently updated atomic bomb survivor cancer mortality dose-response data shows that the data no longer support the LNT model but are consistent with a radiation hormesis model when a correction is applied for a likely bias in the baseline cancer mortality rate. If the validity of the phenomenon of radiation hormesis is confirmed in prospective human pilot studies, and is applied to the wider population, it could result in a considerable reduction in cancers. The idea of using radiation hormesis to prevent cancers was proposed more than three decades ago, but was never investigated in humans to determine its validity because of the dominance of the LNT model and the consequent carcinogenic concerns regarding low dose radiation. Since cancer continues to be a major health problem and the age-adjusted cancer mortality rates have declined by only ∼10% in the past 45 years, it may be prudent to investigate radiation hormesis as an alternative approach to reduce cancers. Prompt action is urged. PMID:24298226
Non-linear transformer modeling and simulation
Archer, W.E.; Deveney, M.F.; Nagel, R.L.
1994-08-01
Transformer models for simulation with PSpice and Analogy's Saber are being developed using experimental B-H loop and network analyzer measurements. The models are evaluated for accuracy and convergence using several test circuits. Results are presented which demonstrate the effects on circuit performance of magnetic core losses, eddy currents, and mechanical stress on the magnetic cores.
A Second-Order Conditionally Linear Mixed Effects Model with Observed and Latent Variable Covariates
ERIC Educational Resources Information Center
Harring, Jeffrey R.; Kohli, Nidhi; Silverman, Rebecca D.; Speece, Deborah L.
2012-01-01
A conditionally linear mixed effects model is an appropriate framework for investigating nonlinear change in a continuous latent variable that is repeatedly measured over time. The efficacy of the model is that it allows parameters that enter the specified nonlinear time-response function to be stochastic, whereas those parameters that enter in a…
Temporal-mode continuous-variable cluster states using linear optics
Menicucci, Nicolas C.
2011-06-15
An extensible experimental design for optical continuous-variable cluster states of arbitrary size using four offline (vacuum) squeezers and six beam splitters is presented. This method has all the advantages of a temporal-mode encoding [Phys. Rev. Lett. 104, 250503 (2010)], including finite requirements for coherence and stability even as the computation length increases indefinitely, with none of the difficulty of inline squeezing. The extensibility stems from a construction based on Gaussian projected entangled pair states. The potential for use of this design within a fully fault-tolerant model is discussed.
Model Evaluation of Continuous Data Pharmacometric Models: Metrics and Graphics
Nguyen, THT; Mouksassi, M‐S; Holford, N; Al‐Huniti, N; Freedman, I; Hooker, AC; John, J; Karlsson, MO; Mould, DR; Pérez Ruixo, JJ; Plan, EL; Savic, R; van Hasselt, JGC; Weber, B; Zhou, C; Comets, E
2017-01-01
This article represents the first in a series of tutorials on model evaluation in nonlinear mixed effect models (NLMEMs), from the International Society of Pharmacometrics (ISoP) Model Evaluation Group. Numerous tools are available for evaluation of NLMEM, with a particular emphasis on visual assessment. This first basic tutorial focuses on presenting graphical evaluation tools of NLMEM for continuous data. It illustrates graphs for correct or misspecified models, discusses their pros and cons, and recalls the definition of metrics used. PMID:27884052
A general non-linear multilevel structural equation mixture model
Kelava, Augustin; Brandt, Holger
2014-01-01
In the past two decades latent variable modeling has become a standard tool in the social sciences. In the same period, traditional linear structural equation models have been extended to include non-linear interaction and quadratic effects (e.g., Klein and Moosbrugger, 2000), and multilevel modeling (Rabe-Hesketh et al., 2004). We present a general non-linear multilevel structural equation mixture model (GNM-SEMM) that combines recent semiparametric non-linear structural equation models (Kelava and Nagengast, 2012; Kelava et al., 2014) with multilevel structural equation mixture models (Muthén and Asparouhov, 2009) for clustered and non-normally distributed data. The proposed approach allows for semiparametric relationships at the within and between levels. We present examples from educational science to illustrate different submodels from the general framework. PMID:25101022
Rabbit models for continuous curvilinear capsulorhexis instruction.
Ruggiero, Jason; Keller, Christopher; Porco, Travis; Naseri, Ayman; Sretavan, David W
2012-07-01
To develop a rabbit model for continuous curvilinear capsulorhexis (CCC) instruction. University of California San Francisco, San Francisco, California, USA. Experimental study. Isolated rabbit lenses were immersed in 2% to 8% paraformaldehyde (PFA) fixative from 15 minutes to 6 hours. Rabbit eyes were treated by substituting aqueous with 2% to 4% PFA for 30 minutes to 6 hours, followed by washes with a balanced salt solution. Treated lenses and eyes were held in purpose-designed holders using vacuum. A panel of 6 cataract surgeons with 5 to 15 years of experience performed CCC on treated lenses and eyes and responded to a questionnaire regarding the utility of these models for resident teaching using a 5-item Likert scale. The expert panel found that rabbit lenses treated with increasing amounts of fixative simulated CCC on human lens capsules from the third to the seventh decade of life. The panel also found fixative-treated rabbit eyes to simulate some of the experience of CCC within the human anterior chamber but noted a shallower anterior chamber depth, variation in pupil size, and corneal clouding under some treatment conditions. Experienced cataract surgeons who performed CCC on these rabbit models strongly agreed that isolated rabbit lenses treated with fixative provide a realistic simulation of CCC in human patients and that both models were useful tools for capsulorhexis instruction. Results indicate that rabbit lenses treated with 8% PFA for 15 minutes provide a model with good fidelity for CCC training. No author has a financial or proprietary interest in any material or method mentioned. Copyright © 2012 ASCRS and ESCRS. Published by Elsevier Inc. All rights reserved.
Neural network modelling of non-linear hydrological relationships
NASA Astrophysics Data System (ADS)
Abrahart, R. J.; See, L. M.
2007-09-01
Two recent studies have suggested that neural network modelling offers no worthwhile improvements in comparison to the application of weighted linear transfer functions for capturing the non-linear nature of hydrological relationships. The potential of an artificial neural network to perform simple non-linear hydrological transformations under controlled conditions is examined in this paper. Eight neural network models were developed: four full or partial emulations of a recognised non-linear hydrological rainfall-runoff model; four solutions developed on an identical set of inputs and a calculated runoff coefficient output. The use of different input combinations enabled the competencies of solutions developed on a reduced number of parameters to be assessed. The selected hydrological model had a limited number of inputs and contained no temporal component. The modelling process was based on a set of random inputs that had a uniform distribution and spanned a modest range of possibilities. The initial cloning operations permitted a direct comparison to be performed with the equation-based relationship. It also provided more general information about the power of a neural network to replicate mathematical equations and model modest non-linear relationships. The second group of experiments explored a different relationship that is of hydrological interest; the target surface contained a stronger set of non-linear properties and was more challenging. Linear modelling comparisons were performed against traditional least squares multiple linear regression solutions developed on identical datasets. The reported results demonstrate that neural networks are capable of modelling non-linear hydrological processes and are therefore appropriate tools for hydrological modelling.
ENSO Diversity in Climate Models: A Linear Inverse Modeling Approach
NASA Astrophysics Data System (ADS)
Capotondi, A.; Sardeshmukh, P. D.
2013-12-01
As emphasized in a large recent literature, ENSO events differ in the longitudinal location of the largest sea surface temperature (SST) anomalies along the equator. These differences in peak longitude are associated with different atmospheric teleconnections and global-scale impacts, whose large societal relevance makes it very important to understand the origin and predictability of the various ENSO 'flavors'. In this study we use Linear Inverse Modeling (LIM) to examine ENSO diversity in a 1000-year pre-industrial control integration of the National Center for Atmospheric Research (NCAR) Community Climate System Model version 4 (CCSM4). We choose a pre-industrial control integration for its multi-century duration, and also to examine ENSO diversity in the context of natural variability. The NCAR-CCSM4 has relatively realistic ENSO variability, and a rich spectrum of ENSO diversity, and is thus well suited for studying the origin of ENSO flavors. In particular, the relative frequency of events peaking in the eastern and central equatorial Pacific ('EP' versus 'CP') undergoes inter-decadal modulations in this 1000-yr run. By constructing separate LIMs for the EP and CP epochs, as well as for the entire simulation, we examine to what extent the dominance of a specific ENSO flavor can be attributed to changes in the system dynamics (i.e in the LIM's linear operator) or is merely due to noise. Results from this study provide insights into the predictability of different ENSO types, establish a baseline for assessing ENSO changes due to global warming, and help define new dynamically meaningful ENSO metrics for evaluating climate models.
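A hedged sketch of the core LIM estimation step the study relies on: assume dx/dt = Lx + noise, estimate the lag-tau propagator from covariances, G(tau) = C(tau) C(0)^(-1) = expm(L*tau), and recover L = logm(G)/tau. The 2x2 operator and noise level below are made up for illustration; the actual LIMs operate on SST anomaly fields.

```python
import numpy as np
from scipy.linalg import expm, logm

rng = np.random.default_rng(1)
L_true = np.array([[-0.2, 0.4],
                   [0.0, -0.5]])            # illustrative linear operator
dt, n = 1.0, 100_000
A = expm(L_true * dt)                       # exact one-step propagator
eps = rng.normal(0.0, np.sqrt(0.1), (n, 2))  # white stochastic forcing

x = np.zeros((n, 2))
for t in range(1, n):                       # simulate the linear Markov system
    x[t] = A @ x[t - 1] + eps[t]

C0 = (x[:-1].T @ x[:-1]) / (n - 1)          # zero-lag covariance
C1 = (x[1:].T @ x[:-1]) / (n - 1)           # lag-1 covariance
L_est = logm(C1 @ np.linalg.inv(C0)) / dt   # recovered linear operator
print(np.round(L_est.real, 2))              # close to L_true
```

Fitting separate operators to different epochs, as the abstract describes for the EP- and CP-dominated periods, amounts to repeating this estimation on each sub-record and comparing the resulting L matrices.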
Linear mixed-effects modeling approach to FMRI group analysis
Chen, Gang; Saad, Ziad S.; Britton, Jennifer C.; Pine, Daniel S.; Cox, Robert W.
2013-01-01
Conventional group analysis is usually performed with Student-type t-tests, regression, or standard AN(C)OVA, in which the variance–covariance matrix is presumed to have a simple structure. Some correction approaches are adopted when assumptions about the covariance structure are violated. However, as experiments are designed with different degrees of sophistication, these traditional methods can become cumbersome, or even be unable to handle the situation at hand. For example, most current FMRI software packages have difficulty analyzing the following scenarios at group level: (1) taking within-subject variability into account when there are effect estimates from multiple runs or sessions; (2) continuous explanatory variables (covariates) modeling in the presence of a within-subject (repeated measures) factor, multiple subject-grouping (between-subjects) factors, or a mixture of both; (3) subject-specific adjustments in covariate modeling; (4) group analysis with estimation of the hemodynamic response (HDR) function by multiple basis functions; (5) various cases of missing data in longitudinal studies; and (6) group studies involving family members or twins. Here we present a linear mixed-effects modeling (LME) methodology that extends the conventional group analysis approach to analyze many complicated cases, including the six prototypes delineated above, whose analyses would be otherwise either difficult or unfeasible under traditional frameworks such as AN(C)OVA and the general linear model (GLM). In addition, the strength of the LME framework lies in its flexibility to model and estimate the variance–covariance structures for both random effects and residuals. The intraclass correlation (ICC) values can be easily obtained with an LME model with crossed random effects, even in the presence of confounding fixed effects. The simulations of one prototypical scenario indicate that the LME modeling keeps a balance between the control for false positives and the
Linear and Nonlinear Thinking: A Multidimensional Model and Measure
ERIC Educational Resources Information Center
Groves, Kevin S.; Vance, Charles M.
2015-01-01
Building upon previously developed and more general dual-process models, this paper provides empirical support for a multidimensional thinking style construct comprised of linear thinking and multiple dimensions of nonlinear thinking. A self-report assessment instrument (Linear/Nonlinear Thinking Style Profile; LNTSP) is presented and…
Derivation and definition of a linear aircraft model
NASA Technical Reports Server (NTRS)
Duke, Eugene L.; Antoniewicz, Robert F.; Krambeer, Keith D.
1988-01-01
A linear aircraft model for a rigid aircraft of constant mass flying over a flat, nonrotating earth is derived and defined. The derivation makes no assumptions of reference trajectory or vehicle symmetry. The linear system equations are derived and evaluated along a general trajectory and include both aircraft dynamics and observation variables.
A linear algebra model for quasispecies
NASA Astrophysics Data System (ADS)
García-Pelayo, Ricardo
2002-06-01
We present a simple model of the population genetics of quasispecies. We show that the error catastrophe arises because in biology the mutation rates are almost zero and the mutations themselves are almost neutral. We obtain and discuss previously known results from the point of view of this model. New results are: the fitness of a sequence in terms of its abundance in the quasispecies, a formula for the stable distribution of a quasispecies in which the fitness depends only on the Hamming distance to the master sequence, the time it takes the master sequence to generate a stable quasispecies (such as in infection by a virus) and the fitness of quasispecies.
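The linear-algebra core of such quasispecies models can be sketched in the standard Eigen setting: the stationary quasispecies is the leading eigenvector of W = Q diag(f), where Q is the mutation matrix and f the fitness landscape. The parameters below (sequence length 8, single-peak fitness with advantage sigma = 10) are illustrative, not from the paper.

```python
import numpy as np

nu, sigma = 8, 10.0
n = 2 ** nu
f = np.ones(n); f[0] = sigma              # single fitness peak at the master sequence
idx = np.arange(n)
# Hamming distance between all binary genotypes via XOR popcount
d = np.array([[bin(i ^ j).count("1") for j in idx] for i in idx])

def master_abundance(mu):
    """Master-sequence fraction in the stationary quasispecies at per-site rate mu."""
    Q = mu ** d * (1 - mu) ** (nu - d)    # mutation probabilities between genotypes
    W = Q * f                             # W[i, j] = Q[i, j] * f[j]
    vals, vecs = np.linalg.eig(W)
    v = np.abs(vecs[:, np.argmax(vals.real)].real)
    return (v / v.sum())[0]

for mu in (0.05, 0.25, 0.4):
    print(mu, round(master_abundance(mu), 3))  # collapses past mu ~ ln(sigma)/nu
```

The sharp drop in master-sequence abundance as mu crosses roughly ln(sigma)/nu is the error catastrophe the abstract analyzes.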
Modeling of linear viscoelastic space structures
NASA Astrophysics Data System (ADS)
McTavish, D. J.; Hughes, P. C.
1993-01-01
The GHM Method provides viscoelastic finite elements derived from the commonly used elastic finite elements. Moreover, these GHM elements are used directly and conveniently in second-order structural models just like their elastic counterparts. The forms of the GHM element matrices preserve the definiteness properties usually associated with finite element matrices (the mass matrix is positive definite, the stiffness matrix is nonnegative definite, and the damping matrix is positive semidefinite). In the Laplace domain, material properties are modeled phenomenologically as a sum of second-order rational functions dubbed 'minioscillator' terms. Developed originally as a tool for the analysis of damping in large flexible space structures, the GHM method is applicable to any structure which incorporates viscoelastic materials.
Bond models in linear and nonlinear optics
NASA Astrophysics Data System (ADS)
Aspnes, D. E.
2015-08-01
Bond models, also known as polarizable-point or mechanical models, have a long history in optics, starting with the Clausius-Mossotti relation but more accurately originating with Ewald's largely forgotten work in 1912. These models describe macroscopic phenomena such as dielectric functions and nonlinear-optical (NLO) susceptibilities in terms of the physics that takes place in real space, in real time, on the atomic scale. Their strengths lie in the insights that they provide and the questions that they raise, aspects that are often obscured by quantum-mechanical treatments. Static versions were used extensively in the late 1960s and early 1970s to correlate NLO susceptibilities among bulk materials. Interest in NLO applications revived with the 2002 work of Powell et al., who showed that a fully anisotropic version reduced by more than a factor of 2 the relatively large number of parameters necessary to describe second-harmonic-generation (SHG) data for Si(111)/SiO2 interfaces. Attention now is focused on the exact physical meaning of these parameters, and on the extent to which they represent actual physical quantities.
Modeling plasticity by non-continuous deformation
NASA Astrophysics Data System (ADS)
Ben-Shmuel, Yaron; Altus, Eli
2016-10-01
Plasticity and failure theories are still subjects of intense research. Engineering constitutive models on the macroscale which are based on micro characteristics are very much in need. This study is motivated by the observation that continuum assumptions in plasticity, in which neighbouring material elements are inseparable at all times, are physically impossible, since local detachments, slips and neighbour switching must operate, i.e. non-continuous deformation. The material microstructure is modelled herein by a set of point elements (particles) interacting with their neighbours. Each particle can detach from and/or attach to its neighbours during deformation. Simulations on two-dimensional configurations subjected to a uniaxial compression cycle are conducted. Stochastic heterogeneity is controlled by a single "disorder" parameter. It was found that (a) the macro response resembles typical elasto-plastic behaviour; (b) plastic energy is proportional to the number of detachments; (c) residual plastic strain is proportional to the number of attachments, and (d) volume is preserved, which is consistent with macro plastic deformation. Rigid body displacements of local groups of elements are also observed. Higher disorder decreases the macro elastic moduli and increases the plastic energy. Evolution of anisotropic effects is obtained with no additional parameters.
Failure of Tube Models to Predict the Linear Rheology of Star/Linear Blends
NASA Astrophysics Data System (ADS)
Hall, Ryan; Desai, Priyanka; Kang, Beomgoo; Katzarova, Maria; Huang, Qifan; Lee, Sanghoon; Chang, Taihyun; Venerus, David; Mays, Jimmy; Schieber, Jay; Larson, Ronald
We compare predictions of two of the most advanced versions of the tube model, namely the Hierarchical model by Wang et al. (J. Rheol. 54:223, 2010) and the BOB (branch-on-branch) model by Das et al. (J. Rheol. 50:207-234, 2006), against linear viscoelastic data on blends of monodisperse star and monodisperse linear polybutadiene polymers. The star was carefully synthesized/characterized by temperature gradient interaction chromatography, and rheological data in the high frequency region were obtained through time-temperature superposition. We found massive failures of both the Hierarchical and BOB models to predict the terminal relaxation behavior of the star/linear blends, despite their success in predicting the rheology of the pure star and pure linear. This failure occurred regardless of the choices made concerning constraint release, such as assuming arm retraction in fat or skinny tubes, or allowing for disentanglement relaxation to cut off the constraint release Rouse process at long times. The failures call into question whether constraint release can be described as a combination of constraint release Rouse processes and dynamic tube dilation within a canonical tube model of entanglement interactions.
Measurement of cervical sensorimotor control: the reliability of a continuous linear movement test.
Michiels, Sarah; Hallemans, Ann; Van de Heyning, Paul; Truijen, Steven; Stassijns, Gaetane; Wuyts, Floris; De Hertogh, Willem
2014-10-01
Cervical sensorimotor control (cSMC) is traditionally assessed by head repositioning accuracy (HRA) measurements. A disadvantage of the HRA measurements is their static character and lack of visual feedback. In 2008, Sjölander et al. developed a continuous linear movement test (CLMT). This CLMT uses several kinematic parameters, such as reduced range of motion (ROM), velocity and movement smoothness, to quantify altered sensorimotor functions. To investigate the inter- and intra-rater reliability of a CLMT. Reliability study. Fifty asymptomatic adults were recruited. Five outcome measures were obtained: the time (t) needed to perform one movement, variation in time (var-t), ROM, peak velocity (peak-v) and Jerk index (Cj). A 3D analysis of cervical movements during the CLMT was made using ZEBRIS™. MATLAB™ was used to process data provided by the ZEBRIS™ device. These data were used to calculate ICC or κw-values, depending on the normality of the distribution, using SPSS. The intra-rater reliability shows slight to moderate agreement for t (ICC: 0.19-0.42 and κw: 0.42) and peak-v (κw: 0.27-0.47), moderate to substantial agreement for var-t (ICC: 0.54-0.73) and ROM (ICC: 0.43-0.65) and fair to substantial agreement for Cj (κw: 0.27-0.69). The inter-rater reliability shows moderate to almost perfect agreement for t (ICC: 0.54-0.93), almost perfect agreement for var-t (κw: 0.81-0.96) and ROM (ICC: 0.86-0.95), slight to moderate agreement for peak-v (κw: -0.03-0.44) and slight to fair agreement for Cj (κw: 0.00-0.31). Time and ROM are presently the most reliable outcome measures. However, it must be noted that the discriminant validity of the time parameters needs further investigation. Copyright © 2014 Elsevier Ltd. All rights reserved.
Model calibration in the continual reassessment method.
Lee, Shing M; Ying Kuen Cheung
2009-06-01
The continual reassessment method (CRM) is an adaptive model-based design used to estimate the maximum tolerated dose in dose finding clinical trials. A way to evaluate the sensitivity of a given CRM model including the functional form of the dose-toxicity curve, the prior distribution on the model parameter, and the initial guesses of toxicity probability at each dose is using indifference intervals. While the indifference interval technique provides a succinct summary of model sensitivity, there are infinitely many possible ways to specify the initial guesses of toxicity probability. In practice, these are generally specified by trial and error through extensive simulations. By using indifference intervals, the initial guesses used in the CRM can be selected by specifying a range of acceptable toxicity probabilities in addition to the target probability of toxicity. An algorithm is proposed for obtaining the indifference interval that maximizes the average percentage of correct selection across a set of scenarios of true probabilities of toxicity and providing a systematic approach for selecting initial guesses in a much less time-consuming manner than the trial-and-error method. The methods are compared in the context of two real CRM trials. For both trials, the initial guesses selected by the proposed algorithm had similar operating characteristics as measured by percentage of correct selection, average absolute difference between the true probability of the dose selected and the target probability of toxicity, percentage treated at each dose and overall percentage of toxicity compared to the initial guesses used during the conduct of the trials which were obtained by trial and error through a time-consuming calibration process. The average percentage of correct selection for the scenarios considered were 61.5 and 62.0% in the lymphoma trial, and 62.9 and 64.0% in the stroke trial for the trial-and-error method versus the proposed approach. We only present
An insight into linear quarter car model accuracy
NASA Astrophysics Data System (ADS)
Maher, Damien; Young, Paul
2011-03-01
The linear quarter car model is the most widely used suspension system model. A number of authors have expressed doubts about its accuracy in predicting the movement of a complex nonlinear suspension system. In this investigation, a quarter car rig, designed to mimic the popular MacPherson strut suspension system, is subjected to narrowband excitation at a range of frequencies using a motor-driven cam. Linear and nonlinear quarter car simulations of the rig are developed. Both isolated and operational testing techniques are used to characterise the individual suspension system components. Simulations carried out using the linear and nonlinear models are compared to measured data from the suspension test rig at selected excitation frequencies. Results show that the linear quarter car model provides a reasonable approximation of unsprung mass acceleration but significantly overpredicts the magnitude of sprung mass acceleration. The nonlinear simulation, featuring a trilinear shock absorber model and a nonlinear tyre, produces results that are significantly more accurate than the linear simulation results. The effect of tyre damping on the nonlinear model is also investigated for narrowband excitation. It is found to reduce the magnitude of unsprung mass acceleration peaks and to contribute to an overall improvement in simulation accuracy.
The General Linear Model and Direct Standardization: A Comparison.
ERIC Educational Resources Information Center
Little, Roderick J. A.; Pullum, Thomas W.
1979-01-01
Two methods of analyzing nonorthogonal (uneven cell sizes) cross-classified data sets are compared. The methods are direct standardization and the general linear model. The authors illustrate when direct standardization may be a desirable method of analysis. (JKS)
Hierarchical Linear Modeling in Salary-Equity Studies.
ERIC Educational Resources Information Center
Loeb, Jane W.
2003-01-01
Provides information on how hierarchical linear modeling can be used as an alternative to multiple regression analysis for conducting salary-equity studies. Salary data are used to compare and contrast the two approaches. (EV)
Dilatonic non-linear sigma models and Ricci flow extensions
NASA Astrophysics Data System (ADS)
Carfora, M.; Marzuoli, A.
2016-09-01
We review our recent work describing, in terms of the Wasserstein geometry over the space of probability measures, the embedding of the Ricci flow in the renormalization group flow for dilatonic non-linear sigma models.
ERIC Educational Resources Information Center
Song, Xin-Yuan; Lee, Sik-Yum
2006-01-01
Structural equation models are widely appreciated in social-psychological research and other behavioral research to model relations between latent constructs and manifest variables and to control for measurement error. Most applications of SEMs are based on fully observed continuous normal data and models with a linear structural equation.…
Linear and non-linear chemometric modeling of THM formation in Barcelona's water treatment plant.
Platikanov, Stefan; Martín, Jordi; Tauler, Romà
2012-08-15
The complex dependence of trihalomethane formation on forty-one water treatment plant (WTP) operational variables is investigated by means of linear and non-linear regression methods, including kernel partial least squares (K-PLS) and support vector machine regression (SVR). These two methods gave lower prediction errors for total trihalomethane concentrations (below 14% for external validation samples) than linear regression methods. A new visualization technique revealed the complex nonlinear relationships among the operational variables and displayed the correlations between the input variables and the kernel matrix on one side and the support vectors on the other. Whereas some water treatment plant variables, such as river water TOC, chloride concentration, and breakpoint chlorination, were not considered significant in straight linear regression modeling because of multi-collinearity, they were confirmed to be significant using the K-PLS and SVR non-linear regression methods, demonstrating the better performance of these methods for predicting the complex formation of trihalomethanes in water disinfection plants. Copyright © 2012 Elsevier B.V. All rights reserved.
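The paper's K-PLS and SVR models are beyond a short sketch, but the core idea, kernelizing a linear regression so that it captures nonlinear dependence, can be illustrated with plain kernel ridge regression on synthetic data (all names and values below are hypothetical, not from the paper):

```python
import numpy as np

# Hypothetical 1-D data with a nonlinear response, standing in for the
# WTP operational variables discussed in the abstract.
rng = np.random.default_rng(0)
X = np.linspace(-2.0, 2.0, 80)[:, None]
y = np.sin(2.0 * X[:, 0]) + 0.05 * rng.standard_normal(80)

def rbf_kernel(A, B, gamma=1.0):
    """Gaussian (RBF) kernel matrix between two sets of row samples."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

# Kernel ridge regression: alpha = (K + lam*I)^-1 y
lam = 1e-3
K = rbf_kernel(X, X)
alpha = np.linalg.solve(K + lam * np.eye(len(X)), y)
y_kernel = K @ alpha

# Ordinary linear least squares for comparison.
A = np.hstack([X, np.ones_like(X)])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
y_linear = A @ coef

rmse = lambda e: float(np.sqrt(np.mean(e ** 2)))
print(rmse(y - y_kernel), rmse(y - y_linear))
```

On a sinusoidal response the linear fit cannot follow the curvature, while the kernel model can, mirroring the linear-versus-nonlinear gap the abstract reports.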
Model checking for linear temporal logic: An efficient implementation
NASA Technical Reports Server (NTRS)
Sherman, Rivi; Pnueli, Amir
1990-01-01
This report provides evidence to support the claim that model checking for linear temporal logic (LTL) is practically efficient. Two implementations of a linear temporal logic model checker are described. One is based on transforming the model checking problem into a satisfiability problem; the other checks an LTL formula for a finite model by computing the cross-product of the finite state transition graph of the program with a structure containing all possible models for the property. An experiment was carried out with a set of mutual exclusion algorithms, testing safety and liveness under fairness for these algorithms.
Modeling Compton Scattering in the Linear Regime
NASA Astrophysics Data System (ADS)
Kelmar, Rebeka
2016-09-01
Compton scattering is the collision of photons and electrons. This collision causes the photons to be scattered with increased energy and can therefore produce high-energy photons. Such high-energy photons can be used in many other fields, including phase-contrast medical imaging and X-ray structure determination. Compton scattering is currently well understood for low-energy collisions; however, in order to accurately compute spectra of backscattered photons at higher energies, relativistic considerations must be included in the calculations. The focus of this work is to adapt a current program for calculating Compton backscattered radiation spectra to improve its efficiency. This was done by first translating the program from Matlab to Python. The next step was to implement a more efficient adaptive integration to replace the trapezoidal method. A new program was produced that runs in less than half the time of the original. This is important because it allows for quicker analysis and sets the stage for further optimization. The programs were developed using just one particle, while in reality thousands of particles are involved in these collisions, which makes a more efficient program essential to running these simulations. The development of this new and efficient program will lead to accurate modeling of Compton sources as well as their improved performance.
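As a rough sketch of the kind of improvement the abstract describes, here is a generic recursive adaptive Simpson integrator compared against a fixed-step trapezoidal rule on a test integral (this is an illustrative stand-in, not the paper's actual code):

```python
import math

def trapezoid(f, a, b, n):
    """Fixed-step trapezoidal rule with n panels."""
    h = (b - a) / n
    s = 0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, n))
    return s * h

def adaptive_simpson(f, a, b, tol=1e-8):
    """Recursive adaptive Simpson quadrature: refine only where needed."""
    def simpson(fa, fm, fb, a, b):
        return (b - a) / 6.0 * (fa + 4.0 * fm + fb)

    def recurse(a, b, fa, fm, fb, whole, tol):
        m = 0.5 * (a + b)
        lm, rm = 0.5 * (a + m), 0.5 * (m + b)
        flm, frm = f(lm), f(rm)
        left = simpson(fa, flm, fm, a, m)
        right = simpson(fm, frm, fb, m, b)
        if abs(left + right - whole) < 15.0 * tol:
            return left + right + (left + right - whole) / 15.0
        return (recurse(a, m, fa, flm, fm, left, tol / 2)
                + recurse(m, b, fm, frm, fb, right, tol / 2))

    m = 0.5 * (a + b)
    fa, fm, fb = f(a), f(m), f(b)
    return recurse(a, b, fa, fm, fb, simpson(fa, fm, fb, a, b), tol)

# The integral of sin over [0, pi] is exactly 2.
exact = 2.0
err_trap = abs(trapezoid(math.sin, 0.0, math.pi, 64) - exact)
err_adapt = abs(adaptive_simpson(math.sin, 0.0, math.pi) - exact)
print(err_trap, err_adapt)
```

The adaptive scheme reaches far smaller error for comparable work because it subdivides only where the local error estimate demands it.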
Error control of iterative linear solvers for integrated groundwater models.
Dixon, Matthew F; Bai, Zhaojun; Brush, Charles F; Chung, Francis I; Dogrul, Emin C; Kadir, Tariq N
2011-01-01
An open problem that arises when using modern iterative linear solvers, such as the preconditioned conjugate gradient method or the Generalized Minimum RESidual (GMRES) method, is how to choose the residual tolerance in the linear solver to be consistent with the tolerance on the solution error. This problem is especially acute for integrated groundwater models, which are implicitly coupled to another model, such as a surface water model, and resolve both multiple scales of flow and temporal interaction terms, giving rise to linear systems with variable scaling. This article uses the theory of "forward error bound estimation" to explain the correspondence between the residual error in the preconditioned linear system and the solution error. Using examples of linear systems from models developed by the US Geological Survey and the California State Department of Water Resources, we observe that this error bound guides the choice of a practical measure for controlling the error in linear systems. We implemented a preconditioned GMRES algorithm and benchmarked it against the Successive Over-Relaxation (SOR) method, the most widely known iterative solver for nonsymmetric coefficient matrices. With forward error control, GMRES can easily replace the SOR method in legacy groundwater modeling packages, resulting in overall simulation speedups as large as 7.74×. This research is expected to broadly impact groundwater modelers through the demonstration of a practical and general approach for setting the residual tolerance in line with the solution error tolerance and the presentation of GMRES performance benchmarking results.
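The forward error bound at work here is the standard inequality ||x − x̂|| / ||x|| ≤ κ(A) · ||r|| / ||b||, which links the residual an iterative solver controls to the solution error one actually cares about. A minimal numerical check, using a hypothetical small matrix rather than a groundwater system:

```python
import numpy as np

# Hypothetical well-scaled test system standing in for a groundwater matrix.
rng = np.random.default_rng(1)
A = rng.standard_normal((50, 50)) + 5.0 * np.eye(50)
x_true = rng.standard_normal(50)
b = A @ x_true

# Perturb the exact solution to mimic an iterative solve stopped early.
x_hat = x_true + 1e-6 * rng.standard_normal(50)

r = b - A @ x_hat                 # residual of the approximate solution
kappa = np.linalg.cond(A)         # 2-norm condition number of A
rel_err = np.linalg.norm(x_true - x_hat) / np.linalg.norm(x_true)
bound = kappa * np.linalg.norm(r) / np.linalg.norm(b)
print(rel_err, bound)
```

The bound holds by construction (x − x̂ = A⁻¹r and ||b|| ≤ ||A||·||x||), so a residual tolerance divided by κ(A) gives a defensible solution-error tolerance, which is the practical recipe the article formalizes.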
Hierarchical Generalized Linear Models for the Analysis of Judge Ratings
ERIC Educational Resources Information Center
Muckle, Timothy J.; Karabatsos, George
2009-01-01
It is known that the Rasch model is a special two-level hierarchical generalized linear model (HGLM). This article demonstrates that the many-faceted Rasch model (MFRM) is also a special case of the two-level HGLM, with a random intercept representing examinee ability on a test, and fixed effects for the test items, judges, and possibly other…
Assessing Developmental Hypotheses with Cross Classified Data: Log Linear Models.
ERIC Educational Resources Information Center
Lehrer, Richard
Log linear models are proposed for the analysis of structural relations among multidimensional developmental contingency tables. Models of quasi-independence are suggested for testing specific hypothesized patterns of development. Transitions in developmental categorizations are described by Markov models applied to successive contingency tables. A…
A hierarchical linear model for tree height prediction.
Vicente J. Monleon
2003-01-01
Measuring tree height is a time-consuming process. Often, tree diameter is measured and height is estimated from a published regression model. Trees used to develop these models are clustered into stands, but this structure is ignored and independence is assumed. In this study, hierarchical linear models that account explicitly for the clustered structure of the data...
The Use of the Linear Mixed Model in Human Genetics.
Dandine-Roulland, Claire; Perdry, Hervé
2015-01-01
We give a short but detailed review of the methods used to deal with linear mixed models (restricted likelihood, AIREML algorithm, best linear unbiased predictors, etc.), with a few original points. Then we describe three common applications of the linear mixed model in contemporary human genetics: association testing (pathways analysis or rare variants association tests), genomic heritability estimates, and correction for population stratification in genome-wide association studies. We also consider the performance of best linear unbiased predictors for prediction in this context, through a simulation study for rare variants in a short genomic region, and through a short theoretical development for genome-wide data. For each of these applications, we discuss the relevance and the impact of modeling genetic effects as random effects. © 2016 S. Karger AG, Basel.
Phase II monitoring of auto-correlated linear profiles using linear mixed model
NASA Astrophysics Data System (ADS)
Narvand, A.; Soleimani, P.; Raissi, Sadigh
2013-05-01
In many circumstances, the quality of a process or product is best characterized by a given mathematical function between a response variable and one or more explanatory variables, typically referred to as a profile. There have been some investigations of monitoring auto-correlated linear and nonlinear profiles in recent years. In the present paper, we use linear mixed models to account for autocorrelation within observations gathered in Phase II of the monitoring process. We assume that the structure of correlated linear profiles simultaneously has both random and fixed effects. The work employs a Hotelling's T² statistic, a multivariate exponentially weighted moving average (MEWMA) chart, and a multivariate cumulative sum (MCUSUM) control chart to monitor the process. We also compared their performances in terms of the average run length criterion and showed that the proposed control chart schemes can act effectively in detecting shifts in process parameters. Finally, the results are applied to a real case study in an agricultural field.
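For reference, the Hotelling's T² statistic underlying the first of the three charts is a Mahalanobis-type distance from the in-control mean; the chart signals when it exceeds a control limit. A toy sketch with hypothetical in-control parameters and observations:

```python
import numpy as np

# Hypothetical in-control parameters for a p-dimensional profile statistic.
p = 3
mu0 = np.zeros(p)
sigma0_inv = np.linalg.inv(np.eye(p))   # identity covariance for simplicity

def hotelling_t2(x, mu, sinv):
    """Hotelling's T^2 distance of an observation from the in-control mean."""
    d = x - mu
    return float(d @ sinv @ d)

x_in = np.array([0.2, -0.1, 0.3])    # typical in-control observation
x_out = np.array([4.1, 3.8, 4.4])    # observation after a large mean shift

t2_in = hotelling_t2(x_in, mu0, sigma0_inv)
t2_out = hotelling_t2(x_out, mu0, sigma0_inv)
print(t2_in, t2_out)   # the shifted observation sits far above a control limit
```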
Estimation of the linear mixed integrated Ornstein–Uhlenbeck model
Hughes, Rachael A.; Kenward, Michael G.; Sterne, Jonathan A. C.; Tilling, Kate
2017-01-01
The linear mixed model with an added integrated Ornstein–Uhlenbeck (IOU) process (linear mixed IOU model) allows for serial correlation and estimation of the degree of derivative tracking. It is rarely used, partly due to the lack of available software. We implemented the linear mixed IOU model in Stata and using simulations we assessed the feasibility of fitting the model by restricted maximum likelihood when applied to balanced and unbalanced data. We compared different (1) optimization algorithms, (2) parameterizations of the IOU process, (3) data structures and (4) random-effects structures. Fitting the model was practical and feasible when applied to large and moderately sized balanced datasets (20,000 and 500 observations), and large unbalanced datasets with (non-informative) dropout and intermittent missingness. Analysis of a real dataset showed that the linear mixed IOU model was a better fit to the data than the standard linear mixed model (i.e. independent within-subject errors with constant variance). PMID:28515536
Generation of linear dynamic models from a digital nonlinear simulation
NASA Technical Reports Server (NTRS)
Daniele, C. J.; Krosel, S. M.
1979-01-01
The results and methodology used to derive linear models from a nonlinear simulation are presented. It is shown that averaged positive and negative perturbations in the state variables can reduce numerical errors in finite difference, partial derivative approximations and, in the control inputs, can better approximate the system response in both directions about the operating point. Both explicit and implicit formulations are addressed. Linear models are derived for the F100 engine, and comparisons of transients are made with the nonlinear simulation. The problem of startup transients in the nonlinear simulation in making these comparisons is addressed. Reduction of the linear models is also investigated using the modal and normal techniques. Reduced-order models of the F100 are derived and compared with the full-state models.
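The averaged positive/negative perturbation described here is a central difference, which cancels the leading error term of a one-sided difference when building a finite-difference Jacobian. A generic sketch (the function below is a hypothetical nonlinear system, not the F100 model):

```python
import numpy as np

def f(x):
    """Hypothetical nonlinear system right-hand side."""
    return np.array([np.sin(x[0]) + x[1] ** 2, x[0] * x[1]])

def jacobian(f, x, h=1e-5, central=True):
    """Finite-difference Jacobian; 'central' averages +/- perturbations."""
    fx = f(x)
    J = np.zeros((len(fx), len(x)))
    for j in range(len(x)):
        e = np.zeros(len(x))
        e[j] = h
        if central:
            J[:, j] = (f(x + e) - f(x - e)) / (2.0 * h)   # O(h^2) error
        else:
            J[:, j] = (f(x + e) - fx) / h                  # O(h) error
    return J

x0 = np.array([0.5, 1.0])
J_exact = np.array([[np.cos(0.5), 2.0],
                    [1.0,         0.5]])
err_c = np.abs(jacobian(f, x0, central=True) - J_exact).max()
err_f = np.abs(jacobian(f, x0, central=False) - J_exact).max()
print(err_c, err_f)
```

The central scheme is markedly more accurate at the same step size, which is the numerical-error reduction the abstract attributes to averaging positive and negative perturbations.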
Complex dynamics in the Oregonator model with linear delayed feedback
NASA Astrophysics Data System (ADS)
Sriram, K.; Bernard, S.
2008-06-01
The Belousov-Zhabotinsky (BZ) reaction can display a rich dynamics when a delayed feedback is applied. We used the Oregonator model of the oscillating BZ reaction to explore the dynamics brought about by a linear delayed feedback. The time-delayed feedback can generate a succession of complex dynamics: period-doubling bifurcation route to chaos; amplitude death; fat, wrinkled, fractal, and broken tori; and mixed-mode oscillations. We observed that this dynamics arises due to a delay-driven transition, or toggling of the system between large and small amplitude oscillations, through a canard bifurcation. We used a combination of numerical bifurcation continuation techniques and other numerical methods to explore the dynamics in the strength of feedback-delay space. We observed that the period-doubling and quasiperiodic route to chaos span a low-dimensional subspace, perhaps due to the trapping of the trajectories in the small amplitude regime near the canard; and the trapped chaotic trajectories get ejected from the small amplitude regime due to a crowding effect to generate chaotic-excitable spikes. We also qualitatively explained the observed dynamics by projecting a three-dimensional phase portrait of the delayed dynamics on the two-dimensional nullclines. This is the first instance in which it is shown that the interaction of delay and canard can bring about complex dynamics.
NASA Technical Reports Server (NTRS)
Halyo, N.; Caglayan, A. K.
1976-01-01
This paper considers the control of a continuous linear plant disturbed by white plant noise when the control is constrained to be a piecewise constant function of time; i.e. a stochastic sampled-data system. The cost function is the integral of quadratic error terms in the state and control, thus penalizing errors at every instant of time while the plant noise disturbs the system continuously. The problem is solved by reducing the constrained continuous problem to an unconstrained discrete one. It is shown that the separation principle for estimation and control still holds for this problem when the plant disturbance and measurement noise are Gaussian.
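The reduction from the constrained continuous problem to an unconstrained discrete one rests on exact zero-order-hold discretization of the plant between sampling instants. A scalar sketch with hypothetical plant constants (not the paper's system):

```python
import math

# Scalar continuous plant dx/dt = a*x + b*u, input held constant over each
# sample period T (zero-order hold). Hypothetical values:
a, b, T = -2.0, 1.0, 0.1
a_d = math.exp(a * T)                     # discrete state coefficient
b_d = b * (math.exp(a * T) - 1.0) / a     # discrete input coefficient

def step(x, u):
    """One sampled-data step of the exactly discretized plant."""
    return a_d * x + b_d * u

# Cross-check against a fine Euler integration of the continuous plant
# with the input held constant over [0, T]:
x0, u = 1.0, 0.5
n = 100000
h = T / n
x_euler = x0
for _ in range(n):
    x_euler += h * (a * x_euler + b * u)

print(step(x0, u), x_euler)
```

Because the discretization is exact rather than approximate, the discrete problem inherits the continuous cost structure without modeling error at the sample points.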
NASA Astrophysics Data System (ADS)
Beardsell, Alec; Collier, William; Han, Tao
2016-09-01
There is a trend in the wind industry towards ever larger and more flexible turbine blades. Blade tip deflections in modern blades now commonly exceed 10% of blade length. Historically, the dynamic response of wind turbine blades has been analysed using linear models of blade deflection which include the assumption of small deflections. For modern flexible blades, this assumption is becoming less valid. In order to continue to simulate dynamic turbine performance accurately, routine use of non-linear models of blade deflection may be required. This can be achieved by representing the blade as a connected series of individual flexible linear bodies - referred to in this paper as the multi-part approach. In this paper, Bladed is used to compare load predictions using single-part and multi-part blade models for several turbines. The study examines the impact on fatigue and extreme loads and blade deflection through reduced sets of load calculations based on IEC 61400-1 ed. 3. Damage equivalent load changes of up to 16% and extreme load changes of up to 29% are observed at some turbine load locations. It is found that there is no general pattern in the loading differences observed between single-part and multi-part blade models. Rather, changes in fatigue and extreme loads with a multi-part blade model depend on the characteristics of the individual turbine and blade. Key underlying causes of damage equivalent load change are identified as differences in edgewise-torsional coupling between the multi-part and single-part models, and increased edgewise rotor mode damping in the multi-part model. Similarly, a causal link is identified between torsional blade dynamics and changes in ultimate load results.
Phylogenetic mixtures and linear invariants for equal input models.
Casanellas, Marta; Steel, Mike
2017-04-01
The reconstruction of phylogenetic trees from molecular sequence data relies on modelling site substitutions by a Markov process, or a mixture of such processes. In general, allowing mixed processes can result in different tree topologies becoming indistinguishable from the data, even for infinitely long sequences. However, when the underlying Markov process supports linear phylogenetic invariants, then provided these are sufficiently informative, the identifiability of the tree topology can be restored. In this paper, we investigate a class of processes that support linear invariants once the stationary distribution is fixed, the 'equal input model'. This model generalizes the 'Felsenstein 1981' model (and thereby the Jukes-Cantor model) from four states to an arbitrary number of states (finite or infinite), and it can also be described by a 'random cluster' process. We describe the structure and dimension of the vector spaces of phylogenetic mixtures and of linear invariants for any fixed phylogenetic tree (and for all trees-the so called 'model invariants'), on any number n of leaves. We also provide a precise description of the space of mixtures and linear invariants for the special case of [Formula: see text] leaves. By combining techniques from discrete random processes and (multi-) linear algebra, our results build on a classic result that was first established by James Lake (Mol Biol Evol 4:167-191, 1987).
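Under the equal input model, the substitution rate into state j is proportional to the stationary frequency π_j regardless of the current state (with all π_j equal, this reduces via the Felsenstein 1981 model to Jukes-Cantor). A small sketch of the rate matrix and its stationarity property, with hypothetical frequencies:

```python
# Equal input model on k states with stationary distribution pi:
# the rate of substituting into state j is pi_j, independent of the
# current state i. Frequencies below are hypothetical.
pi = [0.1, 0.2, 0.3, 0.4]
k = len(pi)

# Off-diagonal q_ij = pi_j; diagonal pi_i - 1 makes each row sum to zero,
# since the off-diagonal entries of row i sum to 1 - pi_i.
Q = [[pi[j] if i != j else pi[j] - 1.0 for j in range(k)] for i in range(k)]

row_sums = [sum(row) for row in Q]
# pi is stationary: pi Q = 0 componentwise.
piQ = [sum(pi[i] * Q[i][j] for i in range(k)) for j in range(k)]
print(row_sums, piQ)
```

Both checks come out zero: each row sums to (1 − π_i) + (π_i − 1) = 0, and (πQ)_j = π_j(1 − π_j) + π_j(π_j − 1) = 0, confirming π as the stationary distribution once it is fixed, which is the setting in which the model supports linear invariants.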
Computer modeling of batteries from non-linear circuit elements
NASA Technical Reports Server (NTRS)
Waaben, S.; Federico, J.; Moskowitz, I.
1983-01-01
A simple non-linear circuit model for battery behavior is given. It is based on time-dependent features of the well-known PIN charge-storage diode, whose behavior is described by equations similar to those associated with electrochemical cells. The circuit simulation computer program ADVICE was used to predict non-linear response from a topological description of the battery analog built from ADVICE components. By a reasonable choice of one set of parameters, the circuit accurately simulates a wide spectrum of measured non-linear battery responses to within a few millivolts.
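The paper's diode-based ADVICE model is not reproduced here, but the flavor of a time-dependent battery circuit analog can be sketched with a minimal Thevenin-style equivalent circuit: an open-circuit voltage behind a series resistance plus one RC pair for the relaxation response (all parameter values hypothetical):

```python
# Minimal battery equivalent circuit (hypothetical parameters):
# open-circuit voltage V_oc behind series resistance R0, plus one RC
# pair (R1, C1) modeling the time-dependent charge-storage response.
V_oc, R0 = 3.7, 0.05          # volts, ohms
R1, C1 = 0.02, 500.0          # ohms, farads
I_load = 2.0                  # constant discharge current, amps
dt, t_end = 0.1, 60.0         # Euler step and horizon, seconds

v_rc = 0.0                    # voltage across the RC pair
terminal = []
t = 0.0
while t < t_end:
    # RC dynamics: dv/dt = (I - v/R1) / C1
    v_rc += dt * (I_load - v_rc / R1) / C1
    terminal.append(V_oc - I_load * R0 - v_rc)
    t += dt

# v_rc relaxes toward I*R1 with time constant R1*C1 = 10 s, so the
# terminal voltage sags from V_oc - I*R0 toward V_oc - I*(R0 + R1).
print(terminal[0], terminal[-1])
```

The gradual voltage sag after a load step is the kind of time-dependent response the diode-based analog captures with its storage dynamics.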
Recognition of threshold dose model: Avoiding continued excessive regulation
Logan, S.E.
1999-09-01
The purpose of this work is to examine the relationships between radiation dose-response models and associated regulations. The objective of radiation protection regulations is to protect workers and the public from harm resulting from excessive exposure to radiation. The regulations generally stipulate various levels of radiation dose rate to individuals or limit concentrations of radionuclides in releases to water or the atmosphere. The cleanup standards applied in remedial action for contaminated sites limit the concentrations of radionuclides in soil, groundwater, or structures, for release of sites to other uses. The guiding philosophy is that less is better and none is better yet. This has culminated in the concept of as low as reasonably achievable (ALARA). In fact, all regulations currently in place are arbitrarily based on the linear no-threshold hypothesis (LNTH) dose-response relationship. This concept came into use several decades ago and simply assumes that the incidence of health effects observed at a high dose or high dose rate will decrease linearly with dose or dose rate all the way down to zero, with no threshold level. Subsequent data have accumulated and continue to accumulate, demonstrating that there is a threshold level for net damage and, further, that there is a net benefit (radiation hormesis) at levels below the threshold level. It is concluded that recognition of the validity of a threshold model can be done on the basis of presently known data and that changes in regulations should be started at this time to avoid further unnecessary losses due to continued excessive regulation. As results from new research come in, refinement of interim values proposed in revised regulations can be incorporated.
Confirming the Lanchestrian linear-logarithmic model of attrition
Hartley, D.S. III.
1990-12-01
This paper is the fourth in a series of reports on the breakthrough research in historical validation of attrition in conflict. Significant defense policy decisions, including weapons acquisition and arms reduction, are based in part on models of conflict. Most of these models are driven by their attrition algorithms, usually forms of the Lanchester square and linear laws. None of these algorithms have been validated. The results of this paper confirm the results of earlier papers, using a large database of historical results. The homogeneous linear-logarithmic Lanchestrian attrition model is validated to the extent possible with current initial and final force size data and is consistent with the Iwo Jima data. A particular differential linear-logarithmic model is described that fits the data very well. A version of Helmbold's victory predicting parameter is also confirmed, with an associated probability function. 37 refs., 73 figs., 68 tabs.
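The paper's linear-logarithmic law is a variant of the classic Lanchester equations. As a hedged illustration of how such attrition models are posed and checked (not the paper's specific form), here is the square law with its conserved quantity, using hypothetical rates and force sizes:

```python
# Classic Lanchester square law, dx/dt = -b*y, dy/dt = -a*x, as a minimal
# illustration of an attrition model; the paper's linear-logarithmic form
# modifies the dependence on force sizes. a, b are hypothetical
# effectiveness rates; x, y are force strengths.
a, b = 0.8, 0.5
x, y = 100.0, 120.0
dt = 0.001

invariant0 = a * x**2 - b * y**2   # conserved exactly for the square law
for _ in range(2000):              # simulate to t = 2, before either side
    # simultaneous (old-value) Euler update of both forces
    x, y = x + dt * (-b * y), y + dt * (-a * x)

invariant1 = a * x**2 - b * y**2
print(invariant0, invariant1)
```

The near-conservation of a·x² − b·y² under simulation is the kind of structural check such attrition laws admit; its sign also predicts the eventual victor, echoing the victory-predicting parameter mentioned in the abstract.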
Non-Linear Finite Element Modeling of THUNDER Piezoelectric Actuators
NASA Technical Reports Server (NTRS)
Taleghani, Barmac K.; Campbell, Joel F.
1999-01-01
A NASTRAN non-linear finite element model has been developed for predicting the dome heights of THUNDER (THin Layer UNimorph Ferroelectric DrivER) piezoelectric actuators. To analytically validate the finite element model, a comparison was made with a non-linear plate solution using von Kármán's approximation. A 500 volt input was used to examine the actuator deformation. The NASTRAN finite element model was also compared with experimental results. Four groups of specimens were fabricated and tested. Four different input voltages, 120, 160, 200, and 240 Vp-p with a 0 volt offset, were used for this comparison.
Dynamic modeling of electrochemical systems using linear graph theory
NASA Astrophysics Data System (ADS)
Dao, Thanh-Son; McPhee, John
An electrochemical cell is a multidisciplinary system which involves complex chemical, electrical, and thermodynamical processes. The primary objective of this paper is to develop a linear graph-theoretical modeling for the dynamic description of electrochemical systems through the representation of the system topologies. After a brief introduction to the topic and a review of linear graphs, an approach to develop linear graphs for electrochemical systems using a circuitry representation is discussed, followed in turn by the use of the branch and chord transformation techniques to generate the final dynamic equations governing the system. As an example, the application of linear graph theory to modeling a nickel metal hydride (NiMH) battery is presented. Results show that not only is the number of equations reduced significantly, but the linear graph model also simulates faster than the original lumped-parameter model. The approach presented in this paper can be extended to modeling complex systems such as an electric or hybrid electric vehicle, where a battery pack is interconnected with other components in many different domains.
Optical linear algebra processors - Noise and error-source modeling
NASA Technical Reports Server (NTRS)
Casasent, D.; Ghosh, A.
1985-01-01
The modeling of system and component noise and error sources in optical linear algebra processors (OLAPs) is considered, with attention to the frequency-multiplexed OLAP. General expressions are obtained for the output produced as a function of various component errors and noise. A digital simulator for this model is discussed.
Johnson-Neyman Type Technique in Hierarchical Linear Model.
ERIC Educational Resources Information Center
Miyazaki, Yasuo
One of the innovative approaches in the use of hierarchical linear models (HLM) is to use HLM for Slopes as Outcomes models. This implies that the researcher considers that the regression slopes vary from cluster to cluster randomly as well as systematically with certain covariates at the cluster level. Among the covariates, group indicator…
Application Scenarios for Nonstandard Log-Linear Models
ERIC Educational Resources Information Center
Mair, Patrick; von Eye, Alexander
2007-01-01
In this article, the authors have 2 aims. First, hierarchical, nonhierarchical, and nonstandard log-linear models are defined. Second, application scenarios are presented for nonhierarchical and nonstandard models, with illustrations of where these scenarios can occur. Parameters can be interpreted in regard to their formal meaning and in regard…
MULTIVARIATE LINEAR MIXED MODELS FOR MULTIPLE OUTCOMES. (R824757)
We propose a multivariate linear mixed model (MLMM) for the analysis of multiple outcomes, which generalizes the latent variable model of Sammel and Ryan. The proposed model assumes a flexible correlation structure among the multiple outcomes, and allows a global test of the impact of ...
Heuristic and Linear Models of Judgment: Matching Rules and Environments
ERIC Educational Resources Information Center
Hogarth, Robin M.; Karelaia, Natalia
2007-01-01
Much research has highlighted incoherent implications of judgmental heuristics, yet other findings have demonstrated high correspondence between predictions and outcomes. At the same time, judgment has been well modeled in the form of "as if" linear models. Accepting the probabilistic nature of the environment, the authors use statistical tools to…
Locally Dependent Linear Logistic Test Model with Person Covariates
ERIC Educational Resources Information Center
Ip, Edward H.; Smits, Dirk J. M.; De Boeck, Paul
2009-01-01
The article proposes a family of item-response models that allow the separate and independent specification of three orthogonal components: item attribute, person covariate, and local item dependence. Special interest lies in extending the linear logistic test model, which is commonly used to measure item attributes, to tests with embedded item…
Optical linear algebra processors: noise and error-source modeling.
Casasent, D; Ghosh, A
1985-06-01
The modeling of system and component noise and error sources in optical linear algebra processors (OLAP's) is considered, with attention to the frequency-multiplexed OLAP. General expressions are obtained for the output produced as a function of various component errors and noise. A digital simulator for this model is discussed.
Inverse Modelling Problems in Linear Algebra Undergraduate Courses
ERIC Educational Resources Information Center
Martinez-Luaces, Victor E.
2013-01-01
This paper will offer an analysis from a theoretical point of view of mathematical modelling, applications and inverse problems of both causation and specification types. Inverse modelling problems give the opportunity to establish connections between theory and practice and to show this fact, a simple linear algebra example in two different…
Grajeda, Laura M; Ivanescu, Andrada; Saito, Mayuko; Crainiceanu, Ciprian; Jaganath, Devan; Gilman, Robert H; Crabtree, Jean E; Kelleher, Dermott; Cabrera, Lilia; Cama, Vitaliano; Checkley, William
2016-01-01
Childhood growth is a cornerstone of pediatric research. Statistical models need to consider individual trajectories to adequately describe growth outcomes. Specifically, well-defined longitudinal models are essential to characterize both population and subject-specific growth. Linear mixed-effect models with cubic regression splines can account for the nonlinearity of growth curves and provide reasonable estimators of population and subject-specific growth, velocity and acceleration. We provide a stepwise approach that builds from simple to complex models and accounts for the intrinsic complexity of the data. We start with standard cubic spline regression models and build up to a model that includes subject-specific random intercepts and slopes and residual autocorrelation. We then compared cubic regression splines with linear piecewise splines, and with varying numbers and positions of knots. Statistical code is provided to ensure reproducibility and improve dissemination of methods. Models are applied to longitudinal height measurements in a cohort of 215 Peruvian children followed from birth until their fourth year of life. Unexplained variability, as measured by the variance of the regression model, was reduced from 7.34 when using ordinary least squares to 0.81 (p < 0.001) when using a linear mixed-effect model with random slopes and a first-order continuous autoregressive error term. There was substantial heterogeneity in both the intercepts (p < 0.001) and slopes (p < 0.001) of the individual growth trajectories. We also identified important serial correlation within the structure of the data (ρ = 0.66; 95 % CI 0.64 to 0.68; p < 0.001), which we modeled with a first-order continuous autoregressive error term, as evidenced by the variogram of the residuals and by a lack of association among residuals. The final model provides a parametric linear regression equation for both estimation and prediction of population- and individual-level growth
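The cubic regression splines used here can be built, in one common formulation, from a truncated-power basis fitted by least squares. An illustrative sketch on a deterministic toy growth-like curve (not the Peruvian cohort data; knots and values are hypothetical):

```python
import numpy as np

def cubic_spline_basis(t, knots):
    """Truncated-power cubic spline basis: 1, t, t^2, t^3, (t-k)_+^3."""
    cols = [np.ones_like(t), t, t**2, t**3]
    cols += [np.clip(t - k, 0.0, None) ** 3 for k in knots]
    return np.column_stack(cols)

# Toy growth curve: rapid early gain that levels off with age.
age = np.linspace(0.0, 4.0, 60)          # years
height = 50.0 + 30.0 * np.sqrt(age)      # deterministic toy heights, cm

X = cubic_spline_basis(age, knots=[1.0, 2.0, 3.0])
beta, *_ = np.linalg.lstsq(X, height, rcond=None)
resid_spline = height - X @ beta

A = np.column_stack([np.ones_like(age), age])   # straight-line fit
coef, *_ = np.linalg.lstsq(A, height, rcond=None)
resid_line = height - A @ coef

print(float(np.var(resid_spline)), float(np.var(resid_line)))
```

The spline basis drives the residual variance far below the straight-line fit, the same kind of unexplained-variability reduction the abstract reports; the full analysis additionally places random intercepts and slopes on these coefficients per child.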
ERIC Educational Resources Information Center
Tarasenko, Larissa V.; Ougolnitsky, Guennady A.; Usov, Anatoly B.; Vaskov, Maksim A.; Kirik, Vladimir A.; Astoyanz, Margarita S.; Angel, Olga Y.
2016-01-01
A dynamic game theoretic model of concordance of interests in the process of social partnership in the system of continuing professional education is proposed. Non-cooperative, cooperative, and hierarchical setups are examined. An analytical solution for a linear-state version of the model is provided. Nash equilibrium algorithms (for non-cooperative…
Deterministic Equivalent for a Continuous Linear-Convex Stochastic Control Problem.
1987-09-01
adapted process of bounded variation. The running cost is described by a function g(z, t) and the terminal cost by the function G(x). A constant c > 0... U(t), t < T is a right-continuous process of bounded variation. We denote the set of all such processes by A. Let G(z) be a nonnegative continuously
Use of a linearization approximation facilitating stochastic model building.
Svensson, Elin M; Karlsson, Mats O
2014-04-01
The objective of this work was to facilitate the development of nonlinear mixed effects models by establishing a diagnostic method for evaluation of stochastic model components. The random effects investigated were between-subject, between-occasion and residual variability. The method was based on a first-order conditional estimates linear approximation and evaluated on three real datasets with previously developed population pharmacokinetic models. The results were assessed based on the agreement in the difference in objective function value between a basic model and extended models for the standard nonlinear and linearized approaches, respectively. The linearization was found to accurately identify significant extensions of the model's stochastic components, with notably decreased runtimes compared to the standard nonlinear analysis. The observed gain in runtimes varied from four-fold to more than 50-fold, and the largest gains were seen for models with originally long runtimes. This method may be especially useful as a screening tool to detect correlations between random effects, since it substantially quickens the estimation of large variance-covariance blocks. To expedite the application of this diagnostic tool, the linearization procedure has been automated and implemented in the software package PsN.
Generalized linear mixed models for meta-analysis.
Platt, R W; Leroux, B G; Breslow, N
1999-03-30
We examine two strategies for meta-analysis of a series of 2 x 2 tables with the odds ratio modelled as a linear combination of study level covariates and random effects representing between-study variation. Penalized quasi-likelihood (PQL), an approximate inference technique for generalized linear mixed models, and a linear model fitted by weighted least squares to the observed log-odds ratios are used to estimate regression coefficients and dispersion parameters. Simulation results demonstrate that both methods perform adequate approximate inference under many conditions, but that neither method works well in the presence of highly sparse data. Under certain conditions with small cell frequencies the PQL method provides better inference.
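The second of the two strategies above, weighted least squares on observed log-odds ratios, reduces to inverse-variance weighting in the intercept-only (no covariate) case, and can be sketched directly (the 2x2 tables below are invented for illustration):

```python
import numpy as np

# Hypothetical 2x2 tables from five studies:
# columns are [events_trt, nonevents_trt, events_ctl, nonevents_ctl].
tables = np.array([
    [12, 88, 20, 80],
    [ 8, 92, 15, 85],
    [30, 70, 45, 55],
    [ 5, 45, 11, 39],
    [22, 78, 30, 70],
], dtype=float)

a, b, c, d = tables.T
log_or = np.log(a * d / (b * c))   # study-level log odds ratios
var = 1/a + 1/b + 1/c + 1/d        # Woolf variance of each log-OR
w = 1 / var                        # inverse-variance weights

# Weighted least squares with an intercept-only design; study-level
# covariates would enter as extra columns of the design matrix.
pooled = np.sum(w * log_or) / np.sum(w)
se = np.sqrt(1 / np.sum(w))
print(pooled, se)                  # pooled log-OR and its standard error
```

With sparse tables (small cells), these variance estimates degrade, which is the regime where the abstract reports that both this approach and PQL struggle.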
Defining a Family of Cognitive Diagnosis Models Using Log-Linear Models with Latent Variables
ERIC Educational Resources Information Center
Henson, Robert A.; Templin, Jonathan L.; Willse, John T.
2009-01-01
This paper uses log-linear models with latent variables (Hagenaars, in "Loglinear Models with Latent Variables," 1993) to define a family of cognitive diagnosis models. In doing so, the relationship between many common models is explicitly defined and discussed. In addition, because the log-linear model with latent variables is a general model for…
A position-aware linear solid constitutive model for peridynamics
Mitchell, John A.; Silling, Stewart A.; Littlewood, David J.
2015-11-06
A position-aware linear solid (PALS) peridynamic constitutive model is proposed for isotropic elastic solids. The PALS model addresses problems that arise, in ordinary peridynamic material models such as the linear peridynamic solid (LPS), due to incomplete neighborhoods near the surface of a body. We improved model behavior in the vicinity of free surfaces through the application of two influence functions that correspond, respectively, to the volumetric and deviatoric parts of the deformation. Furthermore, the model is position-aware in that the influence functions vary over the body and reflect the proximity of each material point to free surfaces. Demonstration calculations on simple benchmark problems show a sharp reduction in error relative to the LPS model.
PID controller design for trailer suspension based on linear model
NASA Astrophysics Data System (ADS)
Kushairi, S.; Omar, A. R.; Schmidt, R.; Isa, A. A. Mat; Hudha, K.; Azizan, M. A.
2015-05-01
A quarter model of an active trailer suspension system having the characteristics of a double-wishbone type was modeled as a complex multi-body dynamic system in MSC.ADAMS. Due to the complexity of the model, a linearized version is considered in this paper. A model reduction technique is applied to the linear model, resulting in a reduced-order model. Based on this simplified model, a Proportional-Integral-Derivative (PID) controller was designed in the MATLAB/Simulink environment, primarily to reduce excessive roll motion and thus improve ride comfort. Simulation results show that the output signal closely imitates the input signal in multiple cases, demonstrating the effectiveness of the controller.
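The design pattern described above, a PID loop closed around a reduced-order linear plant, can be sketched without Simulink. The plant below is a generic mass-spring-damper stand-in and the gains are illustrative; they are not the suspension model or tuning from the paper.

```python
# Discrete PID loop on a toy second-order plant standing in for a
# reduced-order linear model (illustrative plant and gains).
dt = 0.001
Kp, Ki, Kd = 50.0, 100.0, 2.0

# Plant: m*x'' + c*x' + k*x = F  (mass-spring-damper)
m, c, k = 1.0, 2.0, 20.0
x = v = 0.0
integ = prev_err = 0.0
setpoint = 1.0

for _ in range(10000):            # simulate 10 s with forward Euler
    err = setpoint - x
    integ += err * dt             # integral term accumulates error
    deriv = (err - prev_err) / dt # derivative term on the error signal
    prev_err = err
    F = Kp * err + Ki * integ + Kd * deriv
    a = (F - c * v - k * x) / m   # plant acceleration
    v += a * dt
    x += v * dt

print(abs(x - setpoint) < 0.01)   # output has settled near the setpoint
```

The integral term drives the steady-state error to zero, which is why the closed-loop output "closely imitates the input signal" in step-tracking tests like this one.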
Functional linear models for association analysis of quantitative traits.
Fan, Ruzong; Wang, Yifan; Mills, James L; Wilson, Alexander F; Bailey-Wilson, Joan E; Xiong, Momiao
2013-11-01
Functional linear models are developed in this paper for testing associations between quantitative traits and genetic variants, which can be rare variants or common variants or the combination of the two. By treating multiple genetic variants of an individual in a human population as a realization of a stochastic process, the genome of an individual in a chromosome region is a continuum of sequence data rather than discrete observations. The genome of an individual is viewed as a stochastic function that contains both linkage and linkage disequilibrium (LD) information of the genetic markers. By using techniques of functional data analysis, both fixed and mixed effect functional linear models are built to test the association between quantitative traits and genetic variants adjusting for covariates. After extensive simulation analysis, it is shown that the F-distributed tests of the proposed fixed effect functional linear models have higher power than that of sequence kernel association test (SKAT) and its optimal unified test (SKAT-O) for three scenarios in most cases: (1) the causal variants are all rare, (2) the causal variants are both rare and common, and (3) the causal variants are common. The superior performance of the fixed effect functional linear models is most likely due to its optimal utilization of both genetic linkage and LD information of multiple genetic variants in a genome and similarity among different individuals, while SKAT and SKAT-O only model the similarities and pairwise LD but do not model linkage and higher order LD information sufficiently. In addition, the proposed fixed effect models generate accurate type I error rates in simulation studies. We also show that the functional kernel score tests of the proposed mixed effect functional linear models are preferable in candidate gene analysis and small sample problems. The methods are applied to analyze three biochemical traits in data from the Trinity Students Study.
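The core step of the fixed-effect functional linear model above, treating the genotypes in a region as evaluations of a function of genomic position, expanding that function on a smooth basis, and F-testing the basis coefficients jointly, can be sketched as follows. Everything here is simulated and illustrative (a Fourier basis is one of several choices; positions, effect function, and sample sizes are invented):

```python
import numpy as np

rng = np.random.default_rng(1)
n, p, K = 200, 50, 5            # subjects, variants in the region, basis functions

pos = np.sort(rng.uniform(0, 1, p))             # variant positions scaled to [0, 1]
G = rng.binomial(2, 0.2, (n, p)).astype(float)  # genotype counts (0/1/2)

# Fourier basis evaluated at the variant positions (illustrative choice;
# B-splines are another common option).
funcs = [np.sin, np.cos] * (K // 2)
B = np.column_stack([np.ones(p)] +
                    [f(2 * np.pi * (k // 2 + 1) * pos) for k, f in enumerate(funcs)])

# Project each subject's genotype "function" onto the basis -> K scores.
W = G @ B / p

# Smooth genetic effect function along the region (illustrative).
beta = 0.5 * np.sin(2 * np.pi * pos)
y = G @ beta + rng.normal(0, 1, n)              # quantitative trait

# Fixed-effect functional linear model: F-test of the K scores jointly.
X = np.column_stack([np.ones(n), W])
rss1 = np.sum((y - X @ np.linalg.lstsq(X, y, rcond=None)[0]) ** 2)
rss0 = np.sum((y - y.mean()) ** 2)
F = ((rss0 - rss1) / K) / (rss1 / (n - K - 1))
print(F)   # large F statistic: the smooth effect is captured by the basis
```

The dimension reduction from p variants to K functional scores is what lets the model exploit the ordering and correlation (LD) of nearby variants, rather than treating them as exchangeable as kernel methods do.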
A non linear analytical model of switched reluctance machines
NASA Astrophysics Data System (ADS)
Sofiane, Y.; Tounzi, A.; Piriou, F.
2002-06-01
Nowadays, switched reluctance machines are widely used. To determine their performance and to design control strategies, the linear analytical model is generally used. Unfortunately, it is not very accurate. To obtain accurate modelling results, numerical models based on either the 2D or 3D Finite Element Method are then used. However, this approach is very expensive in terms of computation time; while it is suitable for studying the behaviour of a whole device, it is not, a priori, adapted to the design of control strategies for electrical machines. This paper deals with a non-linear analytical model expressed in terms of variable inductances. The theoretical development of the proposed model is introduced. Then, the model is applied to study the behaviour of a whole controlled switched reluctance machine. The parameters of the structure are identified from a 2D numerical model; they can also be determined from an experimental bench. Finally, the results given by the proposed model are compared to those obtained from the 2D-FEM approach and from the classical linear analytical model.
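For the magnetically linear baseline that the paper's variable-inductance model refines, the phase torque follows from the position dependence of the inductance, T = (1/2) i² dL/dθ. A small numeric sketch (the inductance profile and current below are illustrative, not identified from the paper's FEM data):

```python
import numpy as np

# Reluctance torque from a position-dependent phase inductance,
# T = 0.5 * i^2 * dL/dtheta (linear magnetic circuit; the paper's
# non-linear model additionally accounts for saturation).
theta = np.linspace(0, 2 * np.pi, 1000)     # rotor position, rad
L = 0.08 + 0.05 * np.cos(4 * theta)         # 4-pole inductance variation, H

dL = np.gradient(L, theta)                  # numerical dL/dtheta
i = 10.0                                    # phase current, A
T = 0.5 * i**2 * dL                         # torque, N*m

# Peak torque occurs where |dL/dtheta| is largest, between the aligned
# and unaligned rotor positions.
print(T.max())
```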
Some comparisons of complexity in dictionary-based and linear computational models.
Gnecco, Giorgio; Kůrková, Věra; Sanguineti, Marcello
2011-03-01
Neural networks provide a more flexible approximation of functions than traditional linear regression. In the latter, one can only adjust the coefficients in linear combinations of fixed sets of functions, such as orthogonal polynomials or Hermite functions, while for neural networks, one may also adjust the parameters of the functions which are being combined. However, some useful properties of linear approximators (such as uniqueness, homogeneity, and continuity of best approximation operators) are not satisfied by neural networks. Moreover, optimization of parameters in neural networks becomes more difficult than in linear regression. Experimental results suggest that these drawbacks of neural networks are offset by substantially lower model complexity, allowing accuracy of approximation even in high-dimensional cases. We give some theoretical results comparing requirements on model complexity for two types of approximators, the traditional linear ones and so called variable-basis types, which include neural networks, radial, and kernel models. We compare upper bounds on worst-case errors in variable-basis approximation with lower bounds on such errors for any linear approximator. Using methods from nonlinear approximation and integral representations tailored to computational units, we describe some cases where neural networks outperform any linear approximator.
Continuous-genotype models and assortative mating
Felsenstein, J.
1981-06-01
Feldman and Cavalli-Sforza have argued that the convergence properties of classical models of assortative mating are not known, and that these models involve arbitrary assumptions which assume rather than derive the achievement of equilibrium. A careful consideration of all models shows that the classical models are well defined and seem to achieve their equilibria. The model used by Feldman and Cavalli-Sforza involves an arbitrary assumption. Consideration of the models of Wright, Fisher, Bulmer, and Lande in the context of assortative mating or of selection versus mutation shows that these models are consistent with each other. The treatment of the balance between mutation and normalizing selection by Cavalli-Sforza and Feldman comes to conclusions sharply different from those of other authors, apparently as a result of this same arbitrary assumption.
Multikernel linear mixed models for complex phenotype prediction
Weissbrod, Omer; Geiger, Dan; Rosset, Saharon
2016-01-01
Linear mixed models (LMMs) and their extensions have recently become the method of choice in phenotype prediction for complex traits. However, LMM use to date has typically been limited by assuming simple genetic architectures. Here, we present multikernel linear mixed model (MKLMM), a predictive modeling framework that extends the standard LMM using multiple-kernel machine learning approaches. MKLMM can model genetic interactions and is particularly suitable for modeling complex local interactions between nearby variants. We additionally present MKLMM-Adapt, which automatically infers interaction types across multiple genomic regions. In an analysis of eight case-control data sets from the Wellcome Trust Case Control Consortium and more than a hundred mouse phenotypes, MKLMM-Adapt consistently outperforms competing methods in phenotype prediction. MKLMM is as computationally efficient as standard LMMs and does not require storage of genotypes, thus achieving state-of-the-art predictive power without compromising computational feasibility or genomic privacy. PMID:27302636
Piecewise linear and Boolean models of chemical reaction networks
Veliz-Cuba, Alan; Kumar, Ajit; Josić, Krešimir
2014-01-01
Models of biochemical networks are frequently complex and high-dimensional. Reduction methods that preserve important dynamical properties are therefore essential for their study. Interactions in biochemical networks are frequently modeled using Hill functions (x^n/(J^n + x^n)). Reduced ODEs and Boolean approximations of such model networks have been studied extensively when the exponent n is large. However, while the case of small constant J appears in practice, it is not well understood. We provide a mathematical analysis of this limit, and show that a reduction to a set of piecewise linear ODEs and Boolean networks can be mathematically justified. The piecewise linear systems have closed form solutions that closely track those of the fully nonlinear model. The simpler, Boolean network can be used to study the qualitative behavior of the original system. We justify the reduction using geometric singular perturbation theory and compact convergence, and illustrate the results in network models of a toggle switch and an oscillator. PMID:25412739
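The flavor of the reduction can be seen on the toggle-switch example the abstract mentions: replacing each Hill nonlinearity with a piecewise-constant (Boolean-like) switch preserves the qualitative steady state. The parameters and initial conditions below are illustrative, not taken from the paper's analysis.

```python
import numpy as np

# Toggle switch: two genes repressing each other through a repressive
# Hill function h(x) = J^n / (J^n + x^n). Illustrative parameters.
J, n_hill = 0.1, 4.0

def hill_rep(x):
    return J**n_hill / (J**n_hill + x**n_hill)

def step_rep(x):
    # Piecewise-constant (Boolean) approximation of the repressor.
    return np.where(x < J, 1.0, 0.0)

def simulate(rep, x0, y0, dt=0.01, steps=5000):
    x, y = x0, y0
    for _ in range(steps):           # forward Euler integration
        x += dt * (rep(y) - x)       # production repressed by the other gene
        y += dt * (rep(x) - y)
    return x, y

x_h, y_h = simulate(hill_rep, 0.9, 0.05)
x_s, y_s = simulate(step_rep, 0.9, 0.05)

# Both models settle into the same qualitative state: gene x on, gene y off.
print(x_h > 0.5 and y_h < 0.2, x_s > 0.5 and y_s < 0.2)
```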
Non-linear partially massless symmetry in an SO(1,5) continuation of conformal gravity
NASA Astrophysics Data System (ADS)
Apolo, Luis; Hassan, S. F.
2017-05-01
We construct a non-linear theory of interacting spin-2 fields that is invariant under the partially massless (PM) symmetry to all orders. This theory is based on the SO(1, 5) group, in analogy with the SO(2, 4) formulation of conformal gravity, but has a quadratic spectrum free of ghost instabilities. The action contains a vector field associated with a local SO(2) symmetry which is manifest in the vielbein formulation of the theory. We show that, in a perturbative expansion, the SO(2) symmetry transmutes into the PM transformations of a massive spin-2 field. In this context, the vector field is crucial to circumvent earlier obstructions to an order-by-order construction of PM symmetry. Although the non-linear theory lacks enough first class constraints to remove all helicity-0 modes from the spectrum, the PM transformations survive to all orders. The absence of ghosts and strong coupling effects at the non-linear level are not addressed here.
Lee, Dong-Jin; Lee, Sun-Kyu
2015-01-15
This paper presents a design and control system for an XY stage driven by an ultrasonic linear motor. In this study, a hybrid bolt-clamped Langevin-type ultrasonic linear motor was manufactured and then operated at the resonance frequency of the third longitudinal and the sixth lateral modes. These two modes were matched through the preload adjustment and precisely tuned by the frequency matching method based on the impedance matching method with consideration of the different moving weights. The XY stage was evaluated in terms of position and circular motion. To achieve both fine and stable motion, the controller consisted of a nominal characteristics trajectory following (NCTF) control for continuous motion, dead zone compensation, and a switching controller based on the different NCTFs for the macro- and micro-dynamics regimes. The experimental results showed that the developed stage enables positioning and continuous motion with nanometer-level accuracy.
NASA Astrophysics Data System (ADS)
Costa, Oswaldo L. V.; Fragoso, Marcelo D.
2007-07-01
In this paper we devise a separation principle for the H2 optimal control problem of continuous-time Markov jump linear systems with partial observations and the Markov process taking values in an infinite countable set . We consider that only an output and the jump parameters are available to the controller. It is desired to design a dynamic Markov jump controller such that the closed loop system is stochastically stable and minimizes the H2-norm of the system. As in the case with no jumps, we show that an optimal controller can be obtained from two sets of infinite coupled algebraic Riccati equations, one associated with the optimal control problem when the state variable is available, and the other one associated with the optimal filtering problem. An important feature of our approach, not previously found in the literature, is to introduce an adjoint operator of the continuous-time Markov jump linear system to derive our results.
O'Neill, M.J.; McDanal, A.J.
1986-04-01
The primary objective of this program was to design, develop, and test low-cost, continuous ribbon silicon cells suitable for use in ENTECH's linear Fresnel lens photovoltaic concentrator module. The cells were made by Westinghouse using a dendritic web continuous ribbon process. This program represented the first attempt to adapt dendritic web cell fabrication technology to concentrator applications. ENTECH generated an optimized cell design, which included variable metallization matched to the radiant flux profile of the linear Fresnel lens concentrator. Westinghouse fabricated cells in several sequential production runs. The cells were tested by ENTECH under actual lens illumination to determine their performance parameters. The best cells made under this program achieved peak cell efficiencies of about 14%, compared to about 16% for production cells made by Applied Solar Energy Corporation, using float-zone-refined single-crystal silicon. With additional development, significant performance improvements should be achievable in future dendritic web concentrator cells.
Chen, Haixia; Zhang, Jing
2007-02-15
We propose a scheme for continuous-variable quantum cloning of coherent states with phase-conjugate input modes using linear optics. The quantum cloning machine yields M identical optimal clones from N replicas of a coherent state and N replicas of its phase conjugate. This scheme can be straightforwardly implemented with setups accessible at present, since its optical implementation only employs simple linear optical elements and homodyne detection. Compared with the original scheme for continuous-variable quantum cloning with phase-conjugate input modes proposed by Cerf and Iblisdir [Phys. Rev. Lett. 87, 247903 (2001)], which utilized a nondegenerate optical parametric amplifier, our scheme forgoes the output of the phase-conjugate clones and is therefore regarded as irreversible quantum cloning.
Liang, X B; Si, J
2001-01-01
This paper investigates the existence, uniqueness, and global exponential stability (GES) of the equilibrium point for a large class of neural networks with globally Lipschitz continuous activations including the widely used sigmoidal activations and the piecewise linear activations. The provided sufficient condition for GES is mild and some conditions easily examined in practice are also presented. The GES of neural networks in the case of locally Lipschitz continuous activations is also obtained under an appropriate condition. The analysis results given in the paper extend substantially the existing relevant stability results in the literature, and therefore expand significantly the application range of neural networks in solving optimization problems. As a demonstration, we apply the obtained analysis results to the design of a recurrent neural network (RNN) for solving the linear variational inequality problem (VIP) defined on any nonempty and closed box set, which includes the box constrained quadratic programming and the linear complementarity problem as the special cases. It can be inferred that the linear VIP has a unique solution for the class of Lyapunov diagonally stable matrices, and that the synthesized RNN is globally exponentially convergent to the unique solution. Some illustrative simulation examples are also given.
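A recurrent network of the projection type discussed above can be sketched numerically: the state evolves by dx/dt = P_X(x − αF(x)) − x, where F(x) = Mx + q is the affine map of the linear VIP and P_X clips to the box constraint set. The matrix, vector, and box below are illustrative choices (M is symmetric positive definite, hence Lyapunov diagonally stable), not an example from the paper.

```python
import numpy as np

# Box-constrained linear variational inequality VI(F, X): find x* in X with
# F(x*)^T (x - x*) >= 0 for all x in X, where F(x) = M x + q.
M = np.array([[4.0, 1.0], [1.0, 3.0]])   # symmetric positive definite
q = np.array([-2.0, 5.0])
lo, hi = np.zeros(2), np.ones(2)         # box set X = [0, 1]^2

def project(x):
    return np.clip(x, lo, hi)            # projection onto the box

x = np.array([0.5, 0.5])
alpha, dt = 0.2, 0.05
for _ in range(4000):                    # forward-Euler integration of the RNN
    x = x + dt * (project(x - alpha * (M @ x + q)) - x)

# A solution of the VIP is exactly a fixed point of the projection map.
residual = np.linalg.norm(project(x - alpha * (M @ x + q)) - x)
print(residual < 1e-6, x)
```

For this M and q the network converges to x* = (0.5, 0): the first component is an interior stationary point (F₁(x*) = 0) while the second sits on the lower bound with F₂(x*) > 0, the VIP optimality pattern.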
The determination of third order linear models from a seventh order nonlinear jet engine model
NASA Technical Reports Server (NTRS)
Lalonde, Rick J.; Hartley, Tom T.; De Abreu-Garcia, J. Alex
1989-01-01
Results are presented that demonstrate how good reduced-order models can be obtained directly by recursive parameter identification using input/output (I/O) data of high-order nonlinear systems. Three different methods of obtaining a third-order linear model from a seventh-order nonlinear turbojet engine model are compared. The first method is to obtain a linear model from the original model and then reduce the linear model by standard reduction techniques such as residualization and balancing. The second method is to identify directly a third-order linear model by recursive least-squares parameter estimation using I/O data of the original model. The third method is to obtain a reduced-order model from the original model and then linearize the reduced model. Frequency responses are used as the performance measure to evaluate the reduced models. The reduced-order models along with their Bode plots are presented for comparison purposes.
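The second method above, directly identifying a low-order linear model from I/O data by recursive least squares, can be sketched on a toy problem. The third-order ARX plant below is an invented stand-in for the turbojet model, used only to show that RLS recovers the reduced-order parameters from I/O records; it is noiseless so the estimates converge to the true values.

```python
import numpy as np

rng = np.random.default_rng(2)

# "True" system to be identified: a third-order ARX model
# y[k] = a1*y[k-1] + a2*y[k-2] + a3*y[k-3] + b*u[k-1]   (stable, illustrative)
a_true = np.array([1.2, -0.5, 0.06])
b_true = 0.8

u = rng.normal(0, 1, 500)                # persistently exciting input
y = np.zeros(500)
for k in range(3, 500):
    y[k] = a_true @ y[k-3:k][::-1] + b_true * u[k-1]

# Recursive least squares (forgetting factor 1).
theta = np.zeros(4)                      # parameter estimate [a1, a2, a3, b]
P = np.eye(4) * 1e3                      # large initial covariance
for k in range(3, 500):
    phi = np.array([y[k-1], y[k-2], y[k-3], u[k-1]])   # regressor
    K = P @ phi / (1.0 + phi @ P @ phi)                # gain vector
    theta = theta + K * (y[k] - phi @ theta)           # parameter update
    P = P - np.outer(K, phi) @ P                       # covariance update

print(np.round(theta, 3))                # ≈ [1.2, -0.5, 0.06, 0.8]
```

Applied to a seventh-order nonlinear simulation instead of this toy plant, the same recursion yields the best third-order linear fit to the observed I/O behavior, which is the essence of the abstract's second method.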
Fasmer, Ole Bernt; Mjeldheim, Kristin; Førland, Wenche; Hansen, Anita L; Syrstad, Vigdis Elin Giæver; Oedegaard, Ketil J; Berle, Jan Øystein
2016-08-11
Attention Deficit Hyperactivity Disorder (ADHD) is a heterogeneous disorder. It is therefore important to look for factors that can contribute to better diagnosis and classification of these patients. The aims of the study were to characterize adult psychiatric out-patients with a mixture of mood, anxiety and attentional problems using an objective neuropsychological test of attention combined with an assessment of mood instability. Newly referred patients (n = 99; aged 18-65 years) requiring diagnostic evaluation of ADHD, mood or anxiety disorders were recruited, and were given a comprehensive diagnostic evaluation including the self-report form of the cyclothymic temperament scale and the Conners' Continuous Performance Test II (CPT-II). In addition to the traditional measures from this test we have extracted raw data and analysed time series using linear and non-linear mathematical methods. Fifty patients fulfilled criteria for ADHD, while 49 did not and were given other psychiatric diagnoses (clinical controls). When compared to the clinical controls the ADHD patients had more omission and commission errors, and higher reaction time variability. Analyses of response times showed higher values for skewness in the ADHD patients, and lower values for sample entropy and symbolic dynamics. Among the ADHD patients 59% fulfilled criteria for a cyclothymic temperament, and this group had higher reaction time variability and lower scores on complexity than the group without this temperament. The CPT-II is a useful instrument in the assessment of ADHD in adult patients. Additional information from this test was obtained by analyzing response times using linear and non-linear methods, and this showed that ADHD patients with a cyclothymic temperament were different from those without this temperament.
Phase Structure of the Non-Linear σ-MODEL with Oscillator Representation Method
NASA Astrophysics Data System (ADS)
Mishchenko, Yuriy; Ji, Chueng-R.
2004-03-01
The non-linear σ-model plays an important role in many areas of theoretical physics. Initially intended as a simple model for chiral symmetry breaking, this model exhibits such nontrivial effects as spontaneous symmetry breaking and asymptotic freedom, and is sometimes considered as an effective field theory for QCD. Besides, the non-linear σ-model can be related to the strong-coupling limit of O(N) ϕ4-theory, the continuum limit of an N-dimensional system of quantum spins, a fermion gas, and many other systems, and occupies an important place in understanding how symmetries are realized in quantum field theories. Because of this variety of connections, theoretical study of the critical properties of the σ-model is interesting and important. The oscillator representation method (ORM) is a theoretical tool for studying the phase structure of simple QFT models. It is formulated in the framework of canonical quantization and is based on the view of unitarily non-equivalent representations as possible phases of a QFT model. Successful application of the ORM to ϕ4 and ϕ6 theories in 1+1 and 2+1 dimensions motivates its study in more complicated models such as the non-linear σ-model. In our talk we introduce the ORM and establish its connections with the variational approach in QFT. We then present results of the ORM in the non-linear σ-model and try to interpret them from the variational point of view. Finally, we point out possible directions for further research in this area.
Johnson-Neyman Type Technique in Hierarchical Linear Models
ERIC Educational Resources Information Center
Miyazaki, Yasuo; Maier, Kimberly S.
2005-01-01
In hierarchical linear models we often find that group indicator variables at the cluster level are significant predictors for the regression slopes. When this is the case, the average relationship between the outcome and a key independent variable are different from group to group. In these settings, a question such as "what range of the…
Confidence Intervals for Assessing Heterogeneity in Generalized Linear Mixed Models
ERIC Educational Resources Information Center
Wagler, Amy E.
2014-01-01
Generalized linear mixed models are frequently applied to data with clustered categorical outcomes. The effect of clustering on the response is often difficult to practically assess partly because it is reported on a scale on which comparisons with regression parameters are difficult to make. This article proposes confidence intervals for…
Use of Linear Models for Thermal Processing Acidified Foods
USDA-ARS?s Scientific Manuscript database
Acidified vegetable products with a pH above 3.3 must be pasteurized to assure the destruction of acid resistant pathogenic bacteria. The times and temperatures needed to assure a five log reduction by pasteurization have previously been determined using a non-linear (Weibull) model. Recently, the F...
Mathematical modelling and linear stability analysis of laser fusion cutting
Hermanns, Torsten; Schulz, Wolfgang; Vossen, Georg; Thombansen, Ulrich
2016-06-08
A model for laser fusion cutting is presented and investigated by linear stability analysis in order to study the tendency for dynamic behavior and subsequent ripple formation. The result is a so-called stability function that describes the correlation between the setting values of the process and the amount of dynamic behavior the process exhibits.
Non-linear duality invariant partially massless models?
Cherney, D.; Deser, S.; Waldron, A.; ...
2015-12-15
We present manifestly duality invariant, non-linear, equations of motion for maximal depth, partially massless higher spins. These are based on a first order, Maxwell-like formulation of the known partially massless systems. Lastly, our models mimic Dirac–Born–Infeld theory but it is unclear whether they are Lagrangian.
A SEMIPARAMETRIC BAYESIAN MODEL FOR CIRCULAR-LINEAR REGRESSION
We present a Bayesian approach to regress a circular variable on a linear predictor. The regression coefficients are assumed to have a nonparametric distribution with a Dirichlet process prior. The semiparametric Bayesian approach gives added flexibility to the model and is usefu...
Asymptotic behavior of coupled linear systems modeling suspension bridges
NASA Astrophysics Data System (ADS)
Dell'Oro, Filippo; Giorgi, Claudio; Pata, Vittorino
2015-06-01
We consider the coupled linear system describing the vibrations of a string-beam system related to the well-known Lazer-McKenna suspension bridge model. For ɛ > 0 and k > 0, the decay properties of the solution semigroup are discussed in dependence of the nonnegative parameters γ and h, which are responsible for the damping effects.
A Methodology and Linear Model for System Planning and Evaluation.
ERIC Educational Resources Information Center
Meyer, Richard W.
1982-01-01
The two-phase effort at Clemson University to design a comprehensive library automation program is reported. Phase one was based on a version of IBM's business system planning methodology, and the second was based on a linear model designed to compare existing program systems to the phase one design. (MLW)
Identifiability Results for Several Classes of Linear Compartment Models.
Meshkat, Nicolette; Sullivant, Seth; Eisenberg, Marisa
2015-08-01
Identifiability concerns finding which unknown parameters of a model can be estimated, uniquely or otherwise, from given input-output data. If some subset of the parameters of a model cannot be determined given input-output data, then we say the model is unidentifiable. In this work, we study linear compartment models, which are a class of biological models commonly used in pharmacokinetics, physiology, and ecology. In past work, we used commutative algebra and graph theory to identify a class of linear compartment models that we call identifiable cycle models, which are unidentifiable but have the simplest possible identifiable functions (so-called monomial cycles). Here we show how to modify identifiable cycle models by adding inputs, adding outputs, or removing leaks, in such a way that we obtain an identifiable model. We also prove a constructive result on how to combine identifiable models, each corresponding to strongly connected graphs, into a larger identifiable model. We apply these theoretical results to several real-world biological models from physiology, cell biology, and ecology.
Intuitionistic Fuzzy Weighted Linear Regression Model with Fuzzy Entropy under Linear Restrictions.
Kumar, Gaurav; Bajaj, Rakesh Kumar
2014-01-01
In fuzzy set theory, it is well known that a triangular fuzzy number can be uniquely determined through its position and entropies. In the present communication, we extend this concept to the triangular intuitionistic fuzzy number, establishing its one-to-one correspondence with its position and entropies. Using the concept of fuzzy entropy, the estimators of the intuitionistic fuzzy regression coefficients are obtained in the unrestricted regression model. An intuitionistic fuzzy weighted linear regression (IFWLR) model with some restrictions in the form of prior information has been considered. Further, the estimators of regression coefficients have been obtained with the help of fuzzy entropy for the restricted/unrestricted IFWLR model by assigning some weights in the distance function.
Statistical Modeling for Continuous Speech Recognition
1988-02-01
as battle management, has focused on the development of accurate mathematical models for the different phonemes that occur in English. The research...coarticulation model proposed above. 8 Report No. 6725 BBN Laboratories Incorporated 2.2.1 E-set Problem The "E-set" is the set of nine letters of the English ...described above. The high-perplexity grammar was based on the 1000-word Resource Management task. Starting with a low-perplexity Sentence Pattern Grammar
Linear Sigma Model Toolshed for D-brane Physics
Hellerman, Simeon
2001-08-23
Building on earlier work, we construct linear sigma models for strings on curved spaces in the presence of branes. Our models include an extremely general class of brane-worldvolume gauge field configurations. We explain in an accessible manner the mathematical ideas which suggest appropriate worldsheet interactions for generating a given open string background. This construction provides an explanation for the appearance of the derived category in D-brane physics, complementary to that of recent work of Douglas.
Linear Time Invariant Models for Integrated Flight and Rotor Control
NASA Astrophysics Data System (ADS)
Olcer, Fahri Ersel
2011-12-01
Recent developments on individual blade control (IBC) and physics-based reduced order models of various on-blade control (OBC) actuation concepts are opening up opportunities to explore innovative rotor control strategies for improved rotor aerodynamic performance, reduced vibration and BVI noise, and improved rotor stability. Further, recent developments in computationally efficient algorithms for the extraction of Linear Time Invariant (LTI) models are providing a convenient framework for exploring integrated flight and rotor control, while accounting for the important couplings that exist between the body and low-frequency rotor response and high-frequency rotor response. Formulation of LTI models of a nonlinear system about a periodic equilibrium using the harmonic domain representation of LTI model states has been studied in the literature. This thesis presents an alternative method, and a computationally efficient scheme for its implementation, for the extraction of LTI models from a helicopter nonlinear model in forward flight. The fidelity of the extracted LTI models is evaluated using response comparisons between the extracted LTI models and the nonlinear model in both the time and frequency domains. Moreover, the fidelity of stability properties is studied through eigenvalue and eigenvector comparisons between LTI and LTP models by making use of the Floquet Transition Matrix. For time domain evaluations, individual blade control (IBC) and on-blade control (OBC) inputs that have been tried in the literature for vibration and noise control studies are used. For frequency domain evaluations, frequency sweep inputs are used to obtain frequency responses of fixed-system hub loads to a single-blade IBC input. The evaluation results demonstrate the fidelity of the extracted LTI models and thus establish the validity of the LTI model extraction process for use in integrated flight and rotor control.
A Derivation of Linearized Griffith Energies from Nonlinear Models
NASA Astrophysics Data System (ADS)
Friedrich, Manuel
2017-07-01
We derive Griffith functionals in the framework of linearized elasticity from nonlinear and frame-indifferent energies in brittle fracture via Γ-convergence. The convergence is given in terms of rescaled displacement fields measuring the distance of deformations from piecewise rigid motions. The configurations of the limiting model consist of partitions of the material, corresponding piecewise rigid deformations and displacement fields which are defined separately on each component of the cracked body. Apart from the linearized Griffith energy the limiting functional also comprises the segmentation energy, which is necessary to disconnect the parts of the specimen.
Linearized flexibility models in multibody dynamics and control
NASA Technical Reports Server (NTRS)
Cimino, William W.
1989-01-01
Simulation of structural response of multi-flexible-body systems by linearized flexible motion combined with nonlinear rigid motion is discussed. Advantages and applicability of such an approach for accurate simulation with greatly reduced computational costs and turnaround times are described, restricting attention to the control design environment. Requirements for updating the linearized flexibility model to track large angular motions are discussed. Validation of such an approach by comparison with other existing codes is included. Application to a flexible robot manipulator system is described.
Linear modeling of steady-state behavioral dynamics.
Palya, William L; Walter, Donald; Kessel, Robert; Lucke, Robert
2002-01-01
The observed steady-state behavioral dynamics supported by unsignaled periods of reinforcement within repeating 2,000-s trials were modeled with a linear transfer function. These experiments employed improved schedule forms and analytical methods to improve the precision of the measured transfer function, compared to previous work. The refinements include both the use of multiple reinforcement periods that improve spectral coverage and averaging of independently determined transfer functions. A linear analysis was then used to predict behavior observed for three different test schedules. The fidelity of these predictions was determined. PMID:11831782
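The transfer-function approach described above can be sketched numerically: estimate H(f) as the ratio of output to input spectra on one schedule, then use it to predict the response to a different schedule. Everything below (the impulse response, the synthetic schedules, circular convolution as the system model) is an illustrative assumption, not the authors' data or procedure.

```python
import numpy as np

# Hypothetical illustration of linear transfer-function modeling: identify
# H(f) from one input/output pair, then predict the output for a new input.
rng = np.random.default_rng(0)
n = 256
h_true = np.exp(-np.arange(n) / 10.0)   # an assumed impulse response
r_train = rng.normal(size=n)            # training input (schedule 1)
r_test = rng.normal(size=n)             # test input (schedule 2)

def respond(h, r):
    # periodic (circular) convolution: output of the linear system
    return np.real(np.fft.ifft(np.fft.fft(h) * np.fft.fft(r)))

b_train = respond(h_true, r_train)

# estimated transfer function from the training schedule
H_est = np.fft.fft(b_train) / np.fft.fft(r_train)

# predict behavior on the test schedule and compare with the true response
b_pred = np.real(np.fft.ifft(H_est * np.fft.fft(r_test)))
b_true = respond(h_true, r_test)
print(np.max(np.abs(b_pred - b_true)) < 1e-8)
```

With noiseless synthetic data the recovered transfer function reproduces the test-schedule response essentially exactly; the paper's contribution is making this identification precise for real, noisy behavioral data.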
Glocker, Ben; Paragios, Nikos; Komodakis, Nikos; Tziritas, Georgios; Navab, Nassir
2007-01-01
In this paper we propose a novel non-rigid volume registration based on discrete labeling and linear programming. The proposed framework reformulates registration as a minimal path extraction in a weighted graph. The space of solutions is represented using a set of labels which are assigned to predefined displacements. The graph topology corresponds to a regular grid superimposed onto the volume. Links between neighboring control points introduce smoothness, while links between the graph nodes and the labels (end-nodes) measure the cost induced on the objective function by the selection of a particular deformation for a given control point once projected to the entire volume domain. Higher-order polynomials are used to express the volume deformation from those of the control points. Efficient linear programming that can guarantee the optimal solution up to a (user-defined) bound is used to recover the optimal registration parameters. Therefore, the method is gradient-free, can encode various similarity metrics (through simple changes in the graph construction), can guarantee a globally sub-optimal solution and is computationally tractable. Experimental validation using simulated data with known deformation, as well as manually segmented data, demonstrates the great potential of our approach.
Identification of parameters of discrete-continuous models
Cekus, Dawid; Warys, Pawel
2015-03-10
In the paper, the parameters of a discrete-continuous model have been identified on the basis of experimental investigations and the formulation of an optimization problem. The discrete-continuous model represents a cantilever stepped Timoshenko beam. The mathematical model has been formulated and solved according to the Lagrange multiplier formalism. Optimization has been based on the genetic algorithm. The stages of the presented procedure make it possible to identify any parameters of discrete-continuous systems.
Validation of a non-linear model of health.
Topolski, Stefan; Sturmberg, Joachim
2014-12-01
The purpose of this study was to evaluate the veracity of a theoretically derived model of health that describes a non-linear trajectory of health from birth to death with available population data sets. The distribution of mortality by age is directly related to health at that age, thus health approximates 1/mortality. The inverse of available all-cause mortality data from various time periods and populations was used as proxy data to compare with the theoretically derived non-linear health model predictions, using both qualitative approaches and quantitative one-sample Kolmogorov-Smirnov analysis with Monte Carlo simulation. The mortality data's inverse resembles a log-normal distribution as predicted by the proposed health model. The curves have identical slopes from birth and follow a logarithmic decline from peak health in young adulthood. A majority of the sampled populations had a good to excellent quantitative fit to a log-normal distribution, supporting the underlying model assumptions. Post hoc manipulation showed the model predictions to be stable. This is a first theory of health to be validated by proxy data, namely the inverse of all-cause mortality. This non-linear model, derived from the notion of the interaction of physical, environmental, mental, emotional, social and sense-making domains of health, gives physicians a more rigorous basis to direct health care services and resources away from disease-focused elder care towards broad-based biopsychosocial interventions earlier in life. © 2014 John Wiley & Sons, Ltd.
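The proxy at the core of this validation (health ≈ 1/mortality) is easy to illustrate; the mortality figures below are invented for the sketch, not real population data.

```python
import numpy as np

# Illustrative sketch of the proxy used in the study: health at age t is
# approximated by the inverse of all-cause mortality at that age.
# The mortality rates below are hypothetical, not population data.
ages = np.array([0, 10, 20, 30, 40, 50, 60, 70, 80])
mortality = np.array([0.006, 0.0002, 0.001, 0.0015, 0.003,
                      0.006, 0.013, 0.03, 0.07])  # deaths per person-year

health = 1.0 / mortality          # the proxy: health ~ 1/mortality
peak_age = ages[np.argmax(health)]
print(peak_age)                   # the age of minimum mortality
```

With these illustrative numbers the health proxy peaks where mortality is lowest and declines thereafter, the qualitative shape the model predicts.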
Gene Golub; Kwok Ko
2009-03-30
The solutions of sparse eigenvalue problems and linear systems constitute one of the key computational kernels in the discretization of partial differential equations for the modeling of linear accelerators. The computational challenges faced by existing techniques for solving those sparse eigenvalue problems and linear systems call for continuing research to improve on the algorithms so that ever-increasing problem sizes, as required by the physics application, can be tackled. Under the support of this award, the filter algorithm for solving large sparse eigenvalue problems was developed at Stanford to address the computational difficulties in the previous methods, with the goal of enabling accelerator simulations for this class of problems on what was then the world's largest unclassified supercomputer at NERSC. Specifically, a new method, the Hermitian/skew-Hermitian splitting method, was proposed and researched as an improved method for solving linear systems with non-Hermitian positive definite and semidefinite matrices.
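A minimal sketch of the Hermitian/skew-Hermitian splitting (HSS) iteration named in the abstract, on a small illustrative system (the matrix, right-hand side and shift alpha are arbitrary choices, not taken from the project):

```python
import numpy as np

# Sketch of the HSS iteration for Ax = b with non-Hermitian positive
# definite A: split A into its Hermitian and skew-Hermitian parts and
# alternate two shifted solves. System and shift are illustrative.
A = np.array([[3.0, 1.0],
              [-1.0, 2.0]])
b = np.array([1.0, 1.0])

H = 0.5 * (A + A.T)     # Hermitian (here: symmetric) part
S = 0.5 * (A - A.T)     # skew-Hermitian part
alpha = 1.0
I = np.eye(2)

x = np.zeros(2)
for _ in range(100):
    # half-step with the Hermitian part...
    x_half = np.linalg.solve(alpha * I + H, (alpha * I - S) @ x + b)
    # ...then a half-step with the skew-Hermitian part
    x = np.linalg.solve(alpha * I + S, (alpha * I - H) @ x_half + b)

print(np.allclose(A @ x, b))
```

For positive definite H and any alpha > 0 the iteration contracts, which is the convergence property that makes the splitting attractive for this class of matrices.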
Attracted to de Sitter: cosmology of the linear Horndeski models
Martín-Moruno, Prado; Nunes, Nelson J.; Lobo, Francisco S.N. E-mail: njnunes@fc.ul.pt
2015-05-01
We consider Horndeski cosmological models, with a minisuperspace Lagrangian linear in the field derivative, that are able to screen any vacuum energy and material content, leading to a spatially flat de Sitter vacuum fixed by the theory itself. Furthermore, we investigate particular models with a cosmic evolution independent of the material content and use them to understand the general characteristics of this framework. We also consider more realistic models, which we denote the "term-by-term" and "tripod" models, focusing attention on cases in which the critical point is indeed an attractor solution and the cosmological history is of particular interest.
Can the Non-linear Ballooning Model describe ELMs?
NASA Astrophysics Data System (ADS)
Henneberg, S. A.; Cowley, S. C.; Wilson, H. R.
2015-11-01
The explosive, filamentary plasma eruptions described by the non-linear ideal MHD ballooning model are tested quantitatively against experimental observations of ELMs in MAST. The equations describing this model were derived by Wilson and Cowley for tokamak-like geometry and comprise two differential equations: the linear ballooning equation, which describes the spatial distribution along the field lines, and the non-linear ballooning mode envelope equation, a two-dimensional, non-linear differential equation that can involve fractional temporal derivatives but is often second order in time and space. To employ the second equation for a specific geometry one has to evaluate its coefficients, which is non-trivial as it involves field-line averaging of slowly converging functions. We have solved this system for MAST, superimposing the solutions of both differential equations and mapping them onto a MAST plasma. Comparisons with the evolution of ELM filaments in MAST will be reported in order to test the model. The support of the EPSRC for the FCDT (Grant EP/K504178/1), of the Euratom research and training programme 2014-2018 (No 633053) and of the RCUK Energy Programme (grant number EP/I501045) is gratefully acknowledged.
A simplified approach to quasi-linear viscoelastic modeling.
Nekouzadeh, Ali; Pryse, Kenneth M; Elson, Elliot L; Genin, Guy M
2007-01-01
The fitting of quasi-linear viscoelastic (QLV) constitutive models to material data often involves somewhat cumbersome numerical convolution. A new approach to treating quasi-linearity in 1-D is described and applied to characterize the behavior of reconstituted collagen. This approach is based on a new principle for including nonlinearity and requires considerably less computation than other comparable models for both model calibration and response prediction, especially for smoothly applied stretching. Additionally, the approach allows relaxation to adapt with the strain history. The modeling approach is demonstrated through tests on pure reconstituted collagen. Sequences of "ramp-and-hold" stretching tests were applied to rectangular collagen specimens. The relaxation force data from the "hold" was used to calibrate a new "adaptive QLV model" and several models from literature, and the force data from the "ramp" was used to check the accuracy of model predictions. Additionally, the ability of the models to predict the force response on a reloading of the specimen was assessed. The "adaptive QLV model" based on this new approach predicts collagen behavior comparably to or better than existing models, with much less computation.
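The numerical convolution that standard QLV fitting requires (and that the adaptive model is designed to reduce) can be sketched as a discrete hereditary integral; the relaxation function and elastic law below are illustrative assumptions, not the paper's calibrated model.

```python
import numpy as np

# Minimal sketch of a standard QLV stress computation: the stress is a
# discrete convolution of a reduced relaxation function G (with G(0) = 1)
# against increments of a nonlinear instantaneous elastic response.
# G and the elastic law are illustrative, not the paper's model.
def qlv_stress(strain, dt, tau=1.0, g_inf=0.5, k=2.0):
    t = np.arange(len(strain)) * dt
    G = g_inf + (1.0 - g_inf) * np.exp(-t / tau)   # reduced relaxation
    sigma_e = k * strain**2 + k * strain           # elastic response
    d_sigma_e = np.diff(sigma_e, prepend=0.0)      # elastic stress increments
    # hereditary integral: sigma[i] = sum_j G(t_i - t_j) * d_sigma_e[j]
    return np.array([np.sum(G[i::-1] * d_sigma_e[:i + 1])
                     for i in range(len(strain))])

# sanity check: with no relaxation (G == 1) the QLV stress is purely elastic
strain = np.linspace(0.0, 0.1, 50)
sigma_no_relax = qlv_stress(strain, dt=0.01, g_inf=1.0)
sigma_elastic = 2.0 * strain**2 + 2.0 * strain
print(np.allclose(sigma_no_relax, sigma_elastic))
```

The per-step sum over the full history is what makes calibration of conventional QLV models computationally heavy, motivating the simplified approach of the paper.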
Dust grain coagulation modelling : From discrete to continuous
NASA Astrophysics Data System (ADS)
Paruta, P.; Hendrix, T.; Keppens, R.
2016-07-01
In molecular clouds, stars are formed from a mixture of gas, plasma and dust particles. The dynamics of this formation is still actively investigated and a study of dust coagulation can help to shed light on this process. Starting from a pre-existing discrete coagulation model, this work aims to mathematically explore its properties and its suitability for numerical validation. The crucial step is in our reinterpretation from its original discrete to a well-defined continuous form, which results in the well-known Smoluchowski coagulation equation. This opens up the possibility of exploiting previous results in order to prove the existence and uniqueness of a mass conserving solution for the evolution of dust grain size distribution. Ultimately, to allow for a more flexible numerical implementation, the problem is rewritten as a non-linear hyperbolic integro-differential equation and solved using a finite volume discretisation. It is demonstrated that there is an exact numerical agreement with the initial discrete model, with improved accuracy. This is of interest for further work on dynamically coupled gas with dust simulations.
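The discrete coagulation dynamics referred to above can be sketched with a constant kernel; with a consistent truncation at a maximum grain size, total mass is conserved exactly, mirroring the mass-conserving solution discussed in the abstract. All numerical choices are illustrative.

```python
import numpy as np

# Sketch of the discrete Smoluchowski coagulation equation with a constant
# kernel and explicit Euler stepping. Truncating gain AND loss terms
# consistently at K_MAX conserves total mass exactly. Numbers illustrative.
K_MAX = 20                        # largest grain size tracked
kernel = 1.0                      # constant coagulation kernel K_ij
n = np.zeros(K_MAX + 1)           # n[k] = number density of size-k grains
n[1] = 1.0                        # monodisperse initial condition

def step(n, dt):
    dn = np.zeros_like(n)
    for k in range(1, K_MAX + 1):
        # gain: pairs (i, k - i) coagulating into size k
        gain = 0.5 * sum(kernel * n[i] * n[k - i] for i in range(1, k))
        # loss: size-k grains merging with any j such that k + j <= K_MAX
        loss = n[k] * sum(kernel * n[j] for j in range(1, K_MAX - k + 1))
        dn[k] = gain - loss
    return n + dt * dn

mass0 = np.sum(np.arange(K_MAX + 1) * n)
for _ in range(200):
    n = step(n, dt=0.05)
mass = np.sum(np.arange(K_MAX + 1) * n)
print(abs(mass - mass0) < 1e-9)
```

Summing k·dn[k] shows the gain and loss contributions cancel pairwise under this truncation, so the scheme conserves mass to floating-point roundoff at every step.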
Continuous-Discontinuous Model for Ductile Fracture
Seabra, Mariana R. R.; Cesar de Sa, Jose M. A.
2010-06-15
In this contribution, a continuous-discontinuous model for ductile failure is presented. The degradation of material properties through deformation is described by Continuum Damage Mechanics in a non-local integral formulation to avoid mesh dependence. In the final stage of failure, the damaged zone is replaced by a cohesive macro crack, and subsequently by a traction-free macro crack, for a more realistic representation of the phenomenon. The inclusion of the discontinuity surfaces is performed by the XFEM and the Level Set Method and avoids the spurious damage growth typical of this class of models.
Current Density and Continuity in Discretized Models
ERIC Educational Resources Information Center
Boykin, Timothy B.; Luisier, Mathieu; Klimeck, Gerhard
2010-01-01
Discrete approaches have long been used in numerical modelling of physical systems in both research and teaching. Discrete versions of the Schrodinger equation employing either one or several basis functions per mesh point are often used by senior undergraduates and beginning graduate students in computational physics projects. In studying…
Skills Diagnosis Using IRT-Based Continuous Latent Trait Models
ERIC Educational Resources Information Center
Stout, William
2007-01-01
This article summarizes the continuous latent trait IRT approach to skills diagnosis as particularized by a representative variety of continuous latent trait models using item response functions (IRFs). First, several basic IRT-based continuous latent trait approaches are presented in some detail. Then a brief summary of estimation, model…
Attraction and Stability of Nonlinear Ode's using Continuous Piecewise Linear Approximations
NASA Astrophysics Data System (ADS)
Garcia, Andres; Agamennoni, Osvaldo
2010-04-01
In this paper, several results concerning attraction and asymptotic stability in the large of nonlinear ordinary differential equations are presented. The main result is very simple to apply, yielding a sufficient condition under which the equilibrium point (assuming a unique equilibrium) is attractive, and it also provides a variety of options; among them, the classical linearization and other existing results are special cases of the main theorem of this paper, including an extension of the well-known Markus-Yamabe conjecture. Several application examples are presented in order to analyze the advantages and drawbacks of the proposed result and to compare it with successful existing techniques for analysis available in the literature.
Scalar mesons in three-flavor linear sigma models
Deirdre Black; Amir H. Fariborz; Sherif Moussa; Salah Nasri; Joseph Schrechter
2001-09-01
The three-flavor linear sigma model is studied in order to understand the role of possible light scalar mesons in the pi-pi, pi-K and pi-eta elastic scattering channels. The K-matrix prescription is used to unitarize tree-level amplitudes and, with a sufficiently general model, we obtain reasonable fits to the experimental data. The effect of unitarization is very important and leads to the emergence of a nonet of light scalars, with masses below 1 GeV. We compare with a scattering treatment using a more general non-linear sigma model approach and also comment upon how our results fit in with the scalar meson puzzle. The latter involves a preliminary investigation of possible mixing between scalar nonets.
Connecting Atomistic and Continuous Models of Elastodynamics
NASA Astrophysics Data System (ADS)
Braun, Julian
2017-06-01
We prove the long-time existence of solutions for the equations of atomistic elastodynamics on a bounded domain with time-dependent boundary values as well as their convergence to a solution of continuum nonlinear elastodynamics as the interatomic distances tend to zero. Here, the continuum energy density is given by the Cauchy-Born rule. The models considered allow for general finite range interactions. To control the stability of large deformations we also prove a new atomistic Gårding inequality.
Binder, Harald; Sauerbrei, Willi; Royston, Patrick
2013-06-15
In observational studies, many continuous or categorical covariates may be related to an outcome. Various spline-based procedures or the multivariable fractional polynomial (MFP) procedure can be used to identify important variables and functional forms for continuous covariates. This is the main aim of an explanatory model, as opposed to a model only for prediction. The type of analysis often guides the complexity of the final model. Spline-based procedures and MFP have tuning parameters for choosing the required complexity. To compare model selection approaches, we perform a simulation study in the linear regression context based on a data structure intended to reflect realistic biomedical data. We vary the sample size, variance explained and complexity parameters for model selection. We consider 15 variables. A sample size of 200 (1000) and R^2 = 0.2 (0.8) is the scenario with the smallest (largest) amount of information. For assessing performance, we consider prediction error, correct and incorrect inclusion of covariates, qualitative measures for judging selected functional forms and further novel criteria. From limited information, a suitable explanatory model cannot be obtained. Prediction performance from all types of models is similar. With a medium amount of information, MFP performs better than splines on several criteria. MFP better recovers simpler functions, whereas splines better recover more complex functions. For a large amount of information and no local structure, MFP and the spline procedures often select similar explanatory models.
Technical note: A linear model for predicting δ13 Cprotein.
Pestle, William J; Hubbe, Mark; Smith, Erin K; Stevenson, Joseph M
2015-08-01
Development of a model for the prediction of δ13Cprotein from δ13Ccollagen and Δ13Cap-co. Model-generated values could, in turn, serve as "consumer" inputs for multisource mixture modeling of paleodiet. Linear regression analysis of previously published controlled diet data facilitated the development of a mathematical model for predicting δ13Cprotein (and an experimentally generated error term) from isotopic data routinely generated during the analysis of osseous remains (δ13Cco and Δ13Cap-co). Regression analysis resulted in a two-term linear model (δ13Cprotein (‰) = (0.78 × δ13Cco) − (0.58 × Δ13Cap-co) − 4.7), possessing a high R-value of 0.93 (r^2 = 0.86, P < 0.01), and an experimentally generated error term of ±1.9‰ for any predicted individual value of δ13Cprotein. This model was tested using isotopic data from Formative Period individuals from northern Chile's Atacama Desert. The model presented here appears to hold significant potential for the prediction of the carbon isotope signature of dietary protein using only such data as is routinely generated in the course of stable isotope analysis of human osseous remains. These predicted values are ideal for use in multisource mixture modeling of dietary protein source contribution. © 2015 Wiley Periodicals, Inc.
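The reported two-term model is a plain linear formula and can be transcribed directly; the example inputs are hypothetical values, not measurements from the study.

```python
# Direct transcription of the reported two-term linear model; the inputs
# in the example are hypothetical values, not data from the study.
def predict_d13c_protein(d13c_co, d13c_ap_co_spacing):
    """Predict d13C of dietary protein (per mil) from bone collagen d13C
    and the apatite-collagen spacing; reported error is +/-1.9 per mil."""
    return 0.78 * d13c_co - 0.58 * d13c_ap_co_spacing - 4.7

# e.g. collagen at -20.0 per mil with a 5.0 per mil apatite-collagen spacing
print(predict_d13c_protein(-20.0, 5.0))   # about -23.2
```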
Classifying linearly shielded modified gravity models in effective field theory.
Lombriser, Lucas; Taylor, Andy
2015-01-23
We study the model space generated by the time-dependent operator coefficients in the effective field theory of the cosmological background evolution and perturbations of modified gravity and dark energy models. We identify three classes of modified gravity models that reduce to Newtonian gravity on the small scales of linear theory. These general classes contain enough freedom to simultaneously admit a matching of the concordance model background expansion history. In particular, there exists a large model space that mimics the concordance model on all linear quasistatic subhorizon scales as well as in the background evolution. Such models also exist when restricting the theory space to operators introduced in Horndeski scalar-tensor gravity. We emphasize that whereas the partially shielded scenarios might be of interest to study in connection with tensions between large and small scale data, with conventional cosmological probes, the ability to distinguish the fully shielded scenarios from the concordance model on near-horizon scales will remain limited by cosmic variance. Novel tests of the large-scale structure remedying this deficiency and accounting for the full covariant nature of the alternative gravitational theories, however, might yield further insights on gravity in this regime.
Disorder and Quantum Chromodynamics -- Non-Linear σ Models
NASA Astrophysics Data System (ADS)
Guhr, Thomas; Wilke, Thomas
2001-10-01
The statistical properties of Quantum Chromodynamics (QCD) show universal features which can be modeled by random matrices. This has been established in detailed analyses of data from lattice gauge calculations. Moreover, systematic deviations were found which link QCD to disordered systems in condensed matter physics. To furnish these empirical findings with analytical arguments, we apply and extend the methods developed in disordered systems to construct a non-linear σ model for the spectral correlations in QCD. Our goal is to derive connections to other low-energy effective theories, such as the Nambu-Jona-Lasinio model, and to chiral perturbation theory.
Residuals analysis of the generalized linear models for longitudinal data.
Chang, Y C
2000-05-30
The generalized estimating equation (GEE) method, one of the generalized linear model approaches for longitudinal data, has been widely used in medical research. However, the related sensitivity analysis problem has not been explored intensively, partly because of the correlated structure within the same subject. We showed that the conventional residual plots for model diagnosis in longitudinal data could mislead a researcher into trusting the fitted model. A non-parametric method, the Wald-Wolfowitz run test, was proposed to check the residual plots both quantitatively and graphically. The rationale proposed in this paper is illustrated with two real clinical studies in Taiwan.
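The run test named above has a compact form; the following is an illustrative implementation of the standard Wald-Wolfowitz statistic applied to residual signs, not the authors' code:

```python
import math

def runs_test(residuals):
    """Wald-Wolfowitz runs test on the signs of a residual sequence.

    Returns the number of runs and the normal-approximation z-statistic;
    residuals that cluster by sign give few runs (large negative z)."""
    signs = [r >= 0 for r in residuals]
    n1 = sum(signs)          # count of non-negative residuals
    n2 = len(signs) - n1     # count of negative residuals
    runs = 1 + sum(1 for a, b in zip(signs, signs[1:]) if a != b)
    mu = 2.0 * n1 * n2 / (n1 + n2) + 1.0
    var = (2.0 * n1 * n2 * (2.0 * n1 * n2 - n1 - n2)) / \
          ((n1 + n2) ** 2 * (n1 + n2 - 1))
    z = (runs - mu) / math.sqrt(var)
    return runs, z

# Perfectly alternating residuals give the maximum number of runs
runs, z = runs_test([1, -1, 1, -1, 1, -1, 1, -1])
```

A fitted model with adequately modeled correlation should produce residual sign patterns whose run count is not extreme in either direction.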
Mining Knowledge from Multiple Criteria Linear Programming Models
NASA Astrophysics Data System (ADS)
Zhang, Peng; Zhu, Xingquan; Li, Aihua; Zhang, Lingling; Shi, Yong
As a promising data mining tool, Multiple Criteria Linear Programming (MCLP) has been widely used in business intelligence. However, a possible limitation of MCLP is that it generates unexplainable black-box models which can only tell us results without reasons. To overcome this shortcoming, in this paper we propose a Knowledge Mining strategy which mines black-box MCLP models to obtain explainable and understandable knowledge. Different from the traditional Data Mining strategy, which focuses on mining knowledge from data, this Knowledge Mining strategy provides a new vision of mining knowledge from black-box models, which can be taken as a special topic of “Intelligent Knowledge Management”.
Graphical tools for model selection in generalized linear models.
Murray, K; Heritier, S; Müller, S
2013-11-10
Model selection techniques have existed for many years; however, to date, simple, clear and effective methods of visualising the model building process are scarce. This article describes graphical methods that assist in the selection of models and the comparison of many different selection criteria. Specifically, we describe, for logistic regression, how to visualise measures of description loss and of model complexity to help resolve the model selection dilemma. We advocate the use of the bootstrap to assess the stability of selected models and to enhance our graphical tools. We demonstrate which variables are important using variable inclusion plots and show that these can be invaluable for the model building process. We show with two case studies how these proposed tools are useful for learning more about important variables in the data and how they can assist the understanding of the model building process.
MAGDM linear-programming models with distinct uncertain preference structures.
Xu, Zeshui S; Chen, Jian
2008-10-01
Group decision making with preference information on alternatives is an interesting and important research topic which has been receiving more and more attention in recent years. The purpose of this paper is to investigate multiple-attribute group decision-making (MAGDM) problems with distinct uncertain preference structures. We develop some linear-programming models for dealing with the MAGDM problems, where the information about attribute weights is incomplete, and the decision makers have their preferences on alternatives. The provided preference information can be represented in the following three distinct uncertain preference structures: 1) interval utility values; 2) interval fuzzy preference relations; and 3) interval multiplicative preference relations. We first establish some linear-programming models based on decision matrix and each of the distinct uncertain preference structures and, then, develop some linear-programming models to integrate all three structures of subjective uncertain preference information provided by the decision makers and the objective information depicted in the decision matrix. Furthermore, we propose a simple and straightforward approach in ranking and selecting the given alternatives. It is worth pointing out that the developed models can also be used to deal with the situations where the three distinct uncertain preference structures are reduced to the traditional ones, i.e., utility values, fuzzy preference relations, and multiplicative preference relations. Finally, we use a practical example to illustrate in detail the calculation process of the developed approach.
Analyzing longitudinal data with the linear mixed models procedure in SPSS.
West, Brady T
2009-09-01
Many applied researchers analyzing longitudinal data share a common misconception: that specialized statistical software is necessary to fit hierarchical linear models (also known as linear mixed models [LMMs], or multilevel models) to longitudinal data sets. Although several specialized statistical software programs of high quality are available that allow researchers to fit these models to longitudinal data sets (e.g., HLM), rapid advances in general purpose statistical software packages have recently enabled analysts to fit these same models when using preferred packages that also enable other more common analyses. One of these general purpose statistical packages is SPSS, which includes a very flexible and powerful procedure for fitting LMMs to longitudinal data sets with continuous outcomes. This article aims to present readers with a practical discussion of how to analyze longitudinal data using the LMMs procedure in the SPSS statistical software package.
A Linear Stochastic Dynamical Model of ENSO. Part II: Analysis.
NASA Astrophysics Data System (ADS)
Thompson, C. J.; Battisti, D. S.
2001-02-01
In this study the behavior of a linear, intermediate model of ENSO is examined under stochastic forcing. The model was developed in a companion paper (Part I) and is derived from the Zebiak-Cane ENSO model. Four variants of the model are used whose stabilities range from slightly damped to moderately damped. Each model is run as a simulation while being perturbed by noise that is uncorrelated (white) in space and time. The statistics of the model output show the moderately damped models to be more realistic than the slightly damped models. The moderately damped models have power spectra that are quantitatively quite similar to observations, and a seasonal pattern of variance that is qualitatively similar to observations. All models produce ENSOs that are phase locked to the annual cycle, and all display the `spring barrier' characteristic in their autocorrelation patterns, though in the models this `barrier' occurs during the summer and is less intense than in the observations (inclusion of nonlinear effects is shown to partially remedy this deficiency). The more realistic models also show a decadal variability in the lagged autocorrelation pattern that is qualitatively similar to observations. Analysis of the models shows that the greatest part of the variability comes from perturbations that project onto the first singular vector, which then grow rapidly into the ENSO mode. Essentially, the model output represents many instances of the ENSO mode, with random phase and amplitude, stimulated by the noise through the optimal transient growth of the singular vectors. The limit of predictability for each model is calculated and it is shown that the more realistic (moderately damped) models have worse potential predictability (9-15 months) than the deterministic chaotic models that have been studied widely in the literature. The predictability limits are strongly correlated with the stability of the models' ENSO mode, with the more highly damped models having much shorter predictability limits.
Direct evidence for continuous linear kinetics in the low-temperature degradation of Y-TZP.
Keuper, M; Eder, K; Berthold, C; Nickel, K G
2013-01-01
The kinetics of the tetragonal to monoclinic (t-m) transformation of zirconia in a hydrous environment at 134°C and 3 bar pressure was studied. As surface X-ray diffraction, which is conventionally used to follow the progress, has a very limited depth of information, it distorts the quantitative results in a layer-on-layer situation and is by itself ill suited for this task. Analyzing cross sections is more suitable; therefore, focused ion beam techniques were used to prepare artifact-free cuts. The material was subsequently investigated by scanning electron microscopy, electron backscatter diffraction and Raman spectroscopy. Only the combination of methods makes it possible to resolve the quantifiable details of the process. The transformation starts in the near-surface areas, forms a layer, and the growth of this layer proceeds into the bulk material following a simple linear time law (0.0624 μm h⁻¹ for material in the chosen condition), without apparent retardation or limit. The progress yields a gradientless layer with a fixed amount of residual tetragonal zirconia (~27% for 3Y-TZP in the present conditions) separated from unaffected material by a boundary, which has a roughness only in the grain size range. The kinetics indicates a reaction rate control, where the hydration reaction is the key factor, but is modified by the stepwise access of water to the reaction front opened by the autocatalytic transformation of zirconia with a critical hydration level.
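The linear time law reported above lends itself to a one-line model; a minimal sketch, using the rate constant quoted for 3Y-TZP under the stated conditions (any other material or condition would need its own calibration):

```python
# Layer growth rate for 3Y-TZP at 134 °C, 3 bar, from the study above
RATE_UM_PER_H = 0.0624

def layer_thickness_um(hours):
    """Monoclinic layer depth after a given exposure time (linear kinetics)."""
    return RATE_UM_PER_H * hours

def hours_to_depth(depth_um):
    """Exposure time needed for the transformed layer to reach a given depth."""
    return depth_um / RATE_UM_PER_H
```

For example, 100 h of exposure corresponds to a 6.24 μm layer, and the inverse function answers lifetime-style questions directly.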
Flood Nowcasting With Linear Catchment Models, Radar and Kalman Filters
NASA Astrophysics Data System (ADS)
Pegram, Geoff; Sinclair, Scott
A pilot study using real time rainfall data as input to a parsimonious linear distributed flood forecasting model is presented. The aim of the study is to deliver an operational system capable of producing flood forecasts, in real time, for the Mgeni and Mlazi catchments near the city of Durban in South Africa. The forecasts can be made at time steps which are of the order of a fraction of the catchment response time. To this end, the model is formulated in Finite Difference form in an equation similar to an Auto Regressive Moving Average (ARMA) model; it is this formulation which provides the required computational efficiency. The ARMA equation is a discretely coincident form of the State-Space equations that govern the response of an arrangement of linear reservoirs. This results in a functional relationship between the reservoir response constants and the ARMA coefficients, which guarantees stationarity of the ARMA model. Input to the model is a combined "Best Estimate" spatial rainfall field, derived from a combination of weather RADAR and Satellite rainfield estimates with point rainfall given by a network of telemetering raingauges. Several strategies are employed to overcome the uncertainties associated with forecasting. Principal among these are the use of optimal (double Kalman) filtering techniques to update the model states and parameters in response to current streamflow observations and the application of short term forecasting techniques to provide future estimates of the rainfield as input to the model.
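Two pieces of the scheme above can be sketched compactly: the exact discretization of a single linear reservoir, whose autoregressive coefficient lies strictly inside the unit circle (hence the guaranteed stationarity), and a scalar Kalman measurement update. All parameter values here are illustrative, not the Mgeni/Mlazi calibration:

```python
import math

def reservoir_arma_coeff(k, dt):
    """Exact discretization of a single linear reservoir dS/dt = I - S/k
    (outflow Q = S/k) gives Q[t+1] = a*Q[t] + (1 - a)*I[t] with
    a = exp(-dt/k); since 0 < a < 1, the ARMA form is stationary."""
    return math.exp(-dt / k)

def kalman_update(q_pred, p_pred, q_obs, r_obs):
    """Scalar Kalman measurement update of a forecast state q_pred with
    variance p_pred against an observation q_obs with noise variance r_obs."""
    gain = p_pred / (p_pred + r_obs)
    return q_pred + gain * (q_obs - q_pred), (1.0 - gain) * p_pred

a = reservoir_arma_coeff(k=6.0, dt=1.0)                 # hypothetical 6 h response
q, p = kalman_update(q_pred=10.0, p_pred=4.0, q_obs=12.0, r_obs=1.0)
```

The update pulls the forecast toward the streamflow observation in proportion to the relative uncertainties, which is the essence of the "double Kalman" state/parameter updating described.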
A comparison of linear and non-linear data assimilation methods using the NEMO ocean model
NASA Astrophysics Data System (ADS)
Kirchgessner, Paul; Tödter, Julian; Nerger, Lars
2015-04-01
The assimilation behavior of the widely used LETKF is compared with the Equivalent Weight Particle Filter (EWPF) in a data assimilation application with an idealized configuration of the NEMO ocean model. The experiments show how the different filter methods behave when they are applied to a realistic ocean test case. The LETKF is an ensemble-based Kalman filter, which assumes Gaussian error distributions and hence implicitly requires model linearity. In contrast, the EWPF is a fully nonlinear data assimilation method that does not rely on a particular error distribution. The EWPF has been demonstrated to work well in highly nonlinear situations, such as in a model solving the barotropic vorticity equation, but it is still unknown how its assimilation performance compares to ensemble Kalman filters in realistic situations. For the experiments, twin assimilation experiments with a square basin configuration of the NEMO model are performed. The configuration simulates a double gyre, which exhibits significant nonlinearity. The LETKF and EWPF are both implemented in PDAF (Parallel Data Assimilation Framework, http://pdaf.awi.de), which ensures identical experimental conditions for both filters. To account for the nonlinearity, the assimilation skill of the two methods is assessed using different statistical metrics, such as the CRPS and rank histograms.
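One of the metrics named above, the CRPS, has a simple sample-based estimator for an ensemble forecast; a minimal sketch in the standard Gneiting-Raftery form (illustrative, not the PDAF implementation):

```python
def crps_ensemble(members, obs):
    """Sample-based continuous ranked probability score for an ensemble:
    CRPS = mean|x_i - y| - 0.5 * mean|x_i - x_j| over all member pairs.
    Lower is better; for a deterministic forecast it reduces to |x - y|."""
    m = len(members)
    term1 = sum(abs(x - obs) for x in members) / m
    term2 = sum(abs(a - b) for a in members for b in members) / (2.0 * m * m)
    return term1 - term2
```

The second term rewards ensemble spread, which is why the CRPS can rank a well-spread nonlinear filter above a sharper but overconfident one.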
NASA Astrophysics Data System (ADS)
Zhou, Bin; Hou, Ming-Zhe; Duan, Guang-Ren
2013-04-01
This article is concerned with L∞ and L2 semi-global stabilisation of continuous-time periodic linear systems with bounded controls. Two problems, namely L∞ semi-global stabilisation with controls having bounded magnitude and L2 semi-global stabilisation with controls having bounded energy, are solved based on solutions to a class of periodic Lyapunov differential equations (PLDEs) resulting from the problem of minimal energy control with guaranteed convergence rate. Under the assumption that the open-loop system is (asymptotically) null controllable with constrained controls, periodic feedback laws are established to solve the concerned problems. The proposed PLDE-based approaches possess the advantage that the resulting controllers are easy to implement, since the designers need only solve a linear differential equation. A numerical example is worked out to illustrate the effectiveness of the proposed approach.
NASA Astrophysics Data System (ADS)
Qin, Chunbin; Zhang, Huaguang; Luo, Yanhong
2014-05-01
In this paper, a novel theoretic formulation based on adaptive dynamic programming (ADP) is developed to solve online the optimal tracking problem of the continuous-time linear system with unknown dynamics. First, the original system dynamics and the reference trajectory dynamics are transformed into an augmented system. Then, under the same performance index with the original system dynamics, an augmented algebraic Riccati equation is derived. Furthermore, the solutions for the optimal control problem of the augmented system are proven to be equal to the standard solutions for the optimal tracking problem of the original system dynamics. Moreover, a new online algorithm based on the ADP technique is presented to solve the optimal tracking problem of the linear system with unknown system dynamics. Finally, simulation results are given to verify the effectiveness of the theoretic results.
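For intuition about the Riccati equation underlying such LQR-type tracking problems, the scalar case admits a closed form; a sketch under assumed scalar dynamics (this is the model-based solution, not the paper's ADP algorithm, which is precisely designed to avoid knowing a and b):

```python
import math

def scalar_care(a, b, q, r):
    """Positive solution of the scalar continuous-time algebraic Riccati
    equation 2*a*p - (b**2 / r)*p**2 + q = 0 for dx/dt = a*x + b*u with
    cost integral(q*x^2 + r*u^2) dt; the optimal gain is k = (b / r)*p."""
    p = r * (a + math.sqrt(a * a + q * b * b / r)) / (b * b)
    return p, b * p / r

# Even an unstable plant (a > 0) yields a stable closed loop a - b*k
p, k = scalar_care(a=1.0, b=1.0, q=1.0, r=1.0)
```

Substituting p back into the Riccati equation verifies the solution, and a - b*k < 0 confirms closed-loop stability.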
Granita; Bahar, A.
2015-03-09
This paper discusses the linear birth and death with immigration and emigration (BIDE) process and its stochastic differential equation (SDE) model. The forward Kolmogorov equation of the continuous-time Markov chain (CTMC), with a central-difference approximation, was used to find the Fokker-Planck equation corresponding to a diffusion process having the stochastic differential equation of the BIDE process. The exact solution and the mean and variance functions of the BIDE process were found.
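The BIDE process itself is straightforward to simulate as a CTMC; a minimal Gillespie-style sketch with illustrative rate names (λ per-capita birth, μ per-capita death, ν immigration, η per-capita emigration; not the paper's code):

```python
import random

def simulate_bide(n0, lam, mu, nu, eta, t_end, seed=1):
    """Gillespie simulation of a linear birth-death process with
    immigration and emigration: up-jump rate lam*n + nu,
    down-jump rate (mu + eta)*n. Returns the population at t_end."""
    rng = random.Random(seed)
    n, t = n0, 0.0
    while t < t_end:
        up, down = lam * n + nu, (mu + eta) * n
        total = up + down
        if total == 0:          # absorbing state (no immigration, n = 0)
            break
        t += rng.expovariate(total)
        n += 1 if rng.random() < up / total else -1
    return n
```

Averaging many such trajectories gives sample means and variances that can be checked against the exact moment functions derived in the paper.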
Using Quartile-Quartile Lines as Linear Models
ERIC Educational Resources Information Center
Gordon, Sheldon P.
2015-01-01
This article introduces the notion of the quartile-quartile line as an alternative to the regression line and the median-median line to produce a linear model based on a set of data. It is based on using the first and third quartiles of a set of (x, y) data. Dynamic spreadsheets are used as exploratory tools to compare the different approaches and…
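A sketch of the construction as described: pair the first and third quartiles of the x-data with those of the y-data and pass a line through the two resulting points. The quartile rule below is a common interpolation convention and may differ in detail from the article's spreadsheet formulas:

```python
def quartile(data, q):
    """Quartile by linear interpolation over the sorted sample
    (one of several textbook conventions; assumed here)."""
    s = sorted(data)
    pos = (len(s) - 1) * q
    lo = int(pos)
    frac = pos - lo
    return s[lo] if frac == 0 else s[lo] * (1 - frac) + s[lo + 1] * frac

def qq_line(xs, ys):
    """Line through (Q1(x), Q1(y)) and (Q3(x), Q3(y)); returns (slope, intercept)."""
    x1, x3 = quartile(xs, 0.25), quartile(xs, 0.75)
    y1, y3 = quartile(ys, 0.25), quartile(ys, 0.75)
    slope = (y3 - y1) / (x3 - x1)
    return slope, y1 - slope * x1

# Exactly linear data recovers the underlying line y = 2x + 1
m, b = qq_line([1, 2, 3, 4, 5], [3, 5, 7, 9, 11])
```

Because quartiles resist outliers, the resulting line is robust in the same spirit as the median-median line.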
Modelling and Resource Allocation of Linearly Restricted Operating Systems.
1979-12-01
services thus rendered are the products. Clearly, in a limited resource situation, how best to dispense the available resources to achieve some... Preliminary: Although a linear programming model for an economic problem had been developed as early as 1939 by the Russian mathematician L. Kantorovich... individual user programs) to achieve productions (computations). The purpose is then to devise a way (a plan) to allocate those available memory spaces
LINEAR MODELS FOR MANAGING SOURCES OF GROUNDWATER POLLUTION.
Gorelick, Steven M.; Gustafson, Sven-Ake
1984-01-01
Mathematical models for the problem of maintaining a specified groundwater quality while permitting solute waste disposal at various facilities distributed over space are discussed. The pollutants are assumed to be chemically inert and their concentrations in the groundwater are governed by linear equations for advection and diffusion. The aim is to determine a disposal policy which maximises the total amount of pollutants released during a fixed time T while meeting the condition that the concentration everywhere is below prescribed levels.
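Because the transport equations are linear, concentrations superpose and the disposal-policy problem becomes a linear program: maximize total released mass subject to linear concentration bounds at monitoring points. A toy two-source sketch solved by vertex enumeration (the unit-impact matrix is invented for illustration; a real application would use a proper LP solver):

```python
from itertools import combinations

def max_disposal(impact, limits):
    """Maximize q1 + q2 over disposal rates q1, q2 >= 0 subject to
    impact[i][0]*q1 + impact[i][1]*q2 <= limits[i] at each monitoring
    point, where impact[i][j] is the (assumed) unit concentration
    response. Solved by enumerating constraint-intersection vertices,
    which is adequate for two decision variables."""
    rows = list(zip(impact, limits))
    cands = [(0.0, 0.0)]
    for (a, b), c in rows:                       # axis intercepts
        if a > 0: cands.append((c / a, 0.0))
        if b > 0: cands.append((0.0, c / b))
    for ((a1, b1), c1), ((a2, b2), c2) in combinations(rows, 2):
        det = a1 * b2 - a2 * b1                  # pairwise intersections
        if abs(det) > 1e-12:
            cands.append(((c1 * b2 - c2 * b1) / det,
                          (a1 * c2 - a2 * c1) / det))
    feasible = [(q1, q2) for q1, q2 in cands
                if q1 >= -1e-9 and q2 >= -1e-9
                and all(a * q1 + b * q2 <= c + 1e-9 for (a, b), c in rows)]
    return max(q1 + q2 for q1, q2 in feasible)
```

With independent constraints on each source the optimum is simply the sum of the individual caps, as expected.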
NON-LINEAR MODELING OF THE RHIC INTERACTION REGIONS.
Tomas, R.; Fischer, W.; Jain, A.; Luo, Y.; Pilat, F.
2004-07-05
For RHIC's collision lattices the dominant sources of transverse non-linearities are located in the interaction regions. The field quality is available for most of the magnets in the interaction regions from the magnetic measurements, or from extrapolations of these measurements. We discuss the implementation of these measurements in the MADX models of the Blue and the Yellow rings and their impact on beam stability.
Feature Modeling in Underwater Environments Using Sparse Linear Combinations
2010-01-01
waters. Optics Express, 16(13), 2008. [9] J. Jaffe. Monte Carlo modeling of underwater-image formation: Validity of the linear and small-angle... turbid water, etc.), we would like to determine if these two images contain the same (or similar) object(s). One approach is as follows: 1. Detect... nearest neighbor methods on extracted feature descriptors. This methodology works well for clean, out-of-water images; however, when imaging underwater
Electromagnetic axial anomaly in a generalized linear sigma model
NASA Astrophysics Data System (ADS)
Fariborz, Amir H.; Jora, Renata
2017-06-01
We construct the electromagnetic anomaly effective term for a generalized linear sigma model with two chiral nonets, one with a quark-antiquark structure, the other one with a four-quark content. We compute in the leading order of this framework the decays into two photons of six pseudoscalars: π0(137), π0(1300), η(547), η(958), η(1295) and η(1760). Our results agree well with the available experimental data.
Credibility analysis of risk classes by generalized linear model
NASA Astrophysics Data System (ADS)
Erdemir, Ovgucan Karadag; Sucu, Meral
2016-06-01
In this paper the generalized linear model (GLM) and credibility theory, which are frequently used in non-life insurance pricing, are combined for credibility analysis. Using the full credibility standard, the GLM is associated with the limited fluctuation credibility approach. Comparison criteria such as asymptotic variance and credibility probability are used to analyze the credibility of risk classes. An application is performed using one-year claim frequency data of a Turkish insurance company, and the resulting credible risk classes are interpreted.
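The limited-fluctuation machinery referenced above reduces to short formulas; a sketch for Poisson claim frequency (the 90% probability / 5% tolerance standard shown is a common textbook convention, not necessarily the paper's choice):

```python
import math

def full_credibility_standard(p=0.90, k=0.05, z=1.645):
    """Expected claim count needed for full credibility of a Poisson
    claim frequency: the observed frequency lies within 100k% of its
    mean with probability p. z is the standard normal quantile for
    (1 + p)/2; 1.645 corresponds to p = 0.90."""
    return (z / k) ** 2

def credibility_factor(n, n_full):
    """Square-root partial-credibility rule Z = min(1, sqrt(n / n_full))."""
    return min(1.0, math.sqrt(n / n_full))
```

With the default standard, about 1082 expected claims earn full credibility, and a class with a quarter of that volume gets weight Z = 0.5 against the GLM-based collective estimate.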
Continuous-time model of structural balance.
Marvel, Seth A; Kleinberg, Jon; Kleinberg, Robert D; Strogatz, Steven H
2011-02-01
It is not uncommon for certain social networks to divide into two opposing camps in response to stress. This happens, for example, in networks of political parties during winner-takes-all elections, in networks of companies competing to establish technical standards, and in networks of nations faced with mounting threats of war. A simple model for these two-sided separations is the dynamical system dX/dt = X², where X is a matrix of the friendliness or unfriendliness between pairs of nodes in the network. Previous simulations suggested that only two types of behavior were possible for this system: Either all relationships become friendly or two hostile factions emerge. Here we prove that for generic initial conditions, these are indeed the only possible outcomes. Our analysis yields a closed-form expression for faction membership as a function of the initial conditions and implies that the initial amount of friendliness in large social networks (started from random initial conditions) determines whether they will end up in intractable conflict or global harmony.
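The matrix dynamical system dX/dt = X² is easy to integrate numerically for small networks; a forward-Euler sketch (step size, horizon, and the initial matrices are illustrative):

```python
def evolve(X, dt=1e-3, steps=2000):
    """Forward-Euler integration of dX/dt = X^2 for a symmetric matrix X
    of pairwise friendliness; solutions blow up in finite time, so the
    integration stops well before the singularity."""
    n = len(X)
    for _ in range(steps):
        X2 = [[sum(X[i][k] * X[k][j] for k in range(n)) for j in range(n)]
              for i in range(n)]
        X = [[X[i][j] + dt * X2[i][j] for j in range(n)] for i in range(n)]
    return X

friendly = evolve([[0.1, 0.1], [0.1, 0.1]])    # all ties stay positive
hostile = evolve([[0.1, -0.1], [-0.1, 0.1]])   # the negative tie deepens
```

The two toy initial conditions illustrate the dichotomy in the abstract: an all-positive matrix grows uniformly friendly, while a sign-split matrix drives the cross-tie ever more negative.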
A semivarying joint model for longitudinal binary and continuous outcomes
Kürüm, Esra; Hughes, John; Li, Runze
2016-01-01
Semivarying models extend varying coefficient models by allowing some regression coefficients to be constant with respect to the underlying covariate(s). In this paper we develop a semivarying joint modelling framework for estimating the time-varying association between two intensively measured longitudinal responses: a continuous one and a binary one. To overcome the major challenge of jointly modelling these responses, namely, the lack of a natural multivariate distribution, we introduce a Gaussian latent variable underlying the binary response. Then we decompose the model into two components: a marginal model for the continuous response, and a conditional model for the binary response given the continuous response. We develop a two-stage estimation procedure and discuss the asymptotic normality of the resulting estimators. We assess the finite-sample performance of our procedure using a simulation study, and we illustrate our method by analyzing binary and continuous responses from the Women’s Interagency HIV Study. PMID:27667895
Validating a quasi-linear transport model versus nonlinear simulations
NASA Astrophysics Data System (ADS)
Casati, A.; Bourdelle, C.; Garbet, X.; Imbeaux, F.; Candy, J.; Clairet, F.; Dif-Pradalier, G.; Falchetto, G.; Gerbaud, T.; Grandgirard, V.; Gürcan, Ö. D.; Hennequin, P.; Kinsey, J.; Ottaviani, M.; Sabot, R.; Sarazin, Y.; Vermare, L.; Waltz, R. E.
2009-08-01
In order to gain reliable predictions on turbulent fluxes in tokamak plasmas, physics based transport models are required. Nonlinear gyrokinetic electromagnetic simulations for all species are still too costly in terms of computing time. On the other hand, interestingly, the quasi-linear approximation seems to retain the relevant physics for fairly reproducing both experimental results and nonlinear gyrokinetic simulations. Quasi-linear fluxes are made of two parts: (1) the quasi-linear response of the transported quantities and (2) the saturated fluctuating electrostatic potential. The first one is shown to follow well nonlinear numerical predictions; the second one is based on both nonlinear simulations and turbulence measurements. The resulting quasi-linear fluxes computed by QuaLiKiz (Bourdelle et al 2007 Phys. Plasmas 14 112501) are shown to agree with the nonlinear predictions when varying various dimensionless parameters, such as the temperature gradients, the ion to electron temperature ratio, the dimensionless collisionality and the effective charge, across regimes ranging from ion temperature gradient to trapped electron mode turbulence.
CANFIS: A non-linear regression procedure to produce statistical air-quality forecast models
Burrows, W.R.; Montpetit, J.; Pudykiewicz, J.
1997-12-31
Statistical models for forecasts of environmental variables can provide a good trade-off between significance and precision in return for substantial saving of computer execution time. Recent non-linear regression techniques give significantly increased accuracy compared to traditional linear regression methods. Two are Classification and Regression Trees (CART) and the Neuro-Fuzzy Inference System (NFIS). Both can model predictand distributions, including the tails, with much better accuracy than linear regression. Given a learning data set of matched predictands and predictors, CART regression produces a non-linear, tree-based, piecewise-continuous model of the predictand data. Its variance-minimizing procedure optimizes the task of predictor selection, often greatly reducing initial data dimensionality. NFIS reduces dimensionality by a procedure known as subtractive clustering, but it does not of itself eliminate predictors. Overlapping coverage in predictor space is enhanced by NFIS with a Gaussian membership function for each cluster component. Coefficients for a continuous response model based on the fuzzified cluster centers are obtained by a least-squares estimation procedure. CANFIS is a two-stage data-modeling technique that combines the strength of CART to optimize the process of selecting predictors from a large pool of potential predictors with the modeling strength of NFIS. A CANFIS model requires negligible computer time to run. CANFIS models for ground-level O3, particulates, and other pollutants will be produced for each of about 100 Canadian sites. The air-quality models will run twice daily using a small number of predictors isolated from a large pool of upstream and local Lagrangian potential predictors.
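The NFIS side of the scheme rests on Gaussian cluster memberships combined with least-squares consequents; a minimal zero-order sketch in one dimension (the cluster centers, width and consequent constants are invented for illustration, and a real NFIS would fit first-order consequents by least squares):

```python
import math

def memberships(x, centers, sigma):
    """Gaussian membership of x in each cluster (as produced by
    subtractive clustering), normalized to sum to one."""
    w = [math.exp(-((x - c) ** 2) / (2.0 * sigma ** 2)) for c in centers]
    s = sum(w)
    return [wi / s for wi in w]

def nfis_predict(x, centers, sigma, consequents):
    """Zero-order Sugeno-style output: membership-weighted average of
    per-cluster constants."""
    return sum(m * c for m, c in
               zip(memberships(x, centers, sigma), consequents))
```

The overlapping Gaussian memberships give the smooth, piecewise blending in predictor space that the abstract contrasts with CART's hard piecewise-continuous splits.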
[Linear mixed modeling of branch biomass for Korean pine plantation].
Dong, Li-Hu; Li, Feng-Ri; Jia, Wei-Wei
2013-12-01
Based on the measurement of 3643 branch biomass samples from 60 Korean pine (Pinus koraiensis) trees at Mengjiagang Forest Farm, Heilongjiang Province, all-subsets regression techniques were used to develop the branch biomass models (branch, foliage, and total biomass models). The optimal base model of branch biomass was developed as ln w = k1 + k2 ln Lb + k3 ln Db. Linear mixed models were then developed based on PROC MIXED of SAS 9.3 software and evaluated with AIC, BIC, log-likelihood and likelihood ratio tests. The results showed that the foliage and total biomass models with parameters k1, k2 and k3 as mixed effects showed the best performance, while the branch biomass model with parameters k5 and k2 as mixed effects showed the best performance. Finally, we evaluated the optimal base model and the mixed model of branch biomass. Model validation confirmed that the mixed model was better than the optimal base model. The mixed model with random parameters not only provided more accurate and precise predictions, but also captured individual differences through the variance-covariance structure.
Sahin, Rubina; Tapadia, Kavita
2015-01-01
The three widely used isotherms, Langmuir, Freundlich and Temkin, were examined in an experiment using fluoride (F⁻) ion adsorption on a geo-material (limonite) at four different temperatures, by linear and non-linear models. Comparisons of linear and non-linear regression models were made to select the optimum isotherm for the experimental results. The coefficient of determination, r², was used to select the best theoretical isotherm. The four Langmuir linear equations (1, 2, 3 and 4) are discussed. Langmuir isotherm parameters obtained from the four Langmuir linear equations differed under the linear model but were the same under the non-linear model. Langmuir-2, one of the linear forms, had the highest coefficient of determination (r² = 0.99) compared to the other Langmuir linear equations (1, 3 and 4) in linear form, whereas for the non-linear model Langmuir-4 fitted best among all the isotherms, again with the highest coefficient of determination (r² = 0.99). The results showed that the non-linear model may be a better way to obtain the parameters. In the present work, the thermodynamic parameters show that the adsorption of fluoride onto limonite is both spontaneous (ΔG < 0) and endothermic (ΔH > 0). Scanning electron microscope and X-ray diffraction images also confirm the adsorption of the F⁻ ion onto limonite. The isotherm and kinetic study reveals that limonite can be used as an adsorbent for fluoride removal. In the future, a large-scale fluoride-removal technology could be developed using limonite, which is cost-effective, eco-friendly and easily available in the study area.
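The linearized "Langmuir-2" fit referred to above is ordinary least squares on reciprocal coordinates; a sketch using synthetic, exactly-Langmuir data so the recovery can be verified (the study itself fitted measured fluoride-on-limonite data):

```python
def langmuir2_fit(ce, qe):
    """Linearized 'Langmuir-2' form: 1/qe = (1/(qmax*KL))*(1/Ce) + 1/qmax,
    where Ce is equilibrium concentration and qe the adsorbed amount.
    Ordinary least squares on (1/Ce, 1/qe) recovers qmax and KL."""
    x = [1.0 / c for c in ce]
    y = [1.0 / q for q in qe]
    n = len(x)
    xbar, ybar = sum(x) / n, sum(y) / n
    slope = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y)) / \
            sum((xi - xbar) ** 2 for xi in x)
    intercept = ybar - slope * xbar
    qmax = 1.0 / intercept
    kl = intercept / slope
    return qmax, kl

# Synthetic data from qmax = 2, KL = 0.5 is recovered exactly
ce = [1.0, 2.0, 4.0, 8.0]
qe = [2 * 0.5 * c / (1 + 0.5 * c) for c in ce]
qmax, kl = langmuir2_fit(ce, qe)
```

With noisy real data the four linearizations weight errors differently, which is exactly why the abstract finds that the linear forms give different parameter estimates while a direct non-linear fit does not.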
On the Development of Parameterized Linear Analytical Longitudinal Airship Models
NASA Technical Reports Server (NTRS)
Kulczycki, Eric A.; Johnson, Joseph R.; Bayard, David S.; Elfes, Alberto; Quadrelli, Marco B.
2008-01-01
In order to explore Titan, a moon of Saturn, airships must be able to traverse the atmosphere autonomously. To achieve this, an accurate model and accurate control of the vehicle must be developed so that it is understood how the airship will react to specific sets of control inputs. This paper explains how longitudinal aircraft stability derivatives can be used with airship parameters to create a linear model of the airship solely by combining geometric and aerodynamic airship data. This method does not require system identification of the vehicle. All of the required data can be derived from computational fluid dynamics and wind tunnel testing. This alternate method of developing dynamic airship models will reduce time and cost. Results are compared to other stable airship dynamic models to validate the methods. Future work will address a lateral airship model using the same methods.
Daily runoff prediction using the linear and non-linear models.
Sharifi, Alireza; Dinpashoh, Yagob; Mirabbasi, Rasoul
2017-08-01
Runoff prediction, as a nonlinear and complex process, is essential for designing canals, water management and planning, flood control and predicting soil erosion. There are a number of techniques for runoff prediction based on the hydro-meteorological and geomorphological variables. In recent years, several soft computing techniques have been developed to predict runoff. There are some challenging issues in runoff modeling including the selection of appropriate inputs and determination of the optimum length of training and testing data sets. In this study, the gamma test (GT), forward selection and factor analysis were used to determine the best input combination. In addition, GT was applied to determine the optimum length of training and testing data sets. Results showed the input combination based on the GT method with five variables has better performance than other combinations. For modeling, among four techniques: artificial neural networks, local linear regression, an adaptive neural-based fuzzy inference system and support vector machine (SVM), results indicated the performance of the SVM model is better than other techniques for runoff prediction in the Amameh watershed.
Modelling human balance using switched systems with linear feedback control
Kowalczyk, Piotr; Glendinning, Paul; Brown, Martin; Medrano-Cerda, Gustavo; Dallali, Houman; Shapiro, Jonathan
2012-01-01
We are interested in understanding the mechanisms behind and the character of the sway motion of healthy human subjects during quiet standing. We assume that a human body can be modelled as a single-link inverted pendulum, and the balance is achieved using linear feedback control. Using these assumptions, we derive a switched model which we then investigate. Stable periodic motions (limit cycles) about an upright position are found. The existence of these limit cycles is studied as a function of system parameters. The exploration of the parameter space leads to the detection of multi-stability and homoclinic bifurcations. PMID:21697168
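The switched feedback structure described above can be illustrated with a minimal numerical sketch (not the authors' model or parameters): a linearized single-link inverted pendulum whose PD controller is inactive inside a small dead zone, an assumption standing in for the paper's switching condition.

```python
import numpy as np

# Illustrative parameters (assumed, not from the paper)
g, L, m = 9.81, 1.0, 70.0      # gravity, pendulum length, body mass
kp, kd = 900.0, 250.0          # PD feedback gains
dead_zone = 0.001              # rad; controller inactive below this angle

def simulate(theta0, omega0, dt=1e-3, steps=20000):
    theta, omega = theta0, omega0
    traj = []
    for _ in range(steps):
        # switched linear feedback: torque applied only outside the dead zone
        u = -(kp * theta + kd * omega) if abs(theta) > dead_zone else 0.0
        alpha = (g / L) * theta + u / (m * L**2)   # linearized pendulum dynamics
        omega += alpha * dt
        theta += omega * dt
        traj.append(theta)
    return np.array(traj)

traj = simulate(0.01, 0.0)
print("final angle (rad):", traj[-1])
```

With the controller always active the upright position is simply asymptotically stable; it is the switching (here, the dead zone) that permits bounded sway-like motion about the equilibrium.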
NASA Technical Reports Server (NTRS)
Rosen, I. G.; Wang, C.
1992-01-01
The convergence of solutions to the discrete- or sampled-time linear quadratic regulator problem and associated Riccati equation for infinite-dimensional systems to the solutions to the corresponding continuous time problem and equation, as the length of the sampling interval (the sampling rate) tends toward zero (infinity), is established. Both the finite- and infinite-time horizon problems are studied. In the finite-time horizon case, strong continuity of the operators that define the control system and performance index, together with a stability and consistency condition on the sampling scheme, are required. For the infinite-time horizon problem, in addition, the sampled systems must be stabilizable and detectable, uniformly with respect to the sampling rate. Classes of systems for which this condition can be verified are discussed. Results of numerical studies involving the control of a heat/diffusion equation, a hereditary or delay system, and a flexible beam are presented and discussed.
NASA Technical Reports Server (NTRS)
Rosen, I. G.; Wang, C.
1990-01-01
The convergence of solutions to the discrete or sampled time linear quadratic regulator problem and associated Riccati equation for infinite dimensional systems to the solutions to the corresponding continuous time problem and equation, as the length of the sampling interval (the sampling rate) tends toward zero (infinity), is established. Both the finite and infinite time horizon problems are studied. In the finite time horizon case, strong continuity of the operators which define the control system and performance index, together with a stability and consistency condition on the sampling scheme, are required. For the infinite time horizon problem, in addition, the sampled systems must be stabilizable and detectable, uniformly with respect to the sampling rate. Classes of systems for which this condition can be verified are discussed. Results of numerical studies involving the control of a heat/diffusion equation, a hereditary or delay system, and a flexible beam are presented and discussed.
Non-linear model for compression tests on articular cartilage.
Grillo, Alfio; Guaily, Amr; Giverso, Chiara; Federico, Salvatore
2015-07-01
Hydrated soft tissues, such as articular cartilage, are often modeled as biphasic systems with individually incompressible solid and fluid phases, and biphasic models are employed to fit experimental data in order to determine the mechanical and hydraulic properties of the tissues. Two of the most common experimental setups are confined and unconfined compression. Analytical solutions exist for the unconfined case with the linear, isotropic, homogeneous model of articular cartilage, and for the confined case with the non-linear, isotropic, homogeneous model. The aim of this contribution is to provide an easily implementable numerical tool to determine a solution to the governing differential equations of (homogeneous and isotropic) unconfined and (inhomogeneous and isotropic) confined compression under large deformations. The large-deformation governing equations are reduced to equivalent diffusive equations, which are then solved by means of finite difference (FD) methods. The solution strategy proposed here could be used to generate benchmark tests for validating complex user-defined material models within finite element (FE) implementations, and for determining the tissue's mechanical and hydraulic properties from experimental data.
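The abstract's solution strategy, reducing the governing equations to diffusive equations and solving them with finite differences, can be sketched with a minimal explicit scheme. The 1D linear diffusion equation, diffusivity, grid, and drained boundary conditions below are illustrative stand-ins, not the paper's large-deformation equations.

```python
import numpy as np

# Explicit FD scheme for u_t = D * u_xx, a stand-in for the
# consolidation-type diffusive equation of confined compression.
# D, dx, dt are assumed; r = D*dt/dx^2 <= 0.5 is required for stability.
def diffuse_fd(u0, D=1.0, dx=0.01, dt=4e-5, steps=500):
    u = u0.copy()
    r = D * dt / dx**2
    assert r <= 0.5, "explicit scheme unstable"
    for _ in range(steps):
        u[1:-1] = u[1:-1] + r * (u[2:] - 2.0 * u[1:-1] + u[:-2])
        u[0], u[-1] = 0.0, 0.0        # drained (zero) boundaries
    return u

x = np.linspace(0.0, 1.0, 101)
u0 = np.sin(np.pi * x)                # initial profile with known decay rate
u = diffuse_fd(u0)
t = 500 * 4e-5
# analytic solution of this linear test case: exp(-pi^2 D t) * sin(pi x)
print("numerical peak:", u.max(), "analytic:", np.exp(-np.pi**2 * t))
```

A scheme like this, run against the analytic linear solution, is exactly the kind of benchmark the authors propose for validating user-defined FE material models.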
Application of linear gauss pseudospectral method in model predictive control
NASA Astrophysics Data System (ADS)
Yang, Liang; Zhou, Hao; Chen, Wanchun
2014-03-01
This paper presents a model predictive control (MPC) method aimed at solving the nonlinear optimal control problem with hard terminal constraints and a quadratic performance index. The method combines the philosophies of nonlinear approximation model predictive control, linear quadratic optimal control, and the Gauss pseudospectral method. The current control is obtained by successively solving linear algebraic equations derived from the original problem via linearization and the Gauss pseudospectral method. The method is computationally efficient, since it does not need to solve a nonlinear programming problem, and accurate even with few discretization points; it is therefore suitable for on-board applications. A design of terminal impact with a specified direction is carried out to evaluate the performance of this method. An augmented PN guidance law in the three-dimensional coordinate system is applied to produce the initial guess, and various cases for targets with straight-line movement are employed to demonstrate the applicability at different impact angles. Moreover, performance of the proposed method is also assessed by comparison with other guidance laws. Simulation results indicate that this method is not only computationally efficient and accurate, but also applicable in the framework of guidance design.
Wavefront Sensing for WFIRST with a Linear Optical Model
NASA Technical Reports Server (NTRS)
Jurling, Alden S.; Content, David A.
2012-01-01
In this paper we develop methods to use a linear optical model to capture the field dependence of wavefront aberrations in a nonlinear optimization-based phase retrieval algorithm for image-based wavefront sensing. The linear optical model is generated from a ray trace model of the system and allows the system state to be described in terms of mechanical alignment parameters rather than wavefront coefficients. This approach allows joint optimization over images taken at different field points and does not require separate convergence of phase retrieval at individual field points. Because the algorithm exploits field diversity, multiple defocused images per field point are not required for robustness. Furthermore, because it is possible to simultaneously fit images of many stars over the field, it is not necessary to use a fixed defocus to achieve adequate signal-to-noise ratio despite having images with high dynamic range. This allows high performance wavefront sensing using in-focus science data. We applied this technique in a simulation model based on the Wide Field Infrared Survey Telescope (WFIRST) Intermediate Design Reference Mission (IDRM) imager using a linear optical model with 25 field points. We demonstrate sub-thousandth-wave wavefront sensing accuracy in the presence of noise and moderate undersampling for both monochromatic and polychromatic images using 25 high-SNR target stars. Using these high-quality wavefront sensing results, we are able to generate upsampled point-spread functions (PSFs) and use them to determine PSF ellipticity to high accuracy in order to reduce the systematic impact of aberrations on the accuracy of galactic ellipticity determination for weak-lensing science.
Wavefront sensing for WFIRST with a linear optical model
NASA Astrophysics Data System (ADS)
Jurling, Alden S.; Content, David A.
2012-09-01
In this paper we develop methods to use a linear optical model to capture the field dependence of wavefront aberrations in a nonlinear optimization-based phase retrieval algorithm for image-based wavefront sensing. The linear optical model is generated from a ray trace model of the system and allows the system state to be described in terms of mechanical alignment parameters rather than wavefront coefficients. This approach allows joint optimization over images taken at different field points and does not require separate convergence of phase retrieval at individual field points. Because the algorithm exploits field diversity, multiple defocused images per field point are not required for robustness. Furthermore, because it is possible to simultaneously fit images of many stars over the field, it is not necessary to use a fixed defocus to achieve adequate signal-to-noise ratio despite having images with high dynamic range. This allows high performance wavefront sensing using in-focus science data. We applied this technique in a simulation model based on the Wide Field Infrared Survey Telescope (WFIRST) Intermediate Design Reference Mission (IDRM) imager using a linear optical model with 25 field points. We demonstrate sub-thousandth-wave wavefront sensing accuracy in the presence of noise and moderate undersampling for both monochromatic and polychromatic images using 25 high-SNR target stars. Using these high-quality wavefront sensing results, we are able to generate upsampled point-spread functions (PSFs) and use them to determine PSF ellipticity to high accuracy in order to reduce the systematic impact of aberrations on the accuracy of galactic ellipticity determination for weak-lensing science.
Optimal ordering policies for continuous review perishable inventory models.
Weiss, H J
1980-01-01
This paper extends the notions of perishable inventory models to the realm of continuous review inventory systems. The traditional perishable inventory costs of ordering, holding, shortage or penalty, disposal and revenue are incorporated into the continuous review framework. The type of policy that is optimal with respect to long run average expected cost is presented for both the backlogging and lost-sales models. In addition, for the lost-sales model the cost function is presented and analyzed.
Models of protein linear molecular motors for dynamic nanodevices.
Fulga, Florin; Nicolau, Dan V; Nicolau, Dan V
2009-02-01
Protein molecular motors are natural nano-machines that convert the chemical energy from the hydrolysis of adenosine triphosphate into mechanical work. These efficient machines are central to many biological processes, including cellular motion, muscle contraction and cell division. The remarkable energetic efficiency of the protein molecular motors coupled with their nano-scale has prompted an increasing number of studies focusing on their integration in hybrid micro- and nanodevices, in particular using linear molecular motors. The translation of these tentative devices into technologically and economically feasible ones requires an engineering, design-orientated approach based on a structured formalism, preferably mathematical. This contribution reviews the present state of the art in the modelling of protein linear molecular motors, as relevant to the future design-orientated development of hybrid dynamic nanodevices.
Repopulation Kinetics and the Linear-Quadratic Model
NASA Astrophysics Data System (ADS)
O'Rourke, S. F. C.; McAneney, H.; Starrett, C.; O'Sullivan, J. M.
2009-08-01
The standard Linear-Quadratic (LQ) survival model for radiotherapy is used to investigate different schedules of radiation treatment planning for advanced head and neck cancer. We explore how these treatment protocols may be affected by different tumour repopulation kinetics between treatments. The laws for tumour cell repopulation include the logistic and Gompertz models; this extends the work of Wheldon et al. [1], which was concerned with the case of exponential repopulation between treatments. Treatment schedules investigated include standardized and accelerated fractionation. Calculations based on the present work show that, even with growth laws scaled to ensure that the repopulation kinetics for advanced head and neck cancer are comparable, considerable variation in the survival fraction, up to orders of magnitude, emerged. Calculations show that application of the Gompertz model results in a significantly poorer prognosis for tumour eradication. Gaps in treatment also highlight the differences in the LQ model when the effect of repopulation kinetics is included.
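A minimal sketch of the modelling idea: LQ cell kill per fraction, with different repopulation laws applied in the inter-fraction gaps. The α/β values, growth rate, and clonogen number are illustrative assumptions, not the paper's fitted parameters, and the discrete Euler steps for the logistic and Gompertz laws are a further simplification.

```python
import numpy as np

alpha, beta = 0.3, 0.03          # Gy^-1, Gy^-2 (typical values, assumed)
d, n_frac, gap = 2.0, 30, 1.0    # 2 Gy/fraction, 30 fractions, 1-day gaps
K = 1e9                          # initial clonogen number / carrying capacity (assumed)

def surviving_fraction(growth):
    """Apply n_frac fractions; `growth` maps N -> N after one inter-fraction gap."""
    N = K
    for _ in range(n_frac):
        N *= np.exp(-(alpha * d + beta * d**2))   # LQ cell kill per fraction
        N = growth(N)                             # repopulation during the gap
    return N / K

exp_growth = lambda N: N * np.exp(0.05 * gap)               # exponential law
logistic   = lambda N: N + 0.05 * N * (1 - N / K) * gap     # logistic (Euler step)
gompertz   = lambda N: N + 0.05 * N * np.log(K / N) * gap   # Gompertz (Euler step)

for name, g in [("exponential", exp_growth), ("logistic", logistic), ("gompertz", gompertz)]:
    print(name, surviving_fraction(g))
```

Consistent with the abstract's conclusion, the Gompertz law repopulates fastest when the cell number is far below capacity, so it leaves the largest surviving fraction and the poorest prognosis for eradication.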
A Linear Stratified Ocean Model of the Equatorial Undercurrent
NASA Astrophysics Data System (ADS)
McCreary, J. P.
1981-01-01
A linear stratified ocean model is used to study the wind-driven response of the equatorial ocean. The model is an extension of the Lighthill (1969) model that allows the diffusion of heat and momentum into the deeper ocean, and so can develop non-trivial steady solutions. To retain the ability to expand solutions into sums of vertical normal modes, mixing coefficients must be inversely proportional to the square of the background Vaisala frequency. The model is also similar to the earlier homogeneous ocean model of Stommel (1960). He extended Ekman dynamics to the equator by allowing his model to generate a barotropic pressure field. The present model differs in that the presence of stratification allows the generation of a baroclinic pressure field as well. The most important result of this paper is that linear theory can produce a realistic equatorial current structure. The model Undercurrent has a reasonable width and depth scale. There is westward flow both above and below the Undercurrent. The meridional circulation conforms to the 'classical' picture suggested by Cromwell (1953). Unlike the Stommel solution, the response here is less sensitive to variations of parameters. Ocean boundaries are not necessary for the existence of the Undercurrent but are necessary for the existence of the deeper Equatorial Intermediate Current. The radiation of equatorially trapped Rossby and Kelvin waves is essential to the development of a realistic Undercurrent. Because the system supports the existence of these waves, low-order vertical modes can very nearly adjust to Sverdrup balance (defined below), which in a bounded ocean and for winds without curl is a state of rest. As a result, higher-order vertical modes are much more visible in the total solution. This property accounts for the surface trapping and narrow width scale of the equatorial currents. The high-order modes tend to be in Yoshida balance (defined below) and generate the characteristic meridional circulation
Generalized linear mixed model for segregation distortion analysis.
Zhan, Haimao; Xu, Shizhong
2011-11-11
Segregation distortion is a phenomenon that the observed genotypic frequencies of a locus fall outside the expected Mendelian segregation ratio. The main cause of segregation distortion is viability selection on linked marker loci. These viability selection loci can be mapped using genome-wide marker information. We developed a generalized linear mixed model (GLMM) under the liability model to jointly map all viability selection loci of the genome. Using a hierarchical generalized linear mixed model, we can handle the number of loci several times larger than the sample size. We used a dataset from an F(2) mouse family derived from the cross of two inbred lines to test the model and detected a major segregation distortion locus contributing 75% of the variance of the underlying liability. Replicated simulation experiments confirm that the power of viability locus detection is high and the false positive rate is low. Not only can the method be used to detect segregation distortion loci, but also used for mapping quantitative trait loci of disease traits using case only data in humans and selected populations in plants and animals.
Finite Population Correction for Two-Level Hierarchical Linear Models.
Lai, Mark H C; Kwok, Oi-Man; Hsiao, Yu-Yu; Cao, Qian
2017-03-16
The research literature has paid little attention to the issue of finite population at a higher level in hierarchical linear modeling. In this article, we propose a method to obtain finite-population-adjusted standard errors of Level-1 and Level-2 fixed effects in 2-level hierarchical linear models. When the finite population at Level-2 is incorrectly assumed as being infinite, the standard errors of the fixed effects are overestimated, resulting in lower statistical power and wider confidence intervals. The impact of ignoring finite population correction is illustrated by using both a real data example and a simulation study with a random intercept model and a random slope model. Simulation results indicated that the bias in the unadjusted fixed-effect standard errors was substantial when the Level-2 sample size exceeded 10% of the Level-2 population size; the bias increased with a larger intraclass correlation, a larger number of clusters, and a larger average cluster size. We also found that the proposed adjustment produced unbiased standard errors, particularly when the number of clusters was at least 30 and the average cluster size was at least 10. We encourage researchers to consider the characteristics of the target population for their studies and adjust for finite population when appropriate. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
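The core of the adjustment can be sketched with the classical finite-population correction factor, which shrinks a standard error by sqrt(1 - n/N) when n of N finite-population units are sampled. The article's adjustment for Level-1 and Level-2 fixed effects is more elaborate, so treat this as a sketch of the idea only, with made-up numbers.

```python
import math

def fpc_adjusted_se(se, n_sampled, n_population):
    """Classical finite-population correction of a model-based standard error."""
    fpc = math.sqrt(1.0 - n_sampled / n_population)
    return se * fpc

se = 0.20                              # unadjusted fixed-effect SE (assumed)
print(fpc_adjusted_se(se, 30, 100))    # 30% of Level-2 clusters sampled: clear shrinkage
print(fpc_adjusted_se(se, 30, 3000))   # near-infinite population: barely changes
```

This mirrors the article's finding: the unadjusted SE is noticeably too large once the sample exceeds roughly 10% of the Level-2 population, and the correction matters little otherwise.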
Generalized linear mixed model for segregation distortion analysis
2011-01-01
Background Segregation distortion is a phenomenon that the observed genotypic frequencies of a locus fall outside the expected Mendelian segregation ratio. The main cause of segregation distortion is viability selection on linked marker loci. These viability selection loci can be mapped using genome-wide marker information. Results We developed a generalized linear mixed model (GLMM) under the liability model to jointly map all viability selection loci of the genome. Using a hierarchical generalized linear mixed model, we can handle the number of loci several times larger than the sample size. We used a dataset from an F2 mouse family derived from the cross of two inbred lines to test the model and detected a major segregation distortion locus contributing 75% of the variance of the underlying liability. Replicated simulation experiments confirm that the power of viability locus detection is high and the false positive rate is low. Conclusions Not only can the method be used to detect segregation distortion loci, but also used for mapping quantitative trait loci of disease traits using case only data in humans and selected populations in plants and animals. PMID:22078575
THE SEPARATION OF URANIUM ISOTOPES BY GASEOUS DIFFUSION: A LINEAR PROGRAMMING MODEL,
Keywords: uranium isotope separation; gaseous diffusion separation; linear programming; mathematical models; gas flow; nuclear reactors; operations research
Linear Modeling and Evaluation of Controls on Flow Response in Western Post-Fire Watersheds
NASA Astrophysics Data System (ADS)
Saxe, S.; Hogue, T. S.; Hay, L.
2015-12-01
This research investigates the impact of wildfires on watershed flow regimes throughout the western United States, specifically focusing on evaluation of fire events within specified subregions and determination of the impact of climate and geophysical variables on post-fire flow response. Fire events were collected through federal and state-level databases, and streamflow data were collected from U.S. Geological Survey stream gages. 263 watersheds were identified with at least 10 years of continuous pre-fire daily streamflow records and 5 years of continuous post-fire daily flow records. For each watershed, percent changes in runoff ratio (RO), annual seven-day low flows (7Q2), and annual seven-day high flows (7Q10) were calculated from pre- to post-fire. Numerous independent variables were identified for each watershed and fire event, including topographic, land cover, climate, burn severity, and soils data. The national watersheds were divided into five regions through K-means clustering, and a lasso linear regression model, applying the leave-one-out calibration method, was calculated for each region. Nash-Sutcliffe Efficiency (NSE) was used to determine the accuracy of the resulting models. The regions encompassing the United States along and west of the Rocky Mountains, excluding the coastal watersheds, produced the most accurate linear models. The Pacific coast region models produced poor and inconsistent results, indicating that those regions need to be further subdivided. Presently, the runoff ratio (RO) and high-flow (HF) response variables appear to be more easily modeled than low flows (LF). Results of linear regression modeling showed varying importance of watershed and fire event variables, with conflicting correlations between land cover types and soil types by region. The addition of further independent variables and refinement of current variables based on correlation indicators is ongoing and should allow for more accurate linear regression modeling.
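A sketch of the described workflow, lasso regression validated with leave-one-out and scored by Nash-Sutcliffe Efficiency, using scikit-learn on synthetic data. The predictors and response below are stand-ins for the watershed and fire-event variables, not the study's dataset.

```python
import numpy as np
from sklearn.linear_model import LassoCV
from sklearn.model_selection import LeaveOneOut, cross_val_predict

# Synthetic stand-ins: 60 watersheds, 8 candidate predictors
rng = np.random.default_rng(0)
X = rng.normal(size=(60, 8))
y = 2.0 * X[:, 0] - 1.5 * X[:, 3] + rng.normal(scale=0.5, size=60)  # e.g. % change in RO

model = LassoCV(cv=5)                   # lasso with internally tuned penalty
y_hat = cross_val_predict(model, X, y, cv=LeaveOneOut())  # leave-one-out predictions

def nse(obs, sim):
    """Nash-Sutcliffe Efficiency: 1 - SSE / variance of observations."""
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

print("LOO NSE:", nse(y, y_hat))
```

The lasso's coefficient shrinkage is also what supports the study's variable-importance reading: predictors whose coefficients are driven to zero drop out of the regional model.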
ERIC Educational Resources Information Center
Cheong, Yuk Fai; Kamata, Akihito
2013-01-01
In this article, we discuss and illustrate two centering and anchoring options available in differential item functioning (DIF) detection studies based on the hierarchical generalized linear and generalized linear mixed modeling frameworks. We compared and contrasted the assumptions of the two options, and examined the properties of their DIF…
Optimizing the Teaching-Learning Process Through a Linear Programming Model--Stage Increment Model.
ERIC Educational Resources Information Center
Belgard, Maria R.; Min, Leo Yoon-Gee
An operations research method to optimize the teaching-learning process is introduced in this paper. In particular, a linear programming model is proposed which, unlike dynamic or control theory models, allows the computer to react to the responses of a learner in seconds or less. To satisfy the assumptions of linearity, the seemingly complicated…
Model predictive control of a combined heat and power plant using local linear models
Kikstra, J.F.; Roffel, B.; Schoen, P.
1998-10-01
Model predictive control has been applied to control of a combined heat and power plant. One of the main features of this plant is that it exhibits nonlinear process behavior due to large throughput swings. In this application, the operating window of the plant has been divided into a number of smaller windows in which the nonlinear process behavior has been approximated by linear behavior. For each operating window, linear step weight models were developed from a detailed nonlinear first principles model, and the model prediction is calculated based on interpolation between these linear models. The model output at each operating point can then be calculated from four basic linear models, and the required control action can subsequently be calculated with the standard model predictive control approach using quadratic programming.
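The interpolation between local linear models can be sketched as gain scheduling over operating points. The loads and gains below are illustrative assumptions, and a real implementation would blend full step-response models rather than a single steady-state gain per window.

```python
import numpy as np

# Hypothetical operating points (plant load) and local-model steady-state gains
op_points = np.array([20.0, 50.0, 80.0, 110.0])   # MW (assumed)
gains     = np.array([1.8, 1.2, 0.9, 0.7])        # local linear-model gains (assumed)

def blended_gain(load):
    """Linearly interpolate the local-model gain between operating points."""
    return float(np.interp(load, op_points, gains))

def predict_step(load, du):
    """Predicted steady-state output change for an input step du at this load."""
    return blended_gain(load) * du

print(predict_step(35.0, 1.0))   # halfway between the 20 MW and 50 MW models, ~1.5
```

The MPC layer then uses the blended prediction model in the usual quadratic-programming move computation, exactly as with a single linear model.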
Non Linear Force Free Field Modeling for a Pseudostreamer
NASA Astrophysics Data System (ADS)
Karna, Nishu; Savcheva, Antonia; Gibson, Sarah; Tassev, Svetlin V.
2017-08-01
In this study we present the magnetic configuration of a pseudostreamer observed on April 18, 2015 on the southwest limb, embedding a filament cavity. We constructed a Non-Linear Force-Free Field (NLFFF) model using the flux rope insertion method. The NLFFF model produces the three-dimensional coronal magnetic field constrained by observed coronal structures and the photospheric magnetogram. The SDO/HMI magnetogram was used as input for the model. The high spatial and temporal resolution of SDO/AIA allows us to select best-fit models that match the observations. The MLSO/CoMP instrument provides full-Sun observations of the magnetic field in the corona; its primary observables are the four Stokes parameters (I, Q, U, V). In addition, we perform a topology analysis of the models in order to determine the location of quasi-separatrix layers (QSLs). QSLs are used as a proxy for where strong electric current sheets can develop in the corona, and also provide important information about connectivity in complicated magnetic field configurations. We present the major properties of the 3D QSL and FLEDGE maps and the evolution of 3D coronal structures during the magnetofrictional process. We produce FORWARD-modeled observables from our NLFFF models and compare them to a toy MHD FORWARD model and to the observations.
A Structured Model Reduction Method for Linear Interconnected Systems
NASA Astrophysics Data System (ADS)
Sato, Ryo; Inoue, Masaki; Adachi, Shuichi
2016-09-01
This paper develops a model reduction method for a large-scale interconnected system that consists of linear dynamic components. In the model reduction, we aim to preserve the physical characteristics of each component. To this end, we formulate a structured model reduction problem that reduces the model order of components while preserving the feedback structure. Although there are a few conventional methods for such structured model reduction that preserve stability, they do not explicitly consider the performance of the reduced-order feedback system. One of the difficulties in the problem with a performance guarantee comes from the nonlinearity of the feedback system with respect to each component. The problem is essentially in a class of nonlinear optimization problems, and therefore it cannot be solved efficiently even numerically. In this paper, application of an equivalent transformation and a proper approximation reduces this nonlinear problem to a weighted linear model reduction problem. Then, by using the weighted balanced truncation technique, we construct a reduced-order model that preserves the feedback structure and ensures a small modeling error. Finally, we verify the effectiveness of the proposed method through numerical experiments.
Linear mixing model applied to AVHRR LAC data
NASA Technical Reports Server (NTRS)
Holben, Brent N.; Shimabukuro, Yosio E.
1993-01-01
A linear mixing model was applied to coarse spatial resolution data from the NOAA Advanced Very High Resolution Radiometer. The reflective component of the 3.55 - 3.93 microns channel was extracted and used with the two reflective channels 0.58 - 0.68 microns and 0.725 - 1.1 microns to run a Constrained Least Squares model to generate vegetation, soil, and shade fraction images for an area in the western region of Brazil. The Landsat Thematic Mapper data covering the Emas National Park region were used for estimating the spectral response of the mixture components and for evaluating the mixing model results. The fraction images were compared with an unsupervised classification derived from Landsat TM data acquired on the same day. The relationship between the fraction images and normalized difference vegetation index images shows the potential of the unmixing techniques when using coarse resolution data for global studies.
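A constrained least squares unmixing step of the kind described can be sketched as follows, with made-up endmember reflectances for the three channels. Nonnegativity of the fractions comes from NNLS, and the sum-to-one constraint is enforced softly with a heavily weighted extra row; the actual study's solver and endmember spectra may differ.

```python
import numpy as np
from scipy.optimize import nnls

# Rows: the three AVHRR channels; columns: vegetation, soil, shade endmember
# reflectances (illustrative values, not measured spectra).
E = np.array([[0.05, 0.25, 0.02],
              [0.45, 0.30, 0.03],
              [0.20, 0.35, 0.02]])

def unmix(pixel, weight=1e3):
    # Append a heavily weighted row of ones so sum(fractions) ~= 1
    A = np.vstack([E, weight * np.ones(3)])
    b = np.append(pixel, weight * 1.0)
    f, _ = nnls(A, b)          # nonnegative least squares
    return f

true_f = np.array([0.6, 0.3, 0.1])       # 60% vegetation, 30% soil, 10% shade
pixel = E @ true_f                        # noiseless mixed-pixel spectrum
print(unmix(pixel))                       # should recover ~[0.6, 0.3, 0.1]
```

Applied per pixel, the recovered fractions form exactly the vegetation, soil, and shade fraction images the abstract describes.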
New holographic dark energy model with non-linear interaction
NASA Astrophysics Data System (ADS)
Oliveros, A.; Acero, Mario A.
2015-05-01
In this paper the cosmological evolution of a holographic dark energy model with a non-linear interaction between the dark energy and dark matter components in a FRW-type flat universe is analysed. In this context, the deceleration parameter q and the equation of state w_Λ are obtained. We found that, as the square of the speed of sound remains positive, the model is stable under perturbations from early times; it also shows that the evolution of the matter and dark energy densities are of the same order for a long period of time, avoiding the so-called coincidence problem. We have also made the correspondence of the model with the dark energy densities and pressures for the quintessence and tachyon fields. From this correspondence we have reconstructed the potential of the scalar fields and their dynamics.
Adaptive Error Estimation in Linearized Ocean General Circulation Models
NASA Technical Reports Server (NTRS)
Chechelnitsky, Michael Y.
1999-01-01
Data assimilation methods are routinely used in oceanography. The statistics of the model and measurement errors need to be specified a priori. This study addresses the problem of estimating model and measurement error statistics from observations. We start by testing innovation based methods of adaptive error estimation with low-dimensional models in the North Pacific (5-60 deg N, 132-252 deg E) to TOPEX/POSEIDON (TIP) sea level anomaly data, acoustic tomography data from the ATOC project, and the MIT General Circulation Model (GCM). A reduced state linear model that describes large scale internal (baroclinic) error dynamics is used. The methods are shown to be sensitive to the initial guess for the error statistics and the type of observations. A new off-line approach is developed, the covariance matching approach (CMA), where covariance matrices of model-data residuals are "matched" to their theoretical expectations using familiar least squares methods. This method uses observations directly instead of the innovations sequence and is shown to be related to the MT method and the method of Fu et al. (1993). Twin experiments using the same linearized MIT GCM suggest that altimetric data are ill-suited to the estimation of internal GCM errors, but that such estimates can in theory be obtained using acoustic data. The CMA is then applied to T/P sea level anomaly data and a linearization of a global GFDL GCM which uses two vertical modes. We show that the CMA method can be used with a global model and a global data set, and that the estimates of the error statistics are robust. We show that the fraction of the GCM-T/P residual variance explained by the model error is larger than that derived in Fukumori et al.(1999) with the method of Fu et al.(1993). Most of the model error is explained by the barotropic mode. However, we find that impact of the change in the error statistics on the data assimilation estimates is very small. This is explained by the large
Adaptive Error Estimation in Linearized Ocean General Circulation Models
NASA Technical Reports Server (NTRS)
Chechelnitsky, Michael Y.
1999-01-01
Data assimilation methods are routinely used in oceanography. The statistics of the model and measurement errors need to be specified a priori. This study addresses the problem of estimating model and measurement error statistics from observations. We start by testing innovation based methods of adaptive error estimation with low-dimensional models in the North Pacific (5-60 deg N, 132-252 deg E) to TOPEX/POSEIDON (TIP) sea level anomaly data, acoustic tomography data from the ATOC project, and the MIT General Circulation Model (GCM). A reduced state linear model that describes large scale internal (baroclinic) error dynamics is used. The methods are shown to be sensitive to the initial guess for the error statistics and the type of observations. A new off-line approach is developed, the covariance matching approach (CMA), where covariance matrices of model-data residuals are "matched" to their theoretical expectations using familiar least squares methods. This method uses observations directly instead of the innovations sequence and is shown to be related to the MT method and the method of Fu et al. (1993). Twin experiments using the same linearized MIT GCM suggest that altimetric data are ill-suited to the estimation of internal GCM errors, but that such estimates can in theory be obtained using acoustic data. The CMA is then applied to T/P sea level anomaly data and a linearization of a global GFDL GCM which uses two vertical modes. We show that the CMA method can be used with a global model and a global data set, and that the estimates of the error statistics are robust. We show that the fraction of the GCM-T/P residual variance explained by the model error is larger than that derived in Fukumori et al.(1999) with the method of Fu et al.(1993). Most of the model error is explained by the barotropic mode. However, we find that impact of the change in the error statistics on the data assimilation estimates is very small. This is explained by the large
Model of intermodulation distortion in non-linear multicarrier systems
NASA Astrophysics Data System (ADS)
Frigo, Nicholas J.
1994-02-01
A heuristic model is proposed which allows calculation of the individual spectral components of the intermodulation distortion present in a non-linear system with a multicarrier input. Noting that any given intermodulation product (IMP) can only be created by a subset of the input carriers, we partition them into 'signal' carriers (which create the IMP) and 'noise' carriers, modeled as a Gaussian process. The relationship between an input signal and the statistical average of its output (averaged over the Gaussian noise) is considered to be an effective transfer function. By summing all possible combinations of signal carriers which create power at the IMP frequencies, the distortion power can be calculated exactly as a function of frequency. An analysis of clipping in lightwave CATV links for AM-VSB signals is used to introduce the model, and is compared to a series of experiments.
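The carrier-partitioning idea lends itself to a small combinatorial check. The sketch below (carrier frequencies and the channel of interest are made-up numbers, not from the paper) counts the third-order products 2f1 − f2 and f1 + f2 − f3 that fall on a given channel:

```python
from itertools import combinations

def third_order_imp_counts(carriers, target):
    """Count third-order intermodulation products (IMPs) landing on `target`,
    split by type: 2*f1 - f2 products and f1 + f2 - f3 products. Illustrates
    the partitioning idea: only a subset of carriers can create a given IMP."""
    two_tone = sum(1 for f1 in carriers for f2 in carriers
                   if f1 != f2 and 2 * f1 - f2 == target)
    three_tone = sum(1 for f1, f2 in combinations(carriers, 2)
                     for f3 in carriers
                     if f3 not in (f1, f2) and f1 + f2 - f3 == target)
    return two_tone, three_tone

# six equally spaced CATV-like carriers (MHz); channel of interest: 61 MHz
counts = third_order_imp_counts([55, 61, 67, 73, 79, 85], 61)
```

Summing the distortion power contributed by each such combination, weighted by the effective transfer function, is the paper's exact-in-frequency calculation.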
Groundwater evapotranspiration estimation with the help of the linear storage model
NASA Astrophysics Data System (ADS)
Kalicz, Péter; Gribovszki, Zoltán.
2010-05-01
Discharge measurement is a common method in hydrological research. While the continuous discharge time series is determined by the rainfall, the riparian vegetation has a great effect on the falling limb of the hydrograph. The information contained in the falling limb can be extracted with the help of the linear storage model. The initial time point of the recession does not influence the parameters of the model, and the recession can be fitted as an exponential curve by linear regression. The apparent residence time, which is calculated with the linear storage model, changes in parallel with the transpiration intensity during the growing season. The evapotranspiration of the riparian zone can be estimated with the help of this strong relationship. The first step of the calculation is to determine a transpiration-free mean residence time. This value is calculated from hydrographs of late-winter or early-spring floods, before the plants break dormancy in temperate climates. The evapotranspiration can then be expressed from the combination of the linear storage model and the water balance of the recession period. The method was tested in the fully forested Hidegvíz Valley experimental catchment. The 6 km2 catchment is located in the Sopron Hills (Hungary) in the Austrian border region. The processed time series were measured in two neighbouring sub-catchments (Farkas Valley and Vadkan Valley). The method gives reasonable groundwater evapotranspiration values compared to other estimates.
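A minimal sketch of the recession analysis, assuming the linear-storage form Q(t) = Q0·exp(−t/k); the discharge series and the transpiration-free baseline k0 below are synthetic illustrations, not Hidegvíz Valley data:

```python
import numpy as np

def residence_time(t, q):
    """Fit ln Q = ln Q0 - t/k on the falling limb by linear regression;
    return the apparent residence time k (same time unit as t)."""
    slope, _ = np.polyfit(t, np.log(q), 1)
    return -1.0 / slope

t = np.arange(0.0, 48.0)            # hourly samples on the falling limb
q = 0.8 * np.exp(-t / 30.0)         # synthetic discharge (m3/s), k = 30 h

k_est = residence_time(t, q)        # growing-season apparent residence time
k0 = 20.0                           # assumed transpiration-free baseline (h)
slowdown = k_est / k0               # drainage slowed by riparian water uptake
```

The ratio of the growing-season residence time to the dormant-season baseline is the quantity that, combined with the recession-period water balance, yields the evapotranspiration estimate.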
Model light curves of linear Type II supernovae
Swartz, D.A.; Wheeler, J.C.; Harkness, R.P. )
1991-06-01
Light curves computed from hydrodynamic models of supernovae are compared graphically with the average observed B and V-band light curves of linear Type II supernovae. Models are based on the following explosion scenarios: carbon deflagration within a C + O core near the Chandrasekhar mass, electron-capture-induced core collapse of an O-Ne-Mg core of the Chandrasekhar mass, and collapse of an Fe core in a massive star. A range of envelope mass, initial radius, and composition is investigated. Only a narrow range of values of these parameters is consistent with observations. Within this narrow range, most of the observed light curve properties can be obtained in part, but none of the models can reproduce the entire light curve shape and absolute magnitude over the full 200 day comparison period. The observed lack of a plateau phase is explained in terms of a combination of small envelope mass and envelope helium enhancement. The final cobalt tail phase of the light curve can be reproduced only if the mass of explosively synthesized radioactive Ni-56 is small. The results presented here, in conjunction with the observed homogeneity among individual members of the supernova subclass, argue favorably for the O-Ne-Mg core collapse mechanism as an explanation for linear Type II supernovae. The Crab Nebula may have arisen from such an explosion. Carbon deflagrations may lead to brighter events like SN 1979C. 62 refs.
Adjusting power for a baseline covariate in linear models
Glueck, Deborah H.; Muller, Keith E.
2009-01-01
The analysis of covariance provides a common approach to adjusting for a baseline covariate in medical research. With Gaussian errors, adding random covariates does not change either the theory or the computations of general linear model data analysis. However, adding random covariates does change the theory and computation of power analysis. Many data analysts fail to fully account for this complication in planning a study. We present our results in five parts. (i) A review of published results helps document the importance of the problem and the limitations of available methods. (ii) A taxonomy for general linear multivariate models and hypotheses allows a particular problem to be identified. (iii) We describe how random covariates introduce the need to consider quantiles and conditional values of power. (iv) We provide new exact and approximate methods for power analysis of a range of multivariate models with a Gaussian baseline covariate, for both small and large samples. The new results apply to the Hotelling-Lawley test and the four tests in the “univariate” approach to repeated measures (unadjusted, Huynh-Feldt, Geisser-Greenhouse, Box). The techniques allow rapid calculation and an interactive, graphical approach to sample size choice. (v) Calculating power for a clinical trial of a treatment for increasing bone density illustrates the new methods. We particularly recommend using quantile power with a new Satterthwaite-style approximation. PMID:12898543
Inverse magnetic catalysis in the linear sigma model
NASA Astrophysics Data System (ADS)
Ayala, A.; Loewe, M.; Zamora, R.
2016-05-01
We compute the critical temperature for the chiral transition in the background of a magnetic field in the linear sigma model, including the quark contribution and the thermo-magnetic effects induced on the coupling constants at the one-loop level. For the analysis, we go beyond the mean field approximation by taking one-loop thermo-magnetic corrections to the couplings as well as plasma screening effects for the boson masses, expressed through the ring diagrams. We find inverse magnetic catalysis, i.e., a decrease of the critical chiral temperature as a function of the intensity of the magnetic field, which seems to be in agreement with recent results from the lattice community.
Modeling of Linear Gas Tungsten Arc Welding of Stainless Steel
NASA Astrophysics Data System (ADS)
Maran, P.; Sornakumar, T.; Sundararajan, T.
2008-08-01
A heat and fluid flow model has been developed to solve the gas tungsten arc (GTA) linear welding problem for austenitic stainless steel. The moving heat source problem associated with the electrode traverse has been simplified into an equivalent two-dimensional (2-D) transient problem. The torch residence time has been calculated from the arc diameter and torch speed. The mathematical formulation considers buoyancy, electromagnetic induction, and surface tension forces. The governing equations have been solved by the finite volume method. The temperature and velocity fields have been determined. The theoretical predictions for weld bead geometry are in good agreement with experimental measurements.
Imbedding linear regressions in models for factor crossing
NASA Astrophysics Data System (ADS)
Santos, Carla; Nunes, Célia; Dias, Cristina; Varadinov, Maria; Mexia, João T.
2016-12-01
Given u factors with J1, …, Ju levels we are led to test their effects and interactions. For this we consider an orthogonal partition of R^n, with n = ∏_{l=1}^{u} J_l, into subspaces associated with the sets of factors. The space corresponding to the set C will have dimension g(C) = ∏_{l∈C} (J_l − 1), so that g({1, …, u}) will be much larger than the other numbers of degrees of freedom when J_l > 2, l = 1, …, u. This fact may be used to enrich these models by imbedding linear regressions in them.
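The dimension count is easy to verify numerically. A small sketch with assumed levels J = (3, 4, 3) shows how the top-order interaction space g({1, …, u}) dominates the other degrees of freedom when every J_l > 2:

```python
from math import prod

def g(C, levels):
    """Degrees of freedom of the subspace for factor set C:
    g(C) = prod over l in C of (J_l - 1); factors indexed from 0."""
    return prod(levels[l] - 1 for l in C)

levels = [3, 4, 3]             # J1, J2, J3 for u = 3 factors
n = prod(levels)               # dimension of the partitioned space R^n
top = g([0, 1, 2], levels)     # g({1,2,3}): the top-order interaction
main_effect = g([0], levels)   # a main-effect subspace, for comparison
```

Note that summing g(C) over all subsets C (including the empty set, with g = 1) recovers n, since ∏(1 + (J_l − 1)) = ∏ J_l.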
Linear unmixing using endmember subspaces and physics based modeling
NASA Astrophysics Data System (ADS)
Gillis, David; Bowles, Jeffrey; Ientilucci, Emmett J.; Messinger, David W.
2007-09-01
One of the biggest issues with the Linear Mixing Model (LMM) is the implicit assumption that each of the individual material components throughout the scene may be described using a single dimension (e.g. an endmember vector). In reality, individual pixels corresponding to the same general material class can exhibit a large degree of variation within a given scene. This is especially true in broad background classes such as forests, where the single-dimension assumption clearly fails. In practice, the only way to account for the multidimensionality of the class is to choose multiple (very similar) endmembers, each of which represents some part of the class. To address these issues, we introduce the endmember subgroup model, which generalizes the notion of an 'endmember vector' to an 'endmember subspace'. In this model, spectra in a given hyperspectral scene are decomposed as a sum of constituent materials; however, each material is represented by some multidimensional subspace (instead of a single vector). The dimensionality of the subspace will depend on the within-class variation seen in the image. The endmember subgroups can be determined automatically from the data, or physics-based modeling techniques can be used to include 'signature subspaces' in the endmember subgroups. In this paper, we give an overview of the subgroup model, discuss methods for determining the endmember subgroups for a given image, and present results showing how the subgroup model improves upon traditional single-endmember linear mixing. We also include results that use the 'signature subspace' approach to identify mixed-pixel targets in HYDICE imagery.
The Hybrid Model for Implementing the Continuing Education Mission.
ERIC Educational Resources Information Center
Hentschel, Doe
1991-01-01
Models through which higher education provides outreach include centralized, decentralized, and hybrid. The latter, academically integrated and administratively decentralized, meshes continuing education programs with the academic mission while maximizing cost effectiveness. (SK)
NASA Astrophysics Data System (ADS)
Wang, Liuping; Gan, Lu
2013-08-01
Linear controllers with gain scheduling have been successfully used in the control of nonlinear systems for the past several decades. This paper proposes the design of a gain-scheduled continuous-time model predictive controller with constraints. Using an induction machine as an illustrative example, the paper shows the four steps involved in the design of a gain-scheduled predictive controller: (i) linearisation of a nonlinear plant according to operating conditions; (ii) the design of linear predictive controllers for the family of linear models; (iii) a gain-scheduled predictive control law that optimises a multiple-model objective function with constraints, which also ensures smooth transitions (i.e. bumpless transfer) between the predictive controllers; (iv) experimental validation of the gain-scheduled predictive control system with constraints.
Wang-Landau sampling with logarithmic windows for continuous models.
Xie, Y L; Chu, P; Wang, Y L; Chen, J P; Yan, Z B; Liu, J-M
2014-01-01
We present a modified Wang-Landau sampling (MWLS) for continuous statistical models by partitioning the energy space into a set of windows with logarithmically shrinking width. To demonstrate its necessity and advantages, we apply this sampling to several continuous models, including the two-dimensional square XY spin model, the triangular J1-J2 spin model, and the Lennard-Jones cluster model. Given a finite number of bins for partitioning the energy space, the conventional Wang-Landau sampling may not generate a sufficiently accurate density of states (DOS) around the energy boundaries. However, it is demonstrated that a much more accurate DOS can be obtained with this MWLS, and thus a precise evaluation of the thermodynamic behavior of continuous models at extremely low temperatures (k_B T < 0.1) becomes accessible. The present algorithm also allows efficient computation in addition to highly reliable data sampling.
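The window construction can be sketched in a few lines. The shrink factor and window count below are illustrative choices, not the paper's parameters; the point is only that window widths contract toward the low-energy boundary where the DOS varies fastest:

```python
import numpy as np

def log_windows(e_min, e_max, n_win):
    """Partition [e_min, e_max] into n_win windows whose widths shrink
    geometrically (assumed factor 2 per window) toward e_min, where the
    density of states needs the finest resolution. Returns the edges."""
    weights = 2.0 ** np.arange(n_win)             # smallest window first
    widths = (e_max - e_min) * weights / weights.sum()
    return e_min + np.concatenate(([0.0], np.cumsum(widths)))

edges = log_windows(-2.0, 0.0, 4)                 # window boundaries
```

A separate Wang-Landau random walk would then be run inside each window, and the per-window DOS pieces joined at the shared edges.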
Filtering nonlinear dynamical systems with linear stochastic models
NASA Astrophysics Data System (ADS)
Harlim, J.; Majda, A. J.
2008-06-01
An important emerging scientific issue is the real-time filtering through observations of noisy signals for nonlinear dynamical systems, as well as the statistical accuracy of spatio-temporal discretizations for filtering such systems. From the practical standpoint, the demand for operationally practical filtering methods escalates as the model resolution is significantly increased. For example, in numerical weather forecasting the current generation of global circulation models with resolution of 35 km has a total of billions of state variables. Numerous ensemble based Kalman filters (Evensen 2003 Ocean Dyn. 53 343-67; Bishop et al 2001 Mon. Weather Rev. 129 420-36; Anderson 2001 Mon. Weather Rev. 129 2884-903; Szunyogh et al 2005 Tellus A 57 528-45; Hunt et al 2007 Physica D 230 112-26) show promising results in addressing this issue; however, all these methods are very sensitive to model resolution, observation frequency, and the nature of the turbulent signals when a practical limited ensemble size (typically less than 100) is used. In this paper, we implement a radical filtering approach to a relatively low-dimensional (40-variable) toy model, the L-96 model (Lorenz 1996 Proc. on Predictability (ECMWF, 4-8 September 1995) pp 1-18), in various chaotic regimes in order to address the 'curse of ensemble size' for complex nonlinear systems. Practically, our approach has several desirable features such as extremely high computational efficiency and filter robustness towards variations of ensemble size (we found that the filter is reasonably stable even with a single realization), which makes it feasible for high dimensional problems, and it is independent of any tunable parameters such as the variance inflation coefficient in an ensemble Kalman filter. This radical filtering strategy decouples the problem of filtering a spatially extended nonlinear deterministic system to filtering a Fourier diagonal system of parametrized linear stochastic differential equations (Majda and Grote
A wavelet-linear genetic programming model for sodium (Na+) concentration forecasting in rivers
NASA Astrophysics Data System (ADS)
Ravansalar, Masoud; Rajaee, Taher; Zounemat-Kermani, Mohammad
2016-06-01
The prediction of water quality parameters in water resources such as rivers is an important issue for better management of irrigation systems and water supplies. In this respect, this study proposes a new hybrid wavelet-linear genetic programming (WLGP) model for the prediction of monthly sodium (Na+) concentration. The 23 years of monthly data used in this study were measured from the Asi River at the Demirköprü gauging station located in Antakya, Turkey. First, the measured discharge (Q) and Na+ datasets are decomposed into several sub-series using the discrete wavelet transform (DWT). These sub-series are then fed into the linear genetic programming (LGP) model as input patterns to predict monthly Na+ one month ahead. The results of the proposed WLGP model are compared with LGP, WANN and ANN models. The comparison demonstrates the superiority of the WLGP model over the LGP, WANN and ANN models: the Nash-Sutcliffe efficiencies (NSE) for the WLGP, WANN, LGP and ANN models were 0.984, 0.904, 0.484 and 0.351, respectively. The results even point to the superiority of the single LGP model over the ANN model. The capability of the proposed WLGP model to predict Na+ peak values is also presented in this study.
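The DWT stage of such hybrid schemes can be illustrated with a one-level Haar transform, written out by hand to stay self-contained; the paper does not specify the Haar wavelet, and the Na+ values below are invented:

```python
import numpy as np

def haar_dwt(x):
    """One level of the Haar discrete wavelet transform: returns the
    approximation (low-pass) and detail (high-pass) sub-series."""
    x = np.asarray(x, dtype=float)
    a = (x[0::2] + x[1::2]) / np.sqrt(2.0)   # approximation coefficients
    d = (x[0::2] - x[1::2]) / np.sqrt(2.0)   # detail coefficients
    return a, d

# eight months of a synthetic Na+ series; in a WLGP-style model such
# sub-series become the input patterns of the LGP stage
na = np.array([21.0, 23.0, 20.0, 26.0, 24.0, 25.0, 22.0, 28.0])
approx, detail = haar_dwt(na)
```

The transform is orthogonal, so the energy of the signal is preserved across the two sub-series, which is what lets each sub-series carry a distinct, non-redundant piece of the input information.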
Non-Linear Slosh Damping Model Development and Validation
NASA Technical Reports Server (NTRS)
Yang, H. Q.; West, Jeff
2015-01-01
Propellant tank slosh dynamics are typically represented by a mechanical spring-mass-damper model. This mechanical model is then included in the equation of motion of the entire vehicle for Guidance, Navigation and Control (GN&C) analysis. For a partially-filled smooth wall propellant tank, the critical damping based on classical empirical correlation is as low as 0.05%. Due to this low value of damping, propellant slosh is a potential source of disturbance critical to the stability of launch and space vehicles. It is postulated that the commonly quoted slosh damping is valid only in the linear regime, where the slosh amplitude is small. With increasing slosh amplitude, the critical damping value should also increase. If this nonlinearity can be verified and validated, the slosh stability margin can be significantly improved, and the level of conservatism maintained in the GN&C analysis can be lessened. The purpose of this study is to explore and quantify the dependence of slosh damping on slosh amplitude. Accurately predicting the extremely low damping value of a smooth wall tank is very challenging for any Computational Fluid Dynamics (CFD) tool: one must resolve thin boundary layers near the wall and limit numerical damping to a minimum. This computational study demonstrates that with proper grid resolution, CFD can indeed accurately predict the low damping physics of smooth walls in the linear regime. Comparisons of extracted damping values with experimental data for different tank sizes show very good agreement. Numerical simulations confirm that slosh damping is indeed a function of slosh amplitude. When the slosh amplitude is low, the damping ratio is essentially constant, which is consistent with the empirical correlation. Once the amplitude reaches a critical value, the damping ratio becomes a linearly increasing function of slosh amplitude. A follow-on experiment validated the developed nonlinear damping relationship. This discovery can
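The amplitude dependence described above suggests a piecewise damping law of the following shape; every constant here is an illustrative placeholder, not the study's fitted values:

```python
def slosh_damping(amplitude, zeta0=0.0005, a_crit=0.1, slope=0.004):
    """Damping ratio vs slosh amplitude: constant zeta0 below an assumed
    critical amplitude, then linearly increasing, mirroring the trend the
    CFD study reports. Units and numbers are illustrative only."""
    if amplitude <= a_crit:
        return zeta0
    return zeta0 + slope * (amplitude - a_crit)

low = slosh_damping(0.05)    # linear regime: the classical constant value
high = slosh_damping(0.20)   # beyond the critical amplitude
```

In a GN&C stability analysis, replacing the constant 0.05%-class damping with such an amplitude-dependent law is what recovers the extra stability margin.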
Linear mixed effects models under inequality constraints with applications.
Farnan, Laura; Ivanova, Anastasia; Peddada, Shyamal D
2014-01-01
Constraints arise naturally in many scientific experiments/studies, such as in epidemiology, biology and toxicology, yet researchers often ignore such information when analyzing their data and use standard methods such as the analysis of variance (ANOVA). Such methods may not only result in a loss of power and efficiency, and hence in wasted experimental costs, but may also result in poor interpretation of the data. In this paper we discuss constrained statistical inference in the context of linear mixed effects models that arise naturally in many applications, such as in repeated measurements designs, familial studies and others. We introduce a novel methodology that is broadly applicable for a variety of constraints on the parameters. Since in many applications sample sizes are small and/or the data are not necessarily normally distributed, and furthermore error variances need not be homoscedastic (i.e. there may be heterogeneity in the data), we use an empirical best linear unbiased predictor (EBLUP) type residual-based bootstrap methodology for deriving critical values of the proposed test. Our simulation studies suggest that the proposed procedure maintains the desired nominal Type I error while competing well with other tests in terms of power. We illustrate the proposed methodology by re-analyzing clinical trial data on blood mercury levels. The methodology introduced in this paper can be easily extended to other settings such as nonlinear and generalized regression models.
Acoustic FMRI noise: linear time-invariant system model.
Rizzo Sierra, Carlos V; Versluis, Maarten J; Hoogduin, Johannes M; Duifhuis, Hendrikus Diek
2008-09-01
Functional magnetic resonance imaging (fMRI) enables sites of brain activation to be localized in human subjects. For auditory system studies, however, the acoustic noise generated by the scanner tends to interfere with the assessment of this activation. Understanding and modeling fMRI acoustic noise is a useful step toward its reduction. To study the acoustic noise, the MR scanner is modeled as a linear electroacoustical system generating sound pressure signals proportional to the time derivative of the input gradient currents. The transfer function of one MR scanner is determined for two different input specifications: 1) using the gradient waveform calculated by the scanner software and 2) using a recording of the gradient current. Up to 4 kHz, the first method is shown to be as reliable as the second, and its use is encouraged when direct measurements of gradient currents are not possible. Additionally, the linear order and average damping properties of the gradient coil system are determined by impulse response analysis. Since fMRI is often based on echo planar imaging (EPI) sequences, a useful validation of the transfer function's predictive ability can be obtained by calculating the acoustic output for the EPI sequence. We found a predicted sound pressure level (SPL) for the EPI sequence of 104 dB SPL, compared to a measured value of 102 dB SPL. As yet, the predicted EPI pressure waveform shows similarities to, as well as some differences from, the directly measured EPI pressure waveform.
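The LTI structure (pressure proportional to the filtered time derivative of the gradient current) can be sketched as a convolution. The drive waveform and impulse response below are invented stand-ins for the measured ones, chosen only to make the pipeline concrete:

```python
import numpy as np

fs = 20000.0                                  # sample rate (Hz)
t = np.arange(0.0, 0.05, 1.0 / fs)
current = np.sin(2 * np.pi * 500.0 * t)       # toy gradient current
# assumed impulse response: a damped oscillation of the gradient coil
h = np.exp(-t / 0.002) * np.sin(2 * np.pi * 1200.0 * t)

didt = np.gradient(current, 1.0 / fs)         # input of the linear system
pressure = np.convolve(didt, h)[: t.size] / fs  # predicted sound pressure
```

Because the system is linear and time-invariant, doubling the gradient current doubles the predicted pressure, which is the property that lets a single estimated transfer function predict the acoustic output of an arbitrary sequence such as EPI.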
Linear versus quadratic portfolio optimization model with transaction cost
NASA Astrophysics Data System (ADS)
Razak, Norhidayah Bt Ab; Kamil, Karmila Hanim; Elias, Siti Masitah
2014-06-01
An optimization model is introduced as one of the decision-making tools in investment. Hence, it is always a big challenge for investors to select the best model to fulfill their investment goals with respect to risk and return. In this paper we aim to discuss and compare the portfolio allocation and performance generated by quadratic and linear portfolio optimization models, namely the Markowitz and Maximin models, respectively. The application of these models has been proven to be significant and popular. However, transaction cost has been debated as one of the important aspects that should be considered in portfolio reallocation, as portfolio return can be significantly reduced when transaction cost is taken into consideration. Therefore, recognizing the importance of considering transaction cost when calculating portfolio return, we formulate this paper using data from Shariah-compliant securities listed on Bursa Malaysia. It is expected that results from this paper will effectively justify the advantage of one model over another and shed some light on the quest to find the best decision-making tool in investment for individual investors.
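The Maximin model stays linear even with a proportional transaction cost, so it can be written directly as an LP. The scenario returns and cost rate below are invented illustrations, not the Bursa Malaysia data:

```python
import numpy as np
from scipy.optimize import linprog

# Maximin portfolio LP with a proportional transaction cost c per unit
# invested: maximize the worst-case scenario return of the net portfolio,
#   max z  s.t.  sum_j (r_tj - c) w_j >= z  for every scenario t,
#                sum_j w_j = 1,  w_j >= 0.
r = np.array([[0.02, 0.05, -0.01],        # scenario x asset returns
              [0.01, -0.02, 0.04],
              [0.03, 0.01, 0.02]])
c = 0.005                                 # transaction cost rate (assumed)

n = r.shape[1]
# decision vector: [w_1..w_n, z]; linprog minimizes, so the objective is -z
cost = np.zeros(n + 1)
cost[-1] = -1.0
A_ub = np.hstack([-(r - c), np.ones((r.shape[0], 1))])   # z - (r-c)w <= 0
b_ub = np.zeros(r.shape[0])
A_eq = np.array([[1.0] * n + [0.0]])                     # budget constraint
res = linprog(cost, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
              bounds=[(0, None)] * n + [(None, None)])
weights, worst_case = res.x[:n], res.x[-1]
```

The Markowitz counterpart replaces the worst-case objective with a quadratic variance term, which is why the two models can allocate quite differently on the same data.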
Some generalisations of linear-graph modelling for dynamic systems
NASA Astrophysics Data System (ADS)
de Silva, Clarence W.; Pourazadi, Shahram
2013-11-01
Proper modelling of a dynamic system can benefit analysis, simulation, design, evaluation and control of the system. The linear-graph (LG) approach is suitable for modelling lumped-parameter dynamic systems. By using the concepts of graph trees, it provides a graphical representation of the system, with a direct correspondence to the physical component topology. This paper systematically extends the application of LGs to multi-domain (mixed-domain or multi-physics) dynamic systems by presenting a unified way to represent different domains - mechanical, electrical, thermal and fluid. Preservation of the structural correspondence across domains is a particular advantage of LGs when modelling mixed-domain systems. The generalisation of Thevenin and Norton equivalent circuits to mixed-domain systems, using LGs, is presented. The structure of an LG model may follow a specific pattern. Vector LGs are introduced to take advantage of such patterns, giving a general LG representation for them. Through these vector LGs, the model representation becomes simpler and rather compact, both topologically and parametrically. A new single LG element is defined to facilitate the modelling of distributed-parameter (DP) systems. Examples are presented using multi-domain systems (a motion-control system and a flow-controlled pump), a multi-body mechanical system (robot manipulator) and DP systems (structural rods) to illustrate the application and advantages of the methodologies developed in the paper.
Linear mixing model applied to coarse resolution satellite data
NASA Technical Reports Server (NTRS)
Holben, Brent N.; Shimabukuro, Yosio E.
1992-01-01
A linear mixing model typically applied to high resolution data such as Airborne Visible/Infrared Imaging Spectrometer, Thematic Mapper, and Multispectral Scanner System data is applied to the NOAA Advanced Very High Resolution Radiometer coarse resolution satellite data. The reflective portion extracted from the middle IR channel 3 (3.55 - 3.93 microns) is used with channels 1 (0.58 - 0.68 microns) and 2 (0.725 - 1.1 microns) to run the Constrained Least Squares model to generate fraction images for an area in the west central region of Brazil. The derived fraction images are compared with an unsupervised classification and with fraction images derived from Landsat TM data acquired on the same day. In addition, the relationship between these fraction images and the well known NDVI images is presented. The results show the great potential of unmixing techniques for application to coarse resolution data for global studies.
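The constrained least squares step can be sketched as the standard sum-to-one equality-constrained solve (nonnegativity is left out of this minimal version for brevity); the three-band "endmember" reflectances below are made up for illustration:

```python
import numpy as np

def unmix(pixel, endmembers):
    """Sum-to-one constrained least squares unmixing via the KKT system
    for  min ||E f - p||^2  s.t.  1'f = 1.  endmembers: bands x classes;
    returns the fraction vector f (nonnegativity not enforced here)."""
    m = endmembers.shape[1]
    EtE = endmembers.T @ endmembers
    kkt = np.block([[2.0 * EtE, np.ones((m, 1))],
                    [np.ones((1, m)), np.zeros((1, 1))]])
    rhs = np.concatenate([2.0 * endmembers.T @ pixel, [1.0]])
    return np.linalg.solve(kkt, rhs)[:m]

# toy three-band reflectances for vegetation, soil and shade classes
E = np.array([[0.05, 0.25, 0.02],
              [0.45, 0.30, 0.03],
              [0.30, 0.35, 0.02]])
true_f = np.array([0.6, 0.3, 0.1])
fractions = unmix(E @ true_f, E)      # unmix a synthetic mixed pixel
```

For a pixel that is exactly a convex combination of the endmembers, the constrained solve recovers the fractions exactly; real AVHRR pixels carry noise, so the fractions are least-squares estimates instead.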
Probabilistic model of ligaments and tendons: Quasistatic linear stretching
NASA Astrophysics Data System (ADS)
Bontempi, M.
2009-03-01
Ligaments and tendons have a significant role in the musculoskeletal system and are frequently subjected to injury. This study presents a model of collagen fibers, based on the study of a statistical distribution of fibers when they are subjected to quasistatic linear stretching. With respect to other methodologies, this model is able to describe the behavior of the bundle using less ad hoc hypotheses and is able to describe all the quasistatic stretch-load responses of the bundle, including the yield and failure regions described in the literature. It has two other important results: the first is that it is able to correlate the mechanical behavior of the bundle with its internal structure, and it suggests a methodology to deduce the fibers population distribution directly from the tensile-test data. The second is that it can follow fibers’ structure evolution during the stretching and it is possible to study the internal adaptation of fibers in physiological and pathological conditions.
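The statistical-recruitment idea can be sketched with a parallel bundle whose fibers engage at stretches drawn from an assumed Gaussian distribution; the paper deduces the fiber population distribution from tensile-test data rather than assuming it, so everything below is illustrative:

```python
import numpy as np

def bundle_load(stretch, slacks, k_fiber=1.0):
    """Load on a parallel fiber bundle at a given stretch: each collagen
    fiber engages once its slack is taken up, then responds linearly
    with stiffness k_fiber (arbitrary units)."""
    engaged = np.clip(stretch - slacks, 0.0, None)
    return k_fiber * engaged.sum()

rng = np.random.default_rng(0)
slacks = rng.normal(0.05, 0.01, size=1000)   # assumed recruitment stretches
toe = bundle_load(0.05, slacks)              # toe region: few fibers engaged
linear = bundle_load(0.20, slacks)           # nearly all fibers engaged
```

The characteristic toe-then-linear shape of the stretch-load curve falls out of the slack distribution alone; adding fiber failure at a second random threshold would reproduce the yield and failure regions in the same way.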
Pointwise Description for the Linearized Fokker-Planck-Boltzmann Model
NASA Astrophysics Data System (ADS)
Wu, Kung-Chien
2015-09-01
In this paper, we study the pointwise (in the space variable) behavior of the linearized Fokker-Planck-Boltzmann model for nonsmooth initial perturbations. The result reveals both the fluid and kinetic aspects of this model. The fluid-like waves are constructed as the long-wave expansion in the spectrum of the Fourier modes for the space variable, and they have a polynomial time decay rate. We design a Picard-type iteration for constructing the increasingly regular kinetic-like waves, which are carried by the transport equations and have an exponential time decay rate. The Mixture Lemma plays an important role in constructing the kinetic-like waves; the lemma was originally introduced by Liu-Yu (Commun Pure Appl Math 57:1543-1608, 2004) for the Boltzmann equation, but the Fokker-Planck term in this paper creates some technical difficulties.
Relating Cohesive Zone Model to Linear Elastic Fracture Mechanics
NASA Technical Reports Server (NTRS)
Wang, John T.
2010-01-01
The conditions required for a cohesive zone model (CZM) to predict a failure load of a cracked structure similar to that obtained by a linear elastic fracture mechanics (LEFM) analysis are investigated in this paper. This study clarifies why many different phenomenological cohesive laws can produce similar fracture predictions. Analytical results for five cohesive zone models are obtained, using five different cohesive laws that have the same cohesive work rate (CWR, the area under the traction-separation curve) but different maximum tractions. The effect of the maximum traction on the predicted cohesive zone length and the remote applied load at fracture is presented. Similar to the small scale yielding condition required for an LEFM analysis to be valid, the cohesive zone length also needs to be much smaller than the crack length. This is a necessary condition for a CZM to obtain a fracture prediction equivalent to an LEFM result.
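The scaling behind that condition can be checked with a Dugdale-type estimate of the cohesive zone length, l_cz = (π/8)·E·G_c/σ_max² (plane stress, K_I² = E·G_c), which is one common approximation rather than the paper's exact expressions; the material numbers are illustrative:

```python
import math

# For a fixed cohesive work rate G_c, the cohesive zone length shrinks
# quadratically as the maximum traction sigma_max grows; illustrative values.
E = 70e9          # Young's modulus (Pa)
Gc = 200.0        # cohesive work rate (J/m^2)
lengths = [(math.pi / 8.0) * E * Gc / s ** 2
           for s in (50e6, 100e6, 200e6)]   # sigma_max sweep (Pa)
```

This is why cohesive laws sharing the same CWR but differing in maximum traction converge to the same LEFM-like prediction once σ_max is large enough that l_cz is much smaller than the crack length.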
NASA Astrophysics Data System (ADS)
Wu, Xiao Dong; Chen, Feng; Wu, Xiang Hua; Guo, Ying
2017-02-01
Continuous-variable quantum key distribution (CVQKD) can provide higher detection efficiency than discrete-variable quantum key distribution (DVQKD). In this paper, we demonstrate a controllable CVQKD with the entangled source in the middle, in contrast to traditional point-to-point CVQKD, where the entanglement source is usually created by one honest party and the Gaussian noise added on the reference partner of the reconciliation is uncontrollable. In order to harmonize the additive noise that originates in the middle so as to resist the effects of a malicious eavesdropper, we propose a controllable CVQKD protocol by performing a tunable linear optics cloning machine (LOCM) at one participant's side, say Alice's. Simulation results show that we can achieve the optimal secret key rates by selecting the parameters of the tuned LOCM in the derived regions.
NASA Astrophysics Data System (ADS)
Liang, Jianwu; Zhou, Jian; Shi, Jinjing; He, Guangqiang; Guo, Ying
2016-02-01
We characterize the efficiency of practical continuous-variable quantum key distribution (CVQKD) when a heralded noiseless linear amplifier (NLA) is inserted before the detectors to increase the secret key rate and the maximum transmission distance in Gaussian channels. In the heralded NLA-based CVQKD system, the entanglement source is placed in the middle, and the two participants need not trust the source. The intensities of the source noise are sensitive to the tunable NLA parameter g within a suitable range and can be stabilized to suitable constant values to eliminate the impact of channel noise and defeat potential attacks. Simulation results show that the tunable NLA achieves a good balance between the secret key rate and the maximum transmission distance.
Robust cross-validation of linear regression QSAR models.
Konovalov, Dmitry A; Llewellyn, Lyndon E; Vander Heyden, Yvan; Coomans, Danny
2008-10-01
A quantitative structure-activity relationship (QSAR) model is typically developed to predict the biochemical activity of untested compounds from the compounds' molecular structures. "The gold standard" of model validation is blindfold prediction, in which the model's predictive power is assessed by how well the model predicts the activity values of compounds that were not considered in any way during model development/calibration. However, during the development of a QSAR model, it is necessary to obtain some indication of the model's predictive power. This is often done by some form of cross-validation (CV). In this study, the concepts of the predictive power and fitting ability of a multiple linear regression (MLR) QSAR model were examined in the CV context, allowing for the presence of outliers. Commonly used predictive power and fitting ability statistics were assessed via Monte Carlo cross-validation when applied to percent human intestinal absorption, blood-brain partition coefficient, and toxicity values of saxitoxin QSAR data sets, as well as three benchmark data sets with known outlier contamination. It was found that (1) a robust version of MLR should always be preferred over ordinary-least-squares MLR, regardless of the degree of outlier contamination, and that (2) the model's predictive power should only be assessed via robust statistics. The Matlab and Java source code used in this study is freely available from the QSAR-BENCH section of www.dmitrykonovalov.org for academic use. The Web site also contains the Java-based QSAR-BENCH program, which can be run online via Java Web Start technology (supporting Windows, Mac OS X, Linux/Unix) to reproduce most of the reported results or to apply the reported procedures to other data sets.
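The study's two findings can be illustrated with a small self-contained sketch (not the QSAR-BENCH code): Monte Carlo cross-validation comparing ordinary least squares against a robust estimator on outlier-contaminated data. The Theil-Sen fit stands in for a robust MLR, the toy data set is invented for illustration, and each split is scored with a robust statistic (median absolute test error), as the abstract recommends.

```python
import random
import statistics

def ols_fit(xs, ys):
    """Ordinary least squares for y = a + b*x (closed form)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    return my - b * mx, b

def theil_sen_fit(xs, ys):
    """Robust stand-in for a robust MLR: Theil-Sen estimator
    (median of pairwise slopes, median-based intercept)."""
    slopes = [(ys[j] - ys[i]) / (xs[j] - xs[i])
              for i in range(len(xs)) for j in range(i + 1, len(xs))
              if xs[j] != xs[i]]
    b = statistics.median(slopes)
    a = statistics.median([y - b * x for x, y in zip(xs, ys)])
    return a, b

def mc_cv_error(fit, xs, ys, n_splits=50, test_frac=0.3, seed=0):
    """Monte Carlo CV: repeatedly split, fit on the training part, and score
    the test part with the median absolute error (a robust statistic)."""
    rng = random.Random(seed)
    idx = list(range(len(xs)))
    errs = []
    for _ in range(n_splits):
        rng.shuffle(idx)
        cut = int(len(idx) * test_frac)
        test, train = idx[:cut], idx[cut:]
        a, b = fit([xs[i] for i in train], [ys[i] for i in train])
        errs.append(statistics.median([abs(ys[i] - (a + b * xs[i])) for i in test]))
    return sum(errs) / len(errs)

# Toy data: y = 2x + 1 with Gaussian noise plus three gross outliers.
rng = random.Random(42)
xs = [i / 10 for i in range(40)]
ys = [2 * x + 1 + rng.gauss(0, 0.1) for x in xs]
for i in (3, 17, 31):
    ys[i] += 15.0

err_ols = mc_cv_error(ols_fit, xs, ys)
err_robust = mc_cv_error(theil_sen_fit, xs, ys)
```

On such contaminated data the robust fit keeps its CV error near the noise level, while OLS is dragged toward the outliers, mirroring finding (1) above.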
Vazquez-Leal, H.; Jimenez-Fernandez, V. M.; Benhammouda, B.; Filobello-Nino, U.; Sarmiento-Reyes, A.; Ramirez-Pinero, A.; Marin-Hernandez, A.; Huerta-Chua, J.
2014-01-01
We present a homotopy continuation method (HCM) for finding multiple operating points of nonlinear circuits composed of devices modelled using piecewise linear (PWL) representations. We propose an adaptation of the modified spheres path tracking algorithm to trace the homotopy trajectories of PWL circuits. In order to assess the benefits of this proposal, four nonlinear circuits composed of piecewise linear modelled devices are analysed to determine their multiple operating points. The results show that HCM can find multiple solutions within a single homotopy trajectory. Furthermore, we take advantage of the fact that homotopy trajectories are PWL curves to replace the multidimensional interpolation and fine-tuning stages of the path tracking algorithm with a simple and highly accurate procedure based on the parametric straight-line equation. PMID:25184157
Vazquez-Leal, H; Jimenez-Fernandez, V M; Benhammouda, B; Filobello-Nino, U; Sarmiento-Reyes, A; Ramirez-Pinero, A; Marin-Hernandez, A; Huerta-Chua, J
2014-01-01
We present a homotopy continuation method (HCM) for finding multiple operating points of nonlinear circuits composed of devices modelled using piecewise linear (PWL) representations. We propose an adaptation of the modified spheres path tracking algorithm to trace the homotopy trajectories of PWL circuits. In order to assess the benefits of this proposal, four nonlinear circuits composed of piecewise linear modelled devices are analysed to determine their multiple operating points. The results show that HCM can find multiple solutions within a single homotopy trajectory. Furthermore, we take advantage of the fact that homotopy trajectories are PWL curves to replace the multidimensional interpolation and fine-tuning stages of the path tracking algorithm with a simple and highly accurate procedure based on the parametric straight-line equation.
NASA Astrophysics Data System (ADS)
Orain, François; Bécoulet, M.; Morales, J.; Huijsmans, G. T. A.; Dif-Pradalier, G.; Hoelzl, M.; Garbet, X.; Pamela, S.; Nardon, E.; Passeron, C.; Latu, G.; Fil, A.; Cahyna, P.
2015-01-01
The dynamics of multiple edge localized mode (ELM) cycles as well as ELM mitigation by resonant magnetic perturbations (RMPs) are modeled in realistic tokamak X-point geometry with the non-linear reduced MHD code JOREK. The diamagnetic rotation is found to be a key parameter enabling us to reproduce the cyclical dynamics of the plasma relaxations and to model the near-symmetric ELM power deposition on the inner and outer divertor target plates consistently with experimental measurements. Moreover, the non-linear coupling of the RMPs with unstable modes is found to modify the edge magnetic topology and to induce continuous MHD activity in place of a large ELM crash, resulting in the mitigation of the ELMs. At larger diamagnetic rotation, a bifurcation from unmitigated ELMs (at low RMP current) towards fully suppressed ELMs (at large RMP current) is obtained.
Electroweak corrections and unitarity in linear moose models
Chivukula, R. Sekhar; Simmons, Elizabeth H.; He, H.-J.; Kurachi, Masafumi; Tanabashi, Masaharu
2005-02-01
We calculate the form of the corrections to the electroweak interactions in the class of Higgsless models which can be deconstructed to a chain of SU(2) gauge groups adjacent to a chain of U(1) gauge groups, and with the fermions coupled to any single SU(2) group and to any single U(1) group along the chain. The primary advantage of our technique is that the size of corrections to electroweak processes can be directly related to the spectrum of vector bosons ('KK modes'). In Higgsless models, this spectrum is constrained by unitarity. Our methods also allow for arbitrary background 5D geometry, spatially dependent gauge-couplings, and brane kinetic energy terms. We find that, due to the size of corrections to electroweak processes in any unitary theory, Higgsless models with localized fermions are disfavored by precision electroweak data. Although we stress our results as they apply to continuum Higgsless 5D models, they apply to any linear moose model including those with only a few extra vector bosons. Our calculations of electroweak corrections also apply directly to the electroweak gauge sector of 5D theories with a bulk scalar Higgs boson; the constraints arising from unitarity do not apply in this case.
Subthreshold linear modeling of dendritic trees: a computational approach.
Khodaei, Alireza; Pierobon, Massimiliano
2016-08-01
The design of communication systems based on the transmission of information through neurons is envisioned as a key technology for the pervasive interconnection of future wearable and implantable devices. While previous literature has mainly focused on modeling the propagation of electrochemical spikes carrying natural information through the nervous system, in recent work the authors of this paper proposed so-called subthreshold electrical stimulation as a viable technique to propagate artificial information through neurons. This technique promises to limit interference with natural communication processes, and it can be successfully approximated with linear models. In this paper, a novel model is proposed to account for subthreshold stimulus propagation from the dendritic tree to the soma of a neuron. A computational approach is detailed to obtain this model for a given realistic 3D dendritic tree with an arbitrary morphology. Numerical results from the model are obtained over a stimulation signal bandwidth of 1 kHz and compared with the results of a simulation in the NEURON software.
Linear-Nonlinear-Poisson Models of Primate Choice Dynamics
Corrado, Greg S; Sugrue, Leo P; Sebastian Seung, H; Newsome, William T
2005-01-01
The equilibrium phenomenon of matching behavior traditionally has been studied in stationary environments. Here we attempt to uncover the local mechanism of choice that gives rise to matching by studying behavior in a highly dynamic foraging environment. In our experiments, 2 rhesus monkeys (Macaca mulatta) foraged for juice rewards by making eye movements to one of two colored icons presented on a computer monitor, each rewarded on dynamic variable-interval schedules. Using a generalization of Wiener kernel analysis, we recover a compact mechanistic description of the impact of past reward on future choice in the form of a Linear-Nonlinear-Poisson model. We validate this model through rigorous predictive and generative testing. Compared to our earlier work with this same data set, this model proves to be a better description of choice behavior and is more tightly correlated with putative neural value signals. Refinements over previous models include hyperbolic (as opposed to exponential) temporal discounting of past rewards, and differential (as opposed to fractional) comparisons of option value. Through numerical simulation we find that, within this class of strategies, the model parameters employed by the animals are very close to those that maximize reward harvesting efficiency. PMID:16596981
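The linear and nonlinear stages named in the abstract (hyperbolic discounting of past rewards, differential comparison of option values, a saturating nonlinearity) can be sketched as follows. All parameter values and the sigmoid form are illustrative assumptions, not the fitted primate model.

```python
import math

def hyperbolic_weights(n_back, tau=5.0):
    """Linear stage: hyperbolically discounted weights over the last n_back
    trials (the most recent trial gets the largest weight)."""
    w = [1.0 / (1.0 + k / tau) for k in range(1, n_back + 1)]
    total = sum(w)
    return [x / total for x in w]

def choice_prob(rew_a, rew_b, weights, slope=4.0):
    """Filter each option's reward history, compare values differentially
    (va - vb), and pass the difference through a sigmoidal nonlinearity."""
    va = sum(w * r for w, r in zip(weights, reversed(rew_a)))
    vb = sum(w * r for w, r in zip(weights, reversed(rew_b)))
    return 1.0 / (1.0 + math.exp(-slope * (va - vb)))  # P(choose option A)

w = hyperbolic_weights(10)
# Option A was rewarded on the most recent trial, B never: A is favoured.
p_a = choice_prob([0] * 9 + [1], [0] * 10, w)
```

The final Poisson stage of the model would then generate choices stochastically from a probability such as `p_a`.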
On the unnecessary ubiquity of hierarchical linear modeling.
McNeish, Daniel; Stapleton, Laura M; Silverman, Rebecca D
2017-03-01
In psychology and the behavioral sciences generally, the hierarchical linear model (HLM) and its extensions for discrete outcomes are popular methods for modeling clustered data. HLM and its discrete-outcome extensions, however, are certainly not the only methods available to model clustered data. Although other methods exist and are widely implemented in other disciplines, psychologists have yet to consider these methods in substantive studies. This article compares and contrasts HLM with alternative methods, including generalized estimating equations and cluster-robust standard errors. These alternative methods do not model random effects and thus make fewer assumptions; they are interpreted identically to single-level methods, with the benefit that estimates are adjusted to reflect clustering of observations. Situations where these alternative methods may be advantageous are discussed, including research questions where random effects are and are not required, when random effects can change the interpretation of regression coefficients, challenges of modeling with random effects with discrete outcomes, and examples of published psychology articles that use HLM that may have benefitted from using alternative methods. Illustrative examples are provided and discussed to demonstrate the advantages of the alternative methods and also when HLM would be the preferred method. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
Ravva, Patanjali; Karlsson, Mats O; French, Jonathan L
2014-04-30
The application of model-based meta-analysis in drug development has gained prominence recently, particularly for characterizing dose-response relationships and quantifying the treatment effect sizes of competitor drugs. The models are typically nonlinear in nature and involve covariates to explain the heterogeneity in summary-level literature data (aggregate data, AD). Inferring individual patient-level relationships from these nonlinear meta-analysis models leads to aggregation bias. Individual patient-level data (IPD) are required to characterize patient-level relationships, but too often this information is limited. Since combined analyses of AD and IPD can take advantage of the information the two sources share, the models developed for AD must be derived from IPD models; for linear models the solution is closed form, while for nonlinear models closed-form solutions do not exist. Here, we propose a linearization method based on a second-order Taylor series approximation for fitting models to AD alone or to combined AD and IPD. The application of this method is illustrated by an analysis of a continuous landmark endpoint, i.e., change from baseline in HbA1c at week 12, from 18 clinical trials evaluating the effects of DPP-4 inhibitors on hyperglycemia in diabetic patients. The performance of this method is demonstrated by a simulation study in which the effects of varying the degree of nonlinearity and of heterogeneity in covariates (as assessed by the ratio of between-trial to within-trial variability) were studied. A dose-response relationship using an Emax model with linear and nonlinear effects of covariates on the emax parameter was used to simulate data. The simulation results showed that when an IPD model is simply used for modeling AD, the bias in the emax parameter estimate increased noticeably with an increasing degree of nonlinearity in the model with respect to covariates. When using an appropriately derived AD model, the linearization
Genetic demixing and evolution in linear stepping stone models
Korolev, K. S.; Avlund, Mikkel; Hallatschek, Oskar; Nelson, David R.
2010-01-01
Results for mutation, selection, genetic drift, and migration in a one-dimensional continuous population are reviewed and extended. The population is described by a continuous limit of the stepping stone model, which leads to the stochastic Fisher-Kolmogorov-Petrovsky-Piscounov equation with additional terms describing mutations. Although the stepping stone model was first proposed for population genetics, it is closely related to “voter models” of interest in nonequilibrium statistical mechanics. The stepping stone model can also be regarded as an approximation to the dynamics of a thin layer of actively growing pioneers at the frontier of a colony of micro-organisms undergoing a range expansion on a Petri dish. The population tends to segregate into monoallelic domains. This segregation slows down genetic drift and selection because these two evolutionary forces can only act at the boundaries between the domains; the effects of mutation, however, are not significantly affected by the segregation. Although fixation in the neutral well-mixed (or “zero-dimensional”) model occurs exponentially in time, it occurs only algebraically fast in the one-dimensional model. An unusual sublinear increase is also found in the variance of the spatially averaged allele frequency with time. If selection is weak, selective sweeps occur exponentially fast in both well-mixed and one-dimensional populations, but the time constants are different. The relatively unexplored problem of evolutionary dynamics at the edge of an expanding circular colony is studied as well. How the observed patterns of genetic diversity can be used for statistical inference is also reviewed, and the differences between the well-mixed and one-dimensional models are highlighted. Although the focus is on two alleles or variants, q-allele Potts-like models of gene segregation are considered as well. Most of the analytical results are checked with simulations and could be tested against recent spatial
Genetic demixing and evolution in linear stepping stone models
NASA Astrophysics Data System (ADS)
Korolev, K. S.; Avlund, Mikkel; Hallatschek, Oskar; Nelson, David R.
2010-04-01
Results for mutation, selection, genetic drift, and migration in a one-dimensional continuous population are reviewed and extended. The population is described by a continuous limit of the stepping stone model, which leads to the stochastic Fisher-Kolmogorov-Petrovsky-Piscounov equation with additional terms describing mutations. Although the stepping stone model was first proposed for population genetics, it is closely related to “voter models” of interest in nonequilibrium statistical mechanics. The stepping stone model can also be regarded as an approximation to the dynamics of a thin layer of actively growing pioneers at the frontier of a colony of micro-organisms undergoing a range expansion on a Petri dish. The population tends to segregate into monoallelic domains. This segregation slows down genetic drift and selection because these two evolutionary forces can only act at the boundaries between the domains; the effects of mutation, however, are not significantly affected by the segregation. Although fixation in the neutral well-mixed (or “zero-dimensional”) model occurs exponentially in time, it occurs only algebraically fast in the one-dimensional model. An unusual sublinear increase is also found in the variance of the spatially averaged allele frequency with time. If selection is weak, selective sweeps occur exponentially fast in both well-mixed and one-dimensional populations, but the time constants are different. The relatively unexplored problem of evolutionary dynamics at the edge of an expanding circular colony is studied as well. How the observed patterns of genetic diversity can be used for statistical inference is also reviewed, and the differences between the well-mixed and one-dimensional models are highlighted. Although the focus is on two alleles or variants, q-allele Potts-like models of gene segregation are considered as well. Most of the analytical results are checked with simulations and could be tested against recent spatial
NASA Technical Reports Server (NTRS)
Golden, R. L.; Badhwar, G. D.; Stephens, S. A.
1975-01-01
The continuity equation for cosmic ray propagation is used to derive a set of linear equations interrelating the fluxes of multiply charged nuclei as observed at any particular part of the galaxy. The derivation leads to model-independent definitions for cosmic ray storage time, mean density of target nuclei, and effective mass traversed. The set of equations forms a common framework for comparisons of theories and observations. As an illustration, it is shown that there exists a large class of propagation models which give the same result as the exponential path length model. The formalism is shown to accommodate dynamic as well as equilibrium models of production and propagation.
The linear reservoir model: conceptual or physically based?
NASA Astrophysics Data System (ADS)
Skaugen, Thomas; Lawrence, Deborah
2017-04-01
From a gridded catchment (25 x 25 m grid cells), we have investigated the distribution of distances from grid points to the nearest river reach. Based on 130 Norwegian catchments, we find that an exponential distribution fits the empirical distance distributions very well. Such a distribution is very informative regarding how the catchment area is organised with respect to the river network and can be used to easily determine the catchment fractional area as a function of distance from the river network. This is important for runoff dynamics, since the travel times of water in the soil are longer than those in the river network by several orders of magnitude. If we consider the fractional areas for each distance interval, the properties of the exponential distance distribution dictate that the ratio between consecutive fractional areas is a constant, κ. Furthermore, if we assume that, after a precipitation event, water is propagated through the soil to the river network with a constant celerity/velocity, the ratio between the volumes of water drained into the river network at each time step is a constant equal to κ. A linear reservoir has the same property of consecutive runoff volumes having a constant ratio, and if the velocity/celerity is such that the distance interval between the consecutive areas is the distance travelled by water in one time step, Δt, then the rate constant, θ, of the linear reservoir is a straightforward function of the constant κ: θ = (1 - κ)/Δt. The fact that exponential distance distributions are found for so many (in fact, all we have investigated) Norwegian catchments suggests that rainfall-runoff models based on linear reservoirs can no longer be dismissed as purely conceptual, as they clearly reflect the physical dynamics of the runoff generation processes at the catchment scale.
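The chain from exponential distance distribution to reservoir rate constant can be sketched in a few lines; the catchment numbers below are hypothetical, only the algebra θ = (1 - κ)/Δt follows the abstract.

```python
import math

def kappa_from_mean_distance(mean_dist, celerity, dt):
    """Exponential distance distribution with mean D: the fractional area at
    distance d is proportional to exp(-d/D), so consecutive fractional areas
    separated by Δd = v*Δt have the constant ratio κ = exp(-v*Δt/D)."""
    return math.exp(-celerity * dt / mean_dist)

def reservoir_rate(kappa, dt):
    """Rate constant of the equivalent linear reservoir: θ = (1 - κ)/Δt."""
    return (1.0 - kappa) / dt

# Hypothetical values: mean distance to the river network 300 m,
# celerity 0.01 m/s, hourly time step.
dt = 3600.0
kappa = kappa_from_mean_distance(300.0, 0.01, dt)
theta = reservoir_rate(kappa, dt)

# Linear-reservoir recession: consecutive drained volumes keep the ratio κ.
q = [100.0]
for _ in range(5):
    q.append(q[-1] * kappa)
```

The recession list `q` illustrates the defining property used in the argument: each volume is κ times its predecessor.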
Model for continuous thermal metal to insulator transition
NASA Astrophysics Data System (ADS)
Jian, Chao-Ming; Bi, Zhen; Xu, Cenke
2017-09-01
We propose a d-dimensional interacting Majorana fermion model with quenched disorder, which gives a continuous quantum phase transition between a diffusive thermal metal phase with a finite entropy density and an insulator phase with zero entropy density. This model is based on coupled Sachdev-Ye-Kitaev model clusters and hence has a controlled large-N limit. The metal-insulator transition is accompanied by a spontaneous time-reversal symmetry breaking. We perform controlled calculations to show that the energy diffusion constant jumps to zero discontinuously at the metal-insulator transition, while the time-reversal symmetry-breaking order parameter increases continuously.
A Quasispecies Continuous Contact Model in a Critical Regime
NASA Astrophysics Data System (ADS)
Kondratiev, Yuri; Pirogov, Sergey; Zhizhina, Elena
2016-04-01
We study a new non-equilibrium dynamical model: a marked continuous contact model in d-dimensional space (d ≥ 3). We prove that for certain values of the rates (the critical regime) this system has a one-parameter family of invariant measures labelled by the spatial density of particles. Then we prove that the process starting from the marked Poisson measure converges to one of these invariant measures. In contrast with the continuous contact model studied earlier in Kondratiev (Infin Dimens Anal Quantum Probab Relat Top 11(2):231-258, 2008), the spatial particle density is now not a conserved quantity.
Revisiting "Discrepancy Analysis in Continuing Medical Education: A Conceptual Model"
ERIC Educational Resources Information Center
Fox, Robert D.
2011-01-01
Based upon a review and analysis of selected literature, the author presents a conceptual model of discrepancy analysis evaluation for planning, implementing, and assessing the impact of continuing medical education (CME). The model is described in terms of its value as a means of diagnosing errors in the development and implementation of CME. The…
A MCMC-Method for Models with Continuous Latent Responses.
ERIC Educational Resources Information Center
Maris, Gunter; Maris, Eric
2002-01-01
Introduces a new technique for estimating the parameters of models with continuous latent data. To streamline presentation of this Markov Chain Monte Carlo (MCMC) method, the Rasch model is used. Also introduces a new sampling-based Bayesian technique, the DA-T-Gibbs sampler. (SLD)
Modeling of water treatment plant using timed continuous Petri nets
NASA Astrophysics Data System (ADS)
Nurul Fuady Adhalia, H.; Subiono, Adzkiya, Dieky
2017-08-01
Petri nets graphically represent certain conditions and rules. In this paper, we construct a model of a Water Treatment Plant (WTP) using timed continuous Petri nets. Specifically, we assume that (1) the water pump is always active and (2) the water source is always available. After obtaining the model, the flows through the transitions and the token conservation laws are calculated.
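A minimal sketch of how flows through transitions and token conservation can be computed for a timed continuous Petri net, using infinite-server semantics (a common choice for continuous nets). The two-place, one-transition net below is a hypothetical stand-in, not the paper's WTP model.

```python
def firing_speeds(m, pre, rates):
    """Infinite-server semantics: transition j fires at speed
    f_j = rate_j * min over its input places p of m[p] / Pre[p][j]."""
    speeds = []
    for j, lam in enumerate(rates):
        enab = min(m[p] / pre[p][j] for p in range(len(m)) if pre[p][j] > 0)
        speeds.append(lam * enab)
    return speeds

def step(m, pre, post, rates, dt):
    """Euler step of the marking ODE  m' = (Post - Pre) * f(m)."""
    f = firing_speeds(m, pre, rates)
    return [m[p] + dt * sum((post[p][j] - pre[p][j]) * f[j] for j in range(len(f)))
            for p in range(len(m))]

# Hypothetical 2-place, 1-transition net: raw water -> (pump) -> treated water.
pre = [[1.0], [0.0]]    # Pre[p][j]: arcs from place p into transition j
post = [[0.0], [1.0]]   # Post[p][j]: arcs from transition j into place p
rates = [0.5]
m = [10.0, 0.0]
for _ in range(1000):
    m = step(m, pre, post, rates, 0.01)
# Token conservation law: total marking is invariant for this net.
```

Here the total marking m[0] + m[1] is a P-invariant of the net, so it stays constant while tokens flow from the raw-water place to the treated-water place.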
Non-linear scaling of a musculoskeletal model of the lower limb using statistical shape models.
Nolte, Daniel; Tsang, Chui Kit; Zhang, Kai Yu; Ding, Ziyun; Kedgley, Angela E; Bull, Anthony M J
2016-10-03
Accurate muscle geometry for musculoskeletal models is important to enable accurate subject-specific simulations. Commonly, linear scaling is used to obtain individualised muscle geometry. More advanced methods include non-linear scaling using segmented bone surfaces and manual or semi-automatic digitisation of muscle paths from medical images. In this study, a new scaling method combining non-linear scaling with reconstructions of bone surfaces using statistical shape modelling is presented. Statistical Shape Models (SSMs) of the femur and tibia/fibula were used to reconstruct bone surfaces of nine subjects. Reference models were created by morphing manually digitised muscle paths to the mean shapes of the SSMs using non-linear transformations, and inter-subject variability was calculated. Subject-specific models of muscle attachment and via points were created from three reference models. The accuracy was evaluated by calculating the differences between the scaled and manually digitised models. The points defining the muscle paths showed large inter-subject variability at the thigh and shank (up to 26 mm); this was found to limit the accuracy of all studied scaling methods. Errors in the subject-specific muscle point reconstructions of the thigh could be decreased by 9% to 20% by using non-linear scaling instead of a typical linear scaling method. We conclude that the proposed non-linear scaling method is more accurate than linear scaling methods. Thus, combined with the ability to reconstruct bone surfaces from incomplete or scattered geometry data using statistical shape models, our proposed method is an alternative to linear scaling methods.
Regression models for mixed Poisson and continuous longitudinal data.
Yang, Ying; Kang, Jian; Mao, Kai; Zhang, Jie
2007-09-10
In this article we develop regression models that are flexible in two respects: they evaluate the influence of covariates on mixed Poisson and continuous responses, and they evaluate how the correlation between the Poisson response and the continuous response changes over time. A scenario is proposed for dealing with regression models of mixed continuous and Poisson responses in the presence of heterogeneous variance and correlation that change over time. Our general approach is first to build a joint marginal model and to check, via a likelihood ratio test, whether the variance and correlation change over time. If they do, we apply a suitable data transformation to properly evaluate the influence of the covariates on the mixed responses. The proposed methods are applied to the Interstitial Cystitis Data Base (ICDB) cohort study, and we find that the positive correlations significantly change over time, which suggests that heterogeneous variances should not be ignored in modelling and inference.
Mbougua, Jules Brice Tchatchueng; Laurent, Christian; Ndoye, Ibra; Delaporte, Eric; Gwet, Henri; Molinari, Nicolas
2013-11-20
Multiple imputation is commonly used to impute missing covariates in the Cox semiparametric regression setting. It fills in each missing datum with plausible values via a Gibbs sampling procedure, specifying an imputation model for each missing variable. This imputation method is implemented in several software packages that offer imputation models steered by the shape of the variable to be imputed, but all these imputation models assume linearity of the covariate effects. However, this assumption is often not verified in practice, as covariates can have nonlinear effects. Such a linearity assumption can lead to misleading conclusions, because the imputation model should be constructed to reflect the true distributional relationship between the missing and the observed values. To estimate nonlinear effects of continuous time-invariant covariates in the imputation model, we propose a method based on B-spline functions. To assess the performance of this method, we conducted a simulation study comparing multiple imputation using a Bayesian spline imputation model with multiple imputation using a Bayesian linear imputation model in a survival analysis setting. We evaluated the proposed method on the motivating data set, collected from HIV-infected patients enrolled in an observational cohort study in Senegal, which contains several incomplete variables. We found that our method performs well in estimating hazard ratios compared with the linear imputation methods when data are missing completely at random or missing at random. Copyright © 2013 John Wiley & Sons, Ltd.
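The smooth covariate effects in such a spline imputation model are built from B-spline basis functions, with the nonlinear effect written as f(t) = Σ_i β_i B_i(t). A minimal sketch of the Cox-de Boor recursion that evaluates a basis function (the knot vector and degree below are chosen for illustration only):

```python
def bspline_basis(i, k, t, knots):
    """Cox-de Boor recursion: value at t of the i-th B-spline basis function
    of degree k over the given knot vector."""
    if k == 0:
        return 1.0 if knots[i] <= t < knots[i + 1] else 0.0
    val = 0.0
    d1 = knots[i + k] - knots[i]
    if d1 > 0:
        val += (t - knots[i]) / d1 * bspline_basis(i, k - 1, t, knots)
    d2 = knots[i + k + 1] - knots[i + 1]
    if d2 > 0:
        val += (knots[i + k + 1] - t) / d2 * bspline_basis(i + 1, k - 1, t, knots)
    return val

# Cubic basis on a clamped knot vector over [0, 1].
knots = [0.0, 0.0, 0.0, 0.0, 0.25, 0.5, 0.75, 1.0, 1.0, 1.0, 1.0]
n_basis = len(knots) - 3 - 1   # 7 cubic basis functions
row = [bspline_basis(i, 3, 0.4, knots) for i in range(n_basis)]
```

On a clamped knot vector the basis functions are non-negative and sum to one at each interior point, which is what makes the fitted effect a smooth, local combination of the coefficients.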
NASA Astrophysics Data System (ADS)
Simpson, D. J. W.
2017-01-01
The mode-locking regions of a dynamical system are subsets of parameter space within which there exists an attracting periodic solution. For piecewise-linear continuous maps, these regions have a distinctive chain structure with points of zero width called shrinking points. In this paper a local analysis about an arbitrary shrinking point is performed. This is achieved by studying the symbolic itineraries of periodic solutions in nearby mode-locking regions and performing an asymptotic analysis on one-dimensional centre manifolds in order to build a comprehensive theoretical framework for the local dynamics. The main results are universal quantitative descriptions for the shape of nearby mode-locking regions, the location of nearby shrinking points, and the key properties of these shrinking points. The results are applied to the three-dimensional border-collision normal form, a model of an oscillator subject to dry friction, and a model of a DC/DC power converter.
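A one-dimensional caricature of such a piecewise-linear continuous map can show how the attracting periodic solution, and the symbolic itinerary that labels its mode-locking region, is found numerically. The map and its parameter values are illustrative, not the three-dimensional border-collision normal form of the paper.

```python
def pwl_map(x, a, b, mu):
    """Piecewise-linear continuous map: linear on each side of the border x = 0."""
    return a * x + mu if x < 0 else b * x + mu

def attracting_period(a, b, mu, x0=0.1, transient=5000, max_period=64, tol=1e-9):
    """Iterate past a transient, then return the smallest period of the
    attractor and its L/R symbolic itinerary (None if nothing short is found)."""
    x = x0
    for _ in range(transient):
        x = pwl_map(x, a, b, mu)
    orbit = [x]
    for _ in range(max_period):
        x = pwl_map(x, a, b, mu)
        orbit.append(x)
    for p in range(1, max_period + 1):
        if abs(orbit[p] - orbit[0]) < tol:
            return p, ''.join('L' if v < 0 else 'R' for v in orbit[:p])
    return None

# a = 0.3, b = -2, mu = 1: the orbit alternates sides and |a*b| < 1, so a
# stable period-2 solution exists; its itinerary labels the region.
result = attracting_period(0.3, -2.0, 1.0)
```

Sweeping the parameters and recording where each itinerary persists traces out the chain structure of mode-locking regions described above.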
Linear regression models for solvent accessibility prediction in proteins.
Wagner, Michael; Adamczak, Rafał; Porollo, Aleksey; Meller, Jarosław
2005-04-01
The relative solvent accessibility (RSA) of an amino acid residue in a protein structure is a real number that represents the solvent exposed surface area of this residue in relative terms. The problem of predicting the RSA from the primary amino acid sequence can therefore be cast as a regression problem. Nevertheless, RSA prediction has so far typically been cast as a classification problem. Consequently, various machine learning techniques have been used within the classification framework to predict whether a given amino acid exceeds some (arbitrary) RSA threshold and would thus be predicted to be "exposed," as opposed to "buried." We have recently developed novel methods for RSA prediction using nonlinear regression techniques which provide accurate estimates of the real-valued RSA and outperform classification-based approaches with respect to commonly used two-class projections. However, while their performance seems to provide a significant improvement over previously published approaches, these Neural Network (NN) based methods are computationally expensive to train and involve several thousand parameters. In this work, we develop alternative regression models for RSA prediction which are computationally much less expensive, involve orders-of-magnitude fewer parameters, and are still competitive in terms of prediction quality. In particular, we investigate several regression models for RSA prediction using linear L1-support vector regression (SVR) approaches as well as standard linear least squares (LS) regression. Using rigorously derived validation sets of protein structures and extensive cross-validation analysis, we compare the performance of the SVR with that of LS regression and NN-based methods. In particular, we show that the flexibility of the SVR (as encoded by metaparameters such as the error insensitivity and the error penalization terms) can be very beneficial to optimize the prediction accuracy for buried residues. We conclude that the simple
Selection between Linear Factor Models and Latent Profile Models Using Conditional Covariances
ERIC Educational Resources Information Center
Halpin, Peter F.; Maraun, Michael D.
2010-01-01
A method for selecting between K-dimensional linear factor models and (K + 1)-class latent profile models is proposed. In particular, it is shown that the conditional covariances of observed variables are constant under factor models but nonlinear functions of the conditioning variable under latent profile models. The performance of a convenient…
Direction of Effects in Multiple Linear Regression Models.
Wiedermann, Wolfgang; von Eye, Alexander
2015-01-01
Previous studies analyzed asymmetric properties of the Pearson correlation coefficient using higher than second order moments. These asymmetric properties can be used to determine the direction of dependence in a linear regression setting (i.e., establish which of two variables is more likely to be on the outcome side) within the framework of cross-sectional observational data. Extant approaches are restricted to the bivariate regression case. The present contribution extends the direction of dependence methodology to a multiple linear regression setting by analyzing distributional properties of residuals of competing multiple regression models. It is shown that, under certain conditions, the third central moments of estimated regression residuals can be used to decide upon direction of effects. In addition, three different approaches for statistical inference are discussed: a combined D'Agostino normality test, a skewness difference test, and a bootstrap difference test. Type I error and power of the procedures are assessed using Monte Carlo simulations, and an empirical example is provided for illustrative purposes. In the discussion, issues concerning the quality of psychological data, possible extensions of the proposed methods to the fourth central moment of regression residuals, and potential applications are addressed.
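The core idea, that residual skewness separates the correctly specified model from the reverse model, can be sketched for the bivariate case. Synthetic data; the paper's inference procedures (normality, skewness-difference, and bootstrap tests) are not reproduced here:

```python
# Sketch: under the true model x -> y with symmetric noise, residuals are
# near-symmetric; under the reverse regression they inherit skewness from x.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
x = rng.exponential(size=2000)            # skewed "true" predictor
y = 0.7 * x + rng.normal(size=2000)       # true direction: x -> y

def residual_skew(pred, resp):
    b = np.polyfit(pred, resp, 1)         # simple linear fit
    return stats.skew(resp - np.polyval(b, pred))

print(residual_skew(x, y), residual_skew(y, x))
```

The residuals of the mis-specified direction carry markedly larger third-moment asymmetry, which is the quantity the proposed tests act on.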
Feedbacks, climate sensitivity, and the limits of linear models
NASA Astrophysics Data System (ADS)
Rugenstein, M.; Knutti, R.
2015-12-01
The term "feedback" is used ubiquitously in climate research, but implies varied meanings in different contexts. From a specific process that locally affects a quantity, to a formal framework that attempts to determine a global response to a forcing, researchers use this term to separate, simplify, and quantify parts of the complex Earth system. We combine large (>120 member) ensemble GCM and EMIC step forcing simulations over a broad range of forcing levels with a historical and educational perspective to organize existing ideas around feedbacks and linear forcing-feedback models. With a new method overcoming internal variability and initial condition problems we quantify the non-constancy of the climate feedback parameter. Our results suggest a strong state- and forcing-dependency of feedbacks, which is not considered appropriately in many studies. A non-constant feedback factor likely explains some of the differences in estimates of equilibrium climate sensitivity from different methods and types of data. We discuss implications for the definition of the forcing term and its various adjustments. Clarifying the value and applicability of the linear forcing feedback framework and a better quantification of feedbacks on various timescales and spatial scales remains a high priority in order to better understand past and predict future changes in the climate system.
Categorical and Continuous Models of Liability to Externalizing Disorders
Markon, Kristian E.; Krueger, Robert F.
2008-01-01
Context Patterns of genetic, environmental, and phenotypic relationships among antisocial behavior and substance use disorders indicate the presence of a common externalizing liability. However, whether this liability is relatively continuous and graded, or categorical and class-like, has not been well established. Objectives To compare the fit of categorical and continuous models of externalizing liability in a large, nationally representative sample. Design Categorical and continuous models of externalizing liability were compared using interview data from the National Epidemiologic Survey on Alcohol and Related Conditions (NESARC). Setting Face-to-face interviews conducted in the United States. Participants Random sample of 43 093 noninstitutionalized adult civilians living in the United States. Main Outcome Measures Lifetime and current (past 12 months) diagnoses of antisocial personality disorder, nicotine dependence, alcohol dependence, marijuana dependence, cocaine dependence, and other substance dependence. Results In the entire sample, as well as for males and females separately, using either lifetime or current diagnoses, the best-fitting model of externalizing liability was a continuous normal model. Moreover, there was a general trend toward latent trait models fitting better than latent class models, indicating that externalizing liability was continuous and graded, rather than categorical and class-like. Conclusions Liability to externalizing spectrum disorders is graded and continuous normal in distribution. Research regarding etiology, assessment, and treatment of externalizing disorders should target externalizing liability over a range of severity. Current diagnoses represent extremes of this continuous liability distribution, indicating that conditions currently classified as subthreshold are likely to provide important information regarding liability to externalizing phenomena. PMID:16330723
Modeling Continuous Admixture Using Admixture-Induced Linkage Disequilibrium.
Zhou, Ying; Qiu, Hongxiang; Xu, Shuhua
2017-02-23
Recent migrations and inter-ethnic mating of long isolated populations have resulted in genetically admixed populations. To understand the complex population admixture process, which is critical to both evolutionary and medical studies, here we used admixture-induced linkage disequilibrium (LD) to infer continuous admixture events, which are common for most existing admixed populations. Unlike previous studies, we expanded the typical continuous admixture model to a more general scenario with isolation after a certain duration of continuous gene flow. Based on the new models, we developed a method, CAMer, to infer the admixture history considering continuous and complex demographic processes of gene flow between populations. We evaluated the performance of CAMer by computer simulation and further applied our method to real data analysis of a few well-known admixed populations.
Forecasting Groundwater Temperature with Linear Regression Models Using Historical Data.
Figura, Simon; Livingstone, David M; Kipfer, Rolf
2015-01-01
Although temperature is an important determinant of many biogeochemical processes in groundwater, very few studies have attempted to forecast the response of groundwater temperature to future climate warming. Using a composite linear regression model based on the lagged relationship between historical groundwater and regional air temperature data, empirical forecasts were made of groundwater temperature in several aquifers in Switzerland up to the end of the current century. The model was fed with regional air temperature projections calculated for greenhouse-gas emissions scenarios A2, A1B, and RCP3PD. Model evaluation revealed that the approach taken is adequate only when the data used to calibrate the models are sufficiently long and contain sufficient variability. These conditions were satisfied for three aquifers, all fed by riverbank infiltration. The forecasts suggest that with respect to the reference period 1980 to 2009, groundwater temperature in these aquifers will most likely increase by 1.1 to 3.8 K by the end of the current century, depending on the greenhouse-gas emissions scenario employed.
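A hedged sketch of the lagged-regression idea on synthetic series; the lag, coefficients, and temperature series below are assumptions for illustration, not the Swiss aquifer data:

```python
# Sketch: regress groundwater temperature on air temperature lagged by k
# months, then forecast from a projected (warmer) air temperature.
import numpy as np

rng = np.random.default_rng(2)
months = np.arange(360)                    # 30 years of synthetic monthly data
air = (10 + 0.003 * months + 8 * np.sin(2 * np.pi * months / 12)
       + rng.normal(scale=0.5, size=360))

k = 3                                      # assumed lag in months
air_lag = air[:-k]                         # air temperature k months earlier
gw = 6 + 0.35 * air_lag + rng.normal(scale=0.2, size=air_lag.size)

slope, intercept = np.polyfit(air_lag, gw, 1)
future_air = air[-1] + 2.0                 # projected air-temperature scenario
print("forecast groundwater T:", intercept + slope * future_air)
```

The study's point about calibration data applies directly here: the fit is only trustworthy if the historical series is long and variable enough to pin down the slope.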
Variational Bayesian Parameter Estimation Techniques for the General Linear Model
Starke, Ludger; Ostwald, Dirk
2017-01-01
Variational Bayes (VB), variational maximum likelihood (VML), restricted maximum likelihood (ReML), and maximum likelihood (ML) are cornerstone parametric statistical estimation techniques in the analysis of functional neuroimaging data. However, the theoretical underpinnings of these model parameter estimation techniques are rarely covered in introductory statistical texts. Because of the widespread practical use of VB, VML, ReML, and ML in the neuroimaging community, we reasoned that a theoretical treatment of their relationships and their application in a basic modeling scenario may be helpful for both neuroimaging novices and practitioners alike. In this technical study, we thus revisit the conceptual and formal underpinnings of VB, VML, ReML, and ML and provide a detailed account of their mathematical relationships and implementational details. We further apply VB, VML, ReML, and ML to the general linear model (GLM) with non-spherical error covariance as commonly encountered in the first-level analysis of fMRI data. To this end, we explicitly derive the corresponding free energy objective functions and ensuing iterative algorithms. Finally, in the applied part of our study, we evaluate the parameter and model recovery properties of VB, VML, ReML, and ML, first in an exemplary setting and then in the analysis of experimental fMRI data acquired from a single participant under visual stimulation. PMID:28966572
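One building block of the GLM treatment above can be illustrated directly: with a known non-spherical error covariance V, the ML estimate of the coefficients reduces to generalized least squares, beta_hat = (X' V^-1 X)^-1 X' V^-1 y. A self-contained sketch with an assumed AR(1)-style covariance (illustrative, not fMRI data):

```python
# Sketch: GLM coefficient estimation under a known non-spherical error
# covariance via generalized least squares.
import numpy as np

rng = np.random.default_rng(3)
n = 200
X = np.column_stack([np.ones(n), rng.normal(size=n)])

rho = 0.5                                   # AR(1)-like correlation decay
V = rho ** np.abs(np.subtract.outer(np.arange(n), np.arange(n)))
L = np.linalg.cholesky(V)                   # to draw correlated errors

beta_true = np.array([1.0, 2.0])
y = X @ beta_true + L @ rng.normal(size=n)

Vinv = np.linalg.inv(V)
beta_gls = np.linalg.solve(X.T @ Vinv @ X, X.T @ Vinv @ y)
print(beta_gls)
```

In the paper's setting V itself carries unknown variance components, which is where ReML, VML, and VB enter; this sketch shows only the final, known-covariance step.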
Linear model for fast background subtraction in oligonucleotide microarrays
2009-01-01
Background One important preprocessing step in the analysis of microarray data is background subtraction. In high-density oligonucleotide arrays this is recognized as a crucial step for the global performance of the data analysis from raw intensities to expression values. Results We propose here an algorithm for background estimation based on a model in which the cost function is quadratic in a set of fitting parameters such that minimization can be performed through linear algebra. The model incorporates two effects: 1) Correlated intensities between neighboring features in the chip and 2) sequence-dependent affinities for non-specific hybridization fitted by an extended nearest-neighbor model. Conclusion The algorithm has been tested on 360 GeneChips from publicly available data of recent expression experiments. The algorithm is fast and accurate. Strong correlations between the fitted values for different experiments as well as between the free-energy parameters and their counterparts in aqueous solution indicate that the model captures a significant part of the underlying physical chemistry. PMID:19917117
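The computational point, that a cost function quadratic in the fitting parameters turns minimization into a single linear solve, can be sketched generically. The design matrix here is a synthetic placeholder, not the chip's neighbor-correlation and affinity model:

```python
# Sketch: quadratic cost => closed-form least-squares solution, no iterative
# optimization needed.
import numpy as np

rng = np.random.default_rng(4)
A = rng.normal(size=(1000, 10))            # stand-in design matrix
theta_true = rng.normal(size=10)
b = A @ theta_true + rng.normal(scale=0.1, size=1000)

theta, *_ = np.linalg.lstsq(A, b, rcond=None)
print(np.max(np.abs(theta - theta_true)))
```

This is why the algorithm can remain fast at the scale of hundreds of GeneChips: each background fit is linear algebra rather than nonlinear optimization.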
Gauged linear sigma model and pion-pion scattering
Fariborz, Amir H.; Schechter, Joseph; Shahid, M. Naeem
2009-12-01
A simple gauged linear sigma model with several parameters to take the symmetry breaking and the mass differences between the vector meson and the axial vector meson into account is considered here as a possibly useful 'template' for the role of a light scalar in QCD as well as for (at a different scale) an effective Higgs sector for some recently proposed walking technicolor models. An analytic procedure is first developed for relating the Lagrangian parameters to four well established (in the QCD application) experimental inputs. One simple equation distinguishes three different cases: i. QCD with axial vector particle heavier than vector particle, ii. possible technicolor model with vector particle heavier than the axial vector one, iii. the unphysical QCD case where both the Kawarabayashi-Suzuki-Riazuddin-Fayazuddin and Weinberg relations hold. The model is applied to the s-wave pion-pion scattering in QCD. Both the near threshold region and (with an assumed unitarization) the 'global' region up to about 800 MeV are considered. It is noted that there is a little tension between the choice of 'bare' sigma mass parameter for describing these two regions. If a reasonable 'global' fit is made, there is some loss of precision in the near threshold region.
A linear geospatial streamflow modeling system for data sparse environments
Asante, Kwabena O.; Arlan, Guleid A.; Pervez, Md Shahriar; Rowland, James
2008-01-01
In many river basins around the world, inaccessibility of flow data is a major obstacle to water resource studies and operational monitoring. This paper describes a geospatial streamflow modeling system which is parameterized with global terrain, soils and land cover data and run operationally with satellite‐derived precipitation and evapotranspiration datasets. Simple linear methods transfer water through the subsurface, overland and river flow phases, and the resulting flows are expressed in terms of standard deviations from mean annual flow. In sample applications, the modeling system was used to simulate flow variations in the Congo, Niger, Nile, Zambezi, Orange and Lake Chad basins between 1998 and 2005, and the resulting flows were compared with mean monthly values from the open‐access Global River Discharge Database. While the uncalibrated model cannot predict the absolute magnitude of flow, it can quantify flow anomalies in terms of relative departures from mean flow. Most of the severe flood events identified in the flow anomalies were independently verified by the Dartmouth Flood Observatory (DFO) and the Emergency Disaster Database (EM‐DAT). Despite its limitations, the modeling system is valuable for rapid characterization of the relative magnitude of flood hazards and seasonal flow changes in data sparse settings.
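The anomaly convention described above, flows expressed as standard deviations from mean flow, is a z-score; a minimal sketch with synthetic monthly flows:

```python
# Sketch: express flows as departures from the long-term mean in units of
# standard deviations, so uncalibrated magnitudes still rank event severity.
import numpy as np

rng = np.random.default_rng(5)
flow = rng.gamma(shape=2.0, scale=50.0, size=96)   # 8 years of monthly flow
anomaly = (flow - flow.mean()) / flow.std()
print("months exceeding +2 sd:", int((anomaly > 2).sum()))
```

This is what lets the uncalibrated model flag relative flood hazards without predicting absolute discharge.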
[Regression models for variables expressed as a continuous proportion].
Salinas-Rodríguez, Aarón; Pérez-Núñez, Ricardo; Avila-Burgos, Leticia
2006-01-01
To describe some of the statistical alternatives available for studying continuous proportions and to compare them in order to show their advantages and disadvantages by means of their application in a practical example of the Public Health field. From the National Reproductive Health Survey performed in 2003, the proportion of individual coverage in the family planning program--proposed in one study carried out in the National Institute of Public Health in Cuernavaca, Morelos, Mexico (2005)--was modeled using the Normal, Gamma, Beta and quasi-likelihood regression models. The Akaike Information Criterion (AIC) proposed by McQuarrie and Tsai was used to define the best model. Then, using a simulation (Markov chain Monte Carlo approach), a variable with a Beta distribution was generated to evaluate the behavior of the 4 models while varying the sample size from 100 to 18,000 observations. Results showed that the best statistical option for the analysis of continuous proportions was the Beta regression model, since its assumptions are easily satisfied and it had the lowest AIC value. Simulation evidenced that as the sample size increases, the Gamma, and even more so the quasi-likelihood, models come significantly close to the Beta regression model. The use of parametric Beta regression is highly recommended for modeling continuous proportions, and the normal model should be avoided. If the sample size is large enough, the use of the quasi-likelihood model represents a good alternative.
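A hedged sketch of Beta regression as recommended above: the mean follows a logit link, the precision phi a log link, and the parameters are found by maximizing the Beta log-likelihood. Data and parameter values are synthetic, not the survey's:

```python
# Sketch: Beta regression for a continuous proportion via direct maximum
# likelihood (mean on a logit link, precision phi on a log link).
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit
from scipy.stats import beta as beta_dist

rng = np.random.default_rng(6)
x = rng.normal(size=1000)
mu = expit(0.5 + 1.0 * x)                 # true mean on (0, 1)
phi = 30.0                                # true precision
y = rng.beta(mu * phi, (1 - mu) * phi)

def negloglik(params):
    b0, b1, log_phi = params
    m = expit(b0 + b1 * x)
    p = np.exp(log_phi)
    return -beta_dist.logpdf(y, m * p, (1 - m) * p).sum()

fit = minimize(negloglik, x0=[0.0, 0.0, np.log(10.0)], method="BFGS")
print(fit.x)                              # [b0, b1, log phi] estimates
```

The Beta parameterization used here, (mu * phi, (1 - mu) * phi), keeps the mean and dispersion interpretable, which is part of why the Beta model suits proportions better than the normal model.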
Identifying multiple change points in a linear mixed effects model.
Lai, Yinglei; Albert, Paul S
2014-03-15
Although change-point analysis methods for longitudinal data have been developed, it is often of interest to detect multiple change points in longitudinal data. In this paper, we propose a linear mixed effects modeling framework for identifying multiple change points in longitudinal Gaussian data. Specifically, we develop a novel statistical and computational framework that integrates the expectation-maximization and the dynamic programming algorithms. We conduct a comprehensive simulation study to demonstrate the performance of our method. We illustrate our method with an analysis of data from a trial evaluating a behavioral intervention for the control of type I diabetes in adolescents with HbA1c as the longitudinal response variable. Copyright © 2013 John Wiley & Sons, Ltd.
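The dynamic-programming ingredient can be illustrated on a deliberately simplified problem: exact segmentation of one series into K segments with piecewise-constant means, leaving out the mixed-effects and EM layers of the actual method:

```python
# Sketch: exact K-segment least-squares segmentation by dynamic programming,
# O(K * n^2) using cumulative sums for O(1) segment costs.
import numpy as np

def best_segmentation(y, K):
    n = len(y)
    c1 = np.concatenate([[0.0], np.cumsum(y)])
    c2 = np.concatenate([[0.0], np.cumsum(y ** 2)])

    def sse(i, j):                         # squared error of segment y[i:j]
        s, s2, m = c1[j] - c1[i], c2[j] - c2[i], j - i
        return s2 - s * s / m

    cost = np.full((K + 1, n + 1), np.inf)
    back = np.zeros((K + 1, n + 1), dtype=int)
    cost[0, 0] = 0.0
    for k in range(1, K + 1):
        for j in range(k, n + 1):
            for i in range(k - 1, j):
                c = cost[k - 1, i] + sse(i, j)
                if c < cost[k, j]:
                    cost[k, j], back[k, j] = c, i

    cuts, j = [], n                        # backtrack the segment starts
    for k in range(K, 0, -1):
        j = back[k, j]
        cuts.append(j)
    return sorted(cuts)[1:]                # interior change points

y = np.concatenate([np.zeros(30), 3 * np.ones(30), np.ones(30)])
print(best_segmentation(y, 3))
```

In the paper this DP recursion runs inside an EM loop over the mixed-model likelihood rather than over raw squared error, but the optimal-substructure idea is the same.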
Optimization in generalized linear models: A case study
NASA Astrophysics Data System (ADS)
Silva, Eliana Costa e.; Correia, Aldina; Lopes, Isabel Cristina
2016-06-01
The maximum likelihood method is usually chosen to estimate the regression parameters of Generalized Linear Models (GLM) and also for hypothesis testing and goodness of fit tests. The classical method for estimating GLM parameters is Fisher scoring. In this work we propose to compute the estimates of the parameters with two alternative methods: a derivative-based optimization method, namely the BFGS method, which is one of the most popular quasi-Newton algorithms, and the PSwarm derivative-free optimization method, which combines features of a pattern search optimization method with a global Particle Swarm scheme. As a case study we use a dataset of biological parameters (phytoplankton) and chemical and environmental parameters of the water column of a Portuguese reservoir. The results show that, for this dataset, the BFGS and PSwarm methods provided a better fit than the Fisher scoring method and can be good alternatives for finding the estimates of the parameters of a GLM.
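The flavor of the derivative-based alternative can be sketched by fitting a Poisson GLM (log link) through direct BFGS minimization of the negative log-likelihood. The data are synthetic, not the reservoir dataset:

```python
# Sketch: Poisson GLM fit by BFGS on the negative log-likelihood (constant
# log(y!) term dropped, since it does not depend on beta).
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(7)
X = np.column_stack([np.ones(500), rng.normal(size=500)])
beta_true = np.array([0.5, 0.8])
y = rng.poisson(np.exp(X @ beta_true))

def negloglik(beta):
    eta = X @ beta
    return -(y * eta - np.exp(eta)).sum()  # Poisson kernel, log link

fit = minimize(negloglik, x0=np.zeros(2), method="BFGS")
print(fit.x)
```

Fisher scoring would solve the same problem by iteratively reweighted least squares; BFGS instead builds an approximate Hessian from gradients.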
A Second-Order Conditionally Linear Mixed Effects Model With Observed and Latent Variable Covariates
Harring, Jeffrey R.; Kohli, Nidhi; Silverman, Rebecca D.; Speece, Deborah L.
2012-01-01
A conditionally linear mixed effects model is an appropriate framework for investigating nonlinear change in a continuous latent variable that is repeatedly measured over time. The efficacy of the model is that it allows parameters that enter the specified nonlinear time-response function to be stochastic, whereas those parameters that enter in a nonlinear manner are common to all subjects. In this article we describe how a variant of the Michaelis–Menten (M–M) function can be fit within this modeling framework using Mplus 6.0. We demonstrate how observed and latent covariates can be incorporated to help explain individual differences in growth characteristics. Features of the model including an explication of key analytic decision points are illustrated using longitudinal reading data. To aid in making this class of models accessible, annotated Mplus code is provided. PMID:22915834
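For orientation, the Michaelis-Menten time-response function mentioned above can be fit to a single synthetic trajectory with ordinary non-linear least squares (not Mplus, and without the mixed-effects layer that lets parameters vary across subjects):

```python
# Sketch: fit the Michaelis-Menten form y = th1 * t / (th2 + t) to one
# synthetic growth trajectory.
import numpy as np
from scipy.optimize import curve_fit

def michaelis_menten(t, th1, th2):
    return th1 * t / (th2 + t)            # asymptote th1, half-rise time th2

rng = np.random.default_rng(8)
t = np.linspace(0.5, 10, 20)
y = michaelis_menten(t, 100.0, 2.0) + rng.normal(scale=2.0, size=t.size)

(th1, th2), _ = curve_fit(michaelis_menten, t, y, p0=[80.0, 1.0])
print(th1, th2)
```

In the conditionally linear mixed model, th1 (which enters linearly) can be random across subjects while th2 (which enters nonlinearly) is held common, which is exactly the trick the abstract describes.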
A Linear City Model with Asymmetric Consumer Distribution
Azar, Ofer H.
2015-01-01
The article analyzes a linear-city model where the consumer distribution can be asymmetric, which is important because in real markets this distribution is often asymmetric. The model yields equilibrium price differences, even though the firms’ costs are equal and their locations are symmetric (at the two endpoints of the city). The equilibrium price difference is proportional to the transportation cost parameter and does not depend on the good's cost. The firms' markups are also proportional to the transportation cost. The two firms’ prices will be equal in equilibrium if and only if half of the consumers are located to the left of the city’s midpoint, even if other characteristics of the consumer distribution are highly asymmetric. An extension analyzes what happens when the firms have different costs and how the two sources of asymmetry – the consumer distribution and the cost per unit – interact together. The model can be useful as a tool for further development by other researchers interested in applying this simple yet flexible framework for the analysis of various topics. PMID:26034984
Preconditioning the bidomain model with almost linear complexity
NASA Astrophysics Data System (ADS)
Pierre, Charles
2012-01-01
The bidomain model is widely used in electro-cardiology to simulate spreading of excitation in the myocardium and electrocardiograms. It consists of a system of two parabolic reaction diffusion equations coupled with an ODE system. Its discretisation displays an ill-conditioned system matrix to be inverted at each time step: simulations based on the bidomain model therefore are associated with high computational costs. In this paper we propose a preconditioning for the bidomain model either for an isolated heart or in an extended framework including a coupling with the surrounding tissues (the torso). The preconditioning is based on a formulation of the discrete problem that is shown to be symmetric positive semi-definite. A block LU decomposition of the system together with a heuristic approximation (referred to as the monodomain approximation) are the key ingredients for the preconditioning definition. Numerical results are provided for two test cases: a 2D test case on a realistic slice of the thorax based on a segmented heart medical image geometry, a 3D test case involving a small cubic slab of tissue with orthotropic anisotropy. The analysis of the resulting computational cost (both in terms of CPU time and of iteration number) shows an almost linear complexity with the problem size, i.e. of type nlog α( n) (for some constant α) which is optimal complexity for such problems.
Simulating annual glacier flow with a linear reservoir model
NASA Astrophysics Data System (ADS)
Span, Norbert; Kuhn, Michael
2003-05-01
In this paper we present a numerical simulation of the observation that most alpine glaciers have reached peak velocities in the early 1980s followed by nearly exponential decay of velocity in the subsequent decade. We propose that similarity exists between precipitation and associated runoff hydrograph in a river basin on one side and annual mean specific mass balance of the accumulation area of alpine glaciers and ensuing changes in ice flow on the other side. The similarity is expressed in terms of a linear reservoir with fluctuating input where the year to year change of ice velocity is governed by two terms, a fraction of the velocity of the previous year as a recession term and the mean specific balance of the accumulation area of the current year as a driving term. The coefficients of these terms directly relate to the timescale, the mass balance/altitude profile, and the geometric scale of the glacier. The model is well supported by observations in the upper part of the glacier where surface elevation stays constant to within ±5 m over a 30 year period. There is no temporal trend in the agreement between observed and modeled horizontal velocities and no difference between phases of acceleration and phases of deceleration, which means that the model is generally valid for a given altitude on a given glacier.
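The recursion described above can be written down directly (notation assumed, not taken from the paper): next year's velocity is a recession fraction of this year's velocity plus a driving term from the accumulation-area mass balance:

```python
# Sketch of the linear-reservoir recursion v[t+1] = a * v[t] + c * b[t],
# with a positive mass-balance pulse followed by zero input.
import numpy as np

a, c = 0.8, 0.5                            # assumed recession and driving coefficients
balance = np.zeros(30)
balance[:5] = 2.0                          # 5-year positive balance pulse

v = np.zeros(31)
v[0] = 1.0
for t in range(30):
    v[t + 1] = a * v[t] + c * balance[t]

print(v[5], v[15], v[25])                  # rise during the pulse, then decay
```

After the input stops, velocity decays geometrically (factor a per year), reproducing the nearly exponential velocity decline the abstract reports for the 1980s-90s.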
Comparison of Linear and Non-Linear Regression Models to Estimate Leaf Area Index of Dryland Shrubs.
NASA Astrophysics Data System (ADS)
Dashti, H.; Glenn, N. F.; Ilangakoon, N. T.; Mitchell, J.; Dhakal, S.; Spaete, L.
2015-12-01
Leaf area index (LAI) is a key parameter in global ecosystem studies. LAI is considered a forcing variable in land surface processing models since ecosystem dynamics are highly correlated to LAI. In response to environmental limitations, plants in semiarid ecosystems have smaller leaf area, making accurate estimation of LAI by remote sensing a challenging issue. Optical remote sensing (400-2500 nm) techniques to estimate LAI are based either on radiative transfer models (RTMs) or statistical approaches. Considering the complex radiation field of dry ecosystems, simple 1-D RTMs lead to poor results, and on the other hand, inversion of more complex 3-D RTMs is a demanding task which requires the specification of many variables. A good alternative to physical approaches is using methods based on statistics. Similar to many natural phenomena, there is a non-linear relationship between LAI and top-of-canopy electromagnetic waves reflected to optical sensors. Non-linear regression models can better capture this relationship. However, given the small number of observations relative to the dimensionality of the feature space (n < p), non-linear models will not necessarily outperform the simpler linear models. In this study linear versus non-linear regression techniques were investigated to estimate LAI. Our study area is located in southwestern Idaho, Great Basin. Sagebrush (Artemisia tridentata spp.) serves a critical role in maintaining the structure of this ecosystem. Using a leaf area meter (Accupar LP-80), LAI values were measured in the field. Linear partial least squares (PLS) regression and non-linear, tree-based random forest regression have been implemented to estimate the LAI of sagebrush from hyperspectral data (AVIRIS-ng) collected in late summer 2014. Cross-validation of the results indicates that PLS can provide comparable results to random forest.
Ladefoged, Claes N; Benoit, Didier; Law, Ian; Holm, Søren; Kjær, Andreas; Højgaard, Liselotte; Hansen, Adam E; Andersen, Flemming L
2015-10-21
The reconstruction of PET brain data in a PET/MR hybrid scanner is challenging in the absence of transmission sources, where MR images are used for MR-based attenuation correction (MR-AC). The main challenge of MR-AC is to separate bone and air, as neither have a signal in traditional MR images, and to assign the correct linear attenuation coefficient to bone. The ultra-short echo time (UTE) MR sequence was proposed as a basis for MR-AC as this sequence shows a small signal in bone. The purpose of this study was to develop a new clinically feasible MR-AC method with patient specific continuous-valued linear attenuation coefficients in bone that provides accurate reconstructed PET image data. A total of 164 [18F]FDG PET/MR patients were included in this study, of which 10 were used for training. MR-AC was based on either standard CT (reference), UTE or our method (RESOLUTE). The reconstructed PET images were evaluated in the whole brain, as well as regionally in the brain using a ROI-based analysis. Our method segments air, brain, cerebral spinal fluid, and soft tissue voxels on the unprocessed UTE TE images, and uses a mapping of R2* values to CT Hounsfield Units (HU) to measure the density in bone voxels. The average error of our method in the brain was 0.1% and less than 1.2% in any region of the brain. On average 95% of the brain was within ±10% of PETCT, compared to 72% when using UTE. The proposed method is clinically feasible, reducing both the global and local errors on the reconstructed PET images, as well as limiting the number and extent of the outliers.
Lee, Deukhwan; Misztal, Ignacy; Bertrand, J Keith; Rekaya, Romdhane
2002-01-01
Data included 393,097 calving ease, 129,520 gestation length, and 412,484 birth weight records on 412,484 Gelbvieh cattle. Additionally, pedigrees were available on 72,123 animals. Included in the models were effects of sex and age of dam, treated as fixed, as well as direct, maternal genetic and permanent environmental effects and effects of contemporary group (herd-year-season), treated as random. In all analyses, birth weight and gestation length were treated as continuous traits. Calving ease (CE) was treated either as a continuous trait in a mixed linear model (LM), or as a categorical trait in linear-threshold models (LTM). Solutions in TM obtained by empirical Bayes (TMEB) and Monte Carlo (TMMC) methodologies were compared with those by LM. Due to the computational cost, only 10,000 samples were obtained for TMMC. For calving ease, correlations between LM and TMEB were 0.86 and 0.78 for direct and maternal genetic effects, respectively. The same correlations but between TMEB and TMMC were 1.00 and 0.98, respectively. The correlations between LM and TMMC were 0.85 and 0.75, respectively. The correlations for the linear traits were above 0.97 between LM and TMEB but as low as 0.91 between LM and TMMC, suggesting insufficient convergence of TMMC. Computing time required was about 2 hrs, 5 hrs, and 6 days for LM, TMEB and TMMC, respectively, and memory requirements were 169, 171, and 445 megabytes, respectively. Bayesian implementation of the threshold model is simple, can be extended to multiple categorical traits, and allows easy calculation of accuracies; however, computing time is prohibitively long for large models.
Carroll, P V; Drake, W M; Maher, K T; Metcalfe, K; Shaw, N J; Dunger, D B; Cheetham, T D; Camacho-Hübner, C; Savage, M O; Monson, J P
2004-08-01
Although GH replacement improves the features of GH deficiency (GHD) in adults, it has yet to be established whether cessation of GH at completion of childhood growth results in adverse consequences for the adolescent with GHD. Effects of continuation or cessation of GH on body composition, insulin sensitivity, and lipid levels were studied in 24 adolescents (13 males, 11 females, aged 17.0 +/- 0.3 yr, mean +/- SE, puberty stage 4 or 5) in whom height velocity was less than 2 cm/yr. Provocative testing confirmed severe GHD [peak GH < 9 mU/liter (3 microg/liter)] in all cases and was followed by a lead-in period of 3 months during which the pediatric dose of GH continued unchanged. Baseline investigations were then performed using dual-energy x-ray absorptiometry (body composition), lipid measurements, and assessment of insulin sensitivity by both homeostasis model assessment and a short insulin tolerance test. Twelve patients remained on GH (0.35 U/kg.wk), and 12 patients ceased GH treatment. The groups were followed up in parallel, with repeat observations made after 6 and 12 months. No endocrine differences were evident between the groups at baseline. GH cessation resulted in a reduction of the serum IGF-I Z score [-1.62 +/- 0.29, baseline vs. -2.52 +/- 0.12, 6 months (P < 0.05) vs. -2.52 +/- 0.10, 12 months (P < 0.01)], but values remained unchanged in those continuing GH replacement. Lean body mass increased by 2.5 +/- 0.5 kg (approximately 6%) over 12 months in those receiving GH but was unchanged after GH discontinuation. Cessation of GH resulted in increased insulin sensitivity [short insulin tolerance test, 153 +/- 22 micromol/liter.min, baseline vs. 187 +/- 20, 6 months (P < 0.05) vs. 204 +/- 14, 12 months (P = 0.05)], but no significant change was seen during 12 months of GH continuation. Lipid levels remained unaltered in both groups. Continuation of GH at completion of linear growth resulted in ongoing accrual of lean body mass (LBM), whereas skeletal
Modeling containment of large wildfires using generalized linear mixed-model analysis
Mark Finney; Isaac C. Grenfell; Charles W. McHugh
2009-01-01
Billions of dollars are spent annually in the United States to contain large wildland fires, but the factors contributing to suppression success remain poorly understood. We used a regression model (generalized linear mixed-model) to model containment probability of individual fires, assuming that containment was a repeated-measures problem (fixed effect) and...
On the Relation between the Linear Factor Model and the Latent Profile Model
ERIC Educational Resources Information Center
Halpin, Peter F.; Dolan, Conor V.; Grasman, Raoul P. P. P.; De Boeck, Paul
2011-01-01
The relationship between linear factor models and latent profile models is addressed within the context of maximum likelihood estimation based on the joint distribution of the manifest variables. Although the two models are well known to imply equivalent covariance decompositions, in general they do not yield equivalent estimates of the…
Performance Models for the Spike Banded Linear System Solver
Manguoglu, Murat; Saied, Faisal; Sameh, Ahmed; ...
2011-01-01
With the availability of large-scale parallel platforms comprising tens of thousands of processors and beyond, there is significant impetus for the development of scalable parallel sparse linear system solvers and preconditioners. An integral part of this design process is the development of performance models capable of predicting performance and providing accurate cost models for the solvers and preconditioners. There has been some work in the past on characterizing the performance of the iterative solvers themselves. In this paper, we investigate the problem of characterizing the performance and scalability of banded preconditioners. Recent work has demonstrated the superior convergence properties and robustness of banded preconditioners, compared to state-of-the-art ILU-family preconditioners as well as algebraic multigrid preconditioners. Furthermore, when used in conjunction with efficient banded solvers, banded preconditioners are capable of significantly faster time-to-solution. Our banded solver, the Truncated Spike algorithm, is specifically designed for parallel performance and tolerance to deep memory hierarchies. Its regular structure is also highly amenable to accurate performance characterization. Using these characteristics, we derive the following results in this paper: (i) we develop parallel formulations of the Truncated Spike solver, (ii) we develop a highly accurate pseudo-analytical parallel performance model for our solver, (iii) we show excellent prediction capabilities of our model, based on which we argue the high scalability of our solver. Our pseudo-analytical performance model is based on analytical performance characterization of each phase of our solver. These analytical models are then parameterized using actual runtime information on target platforms. An important consequence of our performance models is that they reveal underlying performance bottlenecks in both serial and parallel formulations. All of our results are validated
Kohli, Nidhi; Hughes, John; Wang, Chun; Zopluoglu, Cengiz; Davison, Mark L
2015-06-01
A linear-linear piecewise growth mixture model (PGMM) is appropriate for analyzing segmented (disjointed) change in individual behavior over time, where the data come from a mixture of 2 or more latent classes, and the underlying growth trajectories in the different segments of the developmental process within each latent class are linear. A PGMM allows the knot (change point), the time of transition from 1 phase (segment) to another, to be estimated (when it is not known a priori) along with the other model parameters. To assist researchers in deciding which estimation method is most advantageous for analyzing this kind of mixture data, the current research compares 2 popular approaches to inference for PGMMs: maximum likelihood (ML) via an expectation-maximization (EM) algorithm, and Markov chain Monte Carlo (MCMC) for Bayesian inference. Monte Carlo simulations were carried out to investigate and compare the ability of the 2 approaches to recover the true parameters in linear-linear PGMMs with unknown knots. The results show that MCMC for Bayesian inference outperformed ML via EM in nearly every simulation scenario. Real data examples are also presented, and the corresponding computer codes for model fitting are provided in the Appendix to aid practitioners who wish to apply this class of models.
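The within-class linear-linear mean trajectory with a knot that the abstract describes can be sketched as follows. This is a minimal illustration of the growth curve only, not of the mixture estimation (ML/EM or MCMC), and all parameter values are hypothetical:

```python
import numpy as np

def piecewise_linear(t, beta0, beta1, beta2, knot):
    """Linear-linear growth curve: intercept beta0, slope beta1 before
    the knot, slope beta1 + beta2 after it; continuous at the knot."""
    t = np.asarray(t, dtype=float)
    return beta0 + beta1 * t + beta2 * np.maximum(t - knot, 0.0)

# Hypothetical trajectory: growth of 2 units per wave up to the knot at
# t = 3, then decline of 1 unit per wave (beta2 = -3).
t = np.arange(7)
y = piecewise_linear(t, beta0=10.0, beta1=2.0, beta2=-3.0, knot=3.0)
```

In a PGMM the knot itself is a free parameter per latent class, which is what makes estimation hard and motivates the ML-versus-MCMC comparison in the paper.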
Fourth standard model family neutrino at future linear colliders
Ciftci, A.K.; Ciftci, R.; Sultansoy, S.
2005-09-01
It is known that flavor democracy favors the existence of the fourth standard model (SM) family. In order to give nonzero masses to the first three-family fermions, flavor democracy has to be slightly broken. A parametrization for democracy breaking, which gives the correct values for the fundamental fermion masses and, at the same time, predicts quark and lepton Cabibbo-Kobayashi-Maskawa (CKM) matrices in good agreement with the experimental data, is proposed. The pair production of the fourth SM family Dirac (ν₄) and Majorana (N₁) neutrinos at future linear colliders with √s = 500 GeV, 1 TeV, and 3 TeV is considered. The cross section for the process e⁺e⁻ → ν₄ν₄ (N₁N₁) and the branching ratios for possible decay modes of both neutrinos are determined. The decays of the fourth family neutrinos into muon channels (ν₄(N₁) → μ±W∓) provide the cleanest signature at e⁺e⁻ colliders. Meanwhile, in our parametrization this channel is dominant. W bosons produced in decays of the fourth family neutrinos will be seen in the detector as either di-jets or isolated leptons. As an example, we consider the production of 200 GeV mass fourth family neutrinos at √s = 500 GeV linear colliders, taking into account di-muon plus four-jet events as signatures.
Linear models for sound from supersonic reacting mixing layers
NASA Astrophysics Data System (ADS)
Chary, P. Shivakanth; Samanta, Arnab
2016-12-01
We perform a linearized reduced-order modeling of the aeroacoustic sound sources in supersonic reacting mixing layers to explore their sensitivities to some of the flow parameters in radiating sound. Specifically, we investigate the role of outer modes as the effective flow compressibility is raised, when some of these are expected to dominate over the traditional Kelvin-Helmholtz (K-H) -type central mode. Although the outer modes are known to be of lesser importance in the near-field mixing, how they radiate to the far field is uncertain, and this is our focus. Keeping the flow compressibility fixed, the outer modes are realized by biasing the respective mean densities of the fast (oxidizer) or slow (fuel) side. Here the mean flows are laminar solutions of two-dimensional compressible boundary layers with an imposed composite (turbulent) spreading rate, which we show to significantly alter the growth of instability waves by saturating them earlier, similar to nonlinear calculations, achieved here by solving the linear parabolized stability equations. As the flow parameters are varied, instability of the slow modes is shown to be more sensitive to heat release, potentially exceeding equivalent central modes, as these modes yield relatively compact sound sources with less spreading of the mixing layer, compared to the corresponding fast modes. In contrast, the radiated sound seems to be relatively unaffected when the mixture equivalence ratio is varied, except for a lean mixture, which is shown to yield a pronounced effect on the slow-mode radiation by reducing its modal growth.
Linear Models Based on Noisy Data and the Frisch Scheme
Ning, Lipeng; Georgiou, Tryphon T.; Tannenbaum, Allen; Boyd, Stephen P.
2016-01-01
We address the problem of identifying linear relations among variables based on noisy measurements. This is a central question in the search for structure in large data sets. Often a key assumption is that measurement errors in each variable are independent. This basic formulation has its roots in the work of Charles Spearman in 1904 and of Ragnar Frisch in the 1930s. Various topics such as errors-in-variables, factor analysis, and instrumental variables all refer to alternative viewpoints on this problem and on ways to account for the anticipated way that noise enters the data. In the present paper we begin by describing certain fundamental contributions by the founders of the field and provide alternative modern proofs to certain key results. We then go on to consider a modern viewpoint and novel numerical techniques to the problem. The central theme is expressed by the Frisch–Kalman dictum, which calls for identifying a noise contribution that allows a maximal number of simultaneous linear relations among the noise-free variables—a rank minimization problem. In the years since Frisch’s original formulation, there have been several insights, including trace minimization as a convenient heuristic to replace rank minimization. We discuss convex relaxations and theoretical bounds on the rank that, when met, provide guarantees for global optimality. A complementary point of view to this minimum-rank dictum is presented in which models are sought leading to a uniformly optimal quadratic estimation error for the error-free variables. Points of contact between these formalisms are discussed, and alternative regularization schemes are presented. PMID:27168672
Grotzinger, Andrew; Hildebrandt, Tom; Yu, Jessica
2016-01-01
Objective Change in binge eating is typically a primary outcome for interventions targeting individuals with eating pathology. A range of statistical models exist to handle these types of frequency distributions, but little empirical evidence exists to guide the appropriate choice of statistical model. Method Monte Carlo simulations were used to investigate the utility of semi-continuous models relative to continuous models in various situations relevant to binge eating treatment studies. Results Semi-continuous models yielded more accurate estimates of the population, while continuous models were higher powered when higher levels of missing data were present. Discussion The present findings generally support the use of semi-continuous models applied to binge eating data, with total sample sizes of roughly 200 being adequately powered to detect moderate treatment effects. However, models with a significant amount of missing data yielded more favorable power estimates for continuous models. PMID:25195793
Development and validation of a general purpose linearization program for rigid aircraft models
NASA Technical Reports Server (NTRS)
Duke, E. L.; Antoniewicz, R. F.
1985-01-01
A FORTRAN program that provides the user with a powerful and flexible tool for the linearization of aircraft models is discussed. The program LINEAR numerically determines a linear systems model using nonlinear equations of motion and a user-supplied, nonlinear aerodynamic model. The system model determined by LINEAR consists of matrices for both the state and observation equations. The program has been designed to allow easy selection and definition of the state, control, and observation variables to be used in a particular model. Also included in the report is a comparison of linear and nonlinear models for a high-performance aircraft.
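Numerical linearization of the kind LINEAR performs can be sketched with a central-difference Jacobian. This is a generic illustration, not the program's actual algorithm, and the pendulum dynamics are a made-up example:

```python
import numpy as np

def linearize(f, x0, u0, eps=1e-6):
    """Linearize xdot = f(x, u) about (x0, u0) by central differences,
    returning the state matrix A = df/dx and input matrix B = df/du."""
    x0 = np.asarray(x0, dtype=float)
    u0 = np.asarray(u0, dtype=float)
    n, m = x0.size, u0.size
    A = np.empty((n, n))
    B = np.empty((n, m))
    for j in range(n):
        dx = np.zeros(n)
        dx[j] = eps
        A[:, j] = (f(x0 + dx, u0) - f(x0 - dx, u0)) / (2 * eps)
    for j in range(m):
        du = np.zeros(m)
        du[j] = eps
        B[:, j] = (f(x0, u0 + du) - f(x0, u0 - du)) / (2 * eps)
    return A, B

# Hypothetical nonlinear model: pendulum with torque input,
# x = [angle, rate], u = [torque].
def pendulum(x, u):
    return np.array([x[1], -9.81 * np.sin(x[0]) + u[0]])

A, B = linearize(pendulum, x0=[0.0, 0.0], u0=[0.0])
```

About the downward equilibrium this recovers the familiar small-angle state matrix, which is the kind of state/observation matrix pair LINEAR produces for aircraft models.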
Estimating population trends with a linear model: Technical comments
Sauer, John R.; Link, William A.; Royle, J. Andrew
2004-01-01
Controversy has sometimes arisen over whether there is a need to accommodate the limitations of survey design in estimating population change from the count data collected in bird surveys. Analyses of surveys such as the North American Breeding Bird Survey (BBS) can be quite complex; it is natural to ask if the complexity is necessary, or whether the statisticians have run amok. Bart et al. (2003) propose a very simple analysis involving nothing more complicated than simple linear regression, and contrast their approach with model-based procedures. We review the assumptions implicit to their proposed method, and document that these assumptions are unlikely to be valid for surveys such as the BBS. One fundamental limitation of a purely design-based approach is the absence of controls for factors that influence detection of birds at survey sites. We show that failure to model observer effects in survey data leads to substantial bias in estimation of population trends from BBS data for the 20 species that Bart et al. (2003) used as the basis of their simulations. Finally, we note that the simulations presented in Bart et al. (2003) do not provide a useful evaluation of their proposed method, nor do they provide a valid comparison to the estimating-equations alternative they consider.
Non linear dynamics of flame cusps: from experiments to modeling
NASA Astrophysics Data System (ADS)
Almarcha, Christophe; Radisson, Basile; Al-Sarraf, Elias; Quinard, Joel; Villermaux, Emmanuel; Denet, Bruno; Joulin, Guy
2016-11-01
The propagation of premixed flames in a medium initially at rest exhibits the appearance and competition of elementary local singularities called cusps. We investigate this problem both experimentally and numerically. An analytical solution of the two-dimensional Michelson-Sivashinsky equation is obtained as a composition of pole solutions, which is compared with experimental flame fronts propagating between glass plates separated by a thin gap width. We demonstrate that the front dynamics can be reproduced numerically with good accuracy, from the linear stages of destabilization to its late-time evolution, using this model equation. In particular, the model accounts for the experimentally observed steady distribution of distances between cusps, which is well described by a one-parameter Gamma distribution, reflecting the aggregation-type interaction between the cusps. A modification of the Michelson-Sivashinsky equation taking gravity into account allows us to reproduce some other special features of these fronts. Aix-Marseille Univ., IRPHE, UMR 7342 CNRS, Centrale Marseille, Technopole de Château Gombert, 49 rue F. Joliot Curie, 13384 Marseille Cedex 13, France.
Linear System Models for Ultrasonic Imaging: Application to Signal Statistics
Zemp, Roger J.; Abbey, Craig K.; Insana, Michael F.
2009-01-01
Linear equations for modeling echo signals from shift-variant systems forming ultrasonic B-mode, Doppler, and strain images are analyzed and extended. The approach is based on a solution to the homogeneous wave equation for random inhomogeneous media. When the system is shift-variant, the spatial sensitivity function—defined as a spatial weighting function that determines the scattering volume for a fixed point of time—has advantages over the point-spread function traditionally used to analyze ultrasound systems. Spatial sensitivity functions are necessary for determining statistical moments in the context of rigorous image quality assessment, and they are time-reversed copies of point-spread functions for shift variant systems. A criterion is proposed to assess the validity of a local shift-invariance assumption. The analysis reveals realistic situations in which in-phase signals are correlated to the corresponding quadrature signals, which has strong implications for assessing lesion detectability. Also revealed is an opportunity to enhance near- and far-field spatial resolution by matched filtering unfocused beams. The analysis connects several well-known approaches to modeling ultrasonic echo signals. PMID:12839176
Gradient-based adaptation of continuous dynamic model structures
NASA Astrophysics Data System (ADS)
La Cava, William G.; Danai, Kourosh
2016-01-01
A gradient-based method of symbolic adaptation is introduced for a class of continuous dynamic models. The proposed model structure adaptation method starts with the first-principles model of the system and adapts its structure after adjusting its individual components in symbolic form. A key contribution of this work is its introduction of the model's parameter sensitivity as the measure of symbolic changes to the model. This measure, which is essential to defining the structural sensitivity of the model, not only accommodates algebraic evaluation of candidate models in lieu of more computationally expensive simulation-based evaluation, but also makes possible the implementation of gradient-based optimisation in symbolic adaptation. The proposed method is applied to models of several virtual and real-world systems that demonstrate its potential utility.
Wear-caused deflection evolution of a slide rail, considering linear and non-linear wear models
NASA Astrophysics Data System (ADS)
Kim, Dongwook; Quagliato, Luca; Park, Donghwi; Murugesan, Mohanraj; Kim, Naksoo; Hong, Seokmoo
2017-05-01
The research presented in this paper details an experimental-numerical approach for the quantitative correlation between wear and end-point deflection in a slide rail. Focusing on slide rails utilized in white-goods applications, the aim is to evaluate the number of cycles the slide rail can operate for, under different load conditions, before it should be replaced due to unacceptable end-point deflection. In this paper, two formulations are utilized to describe the wear: the Archard model for linear wear and the Lemaitre damage model for nonlinear wear. The linear wear gradually reduces the surface of the slide rail, whereas the nonlinear one accounts for surface element deletion (i.e. due to pitting). To determine the constants to use in the wear models, a simple tension test and a sliding wear test, utilizing a purpose-designed and developed test machine, have been carried out. A full slide rail model simulation has been implemented in ABAQUS, including both linear and non-linear wear models, and the results have been compared with those of real rails under different load conditions, provided by the rail manufacturer. The comparison between numerically estimated and real rail results proved the reliability of the developed numerical model, limiting the error to a ±10% range. The proposed approach allows predicting the displacement vs. cycle curves, parametrized for different loads, and, based on a chosen failure criterion, predicting the lifetime of the rail.
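The linear wear term here is Archard's law, which relates worn volume to load, sliding distance, and hardness. A minimal sketch follows; the numbers are illustrative, not the paper's fitted constants:

```python
def archard_wear_volume(k, load_n, sliding_m, hardness_pa):
    """Archard's law for sliding wear: V = k * F * s / H, where k is the
    dimensionless wear coefficient obtained from a sliding wear test."""
    return k * load_n * sliding_m / hardness_pa

# Illustrative values: k = 1e-4, 100 N load, 1000 m of cumulative
# sliding, 1 GPa hardness -> 1e-8 m^3 of material removed.
volume = archard_wear_volume(1e-4, 100.0, 1000.0, 1e9)
```

In the paper's workflow, a per-cycle volume of this kind is converted into a local surface recession in the finite-element model, while the Lemaitre damage model handles the nonlinear element-deletion part.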
A Spatially Continuous Model of Carbohydrate Digestion and Transport Processes in the Colon.
Moorthy, Arun S; Brooks, Stephen P J; Kalmokoff, Martin; Eberl, Hermann J
2015-01-01
A spatially continuous mathematical model of transport processes, anaerobic digestion and microbial complexity as would be expected in the human colon is presented. The model is a system of first-order partial differential equations with a context-determined number of dependent variables, and stiff, non-linear source terms. Numerical simulation of the model is used to elucidate information about the colon-microbiota complex. It is found that the composition of materials on outflow of the model does not well describe the composition of material at other model locations, and inferences drawn from outflow data vary according to the model reactor representation. Additionally, increased microbial complexity allows the total microbial community to withstand major system perturbations in diet and community structure. However, the distribution of strains and functional groups within the microbial community can be modified depending on perturbation length and microbial kinetic parameters. Preliminary model extensions and potential investigative opportunities using the computational model are discussed.
Formal modeling and verification of fractional order linear systems.
Zhao, Chunna; Shi, Likun; Guan, Yong; Li, Xiaojuan; Shi, Zhiping
2016-05-01
This paper presents a formalization of a fractional order linear system in a higher-order logic (HOL) theorem proving system. Based on the formalization of the Grünwald-Letnikov (GL) definition, we formally specify and verify the linear and superposition properties of fractional order systems. The proof provides rigorous and solid underpinnings for verifying concrete fractional order linear control systems. Our implementation in HOL demonstrates the effectiveness of our approach in practical applications.
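A minimal numerical sketch of the Grünwald-Letnikov definition that the formalization builds on (a plain floating-point approximation, not the HOL formalization itself):

```python
import math

def gl_derivative(samples, alpha, h):
    """Grünwald-Letnikov fractional derivative of order alpha at the last
    grid point, from uniformly spaced samples f(0), f(h), ..., f(t):
    D^alpha f(t) ~ h^(-alpha) * sum_j (-1)^j C(alpha, j) f(t - j h).
    The binomial weights follow c_0 = 1, c_j = c_{j-1} * (1 - (alpha+1)/j)."""
    acc, c = samples[-1], 1.0
    for j in range(1, len(samples)):
        c *= 1.0 - (alpha + 1.0) / j
        acc += c * samples[-1 - j]
    return acc / h ** alpha

# Half-derivative of f(t) = t at t = 1; the exact value is
# t^(1/2) / Gamma(3/2) = 2 / sqrt(pi).
h = 1e-3
samples = [k * h for k in range(1001)]
d_half = gl_derivative(samples, alpha=0.5, h=h)
```

The GL sum is first-order accurate in h, so a step of 1e-3 already lands close to the analytic value; it is this limit definition whose linearity and superposition properties the paper verifies formally.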
Optimal composite scores for longitudinal clinical trials under the linear mixed effects model.
Ard, M Colin; Raghavan, Nandini; Edland, Steven D
2015-01-01
Clinical trials of chronic, progressive conditions use the rate of change on continuous measures as the primary outcome measure, with slowing of progression on the measure as evidence of clinical efficacy. For clinical trials with a single prespecified primary endpoint, it is important to choose an endpoint with the best signal-to-noise properties to optimize statistical power to detect a treatment effect. Composite endpoints composed of a linear weighted average of candidate outcome measures have also been proposed. Composites constructed as simple sums or averages of component tests, as well as composites constructed using weights derived from more sophisticated approaches, can be suboptimal, in some cases performing worse than individual outcome measures. We extend recent research on the construction of efficient linearly weighted composites by establishing the often overlooked connection between trial design and composite performance under linear mixed effects model assumptions and derive a formula for calculating composites that are optimal for longitudinal clinical trials of known, arbitrary design. Using data from a completed trial, we provide example calculations showing that the optimally weighted linear combination of scales can improve the efficiency of trials by almost 20% compared with the most efficient of the individual component scales. Additional simulations and analytical results demonstrate the potential losses in efficiency that can result from alternative published approaches to composite construction and explore the impact of weight estimation on composite performance. Copyright © 2015 John Wiley & Sons, Ltd.
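In the simplest setting, with known mean rates of change mu and their covariance Sigma, the signal-to-noise-optimal linear composite has weights proportional to Sigma^{-1} mu. The sketch below uses made-up numbers; the paper's formula additionally accounts for trial design and mixed-model variance components:

```python
import numpy as np

def optimal_composite_weights(mean_change, cov_change):
    """Weights maximizing (w'mu)^2 / (w'Sigma w), i.e. w proportional to
    Sigma^{-1} mu, normalized here to sum to one."""
    w = np.linalg.solve(np.asarray(cov_change, dtype=float),
                        np.asarray(mean_change, dtype=float))
    return w / w.sum()

# Hypothetical two-scale trial: scale 1 declines twice as fast but is
# four times as variable, and the scales are mildly correlated.
mu = [1.0, 0.5]
sigma = [[4.0, 0.5],
         [0.5, 1.0]]
w = optimal_composite_weights(mu, sigma)
```

Note that the noisier, faster-declining scale gets the smaller weight, which is exactly why a simple sum or average of component tests can underperform a single well-chosen scale.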
Shortlist B: A Bayesian Model of Continuous Speech Recognition
ERIC Educational Resources Information Center
Norris, Dennis; McQueen, James M.
2008-01-01
A Bayesian model of continuous speech recognition is presented. It is based on Shortlist (D. Norris, 1994; D. Norris, J. M. McQueen, A. Cutler, & S. Butterfield, 1997) and shares many of its key assumptions: parallel competitive evaluation of multiple lexical hypotheses, phonologically abstract prelexical and lexical representations, a feedforward…
The Continuous Improvement Model: A K-12 Literacy Focus
ERIC Educational Resources Information Center
Smith, Vicky B.
2013-01-01
The purpose of the study was to determine if the eight steps of the Continuous Improvement Model (CIM) provided a framework to raise achievement and to focus educators in identifying high-yield literacy strategies. This study sought to determine if an examination of the assessment data in reading revealed differences among schools that fully,…
A Planning System for Continuing Education Divisions: A Model.
ERIC Educational Resources Information Center
Bazik, Martha S.
1985-01-01
Details steps in a continuing education division planning model; i.e., define the planning group, develop a planning attitude, analyze internal and external environments, develop a mechanism for forecasting trends, hold planning sessions for determining strategic focus and operational plans, establish a timetable, hold follow-up/evaluation…
Teachers' Continuing Professional Development: Framing a Model of Outcomes
ERIC Educational Resources Information Center
Harland, John; Kinder, Kay
2014-01-01
In order to contribute towards the construction of an empirically-grounded theory of effective continuing professional development (CPD), this paper seeks to develop a model of the effects of teachers' CPD or in-service education and training (INSET). It builds on an earlier typology of INSET outcomes and compares it to two previous classification…
The Corporate University Model for Continuous Learning, Training and Development.
ERIC Educational Resources Information Center
El-Tannir, Akram A.
2002-01-01
Corporate universities typically convey corporate culture and provide systematic curriculum aimed at achieving strategic objectives. Virtual access and company-specific content combine to provide opportunities for continuous and active learning, a model that is becoming pervasive. (Contains 17 references.) (SK)
Promoting Continuous Quality Improvement in Online Teaching: The META Model
ERIC Educational Resources Information Center
Dittmar, Eileen; McCracken, Holly
2012-01-01
Experienced e-learning faculty members share strategies for implementing a comprehensive postsecondary faculty development program essential to continuous improvement of instructional skills. The high-impact META Model (centered around Mentoring, Engagement, Technology, and Assessment) promotes information sharing and content creation, and fosters…
A Model for Continuing Education for Special Librarians
ERIC Educational Resources Information Center
Kirk, Artemis Gargal
1976-01-01
Based on the needs that exist and the scarcity of programs in this area, a model for continuing education for the Special Libraries Association through which its members can arrange an educational program to suit their particular needs is proposed. (Author)
Recognition of Threshold Dose Model: Avoiding Continued Excessive Regulation
Logan, Stanley E.
1999-06-06
The purpose of this work is to examine the relationships between radiation dose-response models and associated regulations. It is concluded that recognition of the validity of a threshold model can be done on the basis of presently known data and that changes in regulations should be started at this time to avoid further unnecessary losses due to continued excessive regulation. As results from new research come in, refinement of interim values proposed in revised regulations can be incorporated.
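The contrast between the threshold model and the linear no-threshold assumption can be sketched schematically; the threshold and slope values here are hypothetical, not regulatory figures:

```python
def linear_no_threshold(dose, slope):
    """Linear no-threshold (LNT) model: excess risk proportional to dose."""
    return slope * dose

def threshold_model(dose, threshold, slope):
    """Threshold model: no excess risk below the threshold dose,
    linear excess risk above it."""
    return max(0.0, slope * (dose - threshold))

# Hypothetical numbers: below a 0.1 (arbitrary-unit) threshold the
# threshold model predicts zero excess risk, while LNT still predicts
# a positive risk, which is what drives the regulatory difference.
excess_low = threshold_model(0.05, threshold=0.1, slope=1.0)
lnt_low = linear_no_threshold(0.05, slope=1.0)
```

The argument in the abstract is precisely about this low-dose region, where the two models diverge and the choice of model determines whether regulation is warranted.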
NASA Astrophysics Data System (ADS)
Chen, Tao; Wu, Jun; Xu, Weiming; He, Zhiping; Qian, Liqun; Shu, Rong
2016-07-01
We have experimentally demonstrated a high-power, linearly polarized, dual-wavelength frequency-modulated continuous-wave (FMCW) fiber laser with a master-oscillator power-amplifier (MOPA) configuration, specially designed for simultaneous coherent distance and speed measurements. Two single longitudinal mode laser diodes working at 1550.12 and 1554.13 nm are employed as the seeds of the fiber MOPA. The wavelengths of the seeds are externally modulated by two acousto-optic frequency shifters (AOFSs) with a symmetrical sawtooth wave from 330-460 MHz in the frequency domain. The modulation periodicities for the two seeds are 26 and 26.3 μs, respectively, by which the distance ambiguity can be eliminated and therefore the detection range can be extended to a great extent. The seeds are then amplified independently to reduce their power differences during frequency modulation. After being coupled and boosted with three successive fiber amplifiers, an output power of 12.1 W is recorded from the FMCW laser with a power instability <0.14% over 1.5 h. The measured PER (polarization extinction ratio) and full divergence angle of the laser are >18 dB and <25 μrad, respectively, indicating its excellent performance for field measurements.
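For a sawtooth FMCW sweep, target range follows from the measured beat frequency as R = c * f_b * T / (2 * B). A sketch using the sweep parameters quoted in the abstract; the beat frequency itself is hypothetical:

```python
def fmcw_range(beat_hz, period_s, bandwidth_hz, c=3.0e8):
    """Range from a sawtooth FMCW beat frequency:
    R = c * f_b * T / (2 * B)."""
    return c * beat_hz * period_s / (2.0 * bandwidth_hz)

# Sweep loosely matching the abstract: a 330-460 MHz shift over 26 us,
# so B = 130 MHz and T = 26 us; a 1 MHz beat then maps to 30 m.
r = fmcw_range(beat_hz=1.0e6, period_s=26e-6, bandwidth_hz=130e6)
```

Any single period T only resolves range modulo the unambiguous interval; running the two seeds with slightly different periods (26 and 26.3 μs), as the abstract describes, lets the two modulo results be combined to extend the unambiguous detection range.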
Modeling Seismoacoustic Propagation from the Nonlinear to Linear Regimes
NASA Astrophysics Data System (ADS)
Chael, E. P.; Preston, L. A.
2015-12-01
Explosions at shallow depth-of-burial can cause nonlinear material response, such as fracturing and spalling, up to the ground surface above the shot point. These motions at the surface affect the generation of acoustic waves into the atmosphere, as well as the surface-reflected compressional and shear waves. Standard source scaling models for explosions do not account for such nonlinear interactions above the shot, while some recent studies introduce a non-isotropic addition to the moment tensor to represent them (e.g., Patton and Taylor, 2011). We are using Sandia's CTH shock physics code to model the material response in the vicinity of underground explosions, up to the overlying ground surface. Across a boundary where the motions have decayed to nearly linear behavior, we couple the signals from CTH into a linear finite-difference (FD) seismoacoustic code to efficiently propagate the wavefields to greater distances. If we assume only one-way transmission of energy through the boundary, then the particle velocities there suffice as inputs for the FD code, simplifying the specification of the boundary condition. The FD algorithm we use applies the wave equations for velocity in an elastic medium and pressure in an acoustic one, and matches the normal traction and displacement across the interface. Initially we are developing and testing a 2D, axisymmetric seismoacoustic routine; CTH can use this geometry in the source region as well. The Source Physics Experiment (SPE) in Nevada has collected seismic and acoustic data on numerous explosions at different scaled depths, providing an excellent testbed for investigating explosion phenomena (Snelson et al., 2013). We present simulations for shots SPE-4' and SPE-5, illustrating the importance of nonlinear behavior up to the ground surface. Our goal is to develop the capability for accurately predicting the relative signal strengths in the air and ground for a given combination of source yield and depth. Sandia National
Eaves, B.C.; Rothblum, U.G.
1990-08-01
A discounted-cost, continuous-time, infinite-horizon version of a flexible manufacturing and operator scheduling model is solved. The solution procedure is to convexify the discrete operator-assignment constraints to obtain a linear program, and then to regain the discreteness and obtain an approximate manufacturing schedule by deconvexification of the solution of the linear program over time. The strong features of the model are the accommodation of linear inequality relations among the manufacturing activities and the discrete manufacturing scheduling, whereas the weak features are intra-period relaxation of inventory availability constraints, and the absence of inventory costs, setup times, and setup charges.
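The convexify/deconvexify idea can be illustrated in miniature: start from a fractional operator assignment, as an LP relaxation of the discrete constraints would return, and round it back to a 0/1 schedule that preserves the number of assigned operators. The helper below is a sketch of the rounding step only; the name and the largest-fraction rule are illustrative assumptions, not the paper's procedure.

```python
# Hedged sketch: deconvexification of a fractional LP assignment into a
# discrete 0/1 schedule by keeping the n_assign largest fractional values.
def deconvexify(fractional, n_assign):
    """Round a fractional assignment vector to 0/1 with exactly n_assign ones."""
    order = sorted(range(len(fractional)), key=lambda i: -fractional[i])
    chosen = set(order[:n_assign])
    return [1 if i in chosen else 0 for i in range(len(fractional))]
```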
Likert pain score modeling: a Markov integer model and an autoregressive continuous model.
Plan, E L; Elshoff, J-P; Stockis, A; Sargentini-Maier, M L; Karlsson, M O
2012-05-01
Pain intensity is principally assessed using rating scales such as the 11-point Likert scale. In general, frequent pain assessments are serially correlated and underdispersed. The aim of this investigation was to develop population models adapted to fit the 11-point pain scale. Daily Likert scores were recorded over 18 weeks by 231 patients with neuropathic pain from a clinical trial placebo group. An integer model consisting of a truncated generalized Poisson (GP) distribution with Markovian transition probability inflation was implemented in NONMEM 7.1.0. It was compared to a logit-transformed autoregressive continuous model with correlated residual errors. In both models, the baseline score was estimated to be 6.2 and the placebo effect to be 19%. Both models retrieved consistent underlying features of the data and can therefore serve as platform models for drug effect detection. The integer model was complex but flexible, whereas the continuous model is easier to develop but requires longer runtimes.
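The continuous-model idea can be sketched as an AR(1) process on a logit-transformed 0-10 scale, rounded back to integer scores. The parameter values below (autocorrelation, residual scale) are illustrative assumptions, not estimates from the study; only the 6.2 baseline comes from the abstract.

```python
import numpy as np

# Hedged sketch: simulate serially correlated Likert scores via an AR(1)
# process on the logit scale, then back-transform and round to 0-10.
rng = np.random.default_rng(0)

def simulate_likert(n_days, baseline=6.2, rho=0.8, sigma=0.3):
    p = baseline / 10.0                  # map baseline score into (0, 1)
    mu = np.log(p / (1.0 - p))           # logit of the baseline
    z = mu
    scores = []
    for _ in range(n_days):
        z = mu + rho * (z - mu) + rng.normal(0.0, sigma)
        s = 10.0 / (1.0 + np.exp(-z))    # back-transform to the 0-10 scale
        scores.append(int(round(s)))
    return scores
```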
Functional linear models to test for differences in prairie wetland hydraulic gradients
Greenwood, Mark C.; Sojda, Richard S.; Preston, Todd M.; Swayne, David A.; Yang, Wanhong; Voinov, A.A.; Rizzoli, A.; Filatova, T.
2010-01-01
Functional data analysis provides a framework for analyzing multiple time series measured frequently in time, treating each series as a continuous function of time. Functional linear models are used to test for effects on hydraulic gradient functional responses collected from three types of land use in Northeastern Montana at fourteen locations. Penalized regression splines are used to estimate the underlying continuous functions based on the discretely recorded (over time) gradient measurements. Permutation methods are used to assess the statistical significance of effects. A method for accommodating missing observations in each time series is described. Hydraulic gradients may be an initial and fundamental ecosystem process that responds to climate change. We suggest other potential uses of these methods for detecting evidence of climate change.
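The penalized-spline smoothing step can be sketched with a truncated-line basis and a ridge penalty on the knot coefficients. The basis choice, knot placement, and penalty weight below are illustrative assumptions, not those used in the study.

```python
import numpy as np

# Hedged sketch: penalized-spline smoothing of a discretely sampled series,
# approximating a continuous underlying function as in functional data
# analysis. Only the knot (wiggliness) coefficients are penalized, so a
# straight line is reproduced exactly regardless of the penalty weight.
def penalized_spline(t, y, knots, lam=1.0):
    """Fit a penalized truncated-line spline and return fitted values."""
    X = np.column_stack([np.ones_like(t), t] +
                        [np.clip(t - k, 0.0, None) for k in knots])
    D = np.zeros((X.shape[1], X.shape[1]))
    D[2:, 2:] = np.eye(len(knots))       # penalize only the knot terms
    beta = np.linalg.solve(X.T @ X + lam * D, X.T @ y)
    return X @ beta
```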
Scaling in a Continuous Time Model for Biological Aging
NASA Astrophysics Data System (ADS)
de Almeida, R. M. C.; Thomas, G. L.
In this paper, we consider a generalization of the asexual version of the Penna model for biological aging, in which we take a continuous-time limit. The genotype associated with each individual is an interval of real numbers over which Dirac δ-functions are defined, representing genetically programmed diseases to be switched on at defined ages of the individual's life. We discuss two different continuous limits for the evolution equation and two different mutation protocols to be implemented during reproduction. Exact stationary solutions are obtained and scaling properties are discussed.
Process Setting through General Linear Model and Response Surface Method
NASA Astrophysics Data System (ADS)
Senjuntichai, Angsumalin
2010-10-01
The objective of this study is to improve the efficiency of the flow-wrap packaging process in the soap industry through the reduction of defectives. At the 95% confidence level, regression analysis identifies the sealing temperature and the temperatures of the upper and lower crimpers as significant factors for the flow-wrap process with respect to the number/percentage of defectives. Twenty-seven experiments were designed and performed according to three levels of each controllable factor. The general linear model (GLM) suggests setting the sealing temperature and the upper and lower crimper temperatures to 185, 85 and 85°C, respectively, while the response surface method (RSM) places the optimal process conditions at 186, 89 and 88°C. Because the two methods make different assumptions about the relation between the percentage of defectives and the three temperature parameters, their suggested conditions differ slightly. The estimated percentage of defectives under the GLM condition, 5.51%, and the predicted percentage under the RSM condition, 4.62%, are not significantly different; however, at the 95% confidence level the percentage of defectives under the RSM condition can reach values as low as approximately 2.16%, below those under the GLM condition, in accordance with its wider variation. Lastly, the percentages of defectives under the conditions suggested by GLM and RSM are reduced by 55.81% and 62.95%, respectively.
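The response-surface step can be sketched in one factor: fit a quadratic in sealing temperature to percent-defective data and solve for the stationary point. The data below are synthetic, constructed to have a minimum near 186°C for illustration; they are not the study's measurements.

```python
import numpy as np

# Hedged sketch: one-factor response surface. Fit pct = b0 + b1*T + b2*T^2
# by least squares and return the vertex (stationary point) of the parabola.
def rsm_optimum(temp, pct_defective):
    X = np.column_stack([np.ones_like(temp), temp, temp**2])
    b = np.linalg.lstsq(X, pct_defective, rcond=None)[0]
    return -b[1] / (2.0 * b[2])

# Synthetic percent-defective data with a minimum near 186 C (assumed).
temps = np.array([175.0, 180.0, 185.0, 190.0, 195.0])
defects = 0.02 * (temps - 186.0) ** 2 + 4.6
```

In the study itself the surface is fitted over three factors jointly; the one-factor version only shows the mechanics of locating the optimum.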
Amplitude relations in non-linear sigma model
NASA Astrophysics Data System (ADS)
Chen, Gang; Du, Yi-Jian
2014-01-01
In this paper, we investigate tree-level scattering amplitude relations in the U(N) non-linear sigma model, using the Cayley parametrization. As shown in the recent works [23,24], both on-shell amplitudes and off-shell currents with odd numbers of points vanish under the Cayley parametrization. We prove the off-shell U(1) identity and the fundamental BCJ relation for even-point currents. By taking the on-shell limits of the off-shell relations, we show that the color-ordered tree amplitudes with even numbers of points satisfy the U(1)-decoupling identity and the fundamental BCJ relation, which take the same form as in Yang-Mills theory. We further show that all the on-shell general KK and BCJ relations, as well as the minimal-basis expansion, are also satisfied by color-ordered tree amplitudes. As a consequence of the relations among color-ordered amplitudes, the total 2m-point tree amplitudes satisfy the DDM form of color decomposition as well as the KLT relation.
Generalized linear model for estimation of missing daily rainfall data
NASA Astrophysics Data System (ADS)
Rahman, Nurul Aishah; Deni, Sayang Mohd; Ramli, Norazan Mohamed
2017-04-01
The analysis of rainfall data without missing values is vital in various applications, including climatological, hydrological and meteorological studies. Missing data are a serious concern, since they can introduce bias and lead to misleading conclusions. In this study, five imputation methods, namely the simple arithmetic average, the normal ratio method, inverse distance weighting, correlation coefficient weighting and the geographical coordinate method, were used to estimate the missing data. These imputation methods, however, ignore the seasonality in the rainfall dataset, which could otherwise yield more reliable estimates. This study therefore aims to estimate the missing values in daily rainfall data using a generalized linear model with gamma and Fourier series as the link function and smoothing technique, respectively. Forty years of daily rainfall data for the period 1975-2014, from seven stations in the Kelantan region, were selected for the analysis. The findings indicate that the imputation methods provide more accurate estimates, in terms of the smallest mean absolute error, root mean squared error and coefficient of variation of the root mean squared error, when seasonality in the dataset is considered.
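A gamma GLM with a log link and Fourier seasonal terms can be sketched as below. With a log link, the gamma working weights in iteratively reweighted least squares (IRLS) are constant, so each step reduces to an ordinary least-squares solve. The harmonics count, link choice, and synthetic data are illustrative assumptions; the study's exact specification may differ.

```python
import numpy as np

# Hedged sketch: gamma GLM via Fisher scoring (IRLS) with a log link and a
# Fourier (seasonal) design matrix, as one plausible reading of the abstract.
def fourier_design(day_of_year, n_harmonics=2):
    t = 2.0 * np.pi * np.asarray(day_of_year, float) / 365.25
    cols = [np.ones_like(t)]
    for k in range(1, n_harmonics + 1):
        cols += [np.sin(k * t), np.cos(k * t)]
    return np.column_stack(cols)

def gamma_glm_log_link(X, y, n_iter=50):
    beta = np.zeros(X.shape[1])
    beta[0] = np.log(y.mean())           # start from the intercept-only fit
    for _ in range(n_iter):
        eta = X @ beta
        mu = np.exp(eta)
        z = eta + (y - mu) / mu          # working response; weights are 1
        beta = np.linalg.lstsq(X, z, rcond=None)[0]
    return beta
```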
Markov Boundary Discovery with Ridge Regularized Linear Models
Visweswaran, Shyam
2016-01-01
Ridge regularized linear models (RRLMs), such as ridge regression and the SVM, are a popular group of methods that are used in conjunction with coefficient hypothesis testing to discover explanatory variables with a significant multivariate association to a response. However, many investigators are reluctant to draw causal interpretations of the selected variables due to the incomplete knowledge of the capabilities of RRLMs in causal inference. Under reasonable assumptions, we show that a modified form of RRLMs can get “very close” to identifying a subset of the Markov boundary by providing a worst-case bound on the space of possible solutions. The results hold for any convex loss, even when the underlying functional relationship is nonlinear, and the solution is not unique. Our approach combines ideas in Markov boundary and sufficient dimension reduction theory. Experimental results show that the modified RRLMs are competitive against state-of-the-art algorithms in discovering part of the Markov boundary from gene expression data. PMID:27170915
Investigating follow-up outcome change using hierarchical linear modeling.
Ogrodniczuk, J S; Piper, W E; Joyce, A S
2001-03-01
Individual change in outcome during a one-year follow-up period for 98 patients who received either interpretive or supportive psychotherapy was examined using hierarchical linear modeling (HLM). This followed a previous study that had investigated average (treatment condition) change during follow-up using traditional methods of data analysis (repeated measures ANOVA, chi-square tests). We also investigated whether two patient personality characteristics-quality of object relations (QOR) and psychological mindedness (PM)-predicted individual change. HLM procedures yielded findings that were not detected using traditional methods of data analysis. New findings indicated that the rate of individual change in outcome during follow-up varied significantly among the patients. QOR was directly related to favorable individual change for supportive therapy patients, but not for patients who received interpretive therapy. The findings have implications for determining which patients will show long-term benefit following short-term supportive therapy and how to enhance it. The study also found significant associations between QOR and final outcome level.
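The hierarchical idea of individual rates of change can be sketched with a simple two-stage stand-in for full HLM: estimate each patient's slope of outcome on time by per-patient regression, then examine how the slopes vary across patients. This is an illustrative simplification, not the study's estimation procedure (HLM pools information across patients rather than fitting each separately).

```python
import numpy as np

# Hedged sketch: per-patient OLS slopes as a two-stage approximation to
# hierarchical linear modeling of individual change during follow-up.
def individual_slopes(times, outcomes_by_patient):
    """OLS slope of outcome on time for each patient's series."""
    t = np.asarray(times, dtype=float)
    tc = t - t.mean()                    # center time for a stable slope
    slopes = []
    for y in outcomes_by_patient:
        y = np.asarray(y, dtype=float)
        slopes.append((tc @ (y - y.mean())) / (tc @ tc))
    return np.array(slopes)
```

The spread of the returned slopes is the quantity whose significance HLM formally tests as a random-effect variance.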
A queueing theory based model for business continuity in hospitals.
Miniati, R; Cecconi, G; Dori, F; Frosini, F; Iadanza, E; Biffi Gentili, G; Niccolini, F; Gusinu, R
2013-01-01
Clinical activities can be seen as the result of a precise, defined succession of events, in which every phase is characterized by a waiting time that comprises the working duration and possible delay. Technology is part of this process. For proper business continuity management, planning the minimum number of devices according to the working load alone is not enough. A risk analysis of the whole process should be carried out in order to define which interventions and extra purchases have to be made. Markov models and reliability engineering approaches can be used to evaluate the possible interventions and to protect the whole system from technology failures. This paper reports a case study on the application of the proposed integrated model, combining a risk analysis approach with a queueing theory model, to define the number of devices essential to guarantee medical activity and to comply with business continuity management requirements in hospitals.
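The device-sizing step can be sketched with a standard M/M/c queue: treat devices as servers and find the smallest device count that keeps the probability of waiting (the Erlang C formula) under a target. The arrival/service rates and target are illustrative assumptions; the paper's model also layers risk analysis on top of the queueing component.

```python
from math import factorial

# Hedged sketch: Erlang C sizing of a device pool modeled as an M/M/c queue.
def erlang_c(c, a):
    """Probability of waiting in M/M/c with offered load a = lambda/mu."""
    if a >= c:
        return 1.0                       # unstable: everyone waits
    top = a**c / factorial(c) * (c / (c - a))
    bottom = sum(a**k / factorial(k) for k in range(c)) + top
    return top / bottom

def min_devices(a, target=0.05, c_max=100):
    """Smallest device count keeping the waiting probability under target."""
    for c in range(1, c_max + 1):
        if a < c and erlang_c(c, a) <= target:
            return c
    return None
```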
A continuous-time neural model for sequential action
Kachergis, George; Wyatte, Dean; O'Reilly, Randall C.; de Kleijn, Roy; Hommel, Bernhard
2014-01-01
Action selection, planning and execution are continuous processes that evolve over time, responding to perceptual feedback as well as evolving top-down constraints. Existing models of routine sequential action (e.g. coffee- or pancake-making) generally fall into one of two classes: hierarchical models that include hand-built task representations, or heterarchical models that must learn to represent hierarchy via temporal context, but thus far lack goal-orientedness. We present a biologically motivated model of the latter class that, because it is situated in the Leabra neural architecture, affords an opportunity to include both unsupervised and goal-directed learning mechanisms. Moreover, we embed this neurocomputational model in the theoretical framework of the theory of event coding (TEC), which posits that actions and perceptions share a common representation with bidirectional associations between the two. Thus, in this view, not only does perception select actions (along with task context), but actions are also used to generate perceptions (i.e. intended effects). We propose a neural model that implements TEC to carry out sequential action control in hierarchically structured tasks such as coffee-making. Unlike traditional feedforward discrete-time neural network models, which use static percepts to generate static outputs, our biological model accepts continuous-time inputs and likewise generates non-stationary outputs, making short-timescale dynamic predictions. PMID:25267830
NASA Astrophysics Data System (ADS)
Rust, H. W.; Vrac, M.; Lengaigne, M.; Sultan, B.
2012-04-01
Changes in precipitation patterns, with potentially less precipitation and an increasing risk of droughts, pose a threat to water resources and agricultural yields in Senegal. Precipitation in this region is dominated by the West African Monsoon, active from May to October, a seasonal pattern with inter-annual to decadal variability in the 20th century that is likely to be affected by climate change. We built a generalized linear model for a full spatial description of rainfall in Senegal. The model uses season, location, and a discrete set of weather types as predictors and yields a spatially continuous description of precipitation occurrences and intensities. Weather types were defined from NCEP/NCAR reanalysis data using zonal and meridional winds as well as relative humidity. This model is suitable for downscaling precipitation, particularly precipitation occurrences, which are relevant for drought risk mapping.
A log-linearized arterial viscoelastic model for evaluation of the carotid artery.
Hirano, Harutoyo; Horiuchi, Tetsuya; Kutluk, Abdugheni; Kurita, Yuichi; Ukawa, Teiji; Nakamura, Ryuji; Saeki, Noboru; Higashi, Yukihito; Kawamoto, Masashi; Yoshizumi, Masao; Tsuji, Toshio
2013-01-01
This paper proposes a method for qualitatively estimating the mechanical properties of arterial walls on a beat-to-beat basis through noninvasive measurement of continuous arterial pressure and arterial diameter using an ultrasonic device. First, in order to describe the nonlinear relationships linking arterial pressure waveforms and arterial diameter waveforms as well as the viscoelastic characteristics of arteries, we developed a second-order nonlinear model (called the log-linearized arterial viscoelastic model) to allow estimation of arterial wall viscoelasticity. Next, to verify the validity of the proposed method, the viscoelastic indices of the carotid artery were estimated. The results showed that the proposed model can be used to accurately approximate the mechanical properties of arterial walls. It was therefore deemed suitable for qualitative evaluation of arterial viscoelastic properties based on noninvasive measurement of arterial pressure and arterial diameter.
Surrogate model reduction for linear dynamic systems based on a frequency domain modal analysis
NASA Astrophysics Data System (ADS)
Kim, T.
2015-10-01
A novel model reduction methodology for linear dynamic systems with parameter variations is presented based on a frequency domain formulation and use of the proper orthogonal decomposition. For an efficient treatment of parameter variations, the system matrices are divided into a nominal and an incremental part. It is shown that the perturbed part is modally equivalent to a new system where the incremental matrices are isolated into the forcing term. To account for the continuous changes in the parameters, the single-composite-input is invoked with a finite number of predetermined incremental matrices. The frequency-domain Karhunen-Loeve procedure is used to calculate a rich set of basis modes accounting for the variations. For demonstration, the new procedure is applied to a finite element model of the Goland wing undergoing oscillations and shown to produce an extremely accurate reduced-order surrogate model for a wide range of parameter variations.
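The core proper-orthogonal-decomposition step can be sketched as an SVD of a snapshot matrix: collect response snapshots over parameter variations, then keep the leading left singular vectors that capture a chosen energy fraction. The snapshot matrix below is a tiny synthetic stand-in, not the Goland wing model; the energy threshold is an assumed choice.

```python
import numpy as np

# Hedged sketch: POD basis extraction from a matrix whose columns are
# (frequency-domain) response snapshots over parameter variations.
def pod_basis(snapshots, energy=0.999):
    """Return the leading left singular vectors capturing `energy` fraction."""
    U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    frac = np.cumsum(s**2) / np.sum(s**2)
    r = int(np.searchsorted(frac, energy)) + 1
    return U[:, :r]
```

Projecting the full system matrices onto this basis yields the reduced-order surrogate model.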
Accurate bolus arrival time estimation using piecewise linear model fitting
NASA Astrophysics Data System (ADS)
Abdou, Elhassan; de Mey, Johan; De Ridder, Mark; Vandemeulebroucke, Jef
2017-02-01
Dynamic contrast-enhanced computed tomography (DCE-CT) is an emerging radiological technique, which consists of acquiring a rapid sequence of CT images shortly after the injection of an intravenous contrast agent. The passage of the contrast agent through a tissue results in a varying CT intensity over time, recorded in time-attenuation curves (TACs), which can be related to the contrast supplied to that tissue via the supplying artery to estimate the local perfusion and permeability characteristics. The time delay between the arrival of the contrast bolus in the feeding artery and the tissue of interest, called the bolus arrival time (BAT), needs to be determined accurately to enable reliable perfusion analysis. Its automated identification is, however, highly sensitive to noise. We propose an accurate and efficient method for estimating the BAT from DCE-CT images. The method relies on a piecewise linear TAC model with four segments and suitable parameter constraints for limiting the range of possible values. The model is fitted to the acquired TACs in a multiresolution fashion using an iterative optimization approach. The performance of the method was evaluated on simulated and real perfusion data of lung and rectum tumours. In both cases, the method was found to be stable, leading to average accuracies on the order of the temporal resolution of the dynamic sequence. For reasonable levels of noise, the results were found to be comparable to those obtained using a previously proposed method, employing a full search algorithm, but requiring an order of magnitude more computation time.
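The BAT-estimation idea can be sketched on a simplified two-segment version of the four-segment model in the abstract: a flat baseline followed by a linear uptake, with the bolus arrival time taken as the breakpoint minimizing the sum of squared errors. This grid-search sketch is an illustrative reduction, not the paper's constrained multiresolution fit.

```python
import numpy as np

# Hedged sketch: estimate the bolus arrival time (BAT) as the breakpoint of
# a two-segment piecewise linear fit (flat baseline, then linear rise).
def estimate_bat(t, tac):
    t, tac = np.asarray(t, float), np.asarray(tac, float)
    best = (np.inf, t[0])
    for i in range(1, len(t) - 1):
        base = tac[:i].mean()            # level of the flat segment
        dt = t[i:] - t[i]
        rise = tac[i:] - base
        denom = dt @ dt
        slope = (dt @ rise) / denom if denom > 0 else 0.0
        sse = ((tac[:i] - base) ** 2).sum() + ((rise - slope * dt) ** 2).sum()
        if sse < best[0]:
            best = (sse, t[i])
    return best[1]
```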