ERIC Educational Resources Information Center
Rocconi, Louis M.
2011-01-01
Hierarchical linear models (HLM) solve the problems associated with the unit-of-analysis problem, such as misestimated standard errors, heterogeneity of regression, and aggregation bias, by modeling all levels of interest simultaneously. Hierarchical linear modeling resolves the problem of misestimated standard errors by incorporating a unique random…
Henry, B I; Langlands, T A M; Wearne, S L
2006-09-01
We have revisited the problem of anomalously diffusing species, modeled at the mesoscopic level using continuous time random walks (CTRWs), to include linear reaction dynamics. If a constant proportion of walkers are added or removed instantaneously at the start of each step then the long time asymptotic limit yields a fractional reaction-diffusion equation with a fractional order temporal derivative operating on both the standard diffusion term and a linear reaction kinetics term. If the walkers are added or removed at a constant per capita rate during the waiting time between steps then the long time asymptotic limit has a standard linear reaction kinetics term but a fractional order temporal derivative operating on a nonstandard diffusion term. Results from the above two models are compared with a phenomenological model with standard linear reaction kinetics and a fractional order temporal derivative operating on a standard diffusion term. We have also developed further extensions of the CTRW model to include more general reaction dynamics.
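A schematic rendering of two of the limits described above may be useful; here u(x,t) is the walker density, ₀D_t^(1-α) denotes a Riemann-Liouville fractional derivative, K_α a generalized diffusion coefficient and k the linear rate constant. The exact operator forms (in particular the non-standard diffusion term of the second CTRW model) are given in the paper, so this is only an illustrative sketch.

```latex
% reactions applied instantaneously at the start of each step
% (fractional derivative acts on diffusion and reaction together):
\frac{\partial u}{\partial t}
  = {}_{0}D_{t}^{\,1-\alpha}\!\left[ K_{\alpha}\,\frac{\partial^{2} u}{\partial x^{2}} + k\,u \right] ,
% phenomenological comparison model (standard linear kinetics,
% fractional derivative on the standard diffusion term only):
\frac{\partial u}{\partial t}
  = {}_{0}D_{t}^{\,1-\alpha}\!\left[ K_{\alpha}\,\frac{\partial^{2} u}{\partial x^{2}} \right] + k\,u .
```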
Investigating Integer Restrictions in Linear Programming
ERIC Educational Resources Information Center
Edwards, Thomas G.; Chelst, Kenneth R.; Principato, Angela M.; Wilhelm, Thad L.
2015-01-01
Linear programming (LP) is an application of graphing linear systems that appears in many Algebra 2 textbooks. Although not explicitly mentioned in the Common Core State Standards for Mathematics, linear programming blends seamlessly into modeling with mathematics, the fourth Standard for Mathematical Practice (CCSSI 2010, p. 7). In solving a…
How does non-linear dynamics affect the baryon acoustic oscillation?
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sugiyama, Naonori S.; Spergel, David N., E-mail: nao.s.sugiyama@gmail.com, E-mail: dns@astro.princeton.edu
2014-02-01
We study the non-linear behavior of the baryon acoustic oscillation in the power spectrum and the correlation function by decomposing the dark matter perturbations into the short- and long-wavelength modes. The evolution of the dark matter fluctuations can be described as a global coordinate transformation caused by the long-wavelength displacement vector acting on short-wavelength matter perturbation undergoing non-linear growth. Using this feature, we investigate the well known cancellation of the high-k solutions in the standard perturbation theory. While the standard perturbation theory naturally satisfies the cancellation of the high-k solutions, some of the recently proposed improved perturbation theories do not guarantee the cancellation. We show that this cancellation clarifies the success of the standard perturbation theory at the 2-loop order in describing the amplitude of the non-linear power spectrum even at high-k regions. We propose an extension of the standard 2-loop level perturbation theory model of the non-linear power spectrum that more accurately models the non-linear evolution of the baryon acoustic oscillation than the standard perturbation theory. The model consists of simple and intuitive parts: the non-linear evolution of the smoothed power spectrum without the baryon acoustic oscillations and the non-linear evolution of the baryon acoustic oscillations due to the large-scale velocity of dark matter and due to the gravitational attraction between dark matter particles. Our extended model predicts the smoothing parameter of the baryon acoustic oscillation peak at z = 0.35 as ∼7.7 Mpc/h and describes the small non-linear shift in the peak position due to the galaxy random motions.
A comparison of linear and nonlinear statistical techniques in performance attribution.
Chan, N H; Genovese, C R
2001-01-01
Performance attribution is usually conducted under the linear framework of multifactor models. Although commonly used by practitioners in finance, linear multifactor models are known to be less than satisfactory in many situations. After a brief survey of nonlinear methods, nonlinear statistical techniques are applied to performance attribution of a portfolio constructed from a fixed universe of stocks using factors derived from some commonly used cross-sectional linear multifactor models. By rebalancing this portfolio monthly, the cumulative returns for procedures based on a standard linear multifactor model and three nonlinear techniques - model selection, additive models, and neural networks - are calculated and compared. It is found that the first two nonlinear techniques, especially in combination, outperform the standard linear model. The results in the neural-network case are inconclusive because of the great variety of possible models. Although these methods are more complicated and may require some tuning, toolboxes are developed and suggestions on calibration are proposed. This paper demonstrates the usefulness of modern nonlinear statistical techniques in performance attribution.
A Constrained Linear Estimator for Multiple Regression
ERIC Educational Resources Information Center
Davis-Stober, Clintin P.; Dana, Jason; Budescu, David V.
2010-01-01
"Improper linear models" (see Dawes, Am. Psychol. 34:571-582, "1979"), such as equal weighting, have garnered interest as alternatives to standard regression models. We analyze the general circumstances under which these models perform well by recasting a class of "improper" linear models as "proper" statistical models with a single predictor. We…
Anderson, Carl A; McRae, Allan F; Visscher, Peter M
2006-07-01
Standard quantitative trait loci (QTL) mapping techniques commonly assume that the trait is both fully observed and normally distributed. When considering survival or age-at-onset traits these assumptions are often incorrect. Methods have been developed to map QTL for survival traits; however, they are both computationally intensive and not available in standard genome analysis software packages. We propose a grouped linear regression method for the analysis of continuous survival data. Using simulation we compare this method to both the Cox and Weibull proportional hazards models and a standard linear regression method that ignores censoring. The grouped linear regression method is of equivalent power to both the Cox and Weibull proportional hazards methods and is significantly better than the standard linear regression method when censored observations are present. The method is also robust to the proportion of censored individuals and the underlying distribution of the trait. On the basis of linear regression methodology, the grouped linear regression model is computationally simple and fast and can be implemented readily in freely available statistical software.
Vanilla technicolor at linear colliders
NASA Astrophysics Data System (ADS)
Frandsen, Mads T.; Järvinen, Matti; Sannino, Francesco
2011-08-01
We analyze the reach of linear colliders for models of dynamical electroweak symmetry breaking. We show that linear colliders can efficiently test the compositeness scale, identified with the mass of the new spin-one resonances, up to the maximum center-of-mass energy of the colliding leptons. In particular we analyze the Drell-Yan processes involving spin-one intermediate heavy bosons decaying either leptonically or into two standard model gauge bosons. We also analyze light Higgs production in association with a standard model gauge boson, also stemming from an intermediate heavy spin-one vector.
Computing Linear Mathematical Models Of Aircraft
NASA Technical Reports Server (NTRS)
Duke, Eugene L.; Antoniewicz, Robert F.; Krambeer, Keith D.
1991-01-01
The Derivation and Definition of Linear Aircraft Model (LINEAR) computer program provides the user with a powerful, flexible, standard, documented, and verified software tool for linearization of mathematical models of aircraft aerodynamics. It is intended as a software tool to drive linear analysis of stability and design of control laws for aircraft. It is capable of both extracting such linearized engine effects as net thrust, torque, and gyroscopic effects, and including these effects in the linear model of the system. It is designed to provide easy selection of the state, control, and observation variables used in a particular model, and it also provides the flexibility of allowing alternate formulations of both the state and observation equations. Written in FORTRAN.
Composite Linear Models | Division of Cancer Prevention
By Stuart G. Baker The composite linear models software is a matrix approach to compute maximum likelihood estimates and asymptotic standard errors for models for incomplete multinomial data. It implements the method described in Baker SG. Composite linear models for incomplete multinomial data. Statistics in Medicine 1994;13:609-622. The software includes a library of thirty
Liu, Yan; Cai, Wensheng; Shao, Xueguang
2016-12-05
Calibration transfer is essential for practical applications of near infrared (NIR) spectroscopy because the spectra may be measured on different instruments and the difference between the instruments must be corrected. For most calibration transfer methods, standard samples are necessary to construct the transfer model using the spectra of the samples measured on two instruments, named the master and slave instruments, respectively. In this work, a method named linear model correction (LMC) is proposed for calibration transfer without standard samples. The method is based on the fact that, for samples with similar physical and chemical properties, the spectra measured on different instruments are linearly correlated. This fact makes the coefficients of the linear models constructed from the spectra measured on different instruments similar in profile. Therefore, by using a constrained optimization method, the coefficients of the master model can be transferred into those of the slave model with a few spectra measured on the slave instrument. Two NIR datasets of corn and plant leaf samples measured with different instruments are used to test the performance of the method. The results show that, for both datasets, the spectra can be correctly predicted using the transferred partial least squares (PLS) models. Because standard samples are not necessary, the method may be more useful in practical applications. Copyright © 2016 Elsevier B.V. All rights reserved.
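The abstract does not spell out the LMC objective, but the general idea of pulling slave-model coefficients toward the master-model profile while fitting a few slave-instrument spectra can be sketched as a penalized least-squares problem. The quadratic penalty, the variable names and the synthetic data below are assumptions for illustration, not the published algorithm.

```python
import numpy as np

def transfer_coefficients(b_master, X_slave, y_slave, lam=1.0):
    """Fit slave-model coefficients to a few slave-instrument spectra while
    keeping them close in profile to the master coefficients (illustrative)."""
    p = b_master.size
    # Solve (X'X + lam*I) b = X'y + lam*b_master
    A = X_slave.T @ X_slave + lam * np.eye(p)
    rhs = X_slave.T @ y_slave + lam * b_master
    return np.linalg.solve(A, rhs)

rng = np.random.default_rng(0)
b_master = rng.normal(size=50)                  # master PLS-like coefficient profile
X_slave = rng.normal(size=(10, 50))             # a few spectra measured on the slave
y_slave = X_slave @ (1.05 * b_master) + rng.normal(scale=0.01, size=10)
b_slave = transfer_coefficients(b_master, X_slave, y_slave, lam=5.0)
print(np.corrcoef(b_master, b_slave)[0, 1])     # profiles remain highly correlated
```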
On the linear relation between the mean and the standard deviation of a response time distribution.
Wagenmakers, Eric-Jan; Brown, Scott
2007-07-01
Although it is generally accepted that the spread of a response time (RT) distribution increases with the mean, the precise nature of this relation remains relatively unexplored. The authors show that in several descriptive RT distributions, the standard deviation increases linearly with the mean. Results from a wide range of tasks from different experimental paradigms support a linear relation between RT mean and RT standard deviation. Both R. Ratcliff's (1978) diffusion model and G. D. Logan's (1988) instance theory of automatization provide explanations for this linear relation. The authors identify and discuss 3 specific boundary conditions for the linear law to hold. The law constrains RT models and supports the use of the coefficient of variation to (a) compare variability while controlling for differences in baseline speed of processing and (b) assess whether changes in performance with practice are due to quantitative speedup or qualitative reorganization. Copyright 2007 APA.
Modelling daily water temperature from air temperature for the Missouri River.
Zhu, Senlin; Nyarko, Emmanuel Karlo; Hadzima-Nyarko, Marijana
2018-01-01
The bio-chemical and physical characteristics of a river are directly affected by water temperature, which thereby affects the overall health of aquatic ecosystems. Accurately estimating water temperature is a complex problem. Modelling of river water temperature is usually based on a suitable mathematical model and field measurements of various atmospheric factors. In this article, the air-water temperature relationship of the Missouri River is investigated by developing three different machine learning models (Artificial Neural Network (ANN), Gaussian Process Regression (GPR), and Bootstrap Aggregated Decision Trees (BA-DT)). Standard models (linear regression, non-linear regression, and stochastic models) are also developed and compared to the machine learning models. Among the three standard models, the stochastic model clearly outperforms the standard linear and nonlinear models. All three machine learning models have comparable results and outperform the stochastic model, with GPR having slightly better results for stations No. 2 and 3, while BA-DT has slightly better results for station No. 1. The machine learning models are very effective tools which can be used for the prediction of daily river temperature.
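A minimal scikit-learn sketch of the standard-versus-machine-learning comparison on synthetic air/water temperature pairs; the Missouri River data, the stochastic model and the BA-DT ensemble are not reproduced here, and all numbers are illustrative.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(1)
air = rng.uniform(-5, 35, size=300).reshape(-1, 1)                 # daily air temperature (deg C)
water = 4 + 0.7 * air[:, 0] + 3 * np.sin(air[:, 0] / 8) + rng.normal(0, 1, 300)

lin = LinearRegression().fit(air, water)
gpr = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True).fit(air, water)

for name, model in [("linear", lin), ("GPR", gpr)]:
    rmse = np.sqrt(mean_squared_error(water, model.predict(air)))
    print(f"{name:6s} RMSE = {rmse:.2f}")
```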
ERIC Educational Resources Information Center
Li, Deping; Oranje, Andreas
2007-01-01
Two versions of a general method for approximating standard error of regression effect estimates within an IRT-based latent regression model are compared. The general method is based on Binder's (1983) approach, accounting for complex samples and finite populations by Taylor series linearization. In contrast, the current National Assessment of…
Acceleration of Linear Finite-Difference Poisson-Boltzmann Methods on Graphics Processing Units.
Qi, Ruxi; Botello-Smith, Wesley M; Luo, Ray
2017-07-11
Electrostatic interactions play crucial roles in biophysical processes such as protein folding and molecular recognition. Poisson-Boltzmann equation (PBE)-based models have emerged as widely used tools for modeling these important processes. Though great efforts have been put into developing efficient PBE numerical models, challenges still remain due to the high dimensionality of typical biomolecular systems. In this study, we implemented and analyzed commonly used linear PBE solvers for the ever-improving graphics processing units (GPU) for biomolecular simulations, including both standard and preconditioned conjugate gradient (CG) solvers with several alternative preconditioners. Our implementation utilizes the standard Nvidia CUDA libraries cuSPARSE, cuBLAS, and CUSP. Extensive tests show that good numerical accuracy can be achieved given that single precision is often used for numerical applications on GPU platforms. The optimal GPU performance was observed with the Jacobi-preconditioned CG solver, with a significant speedup over the standard CG solver on CPU in our diversified test cases. Our analysis further shows that different matrix storage formats also considerably affect the efficiency of different linear PBE solvers on GPU, with the diagonal format best suited for our standard finite-difference linear systems. Further efficiency may be possible with matrix-free operations and integrated grid stencil setup specifically tailored for the banded matrices in PBE-specific linear systems.
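The effect of a Jacobi (diagonal) preconditioner on CG can be sketched with SciPy on a small finite-difference system standing in for a linear PBE matrix; this is a CPU-side illustration only, not the CUDA/cuSPARSE implementation discussed above.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import cg, LinearOperator

n = 100
lap1d = sp.diags([-1, 2, -1], [-1, 0, 1], shape=(n, n))
rng = np.random.default_rng(0)
# 2D Laplacian plus a strongly varying diagonal term (a crude stand-in for a PBE system)
A = (sp.kronsum(lap1d, lap1d) + sp.diags(rng.uniform(0.1, 50.0, n * n))).tocsr()
b = np.ones(A.shape[0])

inv_diag = 1.0 / A.diagonal()                          # Jacobi preconditioner ~ diag(A)^-1
M = LinearOperator(A.shape, matvec=lambda x: inv_diag * x)

iters = {"plain CG": 0, "Jacobi-CG": 0}
def counter(key):
    def cb(xk):
        iters[key] += 1
    return cb

cg(A, b, callback=counter("plain CG"))
cg(A, b, M=M, callback=counter("Jacobi-CG"))
print(iters)      # the preconditioned solve typically converges in fewer iterations
```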
Frequency-domain full-waveform inversion with non-linear descent directions
NASA Astrophysics Data System (ADS)
Geng, Yu; Pan, Wenyong; Innanen, Kristopher A.
2018-05-01
Full-waveform inversion (FWI) is a highly non-linear inverse problem, normally solved iteratively, with each iteration involving an update constructed through linear operations on the residuals. Incorporating a flexible degree of non-linearity within each update may have important consequences for convergence rates, determination of low model wavenumbers and discrimination of parameters. We examine one approach for doing so, wherein higher order scattering terms are included within the sensitivity kernel during the construction of the descent direction, adjusting it away from that of the standard Gauss-Newton approach. These scattering terms are naturally admitted when we construct the sensitivity kernel by varying not the current but the to-be-updated model at each iteration. Linear and/or non-linear inverse scattering methodologies allow these additional sensitivity contributions to be computed from the current data residuals within any given update. We show that in the presence of pre-critical reflection data, the error in a second-order non-linear update to a background of s0 is, in our scheme, proportional to at most (Δs/s0)³ in the actual parameter jump Δs causing the reflection. In contrast, the error in a standard Gauss-Newton FWI update is proportional to (Δs/s0)². For numerical implementation of more complex cases, we introduce a non-linear frequency-domain scheme, with an inner and an outer loop. A perturbation is determined from the data residuals within the inner loop, and a descent direction based on the resulting non-linear sensitivity kernel is computed in the outer loop. We examine the response of this non-linear FWI using acoustic single-parameter synthetics derived from the Marmousi model. The inverted results vary depending on data frequency ranges and initial models, but we conclude that the non-linear FWI has the capability to generate high-resolution model estimates in both shallow and deep regions, and to converge rapidly, relative to a benchmark FWI approach involving the standard gradient.
Simplified large African carnivore density estimators from track indices.
Winterbach, Christiaan W; Ferreira, Sam M; Funston, Paul J; Somers, Michael J
2016-01-01
The range, population size and trend of large carnivores are important parameters to assess their status globally and to plan conservation strategies. One can use linear models to assess population size and trends of large carnivores from track-based surveys on suitable substrates. The conventional approach of a linear model with intercept may not intercept at zero, but may fit the data better than a linear model through the origin. We assess whether a linear regression through the origin is more appropriate than a linear regression with intercept to model large African carnivore densities and track indices. We did simple linear regression with intercept analysis and simple linear regression through the origin and used the confidence interval for β in the linear model y = αx + β, the Standard Error of Estimate, Mean Squares Residual and Akaike Information Criteria to evaluate the models. The Lion on Clay and Low Density on Sand models with intercept were not significant (P > 0.05). The other four models with intercept and the six models through the origin were all significant (P < 0.05). The models using linear regression with intercept all included zero in the confidence interval for β and the null hypothesis that β = 0 could not be rejected. All models showed that the linear model through the origin provided a better fit than the linear model with intercept, as indicated by the Standard Error of Estimate and Mean Square Residuals. Akaike Information Criteria showed that linear models through the origin were better and that none of the linear models with intercept had substantial support. Our results showed that linear regression through the origin is justified over the more typical linear regression with intercept for all models we tested. A general model can be used to estimate large carnivore densities from track densities across species and study areas. The formula observed track density = 3.26 × carnivore density can be used to estimate densities of large African carnivores using track counts on sandy substrates in areas where carnivore densities are 0.27 carnivores/100 km² or higher. To improve the current models, we need independent data to validate the models and data to test for non-linear relationships between track indices and true density at low densities.
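The with-intercept versus through-the-origin comparison can be sketched on synthetic track-index data (the published survey data are not reproduced), using the standard error of estimate and AIC as in the abstract.

```python
import numpy as np

rng = np.random.default_rng(2)
density = rng.uniform(0.3, 3.0, 25)                  # carnivores per 100 km^2
tracks = 3.26 * density + rng.normal(0, 0.8, 25)     # observed track density

def fit(X, y):
    beta, res, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss = float(res[0]) if res.size else float(np.sum((y - X @ beta) ** 2))
    n, k = X.shape
    see = np.sqrt(rss / (n - k))                      # standard error of estimate
    aic = n * np.log(rss / n) + 2 * k                 # AIC up to an additive constant
    return beta, see, aic

X_intercept = np.column_stack([density, np.ones_like(density)])   # y = a*x + b
X_origin = density.reshape(-1, 1)                                  # y = a*x (through origin)
for label, X in [("with intercept", X_intercept), ("through origin", X_origin)]:
    beta, see, aic = fit(X, tracks)
    print(f"{label:15s} slope = {beta[0]:.2f}  SEE = {see:.2f}  AIC = {aic:.1f}")
```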
The determination of third order linear models from a seventh order nonlinear jet engine model
NASA Technical Reports Server (NTRS)
Lalonde, Rick J.; Hartley, Tom T.; De Abreu-Garcia, J. Alex
1989-01-01
Results are presented that demonstrate how good reduced-order models can be obtained directly by recursive parameter identification using input/output (I/O) data of high-order nonlinear systems. Three different methods of obtaining a third-order linear model from a seventh-order nonlinear turbojet engine model are compared. The first method is to obtain a linear model from the original model and then reduce the linear model by standard reduction techniques such as residualization and balancing. The second method is to identify directly a third-order linear model by recursive least-squares parameter estimation using I/O data of the original model. The third method is to obtain a reduced-order model from the original model and then linearize the reduced model. Frequency responses are used as the performance measure to evaluate the reduced models. The reduced-order models along with their Bode plots are presented for comparison purposes.
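The second method above (identifying a low-order linear model directly from I/O data) can be sketched with a textbook recursive least-squares update for a third-order ARX model; the seventh-order turbojet model is not reproduced, so a simple surrogate difference equation generates the data.

```python
import numpy as np

def rls_arx(u, y, na=3, nb=3, lam=0.99):
    """Recursive least squares for y[k] = sum(a_i * y[k-i]) + sum(b_j * u[k-j])."""
    n = na + nb
    theta, P = np.zeros(n), np.eye(n) * 1e3
    for k in range(max(na, nb), len(y)):
        phi = np.concatenate([y[k - na:k][::-1], u[k - nb:k][::-1]])
        err = y[k] - phi @ theta
        gain = P @ phi / (lam + phi @ P @ phi)
        theta = theta + gain * err
        P = (P - np.outer(gain, phi @ P)) / lam
    return theta

rng = np.random.default_rng(3)
u = rng.normal(size=2000)                     # input sequence
y = np.zeros_like(u)
for k in range(7, len(u)):                    # surrogate "high-order" plant
    y[k] = 0.6*y[k-1] + 0.2*y[k-2] - 0.05*y[k-7] + 0.5*u[k-1] + 0.1*u[k-3]

print(np.round(rls_arx(u, y), 3))             # identified third-order ARX coefficients
```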
An improved null model for assessing the net effects of multiple stressors on communities.
Thompson, Patrick L; MacLennan, Megan M; Vinebrooke, Rolf D
2018-01-01
Ecological stressors (i.e., environmental factors outside their normal range of variation) can mediate each other through their interactions, leading to unexpected combined effects on communities. Determining whether the net effect of stressors is ecologically surprising requires comparing their cumulative impact to a null model that represents the linear combination of their individual effects (i.e., an additive expectation). However, we show that standard additive and multiplicative null models that base their predictions on the effects of single stressors on community properties (e.g., species richness or biomass) do not provide this linear expectation, leading to incorrect interpretations of antagonistic and synergistic responses by communities. We present an alternative, the compositional null model, which instead bases its predictions on the effects of stressors on individual species, and then aggregates them to the community level. Simulations demonstrate the improved ability of the compositional null model to accurately provide a linear expectation of the net effect of stressors. We simulate the response of communities to paired stressors that affect species in a purely additive fashion and compare the relative abilities of the compositional null model and two standard community property null models (additive and multiplicative) to predict these linear changes in species richness and community biomass across different combinations (both positive, negative, or opposite) and intensities of stressors. The compositional model predicts the linear effects of multiple stressors under almost all scenarios, allowing for proper classification of net effects, whereas the standard null models do not. Our findings suggest that current estimates of the prevalence of ecological surprises on communities based on community property null models are unreliable, and should be improved by integrating the responses of individual species to the community level as does our compositional null model. © 2017 John Wiley & Sons Ltd.
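A toy numerical illustration of the distinction (synthetic abundances, purely additive species-level stressor effects): richness is a nonlinear community aggregate, so the community-property additive null can flag a spurious interaction while the compositional null, which adds effects per species before aggregating, recovers the correct additive expectation. All values are invented for illustration.

```python
import numpy as np

control = np.array([10.0, 10.0, 10.0])       # abundances of three species
eff_a = np.array([-10.0, 0.0, 0.0])          # stressor A eliminates species 1
eff_b = np.array([-10.0, 0.0, 0.0])          # stressor B also affects only species 1

def richness(abund):
    return int(np.sum(abund > 0))

# Species respond additively (abundances floored at zero), then are aggregated.
observed = richness(np.maximum(control + eff_a + eff_b, 0.0))

# Community-property additive null: add the single-stressor richness changes.
r_c = richness(control)
r_a = richness(np.maximum(control + eff_a, 0.0))
r_b = richness(np.maximum(control + eff_b, 0.0))
property_null = r_c + (r_a - r_c) + (r_b - r_c)

# Compositional null: add effects species by species, then compute richness.
compositional_null = richness(np.maximum(control + eff_a + eff_b, 0.0))

print(observed, property_null, compositional_null)   # 2, 1, 2 -> only the compositional
# null matches the truly additive outcome; the property null suggests 'antagonism'.
```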
Null tests of the standard model using the linear model formalism
NASA Astrophysics Data System (ADS)
Marra, Valerio; Sapone, Domenico
2018-04-01
We test both the Friedmann-Lemaître-Robertson-Walker geometry and ΛCDM cosmology in a model-independent way by reconstructing the Hubble function H(z), the comoving distance D(z), and the growth of structure fσ8(z) using the most recent data available. We use the linear model formalism in order to optimally reconstruct the above cosmological functions, together with their derivatives and integrals. We then evaluate four of the null tests available in the literature that probe both background and perturbation assumptions. For all the four tests, we find agreement, within the errors, with the standard cosmological model.
Linear and non-linear perturbations in dark energy models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Escamilla-Rivera, Celia; Casarini, Luciano; Fabris, Júlio C.
2016-11-01
In this work we discuss observational aspects of three time-dependent parameterisations of the dark energy equation of state w(z). In order to determine the dynamics associated with these models, we calculate their background evolution and perturbations in a scalar field representation. After performing a complete treatment of linear perturbations, we also show that the non-linear contribution of the selected w(z) parameterisations to the matter power spectra is almost the same for all scales, with no significant difference from the predictions of the standard ΛCDM model.
Profile-Likelihood Approach for Estimating Generalized Linear Mixed Models with Factor Structures
ERIC Educational Resources Information Center
Jeon, Minjeong; Rabe-Hesketh, Sophia
2012-01-01
In this article, the authors suggest a profile-likelihood approach for estimating complex models by maximum likelihood (ML) using standard software and minimal programming. The method works whenever setting some of the parameters of the model to known constants turns the model into a standard model. An important class of models that can be…
Semilinear programming: applications and implementation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mohan, S.
Semilinear programming is a method of solving optimization problems with linear constraints where the non-negativity restrictions on the variables are dropped and the objective function coefficients can take on different values depending on whether the variable is positive or negative. The simplex method for linear programming is modified in this thesis to solve general semilinear and piecewise linear programs efficiently without having to transform them into equivalent standard linear programs. Several models in widely different areas of optimization such as production smoothing, facility location, goal programming and L₁ estimation are presented first to demonstrate the compact formulation that arises when such problems are formulated as semilinear programs. A code SLP is constructed using the semilinear programming techniques. Problems in aggregate planning and L₁ estimation are solved using SLP and as equivalent linear programs using a linear programming simplex code. Comparisons of CPU times and numbers of iterations indicate SLP to be far superior. The semilinear programming techniques are extended to piecewise linear programming in the implementation of the code PLP. Piecewise linear models in aggregate planning are solved using PLP and as equivalent standard linear programs using a simple upper bounded linear programming code SUBLP.
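For contrast with the specialized simplex described above, the equivalent standard-LP formulation that semilinear programming avoids can be written by splitting sign-unrestricted quantities into non-negative positive and negative parts with separate objective costs. The SciPy sketch below does this for L₁ (least absolute deviations) estimation, one of the applications listed; the data are synthetic.

```python
import numpy as np
from scipy.optimize import linprog

# L1 regression as a standard LP: residual r_i = r_i^+ - r_i^-, minimize sum(r^+ + r^-).
rng = np.random.default_rng(4)
x = rng.uniform(0, 10, 30)
y = 2.0 + 0.5 * x + rng.laplace(scale=0.5, size=30)
X = np.column_stack([np.ones_like(x), x])
n, p = X.shape

c = np.concatenate([np.zeros(p), np.ones(n), np.ones(n)])   # cost only on r^+ and r^-
A_eq = np.hstack([X, np.eye(n), -np.eye(n)])                 # X beta + r^+ - r^- = y
bounds = [(None, None)] * p + [(0, None)] * (2 * n)          # beta free, r^+/- >= 0
res = linprog(c, A_eq=A_eq, b_eq=y, bounds=bounds, method="highs")
print("L1 fit (intercept, slope):", np.round(res.x[:p], 3))
```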
Parameter recovery, bias and standard errors in the linear ballistic accumulator model.
Visser, Ingmar; Poessé, Rens
2017-05-01
The linear ballistic accumulator (LBA) model (Brown & Heathcote, Cogn. Psychol., 57, 153) is increasingly popular in modelling response times from experimental data. An R package, glba, has been developed to fit the LBA model using maximum likelihood estimation, which is validated by means of a parameter recovery study. At sufficient sample sizes parameter recovery is good, whereas at smaller sample sizes there can be large bias in parameters. In a second simulation study, two methods for computing parameter standard errors are compared. The Hessian-based method is found to be adequate and is (much) faster than the alternative bootstrap method. The use of parameter standard errors in model selection and inference is illustrated in an example using data from an implicit learning experiment (Visser et al., Mem. Cogn., 35, 1502). It is shown that typical implicit learning effects are captured by different parameters of the LBA model. © 2017 The British Psychological Society.
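The Hessian-based standard errors referred to above follow the usual maximum-likelihood recipe: invert the Hessian of the negative log-likelihood at the optimum and take square roots of the diagonal. The generic sketch below uses a simple normal likelihood as a stand-in for the LBA likelihood and is not the glba implementation.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(5)
data = rng.normal(loc=0.45, scale=0.12, size=200)      # stand-in for response times (s)

def negloglik(theta):
    mu, log_sigma = theta
    sigma = np.exp(log_sigma)
    return 0.5 * np.sum(((data - mu) / sigma) ** 2) + data.size * np.log(sigma)

fit = minimize(negloglik, x0=np.array([0.3, np.log(0.2)]))

def numerical_hessian(f, x, h=1e-4):
    n = x.size
    H = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            ei, ej = np.eye(n)[i] * h, np.eye(n)[j] * h
            H[i, j] = (f(x + ei + ej) - f(x + ei - ej)
                       - f(x - ei + ej) + f(x - ei - ej)) / (4 * h * h)
    return H

cov = np.linalg.inv(numerical_hessian(negloglik, fit.x))
se = np.sqrt(np.diag(cov))                              # Hessian-based standard errors
print("estimates:", np.round(fit.x, 3), "SEs:", np.round(se, 3))
```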
A GPS Phase-Locked Loop Performance Metric Based on the Phase Discriminator Output
Stevanovic, Stefan; Pervan, Boris
2018-01-01
We propose a novel GPS phase-lock loop (PLL) performance metric based on the standard deviation of tracking error (defined as the discriminator's estimate of the true phase error), and explain its advantages over the popular phase jitter metric using theory, numerical simulation, and experimental results. We derive an augmented GPS PLL linear model, which includes the effect of coherent averaging, to be used in conjunction with this proposed metric. The augmented linear model allows more accurate calculation of tracking error standard deviation in the presence of additive white Gaussian noise (AWGN) as compared to traditional linear models. The standard deviation of tracking error, with a threshold corresponding to half of the arctangent discriminator pull-in region, is shown to be a more reliable/robust measure of PLL performance under interference conditions than the phase jitter metric. In addition, the augmented linear model is shown to be valid up until this threshold, which facilitates efficient performance prediction, so that time-consuming direct simulations and costly experimental testing can be reserved for PLL designs that are much more likely to be successful. The effect of varying receiver reference oscillator quality on the tracking error metric is also considered. PMID:29351250
Poleti, Marcelo Lupion; Fernandes, Thais Maria Freire; Pagin, Otávio; Moretti, Marcela Rodrigues; Rubira-Bullen, Izabel Regina Fischer
2016-01-01
The aim of this in vitro study was to evaluate the reliability and accuracy of linear measurements on three-dimensional (3D) surface models obtained by standard pre-set thresholds in two segmentation software programs. Ten mandibles with 17 silica markers were scanned at a 0.3-mm voxel size in the i-CAT Classic (Imaging Sciences International, Hatfield, PA, USA). Twenty linear measurements were carried out twice by two observers on the 3D surface models: in Dolphin Imaging 11.5 (Dolphin Imaging & Management Solutions, Chatsworth, CA, USA), using two filters (Translucent and Solid-1), and in InVesalius 3.0.0 (Centre for Information Technology Renato Archer, Campinas, SP, Brazil). The physical measurements were made twice by another observer using a digital caliper on the dry mandibles. Excellent intra- and inter-observer reliability for the markers, physical measurements, and 3D surface models was found (intra-class correlation coefficient (ICC) and Pearson's r ≥ 0.91). The linear measurements on 3D surface models in the Dolphin and InVesalius software programs were accurate (Dolphin Solid-1 > InVesalius > Dolphin Translucent). The highest absolute and percentage errors were obtained for the variables R1-R1 (1.37 mm) and MF-AC (2.53%) in the Dolphin Translucent and InVesalius software, respectively. Linear measurements on 3D surface models obtained by standard pre-set thresholds in the Dolphin and InVesalius software programs are reliable and accurate compared with physical measurements. Studies that evaluate the reliability and accuracy of the 3D models are necessary to ensure error predictability and to establish diagnosis, treatment plan, and prognosis in a more realistic way.
Moerbeek, Mirjam; van Schie, Sander
2016-07-11
The number of clusters in a cluster randomized trial is often low. It is therefore likely that random assignment of clusters to treatment conditions results in covariate imbalance. There are no studies that quantify the consequences of covariate imbalance in cluster randomized trials on parameter and standard error bias and on power to detect treatment effects. The consequences of covariate imbalance in unadjusted and adjusted linear mixed models are investigated by means of a simulation study. The factors in this study are the degree of imbalance, the covariate effect size, the cluster size and the intraclass correlation coefficient. The covariate is binary and measured at the cluster level; the outcome is continuous and measured at the individual level. The results show that covariate imbalance results in negligible parameter bias and small standard error bias in adjusted linear mixed models. Ignoring the possibility of covariate imbalance while calculating the sample size at the cluster level may result in a loss in power of at most 25% in the adjusted linear mixed model. The results are more severe for the unadjusted linear mixed model: parameter biases up to 100% and standard error biases up to 200% may be observed. Power levels based on the unadjusted linear mixed model are often too low. The consequences are most severe for large clusters and/or small intraclass correlation coefficients since then the required number of clusters to achieve a desired power level is smallest. The possibility of covariate imbalance should be taken into account while calculating the sample size of a cluster randomized trial. Otherwise more sophisticated methods to randomize clusters to treatments should be used, such as stratification or balance algorithms. All relevant covariates should be carefully identified, be actually measured and included in the statistical model to avoid severe levels of parameter and standard error bias and insufficient power levels.
Valid statistical approaches for analyzing sholl data: Mixed effects versus simple linear models.
Wilson, Machelle D; Sethi, Sunjay; Lein, Pamela J; Keil, Kimberly P
2017-03-01
The Sholl technique is widely used to quantify dendritic morphology. Data from such studies, which typically sample multiple neurons per animal, are often analyzed using simple linear models. However, simple linear models fail to account for intra-class correlation that occurs with clustered data, which can lead to faulty inferences. Mixed effects models account for intra-class correlation that occurs with clustered data; thus, these models more accurately estimate the standard deviation of the parameter estimate, which produces more accurate p-values. While mixed models are not new, their use in neuroscience has lagged behind their use in other disciplines. A review of the published literature illustrates common mistakes in analyses of Sholl data. Analysis of Sholl data collected from Golgi-stained pyramidal neurons in the hippocampus of male and female mice using both simple linear and mixed effects models demonstrates that the p-values and standard deviations obtained using the simple linear models are biased downwards and lead to erroneous rejection of the null hypothesis in some analyses. The mixed effects approach more accurately models the true variability in the data set, which leads to correct inference. Mixed effects models avoid faulty inference in Sholl analysis of data sampled from multiple neurons per animal by accounting for intra-class correlation. Given the widespread practice in neuroscience of obtaining multiple measurements per subject, there is a critical need to apply mixed effects models more widely. Copyright © 2017 Elsevier B.V. All rights reserved.
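A minimal statsmodels sketch of the contrast described above, using synthetic 'neurons nested within animals' data rather than the Sholl measurements; it shows how ignoring the intra-class correlation shrinks the treatment standard error.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(6)
animals, neurons_per = 8, 10
animal_id = np.repeat(np.arange(animals), neurons_per)        # multiple neurons per animal
animal_effect = rng.normal(0, 2.0, animals)[animal_id]        # induces intra-class correlation
treatment = np.repeat([0, 1], animals // 2)[animal_id]        # treatment assigned per animal
y = 20 + 1.0 * treatment + animal_effect + rng.normal(0, 1.0, animals * neurons_per)
df = pd.DataFrame({"y": y, "treatment": treatment, "animal": animal_id})

ols = smf.ols("y ~ treatment", data=df).fit()                               # simple linear model
mixed = smf.mixedlm("y ~ treatment", data=df, groups=df["animal"]).fit()    # mixed effects model
# The OLS standard error ignores clustering and is typically biased downward.
print("OLS   estimate, SE:", round(ols.params["treatment"], 2), round(ols.bse["treatment"], 2))
print("Mixed estimate, SE:", round(mixed.params["treatment"], 2), round(mixed.bse["treatment"], 2))
```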
Zeynoddin, Mohammad; Bonakdari, Hossein; Azari, Arash; Ebtehaj, Isa; Gharabaghi, Bahram; Riahi Madavar, Hossein
2018-09-15
A novel hybrid approach is presented that can more accurately predict monthly rainfall in a tropical climate by integrating a linear stochastic model with a powerful non-linear extreme learning machine method. This new hybrid method was then evaluated by considering four general scenarios. In the first scenario, the modeling process is initiated without preprocessing the input data, as a base case, while in the other three scenarios one-step and two-step procedures are utilized to make the model predictions more precise. These scenarios are based on combinations of stationarization techniques (i.e., differencing, seasonal and non-seasonal standardization, and spectral analysis) and normality transforms (i.e., Box-Cox, John and Draper, Yeo and Johnson, Johnson, Box-Cox-Mod, log, log standard, and Manly). In scenario 2, which is a one-step scenario, the stationarization methods are employed as preprocessing approaches. In scenarios 3 and 4, different combinations of normality transforms and stationarization methods are considered as preprocessing techniques. In total, 61 sub-scenarios are evaluated, resulting in 11,013 models (10,785 linear, 4 nonlinear, and 224 hybrid models). The uncertainty of the linear, nonlinear and hybrid models is examined by a Monte Carlo technique. The best preprocessing technique is the utilization of the Johnson normality transform and seasonal standardization, in that order (R² = 0.99; RMSE = 0.6; MAE = 0.38; RMSRE = 0.1; MARE = 0.06; UI = 0.03 and UII = 0.05). The results of the uncertainty analysis indicated the good performance of the proposed technique (d-factor = 0.27; 95PPU = 83.57). Moreover, the results of the proposed methodology in this study were compared with an evolutionary hybrid of the adaptive neuro-fuzzy inference system (ANFIS) with the firefly algorithm (ANFIS-FFA), demonstrating that the new hybrid method outperformed the ANFIS-FFA method. Copyright © 2018 Elsevier Ltd. All rights reserved.
An Analysis of Turkey's PISA 2015 Results Using Two-Level Hierarchical Linear Modelling
ERIC Educational Resources Information Center
Atas, Dogu; Karadag, Özge
2017-01-01
In the field of education, most of the data collected are multi-level structured. Cities, city-based schools, school-based classes and finally students in the classrooms constitute a hierarchical structure. Hierarchical linear models give more accurate results compared to standard models when the data set has a structure going as far as individuals,…
A Standard for RF Modulation Factor,
1979-09-01
Only fragments of this entry survive extraction: citations to Mathematics of Physics and Chemistry, pp. 474-477 (D. Van Nostrand Co., Inc., New York, N.Y., 1943) and to Graybill, F. A., An Introduction to Linear …; a circuit-model discussion noting that the primary limitation on the quadratic technique is the linearity and bandwidth of the analog multiplier, for which a high-speed (5 MHz) device is mentioned; and table-of-contents entries for a nonlinearity model (7.2.1) and model parameters (7.2.2).
Estimation of the linear mixed integrated Ornstein–Uhlenbeck model
Hughes, Rachael A.; Kenward, Michael G.; Sterne, Jonathan A. C.; Tilling, Kate
2017-01-01
The linear mixed model with an added integrated Ornstein–Uhlenbeck (IOU) process (linear mixed IOU model) allows for serial correlation and estimation of the degree of derivative tracking. It is rarely used, partly due to the lack of available software. We implemented the linear mixed IOU model in Stata and using simulations we assessed the feasibility of fitting the model by restricted maximum likelihood when applied to balanced and unbalanced data. We compared different (1) optimization algorithms, (2) parameterizations of the IOU process, (3) data structures and (4) random-effects structures. Fitting the model was practical and feasible when applied to large and moderately sized balanced datasets (20,000 and 500 observations), and large unbalanced datasets with (non-informative) dropout and intermittent missingness. Analysis of a real dataset showed that the linear mixed IOU model was a better fit to the data than the standard linear mixed model (i.e. independent within-subject errors with constant variance). PMID:28515536
An Evaluation of the Automated Cost Estimating Integrated Tools (ACEIT) System
1989-09-01
Only fragments of this entry survive extraction: a definition of a residual measure as the residual divided by its standard deviation (13:App A,17), and reference-list entries for Neter, Wasserman, and Kutner, Applied Linear Regression Models (Homewood, IL: Irwin, 1983) and Raduchel, William J., "A Professional's Perspective on User-Friendliness," Byte.
A Technique of Fuzzy C-Mean in Multiple Linear Regression Model toward Paddy Yield
NASA Astrophysics Data System (ADS)
Syazwan Wahab, Nur; Saifullah Rusiman, Mohd; Mohamad, Mahathir; Amira Azmi, Nur; Che Him, Norziha; Ghazali Kamardan, M.; Ali, Maselan
2018-04-01
In this paper, we propose a hybrid model that combines a multiple linear regression model with the fuzzy c-means method. This research examines the relationship between paddy yield and 20 topsoil variates analyzed prior to planting, at standard fertilizer rates. The data were from the multi-location trials for rice carried out by MARDI at major paddy granaries in Peninsular Malaysia during the period from 2009 to 2012. Missing observations were estimated using mean estimation techniques. The data were analyzed using a multiple linear regression model and a combination of the multiple linear regression model and the fuzzy c-means method. Analysis of normality and multicollinearity indicates that the data are normally scattered without multicollinearity among the independent variables. Fuzzy c-means analysis clusters the paddy yield into two clusters before the multiple linear regression model is applied. The comparison between the two methods indicates that the hybrid of the multiple linear regression model and the fuzzy c-means method outperforms the multiple linear regression model, with a lower mean square error.
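A compact sketch of the hybrid idea on synthetic data: a hand-rolled fuzzy c-means (used here to avoid depending on any particular package) splits the observations into two clusters, and an ordinary multiple linear regression is then fit within each cluster. The two-cluster choice mirrors the abstract, but the data, dimensions and coefficients are invented.

```python
import numpy as np

def fuzzy_cmeans(X, c=2, m=2.0, iters=100, seed=0):
    """Basic fuzzy c-means: returns cluster centres and the membership matrix U (n x c)."""
    rng = np.random.default_rng(seed)
    U = rng.dirichlet(np.ones(c), size=len(X))
    for _ in range(iters):
        W = U ** m
        centres = (W.T @ X) / W.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centres[None, :, :], axis=2) + 1e-12
        U = 1.0 / (d ** (2 / (m - 1)) * np.sum(d ** (-2 / (m - 1)), axis=1, keepdims=True))
    return centres, U

rng = np.random.default_rng(7)
X = np.vstack([rng.normal(0, 1, (60, 3)), rng.normal(4, 1, (60, 3))])    # soil variates
y = np.concatenate([X[:60] @ [1.0, 0.5, -0.2] + 3, X[60:] @ [0.2, 1.5, 0.4] + 8])
y = y + rng.normal(0, 0.3, 120)                                           # paddy yield

_, U = fuzzy_cmeans(X, c=2)
labels = U.argmax(axis=1)
for k in range(2):                         # separate multiple linear regression per cluster
    Xi = np.column_stack([np.ones(np.sum(labels == k)), X[labels == k]])
    beta, *_ = np.linalg.lstsq(Xi, y[labels == k], rcond=None)
    print(f"cluster {k}: coefficients {np.round(beta, 2)}")
```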
A model for rotorcraft flying qualities studies
NASA Technical Reports Server (NTRS)
Mittal, Manoj; Costello, Mark F.
1993-01-01
This paper outlines the development of a mathematical model that is expected to be useful for rotorcraft flying qualities research. A computer model is presented that can be applied to a range of different rotorcraft configurations. The algorithm computes vehicle trim and a linear state-space model of the aircraft. The trim algorithm uses nonlinear optimization theory to solve the nonlinear algebraic trim equations. The linear aircraft equations consist of an airframe model and a flight control system dynamic model. The airframe model includes coupled rotor and fuselage rigid body dynamics and aerodynamics. The aerodynamic model for the rotors utilizes blade element theory and a three-state dynamic inflow model. Aerodynamics of the fuselage and empennage are included. The linear state-space description for the flight control system is developed using standard block diagram data.
Linearization: Students Forget the Operating Point
ERIC Educational Resources Information Center
Roubal, J.; Husek, P.; Stecha, J.
2010-01-01
Linearization is a standard part of modeling and control design theory for a class of nonlinear dynamical systems taught in basic undergraduate courses. Although linearization is a straight-line methodology, it is not applied correctly by many students since they often forget to keep the operating point in mind. This paper explains the topic and…
NASA Technical Reports Server (NTRS)
Hubeny, I.; Lanz, T.
1995-01-01
A new numerical method for computing non-Local Thermodynamic Equilibrium (non-LTE) model stellar atmospheres is presented. The method, called the hybrid complete linearization/accelerated lambda iteration (CL/ALI) method, combines advantages of both its constituents. Its rate of convergence is virtually as high as for the standard CL method, while the computer time per iteration is almost as low as for the standard ALI method. The method is formulated as the standard complete linearization, the only difference being that the radiation intensity at selected frequency points is not explicitly linearized; instead, it is treated by means of the ALI approach. The scheme offers a wide spectrum of options, ranging from the full CL to the full ALI method. We demonstrate that the method works optimally if the majority of frequency points are treated in the ALI mode, while the radiation intensity at a few (typically two to 30) frequency points is explicitly linearized. We show how this method can be applied to calculate metal line-blanketed non-LTE model atmospheres, by using the idea of 'superlevels' and 'superlines' introduced originally by Anderson (1989). We calculate several illustrative models taking into account several tens of thousands of lines of Fe III to Fe IV and show that the hybrid CL/ALI method provides a robust method for calculating non-LTE line-blanketed model atmospheres for a wide range of stellar parameters. The results for individual stellar types will be presented in subsequent papers in this series.
Portfolio optimization by using linear programing models based on genetic algorithm
NASA Astrophysics Data System (ADS)
Sukono; Hidayat, Y.; Lesmana, E.; Putra, A. S.; Napitupulu, H.; Supian, S.
2018-01-01
In this paper, we discuss investment portfolio optimization using a linear programming model based on genetic algorithms. It is assumed that portfolio risk is measured by the absolute standard deviation, and that each investor has a risk tolerance for the investment portfolio. To solve the investment portfolio optimization problem, the problem is formulated as a linear programming model. The optimum solution of the linear program is then determined using a genetic algorithm. As a numerical illustration, we analyze some of the stocks traded on the capital market in Indonesia. The analysis shows that portfolio optimization performed by the genetic algorithm approach produces a more efficient portfolio than portfolio optimization performed by a linear programming algorithm approach. Therefore, genetic algorithms can be considered as an alternative for investment portfolio optimization, particularly with linear programming models.
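A small illustrative genetic algorithm over long-only portfolio weights, minimizing the mean absolute deviation of returns with a penalty for missing a target return; the Indonesian stock data, the exact risk-tolerance constraint and the encoding used in the paper are not reproduced, so all numbers are synthetic.

```python
import numpy as np

rng = np.random.default_rng(8)
returns = rng.normal(0.01, 0.05, size=(250, 5)) + np.array([0, .002, .004, .006, .008])
mu, target = returns.mean(axis=0), 0.012

def fitness(w):
    port = returns @ w
    mad = np.mean(np.abs(port - port.mean()))          # absolute-deviation risk
    shortfall = max(0.0, target - float(mu @ w))       # penalty for missing target return
    return mad + 100.0 * shortfall

def normalise(pop):
    pop = np.abs(pop)
    return pop / pop.sum(axis=1, keepdims=True)        # long-only weights summing to one

pop = normalise(rng.random((60, 5)))
for _ in range(200):                                    # generations
    scores = np.array([fitness(w) for w in pop])
    parents = pop[np.argsort(scores)[:30]]              # selection: keep the best half
    kids = 0.5 * parents[rng.integers(0, 30, 30)] + 0.5 * parents[rng.integers(0, 30, 30)]
    kids += rng.normal(0, 0.02, kids.shape)             # blend crossover plus mutation
    pop = normalise(np.vstack([parents, kids]))

best = pop[np.argmin([fitness(w) for w in pop])]
print("weights:", np.round(best, 3), " expected return:", round(float(mu @ best), 4))
```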
On transient rheology and glacial isostasy
NASA Technical Reports Server (NTRS)
Yuen, David A.; Sabadini, Roberto C. A.; Gasperini, Paolo; Boschi, Enzo
1986-01-01
The effect of transient creep on the inference of long-term mantle viscosity is investigated using theoretical predictions from self-gravitating, layered earth models with Maxwell, Burgers' body, and standard linear solid rheologies. The interaction between transient and steady-state rheologies is studied. The responses of the standard linear solid and Burgers' body models to transient creep in the entire mantle, and of the Burgers' body and Maxwell models to creep in the lower mantle are described. The models' responses are examined in terms of the surface displacement, free air gravity anomaly, wander of the rotation pole, and the secular variation of the degree 2 zonal coefficient of the earth's gravitational potential field. The data reveal that transient creep cannot operate throughout the entire mantle.
Rasmussen, Patrick P.; Gray, John R.; Glysson, G. Douglas; Ziegler, Andrew C.
2009-01-01
In-stream continuous turbidity and streamflow data, calibrated with measured suspended-sediment concentration data, can be used to compute a time series of suspended-sediment concentration and load at a stream site. Development of a simple linear (ordinary least squares) regression model for computing suspended-sediment concentrations from instantaneous turbidity data is the first step in the computation process. If the model standard percentage error (MSPE) of the simple linear regression model meets a minimum criterion, this model should be used to compute a time series of suspended-sediment concentrations. Otherwise, a multiple linear regression model using paired instantaneous turbidity and streamflow data is developed and compared to the simple regression model. If the inclusion of the streamflow variable proves to be statistically significant and the uncertainty associated with the multiple regression model results in an improvement over that for the simple linear model, the turbidity-streamflow multiple linear regression model should be used to compute a suspended-sediment concentration time series. The computed concentration time series is subsequently used with its paired streamflow time series to compute suspended-sediment loads by standard U.S. Geological Survey techniques. Once an acceptable regression model is developed, it can be used to compute suspended-sediment concentration beyond the period of record used in model development with proper ongoing collection and analysis of calibration samples. Regression models to compute suspended-sediment concentrations are generally site specific and should never be considered static, but they represent a set period in a continually dynamic system in which additional data will help verify any change in sediment load, type, and source.
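A sketch of the decision flow described above on synthetic data: fit the turbidity-only model, compute a model standard percentage error, and compare against a turbidity-plus-streamflow model. The MSPE formula below (residual standard error as a percentage of the mean) and the 20% threshold are simple placeholders, not the published USGS criterion.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(9)
n = 200
turb = rng.lognormal(3.0, 0.6, n)                        # turbidity (FNU)
flow = rng.lognormal(5.0, 0.5, n)                        # streamflow (cfs)
ssc = 1.8 * turb + 0.05 * flow + rng.normal(0, 15, n)    # suspended-sediment conc. (mg/L)

def mspe(fitted, y):
    """Placeholder 'model standard percentage error': residual std. error / mean(y)."""
    return 100 * np.sqrt(fitted.mse_resid) / y.mean()

simple = sm.OLS(ssc, sm.add_constant(turb)).fit()                             # turbidity only
multiple = sm.OLS(ssc, sm.add_constant(np.column_stack([turb, flow]))).fit()  # plus streamflow

print("simple   MSPE: %.1f%%" % mspe(simple, ssc))
print("multiple MSPE: %.1f%%" % mspe(multiple, ssc))
if mspe(simple, ssc) > 20:                                # placeholder minimum criterion
    print("streamflow term significant?", multiple.pvalues[2] < 0.05)
```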
NASA Technical Reports Server (NTRS)
Wiggins, R. A.
1972-01-01
The discrete general linear inverse problem reduces to a set of m equations in n unknowns. There is generally no unique solution, but we can find k linear combinations of parameters for which restraints are determined. The parameter combinations are given by the eigenvectors of the coefficient matrix. The number k is determined by the ratio of the standard deviations of the observations to the allowable standard deviations in the resulting solution. Various linear combinations of the eigenvectors can be used to determine parameter resolution and information distribution among the observations. Thus we can determine where information comes from among the observations and exactly how it constrains the set of possible models. The application of such analyses to surface-wave and free-oscillation observations indicates that (1) phase, group, and amplitude observations for any particular mode provide basically the same type of information about the model; (2) observations of overtones can enhance the resolution considerably; and (3) the degree of resolution has generally been overestimated for many model determinations made from surface waves.
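A compact numerical illustration of the eigenvector analysis described above, using the SVD of a small synthetic, rank-deficient coefficient matrix: the number of retained singular vectors k is set by a noise-to-tolerance ratio, and the parameter resolution matrix shows which linear combinations of parameters are actually constrained.

```python
import numpy as np

rng = np.random.default_rng(10)
G = rng.normal(size=(12, 6))          # coefficient matrix of m equations in n unknowns
G[:, 5] = G[:, 4]                     # two parameters appear only in combination
U, s, Vt = np.linalg.svd(G, full_matrices=False)

sigma_data, sigma_model = 0.05, 1.0   # observation noise vs. allowable model deviation
k = int(np.sum(s > sigma_data / sigma_model))   # combinations constrained above the noise
V_k = Vt[:k].T

R = V_k @ V_k.T                       # resolution matrix (identity means fully resolved)
print("retained combinations k =", k)
print("resolution-matrix diagonal:", np.round(np.diag(R), 2))
```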
ADAPTION OF NONSTANDARD PIPING COMPONENTS INTO PRESENT DAY SEISMIC CODES
DOE Office of Scientific and Technical Information (OSTI.GOV)
D. T. Clark; M. J. Russell; R. E. Spears
2009-07-01
With spiraling energy demand and flat energy supply, there is a need to extend the life of older nuclear reactors. This sometimes requires that existing systems be evaluated to present day seismic codes. Older reactors built in the 1960s and early 1970s often used fabricated piping components that were code compliant during their initial construction time period, but are outside the standard parameters of present-day piping codes. There are several approaches available to the analyst in evaluating these non-standard components to modern codes. The simplest approach is to use the flexibility factors and stress indices for similar standard components with the assumption that the non-standard component's flexibility factors and stress indices will be very similar. This approach can require significant engineering judgment. A more rational approach available in Section III of the ASME Boiler and Pressure Vessel Code, which is the subject of this paper, involves calculation of flexibility factors using finite element analysis of the non-standard component. Such analysis allows modeling of geometric and material nonlinearities. Flexibility factors based on these analyses are sensitive to the load magnitudes used in their calculation, load magnitudes that need to be consistent with those produced by the linear system analyses where the flexibility factors are applied. This can lead to iteration, since the magnitude of the loads produced by the linear system analysis depends on the magnitude of the flexibility factors. After the loading applied to the nonstandard component finite element model has been matched to loads produced by the associated linear system model, the component finite element model can then be used to evaluate the performance of the component under the loads with the nonlinear analysis provisions of the Code, should the load levels lead to calculated stresses in excess of Allowable stresses. This paper details the application of component-level finite element modeling to account for geometric and material nonlinear component behavior in a linear elastic piping system model. Note that this technique can be applied to the analysis of B31 piping systems.
Satisfying Friendship Maintenance Expectations: The Role of Friendship Standards and Biological Sex
ERIC Educational Resources Information Center
Hall, Jeffrey A.; Larson, Kiley A.; Watts, Amber
2011-01-01
The ideal standards model predicts linear relationship among friendship standards, expectation fulfillment, and relationship satisfaction. Using a diary method, 197 participants reported on expectation fulfillment in interactions with one best, one close, and one casual friend (N = 591) over five days (2,388 interactions). Using multilevel…
LINEAR - DERIVATION AND DEFINITION OF A LINEAR AIRCRAFT MODEL
NASA Technical Reports Server (NTRS)
Duke, E. L.
1994-01-01
The Derivation and Definition of a Linear Model program, LINEAR, provides the user with a powerful and flexible tool for the linearization of aircraft aerodynamic models. LINEAR was developed to provide a standard, documented, and verified tool to derive linear models for aircraft stability analysis and control law design. Linear system models define the aircraft system in the neighborhood of an analysis point and are determined by the linearization of the nonlinear equations defining vehicle dynamics and sensors. LINEAR numerically determines a linear system model using nonlinear equations of motion and a user supplied linear or nonlinear aerodynamic model. The nonlinear equations of motion used are six-degree-of-freedom equations with stationary atmosphere and flat, nonrotating earth assumptions. LINEAR is capable of extracting both linearized engine effects, such as net thrust, torque, and gyroscopic effects and including these effects in the linear system model. The point at which this linear model is defined is determined either by completely specifying the state and control variables, or by specifying an analysis point on a trajectory and directing the program to determine the control variables and the remaining state variables. The system model determined by LINEAR consists of matrices for both the state and observation equations. The program has been designed to provide easy selection of state, control, and observation variables to be used in a particular model. Thus, the order of the system model is completely under user control. Further, the program provides the flexibility of allowing alternate formulations of both the state and observation equations. Data describing the aircraft and the test case is input to the program through a terminal or formatted data files. All data can be modified interactively from case to case. The aerodynamic model can be defined in two ways: a set of nondimensional stability and control derivatives for the flight point of interest, or a full non-linear aerodynamic model as used in simulations. LINEAR is written in FORTRAN and has been implemented on a DEC VAX computer operating under VMS with a virtual memory requirement of approximately 296K of 8 bit bytes. Both an interactive and batch version are included. LINEAR was developed in 1988.
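The central operation LINEAR performs, extracting state-space matrices from nonlinear equations of motion about an analysis point, can be sketched numerically with finite differences. The toy dynamics below (a damped pendulum with a control torque) merely stand in for the aircraft equations; LINEAR itself is a FORTRAN program with far more capability.

```python
import numpy as np

def f(x, u):
    """Toy nonlinear dynamics x_dot = f(x, u): damped pendulum with control torque."""
    theta, omega = x
    return np.array([omega, -9.81 * np.sin(theta) - 0.5 * omega + u[0]])

def linearize(f, x0, u0, eps=1e-6):
    """Central-difference Jacobians so that x_dot ~= f(x0,u0) + A (x-x0) + B (u-u0)."""
    n, m = len(x0), len(u0)
    A, B = np.zeros((n, n)), np.zeros((n, m))
    for i in range(n):
        dx = np.zeros(n); dx[i] = eps
        A[:, i] = (f(x0 + dx, u0) - f(x0 - dx, u0)) / (2 * eps)
    for j in range(m):
        du = np.zeros(m); du[j] = eps
        B[:, j] = (f(x0, u0 + du) - f(x0, u0 - du)) / (2 * eps)
    return A, B

x_trim = np.array([0.1, 0.0])                      # analysis point (trim state)
u_trim = np.array([9.81 * np.sin(0.1)])            # control that holds the trim
A, B = linearize(f, x_trim, u_trim)
print("A =\n", np.round(A, 3), "\nB =\n", np.round(B, 3))
```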
Classical Testing in Functional Linear Models.
Kong, Dehan; Staicu, Ana-Maria; Maity, Arnab
2016-01-01
We extend four tests common in classical regression - Wald, score, likelihood ratio and F tests - to functional linear regression, for testing the null hypothesis that there is no association between a scalar response and a functional covariate. Using functional principal component analysis, we re-express the functional linear model as a standard linear model, where the effect of the functional covariate can be approximated by a finite linear combination of the functional principal component scores. In this setting, we consider application of the four traditional tests. The proposed testing procedures are investigated theoretically for densely observed functional covariates when the number of principal components diverges. Using the theoretical distribution of the tests under the alternative hypothesis, we develop a procedure for sample size calculation in the context of functional linear regression. The four tests are further compared numerically for both densely and sparsely observed noisy functional data in simulation experiments and using two real data applications.
Improving the Power of GWAS and Avoiding Confounding from Population Stratification with PC-Select
Tucker, George; Price, Alkes L.; Berger, Bonnie
2014-01-01
Using a reduced subset of SNPs in a linear mixed model can improve power for genome-wide association studies, yet this can result in insufficient correction for population stratification. We propose a hybrid approach using principal components that does not inflate statistics in the presence of population stratification and improves power over standard linear mixed models. PMID:24788602
Weichenthal, Scott; Ryswyk, Keith Van; Goldstein, Alon; Bagg, Scott; Shekkarizfard, Maryam; Hatzopoulou, Marianne
2016-04-01
Existing evidence suggests that ambient ultrafine particles (UFPs) (<0.1 µm) may contribute to acute cardiorespiratory morbidity. However, few studies have examined the long-term health effects of these pollutants owing in part to a need for exposure surfaces that can be applied in large population-based studies. To address this need, we developed a land use regression model for UFPs in Montreal, Canada using mobile monitoring data collected from 414 road segments during the summer and winter months between 2011 and 2012. Two different approaches were examined for model development including standard multivariable linear regression and a machine learning approach (kernel-based regularized least squares (KRLS)) that learns the functional form of covariate impacts on ambient UFP concentrations from the data. The final models included parameters for population density, ambient temperature and wind speed, land use parameters (park space and open space), length of local roads and rail, and estimated annual average NOx emissions from traffic. The final multivariable linear regression model explained 62% of the spatial variation in ambient UFP concentrations whereas the KRLS model explained 79% of the variance. The KRLS model performed slightly better than the linear regression model when evaluated using an external dataset (R² = 0.58 vs. 0.55) or a cross-validation procedure (R² = 0.67 vs. 0.60). In general, our findings suggest that the KRLS approach may offer modest improvements in predictive performance compared to standard multivariable linear regression models used to estimate spatial variations in ambient UFPs. However, differences in predictive performance were not statistically significant when evaluated using the cross-validation procedure. Crown Copyright © 2015. Published by Elsevier Inc. All rights reserved.
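A hedged illustration of the comparison described above (synthetic data, not the Montreal UFP dataset; scikit-learn's kernel ridge regression is used here as a stand-in for KRLS): cross-validated R² for the linear and kernel-based models can be compared as follows.

    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.kernel_ridge import KernelRidge
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(1)
    n = 400
    X = rng.normal(size=(n, 5))     # stand-ins for land use / meteorology covariates
    y = 2 * X[:, 0] + np.sin(2 * X[:, 1]) + 0.5 * X[:, 2] ** 2 + rng.normal(scale=0.5, size=n)

    ols = LinearRegression()
    krls = KernelRidge(kernel="rbf", alpha=1.0, gamma=0.2)   # hyperparameters would need tuning
    print(f"OLS  CV R^2: {cross_val_score(ols, X, y, cv=5, scoring='r2').mean():.3f}")
    print(f"KRLS CV R^2: {cross_val_score(krls, X, y, cv=5, scoring='r2').mean():.3f}")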
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dhawan, Suhail; Goobar, Ariel; Mörtsell, Edvard
Recent re-calibration of the Type Ia supernova (SNe Ia) magnitude-redshift relation combined with cosmic microwave background (CMB) and baryon acoustic oscillation (BAO) data has provided excellent constraints on the standard cosmological model. Here, we examine particular classes of alternative cosmologies, motivated by various physical mechanisms, e.g. scalar fields, modified gravity and phase transitions, to test their consistency with observations of SNe Ia and the ratio of the angular diameter distances from the CMB and BAO. Using a model selection criterion for a relative comparison of the models (the Bayes Factor), we find moderate to strong evidence that the data prefer flat ΛCDM over models invoking a thawing behaviour of the quintessence scalar field. However, some exotic models like the growing neutrino mass cosmology and vacuum metamorphosis still present acceptable evidence values. The bimetric gravity model with only the linear interaction term as well as a simplified Galileon model can be ruled out by the combination of SNe Ia and CMB/BAO datasets, whereas the model with linear and quadratic interaction terms has a comparable evidence value to standard ΛCDM. Thawing models are found to have significantly poorer evidence compared to flat ΛCDM cosmology under the assumption that the CMB compressed likelihood provides an adequate description for these non-standard cosmologies. We also present estimates for constraints from future data and find that geometric probes from upcoming surveys can put severe limits on non-standard cosmological models.
Equivalent circuit simulation of HPEM-induced transient responses at nonlinear loads
NASA Astrophysics Data System (ADS)
Kotzev, Miroslav; Bi, Xiaotang; Kreitlow, Matthias; Gronwald, Frank
2017-09-01
In this paper the equivalent circuit modeling of a nonlinearly loaded loop antenna and its transient responses to HPEM field excitations are investigated. For the circuit modeling the general strategy to characterize the nonlinearly loaded antenna by a linear and a nonlinear circuit part is pursued. The linear circuit part can be determined by standard methods of antenna theory and numerical field computation. The modeling of the nonlinear circuit part requires realistic circuit models of the nonlinear loads that are given by Schottky diodes. Combining both parts, appropriate circuit models are obtained and analyzed by means of a standard SPICE circuit simulator. It is the main result that in this way full-wave simulation results can be reproduced. Furthermore it is clearly seen that the equivalent circuit modeling offers considerable advantages with respect to computation speed and also leads to improved physical insights regarding the coupling between HPEM field excitation and nonlinearly loaded loop antenna.
System Identification Applied to Dynamic CFD Simulation and Wind Tunnel Data
NASA Technical Reports Server (NTRS)
Murphy, Patrick C.; Klein, Vladislav; Frink, Neal T.; Vicroy, Dan D.
2011-01-01
Demanding aerodynamic modeling requirements for military and civilian aircraft have provided impetus for researchers to improve computational and experimental techniques. Model validation is a key component of these research endeavors, so this study is an initial effort to extend conventional time history comparisons by comparing model parameter estimates and their standard errors using system identification methods. An aerodynamic model of an aircraft performing one-degree-of-freedom roll oscillatory motion about its body axes is developed. The model includes linear aerodynamics and deficiency function parameters characterizing an unsteady effect. For estimation of the unknown parameters, two techniques, harmonic analysis and two-step linear regression, were applied to roll-oscillatory wind tunnel data and to computational fluid dynamics (CFD) simulated data. The model used for this study is a highly swept wing unmanned aerial combat vehicle. Differences in response prediction, parameter estimates, and standard errors are compared and discussed.
On neural networks in identification and control of dynamic systems
NASA Technical Reports Server (NTRS)
Phan, Minh; Juang, Jer-Nan; Hyland, David C.
1993-01-01
This paper presents a discussion of the applicability of neural networks in the identification and control of dynamic systems. Emphasis is placed on the understanding of how the neural networks handle linear systems and how the new approach is related to conventional system identification and control methods. Extensions of the approach to nonlinear systems are then made. The paper explains the fundamental concepts of neural networks in their simplest terms. Among the topics discussed are feedforward and recurrent networks in relation to the standard state-space and observer models, linear and nonlinear auto-regressive models, linear predictors, one-step-ahead control, and model reference adaptive control for linear and nonlinear systems. Numerical examples are presented to illustrate the application of these important concepts.
Estimating standard errors in feature network models.
Frank, Laurence E; Heiser, Willem J
2007-05-01
Feature network models are graphical structures that represent proximity data in a discrete space while using the same formalism that is the basis of least squares methods employed in multidimensional scaling. Existing methods to derive a network model from empirical data only give the best-fitting network and yield no standard errors for the parameter estimates. The additivity properties of networks make it possible to consider the model as a univariate (multiple) linear regression problem with positivity restrictions on the parameters. In the present study, both theoretical and empirical standard errors are obtained for the constrained regression parameters of a network model with known features. The performance of both types of standard error is evaluated using Monte Carlo techniques.
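A minimal sketch of the idea, assuming a known binary feature design matrix and simulated proximities (illustrative, not the authors' procedure): the network weights are estimated by nonnegativity-constrained least squares and empirical standard errors are obtained by parametric Monte Carlo resampling.

    import numpy as np
    from scipy.optimize import nnls

    rng = np.random.default_rng(2)
    n_pairs, n_features = 120, 6
    A = rng.integers(0, 2, size=(n_pairs, n_features)).astype(float)  # known feature design
    w_true = np.array([0.8, 0.0, 1.5, 0.3, 0.0, 0.6])                 # nonnegative feature weights
    d = A @ w_true + rng.normal(scale=0.2, size=n_pairs)              # observed proximities

    w_hat, _ = nnls(A, d)                          # constrained regression estimate

    # empirical standard errors: refit on data resampled around the fitted values
    sigma = (d - A @ w_hat).std(ddof=n_features)
    boot = np.array([nnls(A, A @ w_hat + rng.normal(scale=sigma, size=n_pairs))[0]
                     for _ in range(500)])
    print("estimates:        ", w_hat.round(2))
    print("empirical std err:", boot.std(axis=0).round(3))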
Mathematical Modeling of Chemical Stoichiometry
ERIC Educational Resources Information Center
Croteau, Joshua; Fox, William P.; Varazo, Kristofoland
2007-01-01
In beginning chemistry classes, students are taught a variety of techniques for balancing chemical equations. The most common method is inspection. This paper addresses using a system of linear mathematical equations to solve for the stoichiometric coefficients. Many linear algebra books carry the standard balancing of chemical equations as an…
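A short worked example of the linear-algebra approach mentioned above: balancing C3H8 + O2 -> CO2 + H2O by finding an integer vector in the null space of the element-by-species matrix (products entered with negative signs).

    from functools import reduce
    from sympy import Matrix, lcm

    # rows: C, H, O ; columns: C3H8, O2, CO2, H2O
    A = Matrix([[3, 0, -1,  0],
                [8, 0,  0, -2],
                [0, 2, -2, -1]])
    v = A.nullspace()[0]                                # one-dimensional null space here
    coeffs = v * reduce(lcm, [term.q for term in v])    # clear denominators to get integers
    print(coeffs.T)                                     # [1, 5, 3, 4]: C3H8 + 5 O2 -> 3 CO2 + 4 H2O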
Quantifying relative importance: Computing standardized effects in models with binary outcomes
Grace, James B.; Johnson, Darren; Lefcheck, Jonathan S.; Byrnes, Jarrett E.K.
2018-01-01
Results from simulation studies show that both the LT and OE methods of standardization support a similarly-broad range of coefficient comparisons. The LT method estimates effects that reflect underlying latent-linear propensities, while the OE method computes a linear approximation for the effects of predictors on binary responses. The contrast between assumptions for the two methods is reflected in persistently weaker standardized effects associated with OE standardization. Reliance on standard deviations for standardization (the traditional approach) is critically examined and shown to introduce substantial biases when predictors are non-Gaussian. The use of relevant ranges in place of standard deviations has the capacity to place LT and OE standardized coefficients on a more comparable scale. As ecologists address increasingly complex hypotheses, especially those that involve comparing the influences of different controlling factors (e.g., top-down versus bottom-up or biotic versus abiotic controls), comparable coefficients become a necessary component for evaluations.
From Spiking Neuron Models to Linear-Nonlinear Models
Ostojic, Srdjan; Brunel, Nicolas
2011-01-01
Neurons transform time-varying inputs into action potentials emitted stochastically at a time dependent rate. The mapping from current input to output firing rate is often represented with the help of phenomenological models such as the linear-nonlinear (LN) cascade, in which the output firing rate is estimated by applying to the input successively a linear temporal filter and a static non-linear transformation. These simplified models leave out the biophysical details of action potential generation. It is not a priori clear to which extent the input-output mapping of biophysically more realistic, spiking neuron models can be reduced to a simple linear-nonlinear cascade. Here we investigate this question for the leaky integrate-and-fire (LIF), exponential integrate-and-fire (EIF) and conductance-based Wang-Buzsáki models in presence of background synaptic activity. We exploit available analytic results for these models to determine the corresponding linear filter and static non-linearity in a parameter-free form. We show that the obtained functions are identical to the linear filter and static non-linearity determined using standard reverse correlation analysis. We then quantitatively compare the output of the corresponding linear-nonlinear cascade with numerical simulations of spiking neurons, systematically varying the parameters of input signal and background noise. We find that the LN cascade provides accurate estimates of the firing rates of spiking neurons in most of parameter space. For the EIF and Wang-Buzsáki models, we show that the LN cascade can be reduced to a firing rate model, the timescale of which we determine analytically. Finally we introduce an adaptive timescale rate model in which the timescale of the linear filter depends on the instantaneous firing rate. This model leads to highly accurate estimates of instantaneous firing rates. PMID:21283777
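A rough illustration of the reverse-correlation step referred to above (a plain simulated leaky integrate-and-fire neuron with noisy input, not the authors' analytic treatment): the linear filter is estimated as the spike-triggered average of the input current.

    import numpy as np

    rng = np.random.default_rng(3)
    dt, T = 1e-4, 200.0                          # time step and duration (s)
    tau_m, v_rest, v_thr, v_reset = 0.02, 0.0, 1.0, 0.0
    n = int(T / dt)
    I = rng.normal(loc=1.2, scale=3.0, size=n)   # noisy input current (arbitrary units)

    v, spikes = v_rest, []
    for k in range(n):                           # forward-Euler LIF integration
        v += dt / tau_m * (-(v - v_rest) + I[k])
        if v >= v_thr:
            spikes.append(k)
            v = v_reset

    # spike-triggered average over the 50 ms preceding each spike ~ linear filter estimate
    win = int(0.05 / dt)
    sta = np.mean([I[k - win:k] for k in spikes if k >= win], axis=0) - I.mean()
    print(f"{len(spikes)} spikes; filter peak {np.argmax(sta[::-1]) * dt * 1e3:.1f} ms before the spike")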
Multi-Parameter Linear Least-Squares Fitting to Poisson Data One Count at a Time
NASA Technical Reports Server (NTRS)
Wheaton, W.; Dunklee, A.; Jacobson, A.; Ling, J.; Mahoney, W.; Radocinski, R.
1993-01-01
A standard problem in gamma-ray astronomy data analysis is the decomposition of a set of observed counts, described by Poisson statistics, according to a given multi-component linear model, with underlying physical count rates or fluxes which are to be estimated from the data.
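A hedged sketch of the generic estimation problem described above: observed counts are modeled as Poisson with mean M f for a known component matrix M, and the fluxes f are estimated by maximizing the Poisson log-likelihood under a nonnegativity bound (all values illustrative).

    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(4)
    n_bins, n_comp = 64, 3
    M = np.abs(rng.normal(size=(n_bins, n_comp)))   # model response of each component per bin
    f_true = np.array([5.0, 1.0, 0.3])
    counts = rng.poisson(M @ f_true)

    def neg_loglike(f):
        mu = M @ f
        return np.sum(mu - counts * np.log(mu + 1e-12))   # negative Poisson log-likelihood (up to constants)

    res = minimize(neg_loglike, x0=np.ones(n_comp), bounds=[(0, None)] * n_comp)
    print("true fluxes  :", f_true)
    print("fitted fluxes:", res.x.round(2))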
NASA Astrophysics Data System (ADS)
Boehm, Holger F.; Link, Thomas M.; Monetti, Roberto A.; Mueller, Dirk; Rummeny, Ernst J.; Raeth, Christoph W.
2005-04-01
Osteoporosis is a metabolic bone disease leading to de-mineralization and increased risk of fracture. The two major factors that determine the biomechanical competence of bone are the degree of mineralization and the micro-architectural integrity. Today, modern imaging modalities (high resolution MRI, micro-CT) are capable of depicting structural details of trabecular bone tissue. From the image data, structural properties obtained by quantitative measures are analysed with respect to the presence of osteoporotic fractures of the spine (in-vivo) or correlated with biomechanical strength as derived from destructive testing (in-vitro). Fairly well established are linear structural measures in 2D that are originally adopted from standard histo-morphometry. Recently, non-linear techniques in 2D and 3D based on the scaling index method (SIM), the standard Hough transform (SHT), and the Minkowski Functionals (MF) have been introduced, which show excellent performance in predicting bone strength and fracture risk. However, little is known about the performance of the various parameters with respect to monitoring structural changes due to progression of osteoporosis or as a result of medical treatment. In this contribution, we generate models of trabecular bone with pre-defined structural properties which are exposed to simulated osteoclastic activity. We apply linear and non-linear texture measures to the models and analyse their performance with respect to detecting architectural changes. This study demonstrates that the texture measures are capable of monitoring structural changes of complex model data. The diagnostic potential varies for the different parameters and is found to depend on the topological composition of the model and initial "bone density". In our models, non-linear texture measures tend to react more sensitively to small structural changes than linear measures. Best performance is observed for the 3rd and 4th Minkowski Functionals and for the scaling index method.
Identifying fMRI Model Violations with Lagrange Multiplier Tests
Cassidy, Ben; Long, Christopher J; Rae, Caroline; Solo, Victor
2013-01-01
The standard modeling framework in Functional Magnetic Resonance Imaging (fMRI) is predicated on assumptions of linearity, time invariance and stationarity. These assumptions are rarely checked because doing so requires specialised software, although failure to do so can lead to bias and mistaken inference. Identifying model violations is an essential but largely neglected step in standard fMRI data analysis. Using Lagrange Multiplier testing methods we have developed simple and efficient procedures for detecting model violations such as non-linearity, non-stationarity and validity of the common Double Gamma specification for hemodynamic response. These procedures are computationally cheap and can easily be added to a conventional analysis. The test statistic is calculated at each voxel and displayed as a spatial anomaly map which shows regions where a model is violated. The methodology is illustrated with a large number of real data examples. PMID:22542665
Physics at the e⁺e⁻ linear collider
Moortgat-Picka, G.; Kronfeld, A. S.
2015-08-14
A comprehensive review of physics at an e⁺e⁻ linear collider in the energy range of √s = 92 GeV–3 TeV is presented in view of recent and expected LHC results, experiments from low-energy as well as astroparticle physics. The report focuses in particular on Higgs-boson, top-quark and electroweak precision physics, but also discusses several models of beyond the standard model physics such as supersymmetry, little Higgs models and extra gauge bosons. The connection to cosmology has been analysed as well.
Yock, Adam D; Rao, Arvind; Dong, Lei; Beadle, Beth M; Garden, Adam S; Kudchadker, Rajat J; Court, Laurence E
2014-05-01
The purpose of this work was to develop and evaluate the accuracy of several predictive models of variation in tumor volume throughout the course of radiation therapy. Nineteen patients with oropharyngeal cancers were imaged daily with CT-on-rails for image-guided alignment per an institutional protocol. The daily volumes of 35 tumors in these 19 patients were determined and used to generate (1) a linear model in which tumor volume changed at a constant rate, (2) a general linear model that utilized the power fit relationship between the daily and initial tumor volumes, and (3) a functional general linear model that identified and exploited the primary modes of variation between time series describing the changing tumor volumes. Primary and nodal tumor volumes were examined separately. The accuracy of these models in predicting daily tumor volumes was compared with those of static and linear reference models using leave-one-out cross-validation. In predicting the daily volume of primary tumors, the general linear model and the functional general linear model were more accurate than the static reference model by 9.9% (range: -11.6% to 23.8%) and 14.6% (range: -7.3% to 27.5%), respectively, and were more accurate than the linear reference model by 14.2% (range: -6.8% to 40.3%) and 13.1% (range: -1.5% to 52.5%), respectively. In predicting the daily volume of nodal tumors, only the 14.4% (range: -11.1% to 20.5%) improvement in accuracy of the functional general linear model compared to the static reference model was statistically significant. A general linear model and a functional general linear model trained on data from a small population of patients can predict the primary tumor volume throughout the course of radiation therapy with greater accuracy than standard reference models. These more accurate models may increase the prognostic value of information about the tumor garnered from pretreatment computed tomography images and facilitate improved treatment management.
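A simplified illustration of the model comparison described above (simulated daily volumes, and a plain power-law fit rather than the authors' exact general linear formulation), with each model scored by leave-one-out prediction error.

    import numpy as np
    from scipy.optimize import curve_fit

    rng = np.random.default_rng(5)
    days = np.arange(1, 31, dtype=float)
    v0 = 20.0                                        # cm^3, initial volume
    vol = v0 * days ** -0.35 * np.exp(rng.normal(scale=0.03, size=days.size))

    def loo_rmse(model, p0):
        errs = []
        for i in range(days.size):                   # leave-one-out cross-validation
            mask = np.arange(days.size) != i
            p, _ = curve_fit(model, days[mask], vol[mask], p0=p0, maxfev=10000)
            errs.append(model(days[i], *p) - vol[i])
        return float(np.sqrt(np.mean(np.square(errs))))

    linear = lambda t, a, b: a + b * t               # constant-rate model
    power = lambda t, a, b: a * t ** b               # power-law model
    print("linear LOO RMSE:", round(loo_rmse(linear, [v0, 0.0]), 3))
    print("power  LOO RMSE:", round(loo_rmse(power, [v0, -0.3]), 3))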
Application of linear regression analysis in accuracy assessment of rolling force calculations
NASA Astrophysics Data System (ADS)
Poliak, E. I.; Shim, M. K.; Kim, G. S.; Choo, W. Y.
1998-10-01
Efficient operation of the computational models employed in process control systems requires periodic assessment of the accuracy of their predictions. Linear regression is proposed as a tool that allows systematic and random prediction errors to be separated from those related to measurements. A quantitative characteristic of the model predictive ability is introduced in addition to standard statistical tests for model adequacy. Rolling force calculations are considered as an example for the application. However, the outlined approach can be used to assess the performance of any computational model.
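A minimal sketch of the proposed use of linear regression, with made-up rolling force values: regressing measured force on predicted force separates systematic bias (slope and intercept departing from 1 and 0) from random scatter.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(6)
    predicted = rng.uniform(800, 2000, size=150)     # kN, model predictions
    measured = 1.05 * predicted - 30 + rng.normal(scale=40, size=predicted.size)

    fit = stats.linregress(predicted, measured)
    resid = measured - (fit.intercept + fit.slope * predicted)
    print(f"slope     = {fit.slope:.3f}   (1 means no proportional bias)")
    print(f"intercept = {fit.intercept:.1f} kN (0 means no constant bias)")
    print(f"residual std = {np.std(resid, ddof=2):.1f} kN (random error)")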
Sentürk, Damla; Dalrymple, Lorien S; Nguyen, Danh V
2014-11-30
We propose functional linear models for zero-inflated count data with a focus on the functional hurdle and functional zero-inflated Poisson (ZIP) models. Although the hurdle model assumes the counts come from a mixture of a degenerate distribution at zero and a zero-truncated Poisson distribution, the ZIP model considers a mixture of a degenerate distribution at zero and a standard Poisson distribution. We extend the generalized functional linear model framework with a functional predictor and multiple cross-sectional predictors to model counts generated by a mixture distribution. We propose an estimation procedure for functional hurdle and ZIP models, called penalized reconstruction, geared towards error-prone and sparsely observed longitudinal functional predictors. The approach relies on dimension reduction and pooling of information across subjects involving basis expansions and penalized maximum likelihood techniques. The developed functional hurdle model is applied to modeling hospitalizations within the first 2 years from initiation of dialysis, with a high percentage of zeros, in the Comprehensive Dialysis Study participants. Hospitalization counts are modeled as a function of sparse longitudinal measurements of serum albumin concentrations, patient demographics, and comorbidities. Simulation studies are used to study finite sample properties of the proposed method and include comparisons with an adaptation of standard principal components regression. Copyright © 2014 John Wiley & Sons, Ltd.
Are V1 Simple Cells Optimized for Visual Occlusions? A Comparative Study
Bornschein, Jörg; Henniges, Marc; Lücke, Jörg
2013-01-01
Simple cells in primary visual cortex were famously found to respond to low-level image components such as edges. Sparse coding and independent component analysis (ICA) emerged as the standard computational models for simple cell coding because they linked their receptive fields to the statistics of visual stimuli. However, a salient feature of image statistics, occlusions of image components, is not considered by these models. Here we ask if occlusions have an effect on the predicted shapes of simple cell receptive fields. We use a comparative approach to answer this question and investigate two models for simple cells: a standard linear model and an occlusive model. For both models we simultaneously estimate optimal receptive fields, sparsity and stimulus noise. The two models are identical except for their component superposition assumption. We find the image encoding and receptive fields predicted by the models to differ significantly. While both models predict many Gabor-like fields, the occlusive model predicts a much sparser encoding and high percentages of 'globular' receptive fields. This relatively new center-surround type of simple cell response has been observed since reverse correlation came into use in experimental studies. While high percentages of 'globular' fields can be obtained using specific choices of sparsity and overcompleteness in linear sparse coding, no or only low proportions are reported in the vast majority of studies on linear models (including all ICA models). Likewise, for the here investigated linear model and optimal sparsity, only low proportions of 'globular' fields are observed. In comparison, the occlusive model robustly infers high proportions and can match the experimentally observed high proportions of 'globular' fields well. Our computational study, therefore, suggests that 'globular' fields may be evidence for an optimal encoding of visual occlusions in primary visual cortex. PMID:23754938
An implicit boundary integral method for computing electric potential of macromolecules in solvent
NASA Astrophysics Data System (ADS)
Zhong, Yimin; Ren, Kui; Tsai, Richard
2018-04-01
A numerical method using implicit surface representations is proposed to solve the linearized Poisson-Boltzmann equation that arises in mathematical models for the electrostatics of molecules in solvent. The proposed method uses an implicit boundary integral formulation to derive a linear system defined on Cartesian nodes in a narrow band surrounding the closed surface that separates the molecule and the solvent. The needed implicit surface is constructed from the given atomic description of the molecules by a sequence of standard level set algorithms. A fast multipole method is applied to accelerate the solution of the linear system. A few numerical studies involving some standard test cases are presented and compared to other existing results.
Validating the applicability of the GUM procedure
NASA Astrophysics Data System (ADS)
Cox, Maurice G.; Harris, Peter M.
2014-08-01
This paper is directed at practitioners seeking a degree of assurance in the quality of the results of an uncertainty evaluation when using the procedure in the Guide to the Expression of Uncertainty in Measurement (GUM) (JCGM 100:2008). Such assurance is required in adhering to general standards such as International Standard ISO/IEC 17025 or other sector-specific standards. We investigate the extent to which such assurance can be given. For many practical cases, a measurement result incorporating an evaluated uncertainty that is correct to one significant decimal digit would be acceptable. Any quantification of the numerical precision of an uncertainty statement is naturally relative to the adequacy of the measurement model and the knowledge used of the quantities in that model. For general univariate and multivariate measurement models, we emphasize the use of a Monte Carlo method, as recommended in GUM Supplements 1 and 2. One use of this method is as a benchmark in terms of which measurement results provided by the GUM can be assessed in any particular instance. We mainly consider measurement models that are linear in the input quantities, or have been linearized and the linearization process is deemed to be adequate. When the probability distributions for those quantities are independent, we indicate the use of other approaches such as convolution methods based on the fast Fourier transform and, particularly, Chebyshev polynomials as benchmarks.
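A toy comparison in the spirit of the validation discussed above, for a simple product model Y = X1·X2 with independent Gaussian inputs: the GUM linearized standard uncertainty is checked against a Monte Carlo evaluation (illustrative numbers only).

    import numpy as np

    rng = np.random.default_rng(7)
    x1, u1 = 10.0, 0.5           # best estimates and standard uncertainties
    x2, u2 = 0.20, 0.04

    # GUM law of propagation of uncertainty (first order): u(y)^2 = (x2*u1)^2 + (x1*u2)^2
    u_gum = np.sqrt((x2 * u1) ** 2 + (x1 * u2) ** 2)

    # Monte Carlo method in the style of GUM Supplement 1
    N = 1_000_000
    y = rng.normal(x1, u1, N) * rng.normal(x2, u2, N)
    print(f"linearized GUM u(y): {u_gum:.4f}")
    print(f"Monte Carlo    u(y): {y.std(ddof=1):.4f}")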
Correcting for population structure and kinship using the linear mixed model: theory and extensions.
Hoffman, Gabriel E
2013-01-01
Population structure and kinship are widespread confounding factors in genome-wide association studies (GWAS). It has been standard practice to include principal components of the genotypes in a regression model in order to account for population structure. More recently, the linear mixed model (LMM) has emerged as a powerful method for simultaneously accounting for population structure and kinship. The statistical theory underlying the differences in empirical performance between modeling principal components as fixed versus random effects has not been thoroughly examined. We undertake an analysis to formalize the relationship between these widely used methods and elucidate the statistical properties of each. Moreover, we introduce a new statistic, effective degrees of freedom, that serves as a metric of model complexity and a novel low rank linear mixed model (LRLMM) to learn the dimensionality of the correction for population structure and kinship, and we assess its performance through simulations. A comparison of the results of LRLMM and a standard LMM analysis applied to GWAS data from the Multi-Ethnic Study of Atherosclerosis (MESA) illustrates how our theoretical results translate into empirical properties of the mixed model. Finally, the analysis demonstrates the ability of the LRLMM to substantially boost the strength of an association for HDL cholesterol in Europeans.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fike, Jeffrey A.
2013-08-01
The construction of stable reduced order models using Galerkin projection for the Euler or Navier-Stokes equations requires a suitable choice for the inner product. The standard L2 inner product is expected to produce unstable ROMs. For the non-linear Navier-Stokes equations this means the use of an energy inner product. In this report, Galerkin projection for the non-linear Navier-Stokes equations using the L2 inner product is implemented as a first step toward constructing stable ROMs for this set of physics.
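A simple POD/Galerkin sketch for a semi-discrete linear 1-D diffusion operator using the standard L2 inner product, meant only to illustrate the projection step discussed above; the energy inner product needed for the nonlinear Navier-Stokes case is not reproduced here.

    import numpy as np
    from scipy.integrate import solve_ivp

    n = 100
    x = np.linspace(0.0, 1.0, n)
    dx = x[1] - x[0]
    A = (np.diag(np.ones(n - 1), -1) - 2.0 * np.eye(n) + np.diag(np.ones(n - 1), 1)) / dx**2
    u0 = np.sin(np.pi * x) + 0.3 * np.sin(3 * np.pi * x)

    # full-order solution, also used as the snapshot set
    full = solve_ivp(lambda t, u: A @ u, (0.0, 0.05), u0, method="BDF",
                     t_eval=np.linspace(0.0, 0.05, 50))

    # POD basis from snapshots, then Galerkin projection: da/dt = (Phi^T A Phi) a
    Phi = np.linalg.svd(full.y, full_matrices=False)[0][:, :4]
    Ar = Phi.T @ A @ Phi
    rom = solve_ivp(lambda t, a: Ar @ a, (0.0, 0.05), Phi.T @ u0, t_eval=full.t)

    err = np.linalg.norm(Phi @ rom.y - full.y) / np.linalg.norm(full.y)
    print(f"relative ROM error: {err:.2e}")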
Hoyer, Annika; Kuss, Oliver
2018-05-01
Meta-analysis of diagnostic studies is still a rapidly developing area of biostatistical research. Especially, there is an increasing interest in methods to compare different diagnostic tests to a common gold standard. Restricting to the case of two diagnostic tests, in these meta-analyses the parameters of interest are the differences of sensitivities and specificities (with their corresponding confidence intervals) between the two diagnostic tests while accounting for the various associations across single studies and between the two tests. We propose statistical models with a quadrivariate response (where sensitivity of test 1, specificity of test 1, sensitivity of test 2, and specificity of test 2 are the four responses) as a sensible approach to this task. Using a quadrivariate generalized linear mixed model naturally generalizes the common standard bivariate model of meta-analysis for a single diagnostic test. If information on several thresholds of the tests is available, the quadrivariate model can be further generalized to yield a comparison of full receiver operating characteristic (ROC) curves. We illustrate our model by an example where two screening methods for the diagnosis of type 2 diabetes are compared.
Probing kinematics and fate of the Universe with linearly time-varying deceleration parameter
NASA Astrophysics Data System (ADS)
Akarsu, Özgür; Dereli, Tekin; Kumar, Suresh; Xu, Lixin
2014-02-01
The parametrizations q = q₀ + q₁z and q = q₀ + q₁(1 - a/a₀) (Chevallier-Polarski-Linder parametrization) of the deceleration parameter, which are linear in cosmic redshift z and scale factor a, have been frequently utilized in the literature to study the kinematics of the Universe. In this paper, we follow a strategy that leads to these two well-known parametrizations of the deceleration parameter as well as an additional new parametrization, q = q₀ + q₁(1 - t/t₀), which is linear in cosmic time t. We study the features of this linearly time-varying deceleration parameter in contrast with the other two linear parametrizations. We investigate in detail the kinematics of the Universe by confronting the three models with the latest observational data. We further study the dynamics of the Universe by considering the linearly time-varying deceleration parameter model in comparison with the standard ΛCDM model. We also discuss the future of the Universe in the context of the models under consideration.
Tonkin, Matthew J.; Tiedeman, Claire; Ely, D. Matthew; Hill, Mary C.
2007-01-01
The OPR-PPR program calculates the Observation-Prediction (OPR) and Parameter-Prediction (PPR) statistics that can be used to evaluate the relative importance of various kinds of data to simulated predictions. The data considered fall into three categories: (1) existing observations, (2) potential observations, and (3) potential information about parameters. The first two are addressed by the OPR statistic; the third is addressed by the PPR statistic. The statistics are based on linear theory and measure the leverage of the data, which depends on the location, the type, and possibly the time of the data being considered. For example, in a ground-water system the type of data might be a head measurement at a particular location and time. As a measure of leverage, the statistics do not take into account the value of the measurement. As linear measures, the OPR and PPR statistics require minimal computational effort once sensitivities have been calculated. Sensitivities need to be calculated for only one set of parameter values; commonly these are the values estimated through model calibration. OPR-PPR can calculate the OPR and PPR statistics for any mathematical model that produces the necessary OPR-PPR input files. In this report, OPR-PPR capabilities are presented in the context of using the ground-water model MODFLOW-2000 and the universal inverse program UCODE_2005. The method used to calculate the OPR and PPR statistics is based on the linear equation for prediction standard deviation. Using sensitivities and other information, OPR-PPR calculates (a) the percent increase in the prediction standard deviation that results when one or more existing observations are omitted from the calibration data set; (b) the percent decrease in the prediction standard deviation that results when one or more potential observations are added to the calibration data set; or (c) the percent decrease in the prediction standard deviation that results when potential information on one or more parameters is added.
A penalized framework for distributed lag non-linear models.
Gasparrini, Antonio; Scheipl, Fabian; Armstrong, Ben; Kenward, Michael G
2017-09-01
Distributed lag non-linear models (DLNMs) are a modelling tool for describing potentially non-linear and delayed dependencies. Here, we illustrate an extension of the DLNM framework through the use of penalized splines within generalized additive models (GAM). This extension offers built-in model selection procedures and the possibility of accommodating assumptions on the shape of the lag structure through specific penalties. In addition, this framework includes, as special cases, simpler models previously proposed for linear relationships (DLMs). Alternative versions of penalized DLNMs are compared with each other and with the standard unpenalized version in a simulation study. Results show that this penalized extension to the DLNM class provides greater flexibility and improved inferential properties. The framework exploits recent theoretical developments of GAMs and is implemented using efficient routines within freely available software. Real-data applications are illustrated through two reproducible examples in time series and survival analysis. © 2017 The Authors Biometrics published by Wiley Periodicals, Inc. on behalf of International Biometric Society.
Liu, X T; Ma, W F; Zeng, X F; Xie, C Y; Thacker, P A; Htoo, J K; Qiao, S Y
2015-10-01
Four 28-d experiments were conducted to determine the standardized ileal digestible (SID) valine (Val) to lysine (Lys) ratio required for 26- to 46- (Exp. 1), 49- to 70- (Exp. 2), 71- to 92- (Exp. 3), and 94- to 119-kg (Exp. 4) pigs fed low CP diets supplemented with crystalline AA. The first 3 experiments utilized 150 pigs (Duroc × Landrace × Large White), while Exp. 4 utilized 90 finishing pigs. Pigs in all 4 experiments were randomly allocated to 1 of 5 diets with 6 pens per treatment (3 pens of barrows and 3 pens of gilts) and 5 pigs per pen for the first 3 experiments and 3 pigs per pen for Exp. 4. Diets for all experiments were formulated to contain SID Val to Lys ratios of 0.55, 0.60, 0.65, 0.70, or 0.75. In Exp. 1 (26 to 46 kg), ADG increased (linear, P = 0.039; quadratic, P = 0.042) with an increasing dietary Val:Lys ratio. The SID Val:Lys ratio to maximize ADG was 0.62 using a linear broken-line model and 0.71 using a quadratic model. In Exp. 2 (49 to 70 kg), ADG increased (linear, P = 0.021; quadratic, P = 0.042) as the SID Val:Lys ratio increased. G:F improved (linear, P = 0.039) and serum urea nitrogen (SUN) decreased (linear, P = 0.021; quadratic, P = 0.024) with an increased SID Val:Lys ratio. The SID Val:Lys ratios to maximize ADG as well as to minimize SUN levels were 0.67 and 0.65, respectively, using a linear broken-line model and 0.72 and 0.71, respectively, using a quadratic model. In Exp. 3 (71 to 92 kg), ADG increased (linear, P = 0.007; quadratic, P = 0.022) and SUN decreased (linear, P = 0.011; quadratic, P = 0.034) as the dietary SID Val:Lys ratio increased. The SID Val:Lys ratios to maximize ADG as well as to minimize SUN levels were 0.67 and 0.67, respectively, using a linear broken-line model and 0.72 and 0.74, respectively, using a quadratic model. In Exp. 4 (94 to 119 kg), ADG increased (linear, P = 0.041) and G:F was improved (linear, P = 0.004; quadratic, P = 0.005) as the dietary SID Val:Lys ratio increased. The SID Val:Lys ratio to maximize G:F was 0.68 using a linear broken-line model and 0.72 using a quadratic model. Carcass traits and muscle quality were not influenced by SID Val:Lys ratio. In conclusion, the dietary SID Val:Lys ratios required for 26- to 46-, 49- to 70-, 71- to 92-, and 94- to 119-kg pigs were estimated to be 0.62, 0.66, 0.67, and 0.68, respectively, using a linear broken-line model and 0.71, 0.72, 0.73, and 0.72, respectively, using a quadratic model.
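An illustrative re-fit of the two dose-response forms named above (linear broken-line and quadratic) on made-up ADG means; the breakpoint of the broken-line model plays the role of the estimated SID Val:Lys requirement.

    import numpy as np
    from scipy.optimize import curve_fit

    ratio = np.array([0.55, 0.60, 0.65, 0.70, 0.75])          # SID Val:Lys
    adg = np.array([780.0, 820.0, 850.0, 855.0, 853.0])       # g/d, illustrative means

    def broken_line(x, plateau, slope, brk):
        # rises with the given slope up to the breakpoint, then stays at the plateau
        return np.where(x < brk, plateau - slope * (brk - x), plateau)

    def quadratic(x, a, b, c):
        return a + b * x + c * x ** 2

    p_bl, _ = curve_fit(broken_line, ratio, adg, p0=[855.0, 700.0, 0.65])
    p_q, _ = curve_fit(quadratic, ratio, adg)
    print(f"broken-line requirement: {p_bl[2]:.3f}")
    print(f"quadratic optimum      : {-p_q[1] / (2 * p_q[2]):.3f}")   # vertex of the parabola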
A 1-D model of the nonlinear dynamics of the human lumbar intervertebral disc
NASA Astrophysics Data System (ADS)
Marini, Giacomo; Huber, Gerd; Püschel, Klaus; Ferguson, Stephen J.
2017-01-01
Lumped parameter models of the spine have been developed to investigate its response to whole body vibration. However, these models assume the behaviour of the intervertebral disc to be linear-elastic. Recently, the authors have reported on the nonlinear dynamic behaviour of the human lumbar intervertebral disc. This response was shown to be dependent on the applied preload and amplitude of the stimuli. However, the mechanical properties of a standard linear elastic model are not dependent on the current deformation state of the system. The aim of this study was therefore to develop a model that is able to describe the axial, nonlinear quasi-static response and to predict the nonlinear dynamic characteristics of the disc. The ability to adapt the model to an individual disc's response was a specific focus of the study, with model validation performed against prior experimental data. The influence of the numerical parameters used in the simulations was investigated. The developed model exhibited an axial quasi-static and dynamic response, which agreed well with the corresponding experiments. However, the model needs further improvement to capture additional peculiar characteristics of the system dynamics, such as the change of mean point of oscillation exhibited by the specimens when oscillating in the region of nonlinear resonance. Reference time steps were identified for specific integration scheme. The study has demonstrated that taking into account the nonlinear-elastic behaviour typical of the intervertebral disc results in a predicted system oscillation much closer to the physiological response than that provided by linear-elastic models. For dynamic analysis, the use of standard linear-elastic models should be avoided, or restricted to study cases where the amplitude of the stimuli is relatively small.
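A conceptual sketch only (not the authors' calibrated disc model): a lumped mass on a stiffening cubic spring with viscous damping under harmonic loading, the kind of minimal nonlinear element whose response departs from that of a standard linear-elastic model.

    import numpy as np
    from scipy.integrate import solve_ivp

    m, c, k1, k3 = 40.0, 300.0, 4.0e5, 5.0e9     # kg, N·s/m, N/m, N/m^3 (illustrative)
    F0, freq = 200.0, 8.0                        # forcing amplitude (N) and frequency (Hz)

    def rhs(t, y):
        disp, vel = y
        force = F0 * np.sin(2 * np.pi * freq * t)
        return [vel, (force - c * vel - k1 * disp - k3 * disp ** 3) / m]

    sol = solve_ivp(rhs, (0.0, 5.0), [0.0, 0.0], max_step=1e-3)
    steady = sol.y[0][sol.t > 4.0]               # discard the transient
    print(f"steady-state amplitude ~ {0.5 * (steady.max() - steady.min()) * 1e3:.2f} mm")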
Tutorial on Biostatistics: Linear Regression Analysis of Continuous Correlated Eye Data.
Ying, Gui-Shuang; Maguire, Maureen G; Glynn, Robert; Rosner, Bernard
2017-04-01
To describe and demonstrate appropriate linear regression methods for analyzing correlated continuous eye data. We describe several approaches to regression analysis involving both eyes, including mixed effects and marginal models under various covariance structures to account for inter-eye correlation. We demonstrate, with SAS statistical software, applications in a study comparing baseline refractive error between one eye with choroidal neovascularization (CNV) and the unaffected fellow eye, and in a study determining factors associated with visual field in the elderly. When refractive error from both eyes was analyzed with standard linear regression without accounting for inter-eye correlation (adjusting for demographic and ocular covariates), the difference between eyes with CNV and fellow eyes was 0.15 diopters (D; 95% confidence interval, CI -0.03 to 0.32D, p = 0.10). Using a mixed effects model or a marginal model, the estimated difference was the same but with narrower 95% CI (0.01 to 0.28D, p = 0.03). Standard regression for visual field data from both eyes provided biased estimates of standard error (generally underestimated) and smaller p-values, while analysis of the worse eye provided larger p-values than mixed effects models and marginal models. In research involving both eyes, ignoring inter-eye correlation can lead to invalid inferences. Analysis using only right or left eyes is valid, but decreases power. Worse-eye analysis can provide less power and biased estimates of effect. Mixed effects or marginal models using the eye as the unit of analysis should be used to appropriately account for inter-eye correlation and maximize power and precision.
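A hedged sketch in Python/statsmodels (synthetic data; the tutorial itself demonstrates SAS): a marginal model with an exchangeable working correlation lets both eyes of each subject enter the analysis without ignoring inter-eye correlation.

    import numpy as np
    import pandas as pd
    import statsmodels.api as sm
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(8)
    rows = []
    for i in range(150):                                   # 150 subjects, two eyes each
        subj = rng.normal(scale=1.0)                       # component shared between fellow eyes
        for cnv in (1, 0):                                 # one CNV eye, one fellow eye
            rows.append({"subject": i, "cnv": cnv,
                         "refraction": 0.15 * cnv + subj + rng.normal(scale=0.8)})
    df = pd.DataFrame(rows)

    # marginal (GEE) model with exchangeable working correlation across the two eyes
    gee = smf.gee("refraction ~ cnv", groups="subject", data=df,
                  cov_struct=sm.cov_struct.Exchangeable()).fit()
    print(gee.summary())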
NASA Astrophysics Data System (ADS)
McCaskill, John
There can be large spatial and temporal separation of cause and effect in policy making. Determining the correct linkage between policy inputs and outcomes can be highly impractical in the complex environments faced by policy makers. In attempting to see and plan for the probable outcomes, standard linear models often overlook, ignore, or are unable to predict catastrophic events that only seem improbable due to the issue of multiple feedback loops. There are several issues with the makeup and behaviors of complex systems that explain the difficulty many mathematical models (factor analysis/structural equation modeling) have in dealing with non-linear effects in complex systems. This chapter highlights those problem issues and offers insights to the usefulness of ABM in dealing with non-linear effects in complex policy making environments.
Modelling non-linear effects of dark energy
NASA Astrophysics Data System (ADS)
Bose, Benjamin; Baldi, Marco; Pourtsidou, Alkistis
2018-04-01
We investigate the capabilities of perturbation theory in capturing non-linear effects of dark energy. We test constant and evolving w models, as well as models involving momentum exchange between dark energy and dark matter. Specifically, we compare perturbative predictions at 1-loop level against N-body results for four non-standard equations of state as well as varying degrees of momentum exchange between dark energy and dark matter. The interaction is modelled phenomenologically using a time dependent drag term in the Euler equation. We make comparisons at the level of the matter power spectrum and the redshift space monopole and quadrupole. The multipoles are modelled using the Taruya, Nishimichi and Saito (TNS) redshift space spectrum. We find perturbation theory does very well in capturing non-linear effects coming from dark sector interaction. We isolate and quantify the 1-loop contribution coming from the interaction and from the non-standard equation of state. We find the interaction parameter ξ amplifies scale dependent signatures in the range of scales considered. Non-standard equations of state also give scale dependent signatures within this same regime. In redshift space the match with N-body is improved at smaller scales by the addition of the TNS free parameter σv. To quantify the importance of modelling the interaction, we create mock data sets for varying values of ξ using perturbation theory. This data is given errors typical of Stage IV surveys. We then perform a likelihood analysis using the first two multipoles on these sets and a ξ=0 modelling, ignoring the interaction. We find the fiducial growth parameter f is generally recovered even for very large values of ξ both at z=0.5 and z=1. The ξ=0 modelling is most biased in its estimation of f for the phantom w=‑1.1 case.
NASA Astrophysics Data System (ADS)
Abbondanza, Claudio; Altamimi, Zuheir; Chin, Toshio; Collilieux, Xavier; Dach, Rolf; Gross, Richard; Heflin, Michael; König, Rolf; Lemoine, Frank; Macmillan, Dan; Parker, Jay; van Dam, Tonie; Wu, Xiaoping
2014-05-01
The International Terrestrial Reference Frame (ITRF) adopts a piece-wise linear model to parameterize regularized station positions and velocities. The space-geodetic (SG) solutions from VLBI, SLR, GPS and DORIS used as input in the ITRF combination process account for tidal loading deformations, but ignore the non-tidal part. As a result, the non-linear signal observed in the time series of SG-derived station positions in part reflects non-tidal loading displacements not introduced in the SG data reduction. In this analysis, we assess the impact of non-tidal atmospheric loading (NTAL) corrections on the TRF computation. Focusing on the a-posteriori approach, (i) the NTAL model derived from the National Centre for Environmental Prediction (NCEP) surface pressure is removed from the SINEX files of the SG solutions used as inputs to the TRF determinations; (ii) adopting a Kalman-filter based approach, two distinct linear TRFs are estimated combining the 4 SG solutions with (corrected TRF solution) and without the NTAL displacements (standard TRF solution). Linear fits (offset and atmospheric velocity) of the NTAL displacements removed during step (i) are estimated accounting for the station position discontinuities introduced in the SG solutions and adopting different weighting strategies. The NTAL-derived (atmospheric) velocity fields are compared to those obtained from the TRF reductions during step (ii). The consistency between the atmospheric and the TRF-derived velocity fields is examined. We show how the presence of station position discontinuities in SG solutions degrades the agreement between the velocity fields and compare the effect of different weighting structure adopted while estimating the linear fits to the NTAL displacements. Finally, we evaluate the effect of restoring the atmospheric velocities determined through the linear fits of the NTAL displacements to the single-technique linear reference frames obtained by stacking the standard SG SINEX files. Differences between the velocity fields obtained restoring the NTAL displacements and the standard stacked linear reference frames are discussed.
Solares, Santiago D
2014-01-01
This paper presents computational simulations of single-mode and bimodal atomic force microscopy (AFM) with particular focus on the viscoelastic interactions occurring during tip-sample impact. The surface is modeled by using a standard linear solid model, which is the simplest system that can reproduce creep compliance and stress relaxation, which are fundamental behaviors exhibited by viscoelastic surfaces. The relaxation of the surface in combination with the complexities of bimodal tip-sample impacts gives rise to unique dynamic behaviors that have important consequences with regards to the acquisition of quantitative relationships between the sample properties and the AFM observables. The physics of the tip-sample interactions and its effect on the observables are illustrated and discussed, and a brief research outlook on viscoelasticity measurement with intermittent-contact AFM is provided.
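A minimal numerical sketch of the standard linear solid (Zener) element, assumed here to consist of a spring E1 in parallel with a Maxwell arm (spring E2 plus dashpot eta), showing the two behaviours mentioned above, stress relaxation and creep compliance.

    import numpy as np

    E1, E2, eta = 1.0, 2.0, 0.5                   # moduli and viscosity (arbitrary units)
    t = np.linspace(0.0, 3.0, 7)

    tau_r = eta / E2                                             # relaxation time (fixed strain)
    G = E1 + E2 * np.exp(-t / tau_r)                             # relaxation modulus G(t)

    tau_c = eta * (E1 + E2) / (E1 * E2)                          # retardation time (fixed stress)
    J = 1.0 / E1 - (E2 / (E1 * (E1 + E2))) * np.exp(-t / tau_c)  # creep compliance J(t)

    print("G(t):", G.round(3), "-> relaxes from", E1 + E2, "to", E1)
    print("J(t):", J.round(3), "-> creeps from", round(1 / (E1 + E2), 3), "to", 1 / E1)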
Credibility analysis of risk classes by generalized linear model
NASA Astrophysics Data System (ADS)
Erdemir, Ovgucan Karadag; Sucu, Meral
2016-06-01
In this paper, the generalized linear model (GLM) and credibility theory, which are frequently used in non-life insurance pricing, are combined for credibility analysis. Using the full credibility standard, the GLM is associated with the limited fluctuation credibility approach. Comparison criteria such as asymptotic variance and credibility probability are used to analyze the credibility of risk classes. An application is performed using one-year claim frequency data from a Turkish insurance company, and the results for the credible risk classes are interpreted.
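A small worked example of the limited-fluctuation full credibility standard referred to above, for Poisson claim frequencies; the probability and tolerance values below are illustrative, not those used in the paper.

    from scipy.stats import norm

    p, k = 0.90, 0.05                      # coverage probability and tolerance of the standard
    z = norm.ppf((1 + p) / 2)
    lambda_full = (z / k) ** 2             # expected claims needed for full credibility (~1082)
    print(f"full credibility standard: {lambda_full:.0f} expected claims")

    n_observed = 600                       # expected claims in a given risk class
    Z = min(1.0, (n_observed / lambda_full) ** 0.5)   # partial (square-root rule) credibility
    print(f"credibility factor Z = {Z:.2f}")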
Physics at the e⁺e⁻ linear collider.
Moortgat-Pick, G; Baer, H; Battaglia, M; Belanger, G; Fujii, K; Kalinowski, J; Heinemeyer, S; Kiyo, Y; Olive, K; Simon, F; Uwer, P; Wackeroth, D; Zerwas, P M; Arbey, A; Asano, M; Bagger, J; Bechtle, P; Bharucha, A; Brau, J; Brümmer, F; Choi, S Y; Denner, A; Desch, K; Dittmaier, S; Ellwanger, U; Englert, C; Freitas, A; Ginzburg, I; Godfrey, S; Greiner, N; Grojean, C; Grünewald, M; Heisig, J; Höcker, A; Kanemura, S; Kawagoe, K; Kogler, R; Krawczyk, M; Kronfeld, A S; Kroseberg, J; Liebler, S; List, J; Mahmoudi, F; Mambrini, Y; Matsumoto, S; Mnich, J; Mönig, K; Mühlleitner, M M; Pöschl, R; Porod, W; Porto, S; Rolbiecki, K; Schmitt, M; Serpico, P; Stanitzki, M; Stål, O; Stefaniak, T; Stöckinger, D; Weiglein, G; Wilson, G W; Zeune, L; Moortgat, F; Xella, S; Bagger, J; Brau, J; Ellis, J; Kawagoe, K; Komamiya, S; Kronfeld, A S; Mnich, J; Peskin, M; Schlatter, D; Wagner, A; Yamamoto, H
A comprehensive review of physics at an e⁺e⁻ linear collider in the energy range of √s = 92 GeV–3 TeV is presented in view of recent and expected LHC results, experiments from low-energy as well as astroparticle physics. The report focuses in particular on Higgs-boson, top-quark and electroweak precision physics, but also discusses several models of beyond the standard model physics such as supersymmetry, little Higgs models and extra gauge bosons. The connection to cosmology has been analysed as well.
Appraisal of jump distributions in ensemble-based sampling algorithms
NASA Astrophysics Data System (ADS)
Dejanic, Sanda; Scheidegger, Andreas; Rieckermann, Jörg; Albert, Carlo
2017-04-01
Sampling Bayesian posteriors of model parameters is often required for making model-based probabilistic predictions. For complex environmental models, standard Monte Carlo Markov Chain (MCMC) methods are often infeasible because they require too many sequential model runs. Therefore, we focused on ensemble methods that use many Markov chains in parallel, since they can be run on modern cluster architectures. Little is known about how to choose the best performing sampler, for a given application. A poor choice can lead to an inappropriate representation of posterior knowledge. We assessed two different jump moves, the stretch and the differential evolution move, underlying, respectively, the software packages EMCEE and DREAM, which are popular in different scientific communities. For the assessment, we used analytical posteriors with features as they often occur in real posteriors, namely high dimensionality, strong non-linear correlations or multimodality. For posteriors with non-linear features, standard convergence diagnostics based on sample means can be insufficient. Therefore, we resorted to an entropy-based convergence measure. We assessed the samplers by means of their convergence speed, robustness and effective sample sizes. For posteriors with strongly non-linear features, we found that the stretch move outperforms the differential evolution move, w.r.t. all three aspects.
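A hedged usage sketch of the two jump moves compared above, using emcee 3's StretchMove and DEMove on a deliberately correlated Gaussian target; the target density and tuning values are illustrative.

    import numpy as np
    import emcee

    ndim, nwalkers, nsteps = 5, 64, 2000
    cov = 0.5 * np.ones((ndim, ndim)) + 0.5 * np.eye(ndim)   # strongly correlated target
    icov = np.linalg.inv(cov)
    log_prob = lambda x: -0.5 * x @ icov @ x

    p0 = np.random.default_rng(9).normal(size=(nwalkers, ndim))
    for name, move in [("stretch", emcee.moves.StretchMove()),
                       ("diff-ev", emcee.moves.DEMove())]:
        sampler = emcee.EnsembleSampler(nwalkers, ndim, log_prob, moves=move)
        sampler.run_mcmc(p0, nsteps, progress=False)
        tau = sampler.get_autocorr_time(tol=0).mean()
        print(f"{name}: acceptance {sampler.acceptance_fraction.mean():.2f}, "
              f"autocorrelation time ~ {tau:.0f} steps")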
Some Questions Concerning the Standards of External Examinations.
ERIC Educational Resources Information Center
Kahn, Michael J.
1990-01-01
Variance as a function of time is described for the Cambridge Local Examinations Syndicate's examination standards, with emphasis on the performance of candidates from Botswana and Zimbabwe. Results demonstrate the value of simple linear modeling in extracting performance trends for a range of subjects over time across six countries. (TJH)
NASA Astrophysics Data System (ADS)
DeGrandchamp, Joseph B.; Whisenant, Jennifer G.; Arlinghaus, Lori R.; Abramson, V. G.; Yankeelov, Thomas E.; Cárdenas-Rodríguez, Julio
2016-03-01
The pharmacokinetic parameters derived from dynamic contrast enhanced (DCE) MRI have shown promise as biomarkers for tumor response to therapy. However, standard methods of analyzing DCE MRI data (Tofts model) require high temporal resolution, high signal-to-noise ratio (SNR), and the Arterial Input Function (AIF). Such models produce reliable biomarkers of response only when a therapy has a large effect on the parameters. We recently reported a method that solves the limitations, the Linear Reference Region Model (LRRM). Similar to other reference region models, the LRRM needs no AIF. Additionally, the LRRM is more accurate and precise than standard methods at low SNR and slow temporal resolution, suggesting LRRM-derived biomarkers could be better predictors. Here, the LRRM, Non-linear Reference Region Model (NRRM), Linear Tofts model (LTM), and Non-linear Tofts Model (NLTM) were used to estimate the RKtrans between muscle and tumor (or the Ktrans for Tofts) and the tumor kep,TOI for 39 breast cancer patients who received neoadjuvant chemotherapy (NAC). These parameters and the receptor statuses of each patient were used to construct cross-validated predictive models to classify patients as complete pathological responders (pCR) or non-complete pathological responders (non-pCR) to NAC. Model performance was evaluated using area under the ROC curve (AUC). The AUC for receptor status alone was 0.62, while the best performance using predictors from the LRRM, NRRM, LTM, and NLTM were AUCs of 0.79, 0.55, 0.60, and 0.59 respectively. This suggests that the LRRM can be used to predict response to NAC in breast cancer.
Emergent Modelling: From Traditional Indonesian Games to a Standard Unit of Measurement
ERIC Educational Resources Information Center
Wijaya, Ariyadi; Doorman, L. Michiel; Keijze, Ronald
2011-01-01
In this paper, we describe the way in which traditional Indonesian games can support the learning of linear measurement. Previous research has revealed that young children tend to perform measurement as an instrumental procedure. This tendency may be due to the way in which linear measurement has been taught as an isolated concept, which is…
Vascular mechanics of the coronary artery
NASA Technical Reports Server (NTRS)
Veress, A. I.; Vince, D. G.; Anderson, P. M.; Cornhill, J. F.; Herderick, E. E.; Klingensmith, J. D.; Kuban, B. D.; Greenberg, N. L.; Thomas, J. D.
2000-01-01
This paper describes our research into the vascular mechanics of the coronary artery and plaque. The three sections describe the determination of arterial mechanical properties using intravascular ultrasound (IVUS), a constitutive relation for the arterial wall, and finite element method (FEM) models of the arterial wall and atheroma. METHODS: Inflation testing of porcine left anterior descending coronary arteries was conducted. The changes in the vessel geometry were monitored using IVUS, and intracoronary pressure was recorded using a pressure transducer. The creep and quasistatic stress/strain responses were determined. A Standard Linear Solid (SLS) was modified to reproduce the non-linear elastic behavior of the arterial wall. This Standard Non-linear Solid (SNS) was implemented into an axisymetric thick-walled cylinder numerical model. Finite element analysis models were created for five age groups and four levels of stenosis using the Pathobiological Determinants of Atherosclerosis Youth (PDAY) database. RESULTS: The arteries exhibited non-linear elastic behavior. The total tissue creep strain was epsilon creep = 0.082 +/- 0.018 mm/mm. The numerical model could reproduce both the non-linearity of the porcine data and time dependent behavior of the arterial wall found in the literature with a correlation coefficient of 0.985. Increasing age had a strong positive correlation with the shoulder stress level, (r = 0.95). The 30% stenosis had the highest shoulder stress due to the combination of a fully formed lipid pool and a thin cap. CONCLUSIONS: Studying the solid mechanics of the arterial wall and the atheroma provide important insights into the mechanisms involved in plaque rupture.
Modeling of Heat Transfer in Rooms in the Modelica "Buildings" Library
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wetter, Michael; Zuo, Wangda; Nouidui, Thierry Stephane
This paper describes the implementation of the room heat transfer model in the free open-source Modelica "Buildings" library. The model can be used as a single room or to compose a multizone building model. We discuss how the model is decomposed into submodels for the individual heat transfer phenomena. We also discuss the main physical assumptions. The room model can be parameterized to use different modeling assumptions, leading to linear or non-linear differential algebraic systems of equations. We present numerical experiments that show how these assumptions affect computing time and accuracy for selected cases of the ANSI/ASHRAE Standard 140-2007 envelope validation tests.
Model predictive control of P-time event graphs
NASA Astrophysics Data System (ADS)
Hamri, H.; Kara, R.; Amari, S.
2016-12-01
This paper deals with model predictive control of discrete event systems modelled by P-time event graphs. First, the model is obtained by writing the dater evolution model in standard algebra. Then, finite-horizon model predictive control is used for the control law, and infinite-horizon model predictive control (IH-MPC) is used for the closed-loop control. The latter is an approach that calculates static feedback gains which ensure the stability of the closed-loop system while respecting the constraints on the control vector. The IH-MPC problem is formulated as a convex linear program subject to linear matrix inequality constraints. Finally, the proposed methodology is applied to a transportation system.
Iterative algorithms for a non-linear inverse problem in atmospheric lidar
NASA Astrophysics Data System (ADS)
Denevi, Giulia; Garbarino, Sara; Sorrentino, Alberto
2017-08-01
We consider the inverse problem of retrieving aerosol extinction coefficients from Raman lidar measurements. In this problem the unknown and the data are related through the exponential of a linear operator, the unknown is non-negative and the data follow the Poisson distribution. Standard methods work on the log-transformed data and solve the resulting linear inverse problem, but neglect to take into account the noise statistics. In this study we show that proper modelling of the noise distribution can improve substantially the quality of the reconstructed extinction profiles. To achieve this goal, we consider the non-linear inverse problem with non-negativity constraint, and propose two iterative algorithms derived using the Karush-Kuhn-Tucker conditions. We validate the algorithms with synthetic and experimental data. As expected, the proposed algorithms out-perform standard methods in terms of sensitivity to noise and reliability of the estimated profile.
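Schematically, the forward model considered above can be written as

y_i \sim \mathrm{Poisson}\!\left( c_i \, e^{-(A\alpha)_i} \right), \qquad \alpha \ge 0,

where \alpha is the discretized aerosol extinction profile, A a linear path-integration operator and the c_i calibration/background factors (the c_i are an assumption of this sketch, lumping together instrumental constants not spelled out in the abstract). The standard approach fits the linearized data -\log(y_i / c_i) \approx (A\alpha)_i by least squares, ignoring the Poisson statistics; the algorithms of the paper instead minimize the Poisson negative log-likelihood subject to \alpha \ge 0, with iterations derived from the Karush-Kuhn-Tucker conditions.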
Galactic chemical evolution and nucleocosmochronology - Standard model with terminated infall
NASA Technical Reports Server (NTRS)
Clayton, D. D.
1984-01-01
Some exactly soluble families of models for the chemical evolution of the Galaxy are presented. The parameters considered include gas mass, the age-metallicity relation, the star mass vs. metallicity, the age distribution, and the mean age of dwarfs. A short BASIC program for calculating these parameters is given. The calculation of metallicity gradients, nuclear cosmochronology, and extinct radioactivities is addressed. An especially simple, mathematically linear model is recommended as a standard model of galaxies with truncated infall due to its internal consistency and compact display of the physical effects of the parameters.
Study on Standard Fatigue Vehicle Load Model
NASA Astrophysics Data System (ADS)
Huang, H. Y.; Zhang, J. P.; Li, Y. H.
2018-02-01
Based on measured truck data from three arterial expressways in Guangdong Province, a statistical analysis of truck weight was conducted according to axle number. A standard fatigue vehicle model applicable to areas in the middle and late stages of industrialization was obtained using the equivalent-damage principle, Miner's linear damage accumulation law, the water discharge method and damage ratio theory. Compared with the fatigue vehicle model specified by the current bridge design code, the proposed model has better applicability. It is of reference value for the fatigue design of bridges in China.
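For context, the equivalent-damage derivation mentioned above typically combines Miner's linear damage accumulation rule with a damage-equivalent vehicle (or axle) weight,

D = \sum_i \frac{n_i}{N_i}, \qquad W_{\mathrm{eq}} = \left( \frac{\sum_i f_i W_i^m}{\sum_i f_i} \right)^{1/m},

where n_i is the number of applied cycles at stress range i, N_i the corresponding fatigue life, f_i the observed frequency of vehicles of weight W_i, and m the slope of the S-N curve; the value of m (often taken near 3 for welded steel details) is an assumption of this note and may differ from the value adopted in the paper.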
Dynamical properties of maps fitted to data in the noise-free limit
Lindström, Torsten
2013-01-01
We argue that any attempt to classify dynamical properties from nonlinear finite time-series data requires a mechanistic model fitting the data better than piecewise linear models according to standard model selection criteria. Such a procedure seems necessary but still not sufficient. PMID:23768079
Proposing an Educational Scaling-and-Diffusion Model for Inquiry-Based Learning Designs
ERIC Educational Resources Information Center
Hung, David; Lee, Shu-Shing
2015-01-01
Education cannot adopt the linear model of scaling used by the medical sciences. "Gold standards" cannot be replicated without considering process-in-learning, diversity, and student-variedness in classrooms. This article proposes a nuanced model of educational scaling-and-diffusion, describing the scaling (top-down supports) and…
Seaman, Shaun R; White, Ian R; Carpenter, James R
2015-01-01
Missing covariate data commonly occur in epidemiological and clinical research, and are often dealt with using multiple imputation. Imputation of partially observed covariates is complicated if the substantive model is non-linear (e.g. Cox proportional hazards model), or contains non-linear (e.g. squared) or interaction terms, and standard software implementations of multiple imputation may impute covariates from models that are incompatible with such substantive models. We show how imputation by fully conditional specification, a popular approach for performing multiple imputation, can be modified so that covariates are imputed from models which are compatible with the substantive model. We investigate through simulation the performance of this proposal, and compare it with existing approaches. Simulation results suggest our proposal gives consistent estimates for a range of common substantive models, including models which contain non-linear covariate effects or interactions, provided data are missing at random and the assumed imputation models are correctly specified and mutually compatible. Stata software implementing the approach is freely available. PMID:24525487
Bayesian Correction for Misclassification in Multilevel Count Data Models.
Nelson, Tyler; Song, Joon Jin; Chin, Yoo-Mi; Stamey, James D
2018-01-01
Covariate misclassification is well known to yield biased estimates in single level regression models. The impact on hierarchical count models has been less studied. A fully Bayesian approach to modeling both the misclassified covariate and the hierarchical response is proposed. Models with a single diagnostic test and with multiple diagnostic tests are considered. Simulation studies show the ability of the proposed model to appropriately account for the misclassification by reducing bias and improving performance of interval estimators. A real data example further demonstrated the consequences of ignoring the misclassification. Ignoring misclassification yielded a model that indicated there was a significant, positive impact on the number of children of females who observed spousal abuse between their parents. When the misclassification was accounted for, the relationship switched to negative, but not significant. Ignoring misclassification in standard linear and generalized linear models is well known to lead to biased results. We provide an approach to extend misclassification modeling to the important area of hierarchical generalized linear models.
Coarse-grained description of cosmic structure from Szekeres models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sussman, Roberto A.; Gaspar, I. Delgado; Hidalgo, Juan Carlos, E-mail: sussman@nucleares.unam.mx, E-mail: ismael.delgadog@uaem.edu.mx, E-mail: hidalgo@fis.unam.mx
2016-03-01
We show that the full dynamical freedom of the well known Szekeres models allows for the description of elaborated 3-dimensional networks of cold dark matter structures (over-densities and/or density voids) undergoing "pancake" collapse. By reducing Einstein's field equations to a set of evolution equations, which themselves reduce in the linear limit to evolution equations for linear perturbations, we determine the dynamics of such structures, with the spatial comoving location of each structure uniquely specified by standard early Universe initial conditions. By means of a representative example we examine in detail the density contrast, the Hubble flow and peculiar velocities of structures that evolved, from linear initial data at the last scattering surface, to fully non-linear 10–20 Mpc scale configurations today. To motivate further research, we provide a qualitative discussion on the connection of Szekeres models with linear perturbations and the pancake collapse of the Zeldovich approximation. This type of structure modelling provides a coarse-grained (but fully relativistic, non-linear and non-perturbative) description of evolving large scale cosmic structures before their virialisation, and as such it has an enormous potential for applications in cosmological research.
Nonstandard neutrino self-interactions in a supernova and fast flavor conversions
NASA Astrophysics Data System (ADS)
Dighe, Amol; Sen, Manibrata
2018-02-01
We study the effects of nonstandard self-interactions (NSSI) of neutrinos streaming out of a core-collapse supernova. We show that with NSSI, the standard linear stability analysis gives rise to linearly as well as exponentially growing solutions. For a two-box spectrum, we demonstrate analytically that flavor-preserving NSSI lead to a suppression of bipolar collective oscillations. In the intersecting four-beam model, we show that flavor-violating NSSI can lead to fast oscillations even when the angle between the neutrino and antineutrino beams is obtuse, which is forbidden in the standard model. This leads to the new possibility of fast oscillations in a two-beam system with opposing neutrino-antineutrino fluxes, even in the absence of any spatial inhomogeneities. Finally, we solve the full nonlinear equations of motion in the four-beam model numerically, and explore the interplay of fast and slow flavor conversions in the long-time behavior, in the presence of NSSI.
2014-01-01
This paper presents computational simulations of single-mode and bimodal atomic force microscopy (AFM) with particular focus on the viscoelastic interactions occurring during tip–sample impact. The surface is modeled by using a standard linear solid model, which is the simplest system that can reproduce creep compliance and stress relaxation, which are fundamental behaviors exhibited by viscoelastic surfaces. The relaxation of the surface in combination with the complexities of bimodal tip–sample impacts gives rise to unique dynamic behaviors that have important consequences with regards to the acquisition of quantitative relationships between the sample properties and the AFM observables. The physics of the tip–sample interactions and its effect on the observables are illustrated and discussed, and a brief research outlook on viscoelasticity measurement with intermittent-contact AFM is provided. PMID:25383277
ERIC Educational Resources Information Center
Fulmer, Gavin W.; Polikoff, Morgan S.
2014-01-01
An essential component in school accountability efforts is for assessments to be well-aligned with the standards or curriculum they are intended to measure. However, relatively little prior research has explored methods to determine statistical significance of alignment or misalignment. This study explores analyses of alignment as a special case…
The Use of Structure Coefficients to Address Multicollinearity in Sport and Exercise Science
ERIC Educational Resources Information Center
Yeatts, Paul E.; Barton, Mitch; Henson, Robin K.; Martin, Scott B.
2017-01-01
A common practice in general linear model (GLM) analyses is to interpret regression coefficients (e.g., standardized β weights) as indicators of variable importance. However, focusing solely on standardized beta weights may provide limited or erroneous information. For example, β weights become increasingly unreliable when predictor variables are…
NASA Astrophysics Data System (ADS)
Albert, Carlo; Ulzega, Simone; Stoop, Ruedi
2016-04-01
Measured time-series of both precipitation and runoff are known to exhibit highly non-trivial statistical properties. For making reliable probabilistic predictions in hydrology, it is therefore desirable to have stochastic models with output distributions that share these properties. When parameters of such models have to be inferred from data, we also need to quantify the associated parametric uncertainty. For non-trivial stochastic models, however, this latter step is typically very demanding, both conceptually and numerically, and almost never done in hydrology. Here, we demonstrate that methods developed in statistical physics make a large class of stochastic differential equation (SDE) models amenable to a full-fledged Bayesian parameter inference. For concreteness we demonstrate these methods by means of a simple yet non-trivial toy SDE model. We consider a natural catchment that can be described by a linear reservoir, at the scale of observation. All the neglected processes are assumed to happen at much shorter time-scales and are therefore modeled with a Gaussian white noise term, the standard deviation of which is assumed to scale linearly with the system state (water volume in the catchment). Even for constant input, the outputs of this simple non-linear SDE model show a wealth of desirable statistical properties, such as fat-tailed distributions and long-range correlations. Standard algorithms for Bayesian inference fail, for models of this kind, because their likelihood functions are extremely high-dimensional intractable integrals over all possible model realizations. The use of Kalman filters is illegitimate due to the non-linearity of the model. Particle filters could be used but become increasingly inefficient with a growing number of data points. Hamiltonian Monte Carlo algorithms allow us to translate this inference problem to the problem of simulating the dynamics of a statistical mechanics system and give us access to the most sophisticated methods that have been developed in the statistical physics community over the last few decades. We demonstrate that such methods, along with automated differentiation algorithms, allow us to perform a full-fledged Bayesian inference, for a large class of SDE models, in a highly efficient and largely automated manner. Furthermore, our algorithm is highly parallelizable. For our toy model, discretized with a few hundred points, a full Bayesian inference can be performed in a matter of seconds on a standard PC.
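As a concrete illustration of the toy model described above, the sketch below simulates a linear reservoir driven by constant input with state-proportional (multiplicative) noise using an Euler-Maruyama scheme; parameter names and values are illustrative assumptions, not those of the study, and the Hamiltonian Monte Carlo inference machinery itself is not reproduced here.

# Euler-Maruyama simulation of dV = (r - k*V) dt + beta*V dW: a linear reservoir
# with constant input r, discharge coefficient k, and noise whose standard
# deviation scales linearly with the stored volume V. Illustrative values only.
import numpy as np

def simulate_reservoir(r=1.0, k=0.1, beta=0.2, v0=10.0, dt=0.01, n=20000, seed=0):
    rng = np.random.default_rng(seed)
    v = np.empty(n)
    v[0] = v0
    for i in range(1, n):
        dw = rng.normal(0.0, np.sqrt(dt))
        v[i] = v[i - 1] + (r - k * v[i - 1]) * dt + beta * v[i - 1] * dw
        v[i] = max(v[i], 0.0)          # water volume cannot become negative
    return v

v = simulate_reservoir()
runoff = 0.1 * v                        # linear reservoir: runoff proportional to storage
print(runoff.mean(), runoff.std())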
DOE Office of Scientific and Technical Information (OSTI.GOV)
Abe, T.; et al.
This Resource Book reviews the physics opportunities of a next-generation e+e- linear collider and discusses options for the experimental program. Part 3 reviews the possible experiments that can be done at a linear collider on strongly coupled electroweak symmetry breaking, exotic particles, and extra dimensions, and on the top quark, QCD, and two-photon physics. It also discusses the improved precision electroweak measurements that this collider will make available.
Tutorial on Biostatistics: Linear Regression Analysis of Continuous Correlated Eye Data
Ying, Gui-shuang; Maguire, Maureen G; Glynn, Robert; Rosner, Bernard
2017-01-01
Purpose To describe and demonstrate appropriate linear regression methods for analyzing correlated continuous eye data. Methods We describe several approaches to regression analysis involving both eyes, including mixed effects and marginal models under various covariance structures to account for inter-eye correlation. We demonstrate, with SAS statistical software, applications in a study comparing baseline refractive error between one eye with choroidal neovascularization (CNV) and the unaffected fellow eye, and in a study determining factors associated with visual field data in the elderly. Results When refractive error from both eyes was analyzed with standard linear regression without accounting for inter-eye correlation (adjusting for demographic and ocular covariates), the difference between eyes with CNV and fellow eyes was 0.15 diopters (D; 95% confidence interval, CI −0.03 to 0.32D, P=0.10). Using a mixed effects model or a marginal model, the estimated difference was the same but with narrower 95% CI (0.01 to 0.28D, P=0.03). Standard regression for visual field data from both eyes provided biased estimates of standard error (generally underestimated) and smaller P-values, while analysis of the worse eye provided larger P-values than mixed effects models and marginal models. Conclusion In research involving both eyes, ignoring inter-eye correlation can lead to invalid inferences. Analysis using only right or left eyes is valid, but decreases power. Worse-eye analysis can provide less power and biased estimates of effect. Mixed effects or marginal models using the eye as the unit of analysis should be used to appropriately account for inter-eye correlation and maximize power and precision. PMID:28102741
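A minimal Python/statsmodels analogue of the mixed-effects approach recommended above is sketched below (the paper itself uses SAS); the data frame, coefficients and column names are synthetic assumptions used only to show how the patient is treated as the grouping factor.

# Mixed-effects model with the patient as the grouping factor, so that the two
# eyes of one patient share a random intercept (accounting for inter-eye
# correlation). Synthetic data; column names are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_patients = 30
df = pd.DataFrame({
    "patient_id": np.repeat(np.arange(n_patients), 2),
    "cnv": np.tile([1, 0], n_patients),                        # 1 = eye with CNV
    "age": np.repeat(rng.normal(75, 5, n_patients), 2),
})
patient_effect = np.repeat(rng.normal(0, 0.5, n_patients), 2)  # induces inter-eye correlation
df["refraction"] = 0.15 * df["cnv"] + 0.01 * df["age"] + patient_effect \
                   + rng.normal(0, 0.5, 2 * n_patients)

fit = smf.mixedlm("refraction ~ cnv + age", data=df, groups=df["patient_id"]).fit()
print(fit.summary())   # the cnv coefficient estimates the between-eye difference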
NASA Technical Reports Server (NTRS)
Scargle, Jeffrey D.
1990-01-01
While chaos arises only in nonlinear systems, standard linear time series models are nevertheless useful for analyzing data from chaotic processes. This paper introduces such a model, the chaotic moving average. This time-domain model is based on the theorem that any chaotic process can be represented as the convolution of a linear filter with an uncorrelated process called the chaotic innovation. A technique, minimum phase-volume deconvolution, is introduced to estimate the filter and innovation. The algorithm measures the quality of a model using the volume covered by the phase-portrait of the innovation process. Experiments on synthetic data demonstrate that the algorithm accurately recovers the parameters of simple chaotic processes. Though tailored for chaos, the algorithm can detect both chaos and randomness, distinguish them from each other, and separate them if both are present. It can also recover nonminimum-delay pulse shapes in non-Gaussian processes, both random and chaotic.
Functional Mixed Effects Model for Small Area Estimation.
Maiti, Tapabrata; Sinha, Samiran; Zhong, Ping-Shou
2016-09-01
Functional data analysis has become an important area of research due to its ability to handle high-dimensional and complex data structures. However, the development is limited in the context of linear mixed effect models, and in particular, for small area estimation. The linear mixed effect models are the backbone of small area estimation. In this article, we consider area level data, and fit a varying coefficient linear mixed effect model where the varying coefficients are semi-parametrically modeled via B-splines. We propose a method of estimating the fixed effect parameters and consider prediction of random effects that can be implemented using standard software. For measuring prediction uncertainties, we derive an analytical expression for the mean squared errors, and propose a method of estimating the mean squared errors. The procedure is illustrated via a real data example, and operating characteristics of the method are judged using finite sample simulation studies.
Xu, Enhua; Ten-No, Seiichiro L
2018-06-05
Partially linearized external models to active-space coupled-cluster through hextuple excitations, for example, CC{SDtqph}L, CCSD{tqph}L, and CCSD{tqph}hyb, are implemented and compared with the full active-space CCSDtqph. The computational scaling of CCSDtqph coincides with that for the standard coupled-cluster singles and doubles (CCSD), yet with a much larger prefactor. The approximate schemes to linearize the external excitations higher than doubles are significantly cheaper than the full CCSDtqph model. These models are applied to investigate the bond dissociation energies of diatomic molecules (HF, F2, CuH, and CuF), and the potential energy surfaces of the bond dissociation processes of HF, CuH, H2O, and C2H4. Among the approximate models, CCSD{tqph}hyb provides very accurate descriptions compared with CCSDtqph for all of the tested systems. © 2018 Wiley Periodicals, Inc.
Applications of nonlinear systems theory to control design
NASA Technical Reports Server (NTRS)
Hunt, L. R.; Villarreal, Ramiro
1988-01-01
For most applications in the control area, the standard practice is to approximate a nonlinear mathematical model by a linear system. Since the feedback linearizable systems contain linear systems as a subclass, the procedure of approximating a nonlinear system by a feedback linearizable one is examined. Because many physical plants (e.g., aircraft at the NASA Ames Research Center) have mathematical models which are close to feedback linearizable systems, such approximations are certainly justified. Results and techniques are introduced for measuring the gap between the model and its truncated linearizable part. The topic of pure feedback systems is important to the study.
Kilian, Reinhold; Matschinger, Herbert; Löeffler, Walter; Roick, Christiane; Angermeyer, Matthias C
2002-03-01
Transformation of the dependent cost variable is often used to solve the problems of heteroscedasticity and skewness in linear ordinary least squares regression of health service cost data. However, transformation may cause difficulties in the interpretation of regression coefficients and the retransformation of predicted values. The study compares the advantages and disadvantages of different methods to estimate regression-based cost functions using data on the annual costs of schizophrenia treatment. Annual costs of psychiatric service use and clinical and socio-demographic characteristics of the patients were assessed for a sample of 254 patients with a diagnosis of schizophrenia (ICD-10 F 20.0) living in Leipzig. The clinical characteristics of the participants were assessed by means of the BPRS 4.0, the GAF, and the CAN for service needs. Quality of life was measured by WHOQOL-BREF. A linear OLS regression model with non-parametric standard errors, a log-transformed OLS model and a generalized linear model with a log-link and a gamma distribution were used to estimate service costs. For the estimation of robust non-parametric standard errors, the variance estimator by White and a bootstrap estimator based on 2000 replications were employed. Models were evaluated by the comparison of the R² and the root mean squared error (RMSE). The RMSE of the log-transformed OLS model was computed with three different methods of bias correction. The 95% confidence intervals for the differences between the RMSE were computed by means of bootstrapping. A split-sample cross-validation procedure was used to forecast the costs for one half of the sample on the basis of a regression equation computed for the other half of the sample. All three methods showed significant positive influences of psychiatric symptoms and met psychiatric service needs on service costs. Only the log-transformed OLS model showed a significant negative impact of age, and only the GLM showed significant negative influences of employment status and partnership on costs. All three models provided an R² of about .31. The residuals of the linear OLS model revealed significant deviations from normality and homoscedasticity. The residuals of the log-transformed model were normally distributed but still heteroscedastic. The linear OLS model provided the lowest prediction error and the best forecast of the dependent cost variable. The log-transformed model provided the lowest RMSE if the heteroscedastic bias correction was used. The RMSE of the GLM with a log link and a gamma distribution was higher than those of the linear OLS model and the log-transformed OLS model. The difference between the RMSE of the linear OLS model and that of the log-transformed OLS model without bias correction was significant at the 95% level. As a result of the cross-validation procedure, the linear OLS model provided the lowest RMSE, followed by the log-transformed OLS model with a heteroscedastic bias correction. The GLM showed the weakest model fit again. None of the differences between the RMSE resulting from the cross-validation procedure were found to be significant. The comparison of the fit indices of the different regression models revealed that the linear OLS model provided a better fit than the log-transformed model and the GLM, but the differences between the models' RMSE were not significant.
Due to the small number of cases in the study, the lack of significance does not sufficiently prove that the differences between the RMSE for the different models are zero, and the superiority of the linear OLS model cannot be generalized. The lack of significant differences among the alternative estimators may reflect a lack of sample size adequate to detect important differences among the estimators employed. Further studies with larger case numbers are necessary to confirm the results. Specification of an adequate regression model requires a careful examination of the characteristics of the data. Estimation of standard errors and confidence intervals by nonparametric methods, which are robust against deviations from normality and homoscedasticity of the residuals, is a suitable alternative to transformation of the skewed dependent cost variable. Further studies with more adequate case numbers are needed to confirm the results.
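The three estimators compared above can be sketched as follows in Python/statsmodels (the study used other software); the synthetic data, variable names and the HC1 robust-variance choice are assumptions made for illustration, and Duan's smearing factor stands in for the bias correction applied to the log-transformed model.

# Linear OLS with robust SEs, log-transformed OLS with smearing retransformation,
# and a gamma GLM with log link, compared by in-sample RMSE on synthetic data.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 254
df = pd.DataFrame({"bprs": rng.normal(40, 10, n), "needs": rng.poisson(3, n)})
df["cost"] = np.exp(7 + 0.02 * df["bprs"] + 0.10 * df["needs"] + rng.normal(0, 0.8, n))

ols = smf.ols("cost ~ bprs + needs", data=df).fit(cov_type="HC1")       # robust SEs

log_ols = smf.ols("np.log(cost) ~ bprs + needs", data=df).fit()
smear = np.mean(np.exp(log_ols.resid))                                  # Duan smearing factor
pred_log = np.exp(log_ols.fittedvalues) * smear                         # retransformed prediction

glm = smf.glm("cost ~ bprs + needs", data=df,
              family=sm.families.Gamma(link=sm.families.links.Log())).fit()
# note: older statsmodels versions use sm.families.links.log()

for name, pred in [("linear OLS", ols.fittedvalues),
                   ("log OLS + smearing", pred_log),
                   ("gamma GLM", glm.fittedvalues)]:
    print(name, round(float(np.sqrt(np.mean((df["cost"] - pred) ** 2))), 1))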
Anomalous dielectric relaxation with linear reaction dynamics in space-dependent force fields.
Hong, Tao; Tang, Zhengming; Zhu, Huacheng
2016-12-28
The anomalous dielectric relaxation of disordered reaction with linear reaction dynamics is studied via the continuous time random walk model in the presence of space-dependent electric field. Two kinds of modified reaction-subdiffusion equations are derived for different linear reaction processes by the master equation, including the instantaneous annihilation reaction and the noninstantaneous annihilation reaction. If a constant proportion of walkers is added or removed instantaneously at the end of each step, there will be a modified reaction-subdiffusion equation with a fractional order temporal derivative operating on both the standard diffusion term and a linear reaction kinetics term. If the walkers are added or removed at a constant per capita rate during the waiting time between steps, there will be a standard linear reaction kinetics term but a fractional order temporal derivative operating on an anomalous diffusion term. The dielectric polarization is analyzed based on the Legendre polynomials and the dielectric properties of both reactions can be expressed by the effective rotational diffusion function and component concentration function, which is similar to the standard reaction-diffusion process. The results show that the effective permittivity can be used to describe the dielectric properties in these reactions if the chemical reaction time is much longer than the relaxation time.
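Schematically, and in generic notation that may differ from the paper's, the instantaneous-annihilation case described above leads to an equation of the form

\frac{\partial u}{\partial t} = {}_0 D_t^{1-\gamma} \left[ K_\gamma \nabla^2 u - k\, u \right],

where {}_0 D_t^{1-\gamma} is the Riemann-Liouville fractional derivative of order 1-\gamma (0 < \gamma < 1), K_\gamma the generalized diffusion coefficient and k the linear reaction rate, so the fractional operator acts on both the diffusion and the reaction terms. In the noninstantaneous case (constant per capita removal during the waiting time), the term -k u instead appears outside the fractional operator, which then acts on a modified, non-standard diffusion term, as stated above.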
NASA Astrophysics Data System (ADS)
Bertolesi, Elisa; Milani, Gabriele; Poggi, Carlo
2016-12-01
Two FE modeling techniques are presented and critically discussed for the non-linear analysis of tuff masonry panels reinforced with FRCM and subjected to standard diagonal compression tests. The specimens, tested at the University of Naples (Italy), are unreinforced and FRCM retrofitted walls. The extensive characterization of the constituent materials allowed adopting here very sophisticated numerical modeling techniques. In particular, here the results obtained by means of a micro-modeling strategy and homogenization approach are compared. The first modeling technique is a tridimensional heterogeneous micro-modeling where constituent materials (bricks, joints, reinforcing mortar and reinforcing grid) are modeled separately. The second approach is based on a two-step homogenization procedure, previously developed by the authors, where the elementary cell is discretized by means of three-noded plane stress elements and non-linear interfaces. The non-linear structural analyses are performed replacing the homogenized orthotropic continuum with a rigid element and non-linear spring assemblage (RBSM). All the simulations here presented are performed using the commercial software Abaqus. Pros and cons of the two approaches are herein discussed with reference to their reliability in reproducing global force-displacement curves and crack patterns, as well as to the rather different computational effort required by the two strategies.
Molavi Tabrizi, Amirhossein; Goossens, Spencer; Mehdizadeh Rahimi, Ali; Cooper, Christopher D; Knepley, Matthew G; Bardhan, Jaydeep P
2017-06-13
We extend the linearized Poisson-Boltzmann (LPB) continuum electrostatic model for molecular solvation to address charge-hydration asymmetry. Our new solvation-layer interface condition (SLIC)/LPB corrects for first-shell response by perturbing the traditional continuum-theory interface conditions at the protein-solvent and the Stern-layer interfaces. We also present a GPU-accelerated treecode implementation capable of simulating large proteins, and our results demonstrate that the new model exhibits significant accuracy improvements over traditional LPB models, while reducing the number of fitting parameters from dozens (atomic radii) to just five parameters, which have physical meanings related to first-shell water behavior at an uncharged interface. In particular, atom radii in the SLIC model are not optimized but uniformly scaled from their Lennard-Jones radii. Compared to explicit-solvent free-energy calculations of individual atoms in small molecules, SLIC/LPB is significantly more accurate than standard parametrizations (RMS error 0.55 kcal/mol for SLIC, compared to RMS error of 3.05 kcal/mol for standard LPB). On parametrizing the electrostatic model with a simple nonpolar component for total molecular solvation free energies, our model predicts octanol/water transfer free energies with an RMS error 1.07 kcal/mol. A more detailed assessment illustrates that standard continuum electrostatic models reproduce total charging free energies via a compensation of significant errors in atomic self-energies; this finding offers a window into improving the accuracy of Generalized-Born theories and other coarse-grained models. Most remarkably, the SLIC model also reproduces positive charging free energies for atoms in hydrophobic groups, whereas standard PB models are unable to generate positive charging free energies regardless of the parametrized radii. The GPU-accelerated solver is freely available online, as is a MATLAB implementation.
Ignition-and-Growth Modeling of NASA Standard Detonator and a Linear Shaped Charge
NASA Technical Reports Server (NTRS)
Oguz, Sirri
2010-01-01
The main objective of this study is to quantitatively investigate the ignition and shock sensitivity of NASA Standard Detonator (NSD) and the shock wave propagation of a linear shaped charge (LSC) after being shocked by NSD flyer plate. This combined explosive train was modeled as a coupled Arbitrary Lagrangian-Eulerian (ALE) model with LS-DYNA hydro code. An ignition-and-growth (I&G) reactive model based on unreacted and reacted Jones-Wilkins-Lee (JWL) equations of state was used to simulate the shock initiation. Various NSD-to-LSC stand-off distances were analyzed to calculate the shock initiation (or failure to initiate) and detonation wave propagation along the shaped charge. Simulation results were verified by experimental data which included VISAR tests for NSD flyer plate velocity measurement and an aluminum target severance test for LSC performance verification. Parameters used for the analysis were obtained from various published data or by using CHEETAH thermo-chemical code.
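For context, the unreacted explosive and the reaction products in an ignition-and-growth model are each typically described by a Jones-Wilkins-Lee equation of state of the standard form

p = A \left( 1 - \frac{\omega}{R_1 V} \right) e^{-R_1 V} + B \left( 1 - \frac{\omega}{R_2 V} \right) e^{-R_2 V} + \frac{\omega E}{V},

where V is the relative volume, E the internal energy per unit initial volume, and A, B, R_1, R_2 and \omega are material constants (obtained from published data or thermochemical codes, per the abstract); the I&G rate law then blends the unreacted and reacted equations of state through the reacted mass fraction.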
Double elementary Goldstone Higgs boson production in future linear colliders
NASA Astrophysics Data System (ADS)
Guo, Yu-Chen; Yue, Chong-Xing; Liu, Zhi-Cheng
2018-03-01
The Elementary Goldstone Higgs (EGH) model is a perturbative extension of the Standard Model (SM), which identifies the EGH boson as the observed Higgs boson. In this paper, we study pair production of the EGH boson in future linear electron-positron colliders. The cross-sections in the TeV region can be changed by about ‑27%, 163% and ‑34% for the e+e‑→ Zhh, e+e‑→ νν¯hh and e+e‑→ tt¯hh processes with respect to the SM predictions, respectively. According to the expected measurement precisions, such correction effects might be observed in future linear colliders. In addition, we compare the cross-sections of double SM-like Higgs boson production with the predictions in other new physics models.
The linear sizes tolerances and fits system modernization
NASA Astrophysics Data System (ADS)
Glukhov, V. I.; Grinevich, V. A.; Shalay, V. V.
2018-04-01
The study addresses the topical problem of ensuring the quality of technical products through the tolerancing of component parts. The aim of the paper is to develop alternatives for improving the system of linear size tolerances and dimensional fits in the international standard ISO 286-1. The tasks of the work are, firstly, to classify among the linear sizes the coordinating linear sizes that determine the location of part elements and, secondly, to justify the basic deviation of the tolerance interval for an element's linear size. Geometrical modeling of real part elements, together with analytical and experimental methods, is used in the research. It is shown that the linear coordinates are the dimensional basis of the elements' linear sizes. To standardize the accuracy of coordinating linear sizes in all accuracy classes, it is sufficient to select in the standardized tolerance system only one tolerance interval with symmetrical deviations: Js for internal dimensional elements (holes) and js for external elements (shafts). The basic deviation of this coordinating tolerance is the average, zero, deviation, which coincides with the nominal value of the coordinating size. The other intervals of the tolerance system are retained for normalizing the accuracy of the elements' linear sizes, with a fundamental change in the basic deviation of all tolerance intervals: the basic deviation becomes the maximum deviation corresponding to the material limit of the element, EI (the lower deviation) for the sizes of internal elements (holes) and es (the upper deviation) for the sizes of external elements (shafts). It is the maximum-material sizes that are involved in the mating of shafts and holes and that determine the type of fit.
Wood, Scott T; Dean, Brian C; Dean, Delphine
2013-04-01
This paper presents a novel computer vision algorithm to analyze 3D stacks of confocal images of fluorescently stained single cells. The goal of the algorithm is to create representative in silico model structures that can be imported into finite element analysis software for mechanical characterization. Segmentation of cell and nucleus boundaries is accomplished via standard thresholding methods. Using novel linear programming methods, a representative actin stress fiber network is generated by computing a linear superposition of fibers having minimum discrepancy compared with an experimental 3D confocal image. Qualitative validation is performed through analysis of seven 3D confocal image stacks of adherent vascular smooth muscle cells (VSMCs) grown in 2D culture. The presented method is able to automatically generate 3D geometries of the cell's boundary, nucleus, and representative F-actin network based on standard cell microscopy data. These geometries can be used for direct importation and implementation in structural finite element models for analysis of the mechanics of a single cell to potentially speed discoveries in the fields of regenerative medicine, mechanobiology, and drug discovery. Copyright © 2012 Elsevier B.V. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tassev, Svetlin, E-mail: tassev@astro.princeton.edu
We present a pedagogical systematic investigation of the accuracy of Eulerian and Lagrangian perturbation theories of large-scale structure. We show that significant differences exist between them especially when trying to model the Baryon Acoustic Oscillations (BAO). We find that the best available model of the BAO in real space is the Zel'dovich Approximation (ZA), giving an accuracy of ≲3% at redshift of z = 0 in modelling the matter 2-pt function around the acoustic peak. All corrections to the ZA around the BAO scale are perfectly perturbative in real space. Any attempt to achieve better precision requires calibrating the theory to simulations because of the need to renormalize those corrections. In contrast, theories which do not fully preserve the ZA as their solution, receive O(1) corrections around the acoustic peak in real space at z = 0, and are thus of suspicious convergence at low redshift around the BAO. As an example, we find that a similar accuracy of 3% for the acoustic peak is achieved by Eulerian Standard Perturbation Theory (SPT) at linear order only at z ≈ 4. Thus even when SPT is perturbative, one needs to include loop corrections for z ≲ 4 in real space. In Fourier space, all models perform similarly, and are controlled by the overdensity amplitude, thus recovering standard results. However, that comes at a price. Real space cleanly separates the BAO signal from non-linear dynamics. In contrast, Fourier space mixes signal from short mildly non-linear scales with the linear signal from the BAO to the level that non-linear contributions from short scales dominate. Therefore, one has little hope in constructing a systematic theory for the BAO in Fourier space.
London Measure of Unplanned Pregnancy: guidance for its use as an outcome measure
Hall, Jennifer A; Barrett, Geraldine; Copas, Andrew; Stephenson, Judith
2017-01-01
Background The London Measure of Unplanned Pregnancy (LMUP) is a psychometrically validated measure of the degree of intention of a current or recent pregnancy. The LMUP is increasingly being used worldwide, and can be used to evaluate family planning or preconception care programs. However, beyond recommending the use of the full LMUP scale, there is no published guidance on how to use the LMUP as an outcome measure. Ordinal logistic regression has been recommended informally, but studies published to date have all used binary logistic regression and dichotomized the scale at different cut points. There is thus a need for evidence-based guidance to provide a standardized methodology for multivariate analysis and to enable comparison of results. This paper makes recommendations for the regression method for analysis of the LMUP as an outcome measure. Materials and methods Data collected from 4,244 pregnant women in Malawi were used to compare five regression methods: linear, logistic with two cut points, and ordinal logistic with either the full or grouped LMUP score. The recommendations were then tested on the original UK LMUP data. Results There were small but unimportant differences in the findings across the regression models. Logistic regression resulted in the largest loss of information, and assumptions were violated for the linear and ordinal logistic regression. Consequently, robust standard errors were used for linear regression and a partial proportional odds ordinal logistic regression model was attempted. The latter could only be fitted for the grouped LMUP score. Conclusion We recommend the linear regression model with robust standard errors to make full use of the LMUP score when analyzed as an outcome measure. Ordinal logistic regression could be considered, but a partial proportional odds model with grouped LMUP score may be required. Logistic regression is the least-favored option, due to the loss of information. For logistic regression, the cut point for un/planned pregnancy should be between nine and ten. These recommendations will standardize the analysis of LMUP data and enhance comparability of results across studies. PMID:28435343
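A sketch of the recommended primary analysis (linear regression of the full LMUP score with robust standard errors) in Python/statsmodels follows; the covariates and the generated data are hypothetical and are not drawn from the Malawi or UK datasets.

# Linear regression of the full 0-12 LMUP score with heteroscedasticity-robust
# (HC1) standard errors, as recommended above. Synthetic data for illustration.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 500
df = pd.DataFrame({"age": rng.integers(15, 45, n), "parity": rng.poisson(2, n)})
df["lmup"] = np.clip(np.round(4 + 0.10 * df["age"] - 0.5 * df["parity"]
                              + rng.normal(0, 2.5, n)), 0, 12)

fit = smf.ols("lmup ~ age + parity", data=df).fit(cov_type="HC1")
print(fit.summary())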
NASA Technical Reports Server (NTRS)
Jermey, C.; Schiff, L. B.
1985-01-01
A series of wind-tunnel tests has been conducted on the Standard Dynamics Model (a simplified generic fighter-aircraft shape) undergoing coning motion at Mach 0.6. Six-component force and moment data are presented for a range of angles of attack, sideslip and coning rates. At the relatively low nondimensional coning rates employed, the lateral aerodynamic characteristics generally show a linear variation with coning rate.
Ding, Changfeng; Li, Xiaogang; Zhang, Taolin; Ma, Yibing; Wang, Xingxiang
2014-10-01
Soil environmental quality standards in respect of heavy metals for farmlands should be established considering both their effects on crop yield and their accumulation in the edible part. A greenhouse experiment was conducted to investigate the effects of chromium (Cr) on biomass production and Cr accumulation in carrot plants grown in a wide range of soils. The results revealed that carrot yield significantly decreased in 18 of the 20 soils when Cr was added at the level of the soil environmental quality standard of China. The Cr content of carrots grown in the five soils with pH > 8.0 exceeded the maximum allowable level (0.5 mg kg⁻¹) according to the Chinese General Standard for Contaminants in Foods. The relationship between carrot Cr concentration and soil pH could be well fitted (R² = 0.70, P < 0.0001) by a linear-linear segmented regression model. The addition of Cr to soil affected carrot yield before it affected food quality. The major soil factors controlling Cr phytotoxicity and the prediction models were further identified and developed using path analysis and stepwise multiple linear regression analysis. Soil Cr thresholds for phytotoxicity that also ensure food safety were then derived on the basis of a 10 percent yield reduction. Copyright © 2014 Elsevier Inc. All rights reserved.
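The linear-linear segmented ("broken-stick") fit mentioned above can be sketched as follows; the breakpoint value, data and noise level are synthetic assumptions, not the study's measurements.

# Fit a continuous two-segment linear model of carrot Cr concentration vs soil pH.
import numpy as np
from scipy.optimize import curve_fit

def broken_stick(ph, breakpoint, level, slope1, slope2):
    # Two line segments joined continuously at the breakpoint
    return np.where(ph <= breakpoint,
                    level + slope1 * (ph - breakpoint),
                    level + slope2 * (ph - breakpoint))

rng = np.random.default_rng(3)
ph = rng.uniform(4.5, 9.0, 80)
cr = broken_stick(ph, 8.0, 0.3, 0.02, 0.6) + rng.normal(0, 0.05, 80)   # synthetic data

popt, _ = curve_fit(broken_stick, ph, cr, p0=[7.5, 0.3, 0.0, 0.5])
print("estimated breakpoint pH:", round(popt[0], 2))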
Nikoloulopoulos, Aristidis K
2017-10-01
A bivariate copula mixed model has been recently proposed to synthesize diagnostic test accuracy studies and it has been shown that it is superior to the standard generalized linear mixed model in this context. Here, we call on trivariate vine copulas to extend the bivariate meta-analysis of diagnostic test accuracy studies by accounting for disease prevalence. Our vine copula mixed model includes the trivariate generalized linear mixed model as a special case and can also operate on the original scale of sensitivity, specificity, and disease prevalence. Our general methodology is illustrated by re-analyzing the data of two published meta-analyses. Our study suggests that there can be an improvement over the trivariate generalized linear mixed model in fit to the data, and makes the argument for moving to vine copula random-effects models, especially because of their richness, including reflection asymmetric tail dependence, and computational feasibility despite their three-dimensionality.
Drug awareness in adolescents attending a mental health service: analysis of longitudinal data.
Arnau, Jaume; Bono, Roser; Díaz, Rosa; Goti, Javier
2011-11-01
One of the procedures used most recently with longitudinal data is linear mixed models. In the context of health research the increasing number of studies that now use these models bears witness to the growing interest in this type of analysis. This paper describes the application of linear mixed models to a longitudinal study of a sample of Spanish adolescents attending a mental health service, the aim being to investigate their knowledge about the consumption of alcohol and other drugs. More specifically, the main objective was to compare the efficacy of a motivational interviewing programme with a standard approach to drug awareness. The models used to analyse the overall indicator of drug awareness were as follows: (a) unconditional linear growth curve model; (b) growth model with subject-associated variables; and (c) individual curve model with predictive variables. The results showed that awareness increased over time and that the variable 'schooling years' explained part of the between-subjects variation. The effect of motivational interviewing was also significant.
INCORPORATING CONCENTRATION DEPENDENCE IN STABLE ISOTOPE MIXING MODELS
Stable isotopes are frequently used to quantify the contributions of multiple sources to a mixture; e.g., C and N isotopic signatures can be used to determine the fraction of three food sources in a consumer's diet. The standard dual isotope, three source linear mixing model ass...
Toward an Educational View of Scaling: Sufficing Standard and Not a Gold Standard
ERIC Educational Resources Information Center
Hung, David; Lee, Shu-Shing; Wu, Longkai
2015-01-01
Educational innovations in Singapore have reached fruition. It is now important to consider different innovations and issues that enable innovations to scale and become widespread. This proposition paper outlines two views of scaling and its relation to education systems. We argue that a linear model used in the medical field stresses top-down…
Quantitative photoacoustic imaging in the acoustic regime using SPIM
NASA Astrophysics Data System (ADS)
Beigl, Alexander; Elbau, Peter; Sadiq, Kamran; Scherzer, Otmar
2018-05-01
While in standard photoacoustic imaging the propagation of sound waves is modeled by the standard wave equation, our approach is based on a generalized wave equation with variable sound speed and material density. In this paper we present an approach for photoacoustic imaging which, in addition to recovering the absorption density parameter, the imaging parameter of standard photoacoustics, also allows us to reconstruct the spatially varying sound speed and density of the medium. We provide analytical reconstruction formulas for all three parameters within a linearized model based on single plane illumination microscopy (SPIM) techniques.
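For context, the generalized acoustic model referred to above is commonly written with spatially varying sound speed c(x) and mass density \rho(x) as

\frac{1}{\rho(x) c(x)^2} \, \frac{\partial^2 p}{\partial t^2} - \nabla \cdot \left( \frac{1}{\rho(x)} \nabla p \right) = 0, \qquad p(x, 0) \propto \text{absorbed energy density}, \quad \partial_t p(x, 0) = 0,

which reduces to the standard wave equation of conventional photoacoustics when \rho and c are constant. This is a generic statement of the variable-coefficient model, not necessarily the exact form analyzed in the paper.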
Incorporating inductances in tissue-scale models of cardiac electrophysiology
NASA Astrophysics Data System (ADS)
Rossi, Simone; Griffith, Boyce E.
2017-09-01
In standard models of cardiac electrophysiology, including the bidomain and monodomain models, local perturbations can propagate at infinite speed. We address this unrealistic property by developing a hyperbolic bidomain model that is based on a generalization of Ohm's law with a Cattaneo-type model for the fluxes. Further, we obtain a hyperbolic monodomain model in the case that the intracellular and extracellular conductivity tensors have the same anisotropy ratio. In one spatial dimension, the hyperbolic monodomain model is equivalent to a cable model that includes axial inductances, and the relaxation times of the Cattaneo fluxes are strictly related to these inductances. A purely linear analysis shows that the inductances are negligible, but models of cardiac electrophysiology are highly nonlinear, and linear predictions may not capture the fully nonlinear dynamics. In fact, contrary to the linear analysis, we show that for simple nonlinear ionic models, an increase in conduction velocity is obtained for small and moderate values of the relaxation time. A similar behavior is also demonstrated with biophysically detailed ionic models. Using the Fenton-Karma model along with a low-order finite element spatial discretization, we numerically analyze differences between the standard monodomain model and the hyperbolic monodomain model. In a simple benchmark test, we show that the propagation of the action potential is strongly influenced by the alignment of the fibers with respect to the mesh in both the parabolic and hyperbolic models when using relatively coarse spatial discretizations. Accurate predictions of the conduction velocity require computational mesh spacings on the order of a single cardiac cell. We also compare the two formulations in the case of spiral break up and atrial fibrillation in an anatomically detailed model of the left atrium, and we examine the effect of intracellular and extracellular inductances on the virtual electrode phenomenon.
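Schematically, and in generic notation that may differ from the paper's conventions, the Cattaneo-type flux law underlying the hyperbolic monodomain model can be written as

\tau \, \partial_t \mathbf{q} + \mathbf{q} = -\sigma_m \nabla V, \qquad \chi \left( C_m \, \partial_t V + I_{\mathrm{ion}}(V, \mathbf{w}) \right) = -\nabla \cdot \mathbf{q} + I_{\mathrm{stim}},

where \tau is the flux relaxation time, \sigma_m the monodomain conductivity tensor, \chi the membrane surface-to-volume ratio, C_m the membrane capacitance and \mathbf{w} the gating variables. Setting \tau = 0 recovers Ohm's law and the standard parabolic monodomain model; for \tau > 0 the system is hyperbolic with finite propagation speed, and in one dimension the relaxation time plays the role of an axial inductance in the cable equation, consistent with the description above.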
H∞ output tracking control of discrete-time nonlinear systems via standard neural network models.
Liu, Meiqin; Zhang, Senlin; Chen, Haiyang; Sheng, Weihua
2014-10-01
This brief proposes an output tracking control for a class of discrete-time nonlinear systems with disturbances. A standard neural network model is used to represent discrete-time nonlinear systems whose nonlinearity satisfies the sector conditions. H∞ control performance for the closed-loop system including the standard neural network model, the reference model, and state feedback controller is analyzed using Lyapunov-Krasovskii stability theorem and linear matrix inequality (LMI) approach. The H∞ controller, of which the parameters are obtained by solving LMIs, guarantees that the output of the closed-loop system closely tracks the output of a given reference model well, and reduces the influence of disturbances on the tracking error. Three numerical examples are provided to show the effectiveness of the proposed H∞ output tracking design approach.
Tests of local Lorentz invariance violation of gravity in the standard model extension with pulsars.
Shao, Lijing
2014-03-21
The standard model extension is an effective field theory introducing all possible Lorentz-violating (LV) operators to the standard model and general relativity (GR). In the pure-gravity sector of minimal standard model extension, nine coefficients describe dominant observable deviations from GR. We systematically implemented 27 tests from 13 pulsar systems to tightly constrain eight linear combinations of these coefficients with extensive Monte Carlo simulations. It constitutes the first detailed and systematic test of the pure-gravity sector of minimal standard model extension with the state-of-the-art pulsar observations. No deviation from GR was detected. The limits of LV coefficients are expressed in the canonical Sun-centered celestial-equatorial frame for the convenience of further studies. They are all improved by significant factors of tens to hundreds with existing ones. As a consequence, Einstein's equivalence principle is verified substantially further by pulsar experiments in terms of local Lorentz invariance in gravity.
flexsurv: A Platform for Parametric Survival Modeling in R
Jackson, Christopher H.
2018-01-01
flexsurv is an R package for fully-parametric modeling of survival data. Any parametric time-to-event distribution may be fitted if the user supplies a probability density or hazard function, and ideally also their cumulative versions. Standard survival distributions are built in, including the three and four-parameter generalized gamma and F distributions. Any parameter of any distribution can be modeled as a linear or log-linear function of covariates. The package also includes the spline model of Royston and Parmar (2002), in which both baseline survival and covariate effects can be arbitrarily flexible parametric functions of time. The main model-fitting function, flexsurvreg, uses the familiar syntax of survreg from the standard survival package (Therneau 2016). Censoring or left-truncation are specified in ‘Surv’ objects. The models are fitted by maximizing the full log-likelihood, and estimates and confidence intervals for any function of the model parameters can be printed or plotted. flexsurv also provides functions for fitting and predicting from fully-parametric multi-state models, and connects with the mstate package (de Wreede, Fiocco, and Putter 2011). This article explains the methods and design principles of the package, giving several worked examples of its use. PMID:29593450
NASA Astrophysics Data System (ADS)
Belich, H.; Bakke, K.
2015-07-01
We start by investigating the arising of a spin-orbit coupling and a Darwin-type term that stem from Lorentz symmetry breaking effects in the CPT-odd sector of the Standard Model Extension. Then, we establish a possible scenario of the violation of the Lorentz symmetry that gives rise to a linear confining potential and an effective electric field in which determines the spin-orbit coupling for a neutral particle analogous to the Rashba coupling [E. I. Rashba, Sov. Phys. Solid State 2, 1109 (1960)]. Finally, we confine the neutral particle to a quantum dot [W.-C. Tan and J. C. Inkson, Semicond. Sci. Technol. 11, 1635 (1996)] and analyze the influence of the linear confining potential and the spin-orbit coupling on the spectrum of energy.
Effects of non-tidal atmospheric loading on a Kalman filter-based terrestrial reference frame
NASA Astrophysics Data System (ADS)
Abbondanza, C.; Altamimi, Z.; Chin, T. M.; Collilieux, X.; Dach, R.; Heflin, M. B.; Gross, R. S.; König, R.; Lemoine, F. G.; MacMillan, D. S.; Parker, J. W.; van Dam, T. M.; Wu, X.
2013-12-01
The International Terrestrial Reference Frame (ITRF) adopts a piece-wise linear model to parameterize regularized station positions and velocities. The space-geodetic (SG) solutions from VLBI, SLR, GPS and DORIS global networks used as input in the ITRF combination process account for tidal loading deformations, but ignore the non-tidal part. As a result, the non-linear signal observed in the time series of SG-derived station positions in part reflects non-tidal loading displacements not introduced in the SG data reduction. In this analysis, the effect of non-tidal atmospheric loading (NTAL) corrections on the TRF is assessed adopting a Remove/Restore approach: (i) Focusing on the a-posteriori approach, the NTAL model derived from the National Center for Environmental Prediction (NCEP) surface pressure is removed from the SINEX files of the SG solutions used as inputs to the TRF determinations. (ii) Adopting a Kalman-filter based approach, a linear TRF is estimated combining the 4 SG solutions free from NTAL displacements. (iii) Linear fits to the NTAL displacements removed at step (i) are restored to the linear reference frame estimated at (ii). The velocity fields of the (standard) linear reference frame in which the NTAL model has not been removed and the one in which the model has been removed/restored are compared and discussed.
Global Reference Atmosphere Model (GRAM)
NASA Technical Reports Server (NTRS)
Johnson, D. L.; Blocker, Rhonda; Justus, C. G.
1993-01-01
4D model provides atmospheric parameter values either automatically at positions along linear path or along any set of connected positions specified by user. Based on actual data, GRAM provides thermal wind shear for monthly mean winds, percent deviation from standard atmosphere, mean vertical wind, and perturbation data for each position.
40 CFR 86.094-16 - Prohibition of defeat devices.
Code of Federal Regulations, 2010 CFR
2010-07-01
... Provisions for Emission Regulations for 1977 and Later Model Year New Light-Duty Vehicles, Light-Duty Trucks and Heavy-Duty Engines, and for 1985 and Later Model Year New Gasoline Fueled, Natural Gas-Fueled... congruity across the intermediate temperature range is the linear interpolation between the CO standard...
Competing regression models for longitudinal data.
Alencar, Airlane P; Singer, Julio M; Rocha, Francisco Marcelo M
2012-03-01
The choice of an appropriate family of linear models for the analysis of longitudinal data is often a matter of concern for practitioners. To attenuate such difficulties, we discuss some issues that emerge when analyzing this type of data via a practical example involving pretest-posttest longitudinal data. In particular, we consider log-normal linear mixed models (LNLMM), generalized linear mixed models (GLMM), and models based on generalized estimating equations (GEE). We show how some special features of the data, like a nonconstant coefficient of variation, may be handled in the three approaches and evaluate their performance with respect to the magnitude of standard errors of interpretable and comparable parameters. We also show how different diagnostic tools may be employed to identify outliers and comment on available software. We conclude by noting that the results are similar, but that GEE-based models may be preferable when the goal is to compare the marginal expected responses. © 2012 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Reliable and efficient solution of genome-scale models of Metabolism and macromolecular Expression
Ma, Ding; Yang, Laurence; Fleming, Ronan M. T.; ...
2017-01-18
Currently, Constraint-Based Reconstruction and Analysis (COBRA) is the only methodology that permits integrated modeling of Metabolism and macromolecular Expression (ME) at genome-scale. Linear optimization computes steady-state flux solutions to ME models, but flux values are spread over many orders of magnitude. Data values also have greatly varying magnitudes. Furthermore, standard double-precision solvers may return inaccurate solutions or report that no solution exists. Exact simplex solvers based on rational arithmetic require a near-optimal warm start to be practical on large problems (current ME models have 70,000 constraints and variables and will grow larger). We also developed a quadruple-precision version of our linear and nonlinear optimizer MINOS, and a solution procedure (DQQ) involving Double and Quad MINOS that achieves reliability and efficiency for ME models and other challenging problems tested here. DQQ will enable extensive use of large linear and nonlinear models in systems biology and other applications involving multiscale data.
Aerodynamic characteristics of the standard dynamics model in coning motion at Mach 0.6
NASA Technical Reports Server (NTRS)
Jermey, C.; Schiff, L. B.
1985-01-01
A wind tunnel test was conducted on the Standard Dynamics Model (a simplified generic fighter aircraft shape) undergoing coning motion at Mach 0.6. Six component force and moment data are presented for a range of angle of attack, sideslip, and coning rates. At the relatively low non-dimensional coning rate employed (omega b/2V less than or equal to 0.04), the lateral aerodynamic characteristics generally show a linear variation with coning rate.
Finite linear diffusion model for design of overcharge protection for rechargeable lithium batteries
NASA Technical Reports Server (NTRS)
Narayanan, S. R.; Surampudi, S.; Attia, A. I.
1991-01-01
The overcharge condition in secondary lithium batteries employing redox additives for overcharge protection has been theoretically analyzed in terms of a finite linear diffusion model. The analysis leads to expressions relating the steady-state overcharge current density and cell voltage to the concentration, diffusion coefficient, standard reduction potential of the redox couple, and interelectrode distance. The model permits the estimation of the maximum permissible overcharge rate for any chosen set of system conditions. The model has been experimentally verified using 1,1'-dimethylferrocene as a redox additive. The theoretical results may be exploited in the design and optimization of overcharge protection by the redox additive approach.
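A minimal numerical sketch of the kind of estimate the model supports: under steady-state finite linear diffusion, the maximum sustainable overcharge current density is i_lim = nFDC/d. All parameter values below are illustrative assumptions, not the authors' measured values for 1,1'-dimethylferrocene.

```python
# Hedged sketch: steady-state limiting current density for a redox shuttle under
# finite linear diffusion, i_lim = n*F*D*C/d.  Numbers below are illustrative only.
F = 96485.0          # Faraday constant, C/mol
n = 1                # electrons transferred per shuttle molecule
D = 2.0e-6           # diffusion coefficient, cm^2/s (assumed)
C = 0.1e-3           # shuttle concentration, mol/cm^3 (0.1 M, assumed)
d = 0.005            # interelectrode distance, cm (50 um, assumed)

i_lim = n * F * D * C / d      # A/cm^2
print(f"maximum sustainable overcharge current density: {i_lim*1000:.2f} mA/cm^2")
```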
NASA Technical Reports Server (NTRS)
Burns, John A.; Marrekchi, Hamadi
1993-01-01
The problem of using reduced order dynamic compensators to control a class of nonlinear parabolic distributed parameter systems was considered. Concentration was on a system with unbounded input and output operators governed by Burgers' equation. A linearized model was used to compute low-order-finite-dimensional control laws by minimizing certain energy functionals. Then these laws were applied to the nonlinear model. Standard approaches to this problem employ model/controller reduction techniques in conjunction with linear quadratic Gaussian (LQG) theory. The approach used is based on the finite dimensional Bernstein/Hyland optimal projection theory which yields a fixed-finite-order controller.
NASA Astrophysics Data System (ADS)
Monjo, R.
2017-11-01
Most current cosmological theories are built by combining an isotropic and homogeneous manifold with a scale factor that depends on time. If one supposes a hyperconical universe with linear expansion, an inhomogeneous metric can be obtained by an appropriate transformation that preserves the proper time. This model locally tends to a flat Friedmann-Robertson-Walker metric with linear expansion. The objective of this work is to analyze the observational compatibility of the inhomogeneous metric considered. For this purpose, the corresponding luminosity distance was obtained and compared with the observations of 580 SNe Ia, taken from the Supernova Cosmology Project. The best fit of the hyperconical model obtains χ₀² = 562, the same value as the standard ΛCDM model. Finally, a possible relationship is found between both theories.
Biological effects and equivalent doses in radiotherapy: A software solution
Voyant, Cyril; Julian, Daniel; Roustit, Rudy; Biffi, Katia; Lantieri, Céline
2013-01-01
Background The limits of TDF (time, dose, and fractionation) and linear quadratic models have been known for a long time. Medical physicists and physicians are required to provide fast and reliable interpretations regarding delivered doses or any future prescriptions relating to treatment changes. Aim We, therefore, propose a calculation interface under the GNU license to be used for equivalent doses, biological doses, and normal tissue complication probability (Lyman model). Materials and methods The methodology used draws from several sources: the linear-quadratic-linear model of Astrahan, the repopulation effects of Dale, and the prediction of multi-fractionated treatments of Thames. Results and conclusions The results are obtained from an algorithm that minimizes an ad-hoc cost function, and then compared to an equivalent dose computed using standard calculators in seven French radiotherapy centers. PMID:24936319
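For orientation, the snippet below sketches the standard linear-quadratic conversions (BED and EQD2) that underlie such calculators; it is not the authors' interface, which additionally handles the linear-quadratic-linear extension, repopulation, and the Lyman NTCP model. The fractionation numbers are illustrative.

```python
# Minimal sketch of standard linear-quadratic (LQ) dose conversions, not the
# authors' full algorithm (which adds the LQ-L extension, repopulation, and NTCP).
def bed(n_fractions, dose_per_fraction, alpha_beta):
    """Biologically effective dose: BED = n*d*(1 + d/(alpha/beta))."""
    return n_fractions * dose_per_fraction * (1.0 + dose_per_fraction / alpha_beta)

def eqd2(n_fractions, dose_per_fraction, alpha_beta):
    """Equivalent dose in 2 Gy fractions: EQD2 = BED / (1 + 2/(alpha/beta))."""
    return bed(n_fractions, dose_per_fraction, alpha_beta) / (1.0 + 2.0 / alpha_beta)

# Example: 20 x 2.75 Gy for a tumour with alpha/beta = 10 Gy (illustrative values).
print(bed(20, 2.75, 10.0))    # ~70.1 Gy10
print(eqd2(20, 2.75, 10.0))   # ~58.4 Gy
```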
NASA Technical Reports Server (NTRS)
Huang, L. C. P.; Cook, R. A.
1973-01-01
Models utilizing various sub-sets of the six degrees of freedom are used in trajectory simulation. A 3-D model with only linear degrees of freedom is especially attractive, since the coefficients for the angular degrees of freedom are the most difficult to determine and the angular equations are the most time consuming for the computer to evaluate. A computer program is developed that uses three separate subsections to predict trajectories. A launch rail subsection is used until the rocket has left its launcher. The program then switches to a special 3-D section which computes motions in two linear and one angular degrees of freedom. When the rocket trims out, the program switches to the standard, three linear degrees of freedom model.
Study of non-linear deformation of vocal folds in simulations of human phonation
NASA Astrophysics Data System (ADS)
Saurabh, Shakti; Bodony, Daniel
2014-11-01
Direct numerical simulation is performed on a two-dimensional compressible, viscous fluid interacting with a non-linear, viscoelastic solid as a model for the generation of the human voice. The vocal fold (VF) tissues are modeled as multi-layered with varying stiffness in each layer and using a finite-strain Standard Linear Solid (SLS) constitutive model implemented in a quadratic finite element code and coupled to a high-order compressible Navier-Stokes solver through a boundary-fitted fluid-solid interface. The large non-linear mesh deformation is handled using an elliptic/Poisson smoothing technique. The supra-glottal flow shows asymmetry, which in turn has a coupling effect on the motion of the VF. The fully compressible simulations give direct insight into the sound produced as pressure distributions, and the vocal fold deformation helps in studying the unsteady vortical flow resulting from the fluid-structure interaction along the full phonation cycle. Supported by the National Science Foundation (CAREER Award Number 1150439).
NASA Astrophysics Data System (ADS)
Lovejoy, McKenna R.; Wickert, Mark A.
2017-05-01
A known problem with infrared imaging devices is their non-uniformity. This non-uniformity is the result of dark current and amplifier mismatch as well as the individual photo response of the detectors. To improve performance, non-uniformity correction (NUC) techniques are applied. Standard calibration techniques use linear, or piecewise linear, models to approximate the non-uniform gain and offset characteristics as well as the nonlinear response. Piecewise linear models perform better than the one- and two-point models, but in many cases require storing an unmanageable number of correction coefficients. Most nonlinear NUC algorithms use a second order polynomial to improve performance and allow for a minimal number of stored coefficients. However, advances in technology now make higher order polynomial NUC algorithms feasible. This study comprehensively tests higher order polynomial NUC algorithms targeted at short wave infrared (SWIR) imagers. Using data collected from actual SWIR cameras, the nonlinear techniques and corresponding performance metrics are compared with current linear methods, including the standard one- and two-point algorithms. Machine learning, including principal component analysis, is explored for identifying and replacing bad pixels. The data sets are analyzed and the impact of hardware implementation is discussed. Average floating point results show 30% less non-uniformity in post-corrected data when using a third order polynomial correction algorithm rather than a second order algorithm. To maximize overall performance, a trade-off analysis of polynomial order and coefficient precision is performed. Comprehensive testing across multiple data sets provides next generation model validation and performance benchmarks for higher order polynomial NUC methods.
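A hedged sketch of the core idea, per-pixel higher-order polynomial non-uniformity correction, is shown below using synthetic frames; the detector model, calibration levels, and third-order choice are assumptions for illustration, not the paper's SWIR data or pipeline.

```python
# Hedged sketch of a per-pixel polynomial non-uniformity correction: for each pixel,
# fit raw response vs. reference (uniform-source) radiance with a 3rd-order polynomial.
# Array shapes and calibration levels are illustrative, not the paper's data.
import numpy as np

n_levels, rows, cols = 8, 64, 64
reference = np.linspace(0.1, 1.0, n_levels)                     # known uniform flux levels
rng = np.random.default_rng(1)
gain = 1.0 + 0.05 * rng.standard_normal((rows, cols))
offset = 0.02 * rng.standard_normal((rows, cols))
raw = gain[None] * reference[:, None, None] ** 1.05 + offset[None]   # mildly nonlinear detector

# Fit a cubic per pixel mapping raw counts -> reference radiance.
x = raw.reshape(n_levels, -1)                                   # (levels, pixels)
coeffs = np.empty((4, rows * cols))
for p in range(rows * cols):
    coeffs[:, p] = np.polyfit(x[:, p], reference, deg=3)

def correct(frame):
    flat = frame.ravel()
    out = sum(coeffs[k] * flat ** (3 - k) for k in range(4))
    return out.reshape(frame.shape)

corrected = correct(raw[4])
print(np.std(raw[4]), np.std(corrected))    # residual non-uniformity should drop
```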
Statistical inference for template aging
NASA Astrophysics Data System (ADS)
Schuckers, Michael E.
2006-04-01
A change in classification error rates for a biometric device is often referred to as template aging. Here we offer two methods for determining whether the effect of time is statistically significant. The first of these is the use of a generalized linear model to determine if these error rates change linearly over time. This approach generalizes previous work assessing the impact of covariates using generalized linear models. The second approach uses likelihood ratio test methodology. The focus here is on statistical methods for estimation, not the underlying cause of the change in error rates over time. These methodologies are applied to data from the National Institute of Standards and Technology Biometric Score Set Release 1. The results of these applications are discussed.
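The first approach can be sketched as a binomial generalized linear model with elapsed time as a covariate; the simulated error counts below stand in for the NIST score-set data, and the variable names are hypothetical.

```python
# Minimal sketch of the GLM approach described above: a binomial GLM testing whether
# match error rates drift with time between enrollment and verification.
# The data (errors, trials, elapsed_days) are simulated, not the NIST score set.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
elapsed_days = rng.uniform(0, 365, size=200)
trials = rng.integers(50, 200, size=200)
true_rate = 1 / (1 + np.exp(-(-3.0 + 0.002 * elapsed_days)))    # slowly increasing error rate
errors = rng.binomial(trials, true_rate)

X = sm.add_constant(elapsed_days)
fit = sm.GLM(np.column_stack([errors, trials - errors]), X,
             family=sm.families.Binomial()).fit()
print(fit.summary())        # the slope's p-value tests for a linear time (aging) effect
```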
Adaptive Filtering Using Recurrent Neural Networks
NASA Technical Reports Server (NTRS)
Parlos, Alexander G.; Menon, Sunil K.; Atiya, Amir F.
2005-01-01
A method for adaptive (or, optionally, nonadaptive) filtering has been developed for estimating the states of complex process systems (e.g., chemical plants, factories, or manufacturing processes at some level of abstraction) from time series of measurements of system inputs and outputs. The method is based partly on the fundamental principles of the Kalman filter and partly on the use of recurrent neural networks. The standard Kalman filter involves an assumption of linearity of the mathematical model used to describe a process system. The extended Kalman filter accommodates a nonlinear process model but still requires linearization about the state estimate. Both the standard and extended Kalman filters involve the often unrealistic assumption that process and measurement noise are zero-mean, Gaussian, and white. In contrast, the present method does not involve any assumptions of linearity of process models or of the nature of process noise; on the contrary, few (if any) assumptions are made about process models, noise models, or the parameters of such models. In this regard, the method can be characterized as one of nonlinear, nonparametric filtering. The method exploits the unique ability of neural networks to approximate nonlinear functions. In a given case, the process model is limited mainly by limitations of the approximation ability of the neural networks chosen for that case. Moreover, despite the lack of assumptions regarding process noise, the method yields minimum-variance filters. In that they do not require statistical models of noise, the neural-network-based state filters of this method are comparable to conventional nonlinear least-squares estimators.
Log-normal frailty models fitted as Poisson generalized linear mixed models.
Hirsch, Katharina; Wienke, Andreas; Kuss, Oliver
2016-12-01
The equivalence of a survival model with a piecewise constant baseline hazard function and a Poisson regression model has been known since decades. As shown in recent studies, this equivalence carries over to clustered survival data: A frailty model with a log-normal frailty term can be interpreted and estimated as a generalized linear mixed model with a binary response, a Poisson likelihood, and a specific offset. Proceeding this way, statistical theory and software for generalized linear mixed models are readily available for fitting frailty models. This gain in flexibility comes at the small price of (1) having to fix the number of pieces for the baseline hazard in advance and (2) having to "explode" the data set by the number of pieces. In this paper we extend the simulations of former studies by using a more realistic baseline hazard (Gompertz) and by comparing the model under consideration with competing models. Furthermore, the SAS macro %PCFrailty is introduced to apply the Poisson generalized linear mixed approach to frailty models. The simulations show good results for the shared frailty model. Our new %PCFrailty macro provides proper estimates, especially in case of 4 events per piece. The suggested Poisson generalized linear mixed approach for log-normal frailty models based on the %PCFrailty macro provides several advantages in the analysis of clustered survival data with respect to more flexible modelling of fixed and random effects, exact (in the sense of non-approximate) maximum likelihood estimation, and standard errors and different types of confidence intervals for all variance parameters. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
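The data "explosion" and Poisson representation described above can be sketched as follows; the cut points and column names are hypothetical, and the cluster-level random intercept that turns this piecewise-exponential fit into the log-normal frailty model would be added with GLMM software (e.g., the %PCFrailty macro discussed above), which is omitted here.

```python
# Hedged sketch of the data "explosion" step for the piecewise-exponential (Poisson)
# representation of a survival model.  Column names and cut points are hypothetical.
# Adding a cluster-level random intercept to the Poisson model below (via GLMM
# software) would give the log-normal frailty fit described in the abstract.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n = 300
df = pd.DataFrame({
    "time": rng.exponential(5.0, n),
    "event": rng.integers(0, 2, n),
    "x": rng.standard_normal(n),
})
cuts = np.array([0.0, 2.0, 4.0, 8.0, np.inf])      # 4 pieces for the baseline hazard

rows = []
for _, r in df.iterrows():
    for j in range(len(cuts) - 1):
        start, stop = cuts[j], cuts[j + 1]
        if r.time <= start:
            break
        exposure = min(r.time, stop) - start        # time at risk in this piece
        died = int(r.event == 1 and r.time <= stop)
        rows.append({"piece": j, "x": r.x, "exposure": exposure, "died": died})
exploded = pd.DataFrame(rows)

fit = smf.glm("died ~ C(piece) + x", data=exploded,
              family=sm.families.Poisson(),
              offset=np.log(exploded["exposure"])).fit()
print(fit.summary())      # exp(coef of x) estimates the hazard ratio
```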
Non-linear assessment and deficiency of linear relationship for healthcare industry
NASA Astrophysics Data System (ADS)
Nordin, N.; Abdullah, M. M. A. B.; Razak, R. C.
2017-09-01
This paper presents the development of a non-linear service satisfaction model that assumes patients are not necessarily satisfied or dissatisfied by good or poor service delivery. Accordingly, compliment and complaint assessments are considered simultaneously. Non-linear service satisfaction instruments, called Kano-Q and Kano-SS, are developed based on the Kano model and the Theory of Quality Attributes (TQA) to translate unexpected, hidden and unspoken patient satisfaction and dissatisfaction into service quality attributes. A new Kano-Q and Kano-SS algorithm for quality attribute assessment is developed based on satisfaction impact theories and found to satisfy reliability and validity tests. The results are also validated against the standard Kano model procedure before the Kano model and Quality Function Deployment (QFD) are integrated for patient attribute and service attribute prioritization. A Kano-QFD matrix algorithm is developed to compose the prioritized complaint and compliment indexes. Finally, the prioritized service attributes are mapped to service delivery categories to determine which service delivery the healthcare provider should improve first.
Xu, Wenhai; Que Hee, Shane S
2006-01-06
The aim of this study was to identify and quantify an unknown peak in the chromatogram of a very complex mixture, a straight oil metalworking fluid (MWF). The fraction that permeated through a thin nitrile polymer membrane had less mineral oil background than the original MWF did at the retention time of the unknown peak, thus facilitating identification by total ion current (TIC) gas chromatography-mass spectrometry (GC-MS). The peak proved to be di-n-octyl disulfide (DOD) through retention time and mass spectral comparisons. Quantitation of DOD was by extracted ion chromatogram analysis of the DOD molecular ion (mass-to-charge ratio (m/z) 290), and of the m/z 71 ion for the internal standard, n-triacontane. Linear models of the area ratio (y) of these two ions versus DOD concentration showed a systematic negative bias at low concentrations, a common occurrence in analysis. The linear model of y^0.8 (from the Box-Cox power transformation) versus DOD concentration showed negligible bias from the lowest measured standard of 1.51 mg/L to the highest concentration tested at 75.5 mg/L. The intercept did not differ statistically from zero. The concentration of DOD in the MWF was then calculated to be 0.398+/-0.034% (w/w) by the internal standard method, and 0.387+/-0.036% (w/w) by the method of standard additions. These two results were not significantly different at p ≤ 0.05. The Box-Cox transformation is therefore recommended when the data for standards are non-linear.
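A minimal sketch of the recommended calibration, regressing the transformed response y^0.8 on standard concentration, is given below; the synthetic responses only mimic the qualitative curvature, and the standard levels reuse the concentration range quoted above.

```python
# Minimal sketch of the calibration approach described above: regress the Box-Cox
# transformed area ratio y**0.8 on standard concentration and check the intercept.
# The response values are synthetic; the 0.8 exponent is the paper's reported transform.
import numpy as np
from scipy import stats

conc = np.array([1.51, 3.0, 7.5, 15.0, 37.5, 75.5])           # mg/L standards
y = 0.012 * conc ** 1.25 * (1 + 0.02 * np.random.default_rng(4).standard_normal(conc.size))

lam = 0.8
fit = stats.linregress(conc, y ** lam)
print(fit.slope, fit.intercept, fit.rvalue ** 2)
# An intercept indistinguishable from zero and a high r^2 indicate the transform
# removed the low-concentration bias seen with the untransformed linear model.
```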
NASA Technical Reports Server (NTRS)
Melcher, Kevin J.
1997-01-01
The NASA Lewis Research Center is developing analytical methods and software tools to create a bridge between the controls and computational fluid dynamics (CFD) disciplines. Traditionally, control design engineers have used coarse nonlinear simulations to generate information for the design of new propulsion system controls. However, such traditional methods are not adequate for modeling the propulsion systems of complex, high-speed vehicles like the High Speed Civil Transport. To properly model the relevant flow physics of high-speed propulsion systems, one must use simulations based on CFD methods. Such CFD simulations have become useful tools for engineers that are designing propulsion system components. The analysis techniques and software being developed as part of this effort are an attempt to evolve CFD into a useful tool for control design as well. One major aspect of this research is the generation of linear models from steady-state CFD results. CFD simulations, often used during the design of high-speed inlets, yield high resolution operating point data. Under a NASA grant, the University of Akron has developed analytical techniques and software tools that use these data to generate linear models for control design. The resulting linear models have the same number of states as the original CFD simulation, so they are still very large and computationally cumbersome. Model reduction techniques have been successfully applied to reduce these large linear models by several orders of magnitude without significantly changing the dynamic response. The result is an accurate, easy to use, low-order linear model that takes less time to generate than those generated by traditional means. The development of methods for generating low-order linear models from steady-state CFD is most complete at the one-dimensional level, where software is available to generate models with different kinds of input and output variables. One-dimensional methods have been extended somewhat so that linear models can also be generated from two- and three-dimensional steady-state results. Standard techniques are adequate for reducing the order of one-dimensional CFD-based linear models. However, reduction of linear models based on two- and three-dimensional CFD results is complicated by very sparse, ill-conditioned matrices. Some novel approaches are being investigated to solve this problem.
Sánchez-Pérez, J F; Marín, F; Morales, J L; Cánovas, M; Alhama, F
2018-01-01
Mathematical models simulating different representative engineering problems (atomic dry friction, moving front problems, and elastic and solid mechanics) are presented in the form of sets of non-linear, coupled or uncoupled differential equations. For different values of the parameters that influence the solution, the problem is numerically solved by the network method, which provides all the variables of the problems. Although the model is extremely sensitive to these parameters, no assumptions are made regarding linearization of the variables. The design of the models, which are run on standard electrical circuit simulation software, is explained in detail. The network model results are compared with common numerical methods or experimental data published in the scientific literature to show the reliability of the model.
General Model of Photon-Pair Detection with an Image Sensor
NASA Astrophysics Data System (ADS)
Defienne, Hugo; Reichert, Matthew; Fleischer, Jason W.
2018-05-01
We develop an analytic model that relates intensity correlation measurements performed by an image sensor to the properties of photon pairs illuminating it. Experiments using an effective single-photon counting camera, a linear electron-multiplying charge-coupled device camera, and a standard CCD camera confirm the model. The results open the field of quantum optical sensing using conventional detectors.
Model for threading dislocations in metamorphic tandem solar cells on GaAs (001) substrates
NASA Astrophysics Data System (ADS)
Song, Yifei; Kujofsa, Tedi; Ayers, John E.
2018-02-01
We present an approximate model for the threading dislocations in III-V heterostructures and have applied this model to study the defect behavior in metamorphic triple-junction solar cells. This model represents a new approach in which the coefficient for second-order threading dislocation annihilation and coalescence reactions is considered to be determined by the length of misfit dislocations, LMD, in the structure, and we therefore refer to it as the LMD model. On the basis of this model we have compared the average threading dislocation densities in the active layers of triple junction solar cells using linearly-graded buffers of varying thicknesses as well as S-graded (complementary error function) buffers with varying thicknesses and standard deviation parameters. We have shown that the threading dislocation densities in the active regions of metamorphic tandem solar cells depend not only on the thicknesses of the buffer layers but on their compositional grading profiles. The use of S-graded buffer layers instead of linear buffers resulted in lower threading dislocation densities. Moreover, the threading dislocation densities depended strongly on the standard deviation parameters used in the S-graded buffers, with smaller values providing lower threading dislocation densities.
In response to the new, size-discriminate federal standards for Inhalable Particulate Matter, the Regional Lagrangian Model of Air Pollution (RELMAP) has been modified to include simple, linear parameterizations. As an initial step in the possible refinement, RELMAP has been subj...
Gonçalves, M A D; Bello, N M; Dritz, S S; Tokach, M D; DeRouchey, J M; Woodworth, J C; Goodband, R D
2016-05-01
Advanced methods for dose-response assessments are used to estimate the minimum concentrations of a nutrient that maximizes a given outcome of interest, thereby determining nutritional requirements for optimal performance. Contrary to standard modeling assumptions, experimental data often present a design structure that includes correlations between observations (i.e., blocking, nesting, etc.) as well as heterogeneity of error variances; either can mislead inference if disregarded. Our objective is to demonstrate practical implementation of linear and nonlinear mixed models for dose-response relationships accounting for correlated data structure and heterogeneous error variances. To illustrate, we modeled data from a randomized complete block design study to evaluate the standardized ileal digestible (SID) Trp:Lys ratio dose-response on G:F of nursery pigs. A base linear mixed model was fitted to explore the functional form of G:F relative to Trp:Lys ratios and assess model assumptions. Next, we fitted 3 competing dose-response mixed models to G:F, namely a quadratic polynomial (QP) model, a broken-line linear (BLL) ascending model, and a broken-line quadratic (BLQ) ascending model, all of which included heteroskedastic specifications, as dictated by the base model. The GLIMMIX procedure of SAS (version 9.4) was used to fit the base and QP models and the NLMIXED procedure was used to fit the BLL and BLQ models. We further illustrated the use of a grid search of initial parameter values to facilitate convergence and parameter estimation in nonlinear mixed models. Fit between competing dose-response models was compared using a maximum likelihood-based Bayesian information criterion (BIC). The QP, BLL, and BLQ models fitted on G:F of nursery pigs yielded BIC values of 353.7, 343.4, and 345.2, respectively, thus indicating a better fit of the BLL model. The BLL breakpoint estimate of the SID Trp:Lys ratio was 16.5% (95% confidence interval [16.1, 17.0]). Problems with the estimation process rendered results from the BLQ model questionable. Importantly, accounting for heterogeneous variance enhanced inferential precision as the breadth of the confidence interval for the mean breakpoint decreased by approximately 44%. In summary, the article illustrates the use of linear and nonlinear mixed models for dose-response relationships accounting for heterogeneous residual variances, discusses important diagnostics and their implications for inference, and provides practical recommendations for computational troubleshooting.
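As a simplified illustration of the broken-line linear (BLL) dose-response form, the sketch below fits a linear-plateau curve by ordinary nonlinear least squares; it ignores the block structure and heterogeneous variances that the mixed-model analysis above accounts for, and the data are synthetic.

```python
# Hedged sketch of a broken-line linear (linear-plateau) dose-response fit, ignoring
# the random effects and heteroskedasticity handled by the mixed models above.
# Data values are synthetic, not the nursery-pig G:F measurements.
import numpy as np
from scipy.optimize import curve_fit

def broken_line(x, plateau, slope, breakpoint):
    # Ascending line below the breakpoint, flat plateau above it.
    return np.where(x < breakpoint, plateau - slope * (breakpoint - x), plateau)

rng = np.random.default_rng(5)
trp_lys = np.repeat(np.linspace(14.0, 19.0, 6), 8)             # SID Trp:Lys ratios, %
gf = broken_line(trp_lys, 0.70, 0.02, 16.5) + rng.normal(0, 0.01, trp_lys.size)

popt, pcov = curve_fit(broken_line, trp_lys, gf, p0=[0.7, 0.01, 16.0])
se = np.sqrt(np.diag(pcov))
print(f"breakpoint estimate: {popt[2]:.2f} +/- {1.96*se[2]:.2f} (approx. 95% CI half-width)")
```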
Statistical method to compare massive parallel sequencing pipelines.
Elsensohn, M H; Leblay, N; Dimassi, S; Campan-Fournier, A; Labalme, A; Roucher-Boulez, F; Sanlaville, D; Lesca, G; Bardel, C; Roy, P
2017-03-01
Today, sequencing is frequently carried out by Massive Parallel Sequencing (MPS), which cuts sequencing time and expenses drastically. Nevertheless, Sanger sequencing remains the main validation method to confirm the presence of variants. The analysis of MPS data involves the development of several bioinformatic tools, academic or commercial. We present here a statistical method to compare MPS pipelines and test it in a comparison between an academic (BWA-GATK) and a commercial pipeline (TMAP-NextGENe®), with and without reference to a gold standard (here, Sanger sequencing), on a panel of 41 genes in 43 epileptic patients. This method used the number of variants to fit log-linear models for pairwise agreements between pipelines. To assess the heterogeneity of the margins and the odds ratios of agreement, four log-linear models were used: a full model, a homogeneous-margin model, a model with a single odds ratio for all patients, and a model with a single intercept. Then a log-linear mixed model was fitted considering the biological variability as a random effect. Among the 390,339 base-pairs sequenced, TMAP-NextGENe® and BWA-GATK found, on average, 2253.49 and 1857.14 variants (single nucleotide variants and indels), respectively. Against the gold standard, the pipelines had similar sensitivities (63.47% vs. 63.42%) and close but significantly different specificities (99.57% vs. 99.65%; p < 0.001). Same-trend results were obtained when only single nucleotide variants were considered (99.98% specificity and 76.81% sensitivity for both pipelines). The method thus allows pipeline comparison and selection. It is generalizable to all types of MPS data and all pipelines.
Nonlinear resonances in linear segmented Paul trap of short central segment.
Kłosowski, Łukasz; Piwiński, Mariusz; Pleskacz, Katarzyna; Wójtewicz, Szymon; Lisak, Daniel
2018-03-23
A linear segmented Paul trap system has been prepared for ion mass spectroscopy experiments. A non-standard approach to the stability of trapped ions is applied to explain some effects observed with ensembles of calcium ions. The trap's stability diagram is extended to a 3-dimensional one using an additional ∆a parameter besides the standard q and a stability parameters. Nonlinear resonances in (q,∆a) diagrams are observed and described with a proposed model. The resonance lines have been identified using simple simulations and by comparing the numerical and experimental results. The phenomenon can be applied in electron-impact ionization experiments for mass-identification of obtained ions or purification of their ensembles. This article is protected by copyright. All rights reserved.
2013-01-01
Background This study aims to improve the accuracy of Bioelectrical Impedance Analysis (BIA) prediction equations for estimating fat free mass (FFM) of the elderly by using a non-linear Back Propagation Artificial Neural Network (BP-ANN) model and to compare the predictive accuracy with the linear regression model, using dual-energy X-ray absorptiometry (DXA) as the reference method. Methods A total of 88 Taiwanese elderly adults were recruited in this study as subjects. Linear regression equations and a BP-ANN prediction equation were developed using impedances and other anthropometrics for predicting the reference FFM measured by DXA (FFMDXA) in 36 male and 26 female Taiwanese elderly adults. The FFM estimated by BIA prediction equations using the traditional linear regression model (FFMLR) and the BP-ANN model (FFMANN) were compared to the FFMDXA. Measurements from an additional 26 elderly adults were used to validate the accuracy of the predictive models. Results The results showed the significant predictors were impedance, gender, age, height and weight in the developed FFMLR linear model (LR) for predicting FFM (coefficient of determination, r2 = 0.940; standard error of estimate (SEE) = 2.729 kg; root mean square error (RMSE) = 2.571 kg, P < 0.001). The above predictors were set as the variables of the input layer by using five neurons in the BP-ANN model (r2 = 0.987 with a SD = 1.192 kg and relatively lower RMSE = 1.183 kg), which had greater (improved) accuracy for estimating FFM when compared with the linear model. The results showed that a better agreement existed between FFMANN and FFMDXA than between FFMLR and FFMDXA. Conclusion When comparing the performance of the developed prediction equations for estimating the reference FFMDXA, the linear model has a lower r2 with a larger SD in predictive results than the BP-ANN model, which indicates that the ANN model is more suitable for estimating FFM. PMID:23388042
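A hedged sketch of the model comparison, a linear regression versus a small back-propagation network with five hidden neurons, is shown below with scikit-learn on simulated anthropometric data; the feature set mirrors the predictors named above, but the values and the resulting errors are illustrative only.

```python
# Minimal sketch comparing a linear model with a small back-propagation network for
# FFM prediction.  Features mirror those named above (impedance, sex, age, height,
# weight); the data are simulated, not the Taiwanese elderly cohort.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(6)
n = 88
X = np.column_stack([
    rng.normal(500, 60, n),      # impedance, ohm
    rng.integers(0, 2, n),       # sex
    rng.normal(72, 6, n),        # age, years
    rng.normal(160, 8, n),       # height, cm
    rng.normal(62, 9, n),        # weight, kg
])
ffm = 0.5 * X[:, 3] ** 2 / X[:, 0] + 5 * X[:, 1] + 0.2 * X[:, 4] - 10 + rng.normal(0, 1.5, n)

X_tr, X_te, y_tr, y_te = train_test_split(X, ffm, test_size=26, random_state=0)
lr = LinearRegression().fit(X_tr, y_tr)
ann = make_pipeline(StandardScaler(),
                    MLPRegressor(hidden_layer_sizes=(5,), max_iter=5000,
                                 random_state=0)).fit(X_tr, y_tr)

for name, model in [("linear", lr), ("BP-ANN", ann)]:
    rmse = mean_squared_error(y_te, model.predict(X_te)) ** 0.5
    print(name, f"RMSE = {rmse:.2f} kg")
```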
NASA Astrophysics Data System (ADS)
Grotti, Marco; Abelmoschi, Maria Luisa; Soggia, Francesco; Tiberiade, Christian; Frache, Roberto
2000-12-01
The multivariate effects of Na, K, Mg and Ca as nitrates on the electrothermal atomisation of manganese, cadmium and iron were studied by multiple linear regression modelling. Since the models proved to efficiently predict the effects of the considered matrix elements in a wide range of concentrations, they were applied to correct the interferences occurring in the determination of trace elements in seawater after pre-concentration of the analytes. In order to obtain a statistically significant number of samples, a large volume of the certified seawater reference materials CASS-3 and NASS-3 was treated with Chelex-100 resin; then, the chelating resin was separated from the solution, divided into several sub-samples, each of them was eluted with nitric acid and analysed by electrothermal atomic absorption spectrometry (for trace element determinations) and inductively coupled plasma optical emission spectrometry (for matrix element determinations). To minimise any other systematic error besides that due to matrix effects, accuracy of the pre-concentration step and contamination levels of the procedure were checked by inductively coupled plasma mass spectrometric measurements. Analytical results obtained by applying the multiple linear regression models were compared with those obtained with other calibration methods, such as external calibration using acid-based standards, external calibration using matrix-matched standards and the analyte addition technique. Empirical models proved to efficiently reduce interferences occurring in the analysis of real samples, allowing an improvement of accuracy better than for other calibration methods.
Clark, M.P.; Rupp, D.E.; Woods, R.A.; Tromp-van Meerveld; Peters, N.E.; Freer, J.E.
2009-01-01
The purpose of this paper is to identify simple connections between observations of hydrological processes at the hillslope scale and observations of the response of watersheds following rainfall, with a view to building a parsimonious model of catchment processes. The focus is on the well-studied Panola Mountain Research Watershed (PMRW), Georgia, USA. Recession analysis of discharge Q shows that while the relationship between dQ/dt and Q is approximately consistent with a linear reservoir for the hillslope, there is a deviation from linearity that becomes progressively larger with increasing spatial scale. To account for these scale differences conceptual models of streamflow recession are defined at both the hillslope scale and the watershed scale, and an assessment made as to whether models at the hillslope scale can be aggregated to be consistent with models at the watershed scale. Results from this study show that a model with parallel linear reservoirs provides the most plausible explanation (of those tested) for both the linear hillslope response to rainfall and non-linear recession behaviour observed at the watershed outlet. In this model each linear reservoir is associated with a landscape type. The parallel reservoir model is consistent with both geochemical analyses of hydrological flow paths and water balance estimates of bedrock recharge. Overall, this study demonstrates that standard approaches of using recession analysis to identify the functional form of storage-discharge relationships identify model structures that are inconsistent with field evidence, and that recession analysis at multiple spatial scales can provide useful insights into catchment behaviour. Copyright © 2008 John Wiley & Sons, Ltd.
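The parallel-linear-reservoir explanation can be illustrated numerically: each reservoir alone gives a linear dQ/dt-Q relation, yet their summed outflow produces the apparent non-linearity seen at the outlet. The recession constants and storages below are assumptions, not Panola calibrations.

```python
# Hedged sketch of the parallel-linear-reservoir idea: two linear stores with
# different time constants drain side by side; although each obeys dQ/dt = -k*Q,
# the summed outflow shows the non-linear dQ/dt vs Q behaviour seen at the outlet.
import numpy as np

k = np.array([0.05, 0.5])          # recession constants, 1/day (slow and fast landscapes)
S0 = np.array([100.0, 20.0])       # initial storages, mm
t = np.linspace(0.0, 60.0, 601)    # days

Q_components = k[None, :] * S0[None, :] * np.exp(-k[None, :] * t[:, None])
Q = Q_components.sum(axis=1)                       # watershed outflow
dQdt = np.gradient(Q, t)

# log-log slope of -dQ/dt vs Q; a single linear reservoir would give slope = 1
mask = Q > 1e-6
slope = np.polyfit(np.log(Q[mask]), np.log(-dQdt[mask]), 1)[0]
print(f"apparent power-law exponent of -dQ/dt vs Q: {slope:.2f}")
```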
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bose, Benjamin; Koyama, Kazuya, E-mail: benjamin.bose@port.ac.uk, E-mail: kazuya.koyama@port.ac.uk
We develop a code to produce the power spectrum in redshift space based on standard perturbation theory (SPT) at 1-loop order. The code can be applied to a wide range of modified gravity and dark energy models using a recently proposed numerical method by A. Taruya to find the SPT kernels. This includes Horndeski's theory with a general potential, which accommodates both chameleon and Vainshtein screening mechanisms and provides a non-linear extension of the effective theory of dark energy up to the third order. Focus is on a recent non-linear model of the redshift space power spectrum which has been shown to model the anisotropy very well at relevant scales for the SPT framework, as well as capturing relevant non-linear effects typical of modified gravity theories. We provide consistency checks of the code against established results and elucidate its application within the light of upcoming high precision RSD data.
Mulder, Han A; Rönnegård, Lars; Fikse, W Freddy; Veerkamp, Roel F; Strandberg, Erling
2013-07-04
Genetic variation for environmental sensitivity indicates that animals are genetically different in their response to environmental factors. Environmental factors are either identifiable (e.g. temperature) and called macro-environmental or unknown and called micro-environmental. The objectives of this study were to develop a statistical method to estimate genetic parameters for macro- and micro-environmental sensitivities simultaneously, to investigate bias and precision of resulting estimates of genetic parameters and to develop and evaluate use of Akaike's information criterion using h-likelihood to select the best fitting model. We assumed that genetic variation in macro- and micro-environmental sensitivities is expressed as genetic variance in the slope of a linear reaction norm and environmental variance, respectively. A reaction norm model to estimate genetic variance for macro-environmental sensitivity was combined with a structural model for residual variance to estimate genetic variance for micro-environmental sensitivity using a double hierarchical generalized linear model in ASReml. Akaike's information criterion was constructed as model selection criterion using approximated h-likelihood. Populations of sires with large half-sib offspring groups were simulated to investigate bias and precision of estimated genetic parameters. Designs with 100 sires, each with at least 100 offspring, are required to have standard deviations of estimated variances lower than 50% of the true value. When the number of offspring increased, standard deviations of estimates across replicates decreased substantially, especially for genetic variances of macro- and micro-environmental sensitivities. Standard deviations of estimated genetic correlations across replicates were quite large (between 0.1 and 0.4), especially when sires had few offspring. Practically, no bias was observed for estimates of any of the parameters. Using Akaike's information criterion the true genetic model was selected as the best statistical model in at least 90% of 100 replicates when the number of offspring per sire was 100. Application of the model to lactation milk yield in dairy cattle showed that genetic variance for micro- and macro-environmental sensitivities existed. The algorithm and model selection criterion presented here can contribute to better understand genetic control of macro- and micro-environmental sensitivities. Designs or datasets should have at least 100 sires each with 100 offspring.
Linear stability analysis of detonations via numerical computation and dynamic mode decomposition
NASA Astrophysics Data System (ADS)
Kabanov, Dmitry I.; Kasimov, Aslan R.
2018-03-01
We introduce a new method to investigate linear stability of gaseous detonations that is based on an accurate shock-fitting numerical integration of the linearized reactive Euler equations with a subsequent analysis of the computed solution via the dynamic mode decomposition. The method is applied to the detonation models based on both the standard one-step Arrhenius kinetics and two-step exothermic-endothermic reaction kinetics. Stability spectra for all cases are computed and analyzed. The new approach is shown to be a viable alternative to the traditional normal-mode analysis used in detonation theory.
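A minimal sketch of the dynamic mode decomposition step, applied here to a synthetic snapshot matrix rather than a shock-fitted linearized detonation solution, is given below; modes whose continuous-time growth rates are positive would be flagged as unstable.

```python
# Minimal sketch of exact dynamic mode decomposition (DMD) applied to a bank of
# solution snapshots; eigenvalues with positive growth rate indicate instability.
# The snapshot matrix here is synthetic, not a detonation solution.
import numpy as np

rng = np.random.default_rng(7)
dt = 0.01
t = np.arange(0, 5, dt)
# Synthetic field: one growing oscillatory mode plus one decaying mode plus noise.
x = np.linspace(0, 1, 200)[:, None]
data = (np.sin(3 * np.pi * x) * np.exp((0.2 + 5j) * t).real
        + np.cos(np.pi * x) * np.exp(-0.5 * t)
        + 1e-3 * rng.standard_normal((200, t.size)))

X, Y = data[:, :-1], data[:, 1:]
U, s, Vh = np.linalg.svd(X, full_matrices=False)
r = 10                                             # truncation rank
U, s, Vh = U[:, :r], s[:r], Vh[:r]
Atilde = U.conj().T @ Y @ Vh.conj().T @ np.diag(1.0 / s)
eigvals, W = np.linalg.eig(Atilde)
growth_rates = np.log(eigvals.astype(complex)).real / dt   # continuous-time growth rates
print(np.sort(growth_rates)[-3:])                  # leading (most unstable) rates
```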
First direct constraints on Fierz interference in free-neutron β decay
NASA Astrophysics Data System (ADS)
Hickerson, K. P.; Sun, X.; Bagdasarova, Y.; Bravo-Berguño, D.; Broussard, L. J.; Brown, M. A.-P.; Carr, R.; Currie, S.; Ding, X.; Filippone, B. W.; García, A.; Geltenbort, P.; Hoagland, J.; Holley, A. T.; Hong, R.; Ito, T. M.; Knecht, A.; Liu, C.-Y.; Liu, J. L.; Makela, M.; Mammei, R. R.; Martin, J. W.; Melconian, D.; Mendenhall, M. P.; Moore, S. D.; Morris, C. L.; Pattie, R. W.; Pérez Galván, A.; Picker, R.; Pitt, M. L.; Plaster, B.; Ramsey, J. C.; Rios, R.; Saunders, A.; Seestrom, S. J.; Sharapov, E. I.; Sondheim, W. E.; Tatar, E.; Vogelaar, R. B.; VornDick, B.; Wrede, C.; Young, A. R.; Zeck, B. A.; UCNA Collaboration
2017-10-01
Precision measurements of free-neutron β decay have been used to precisely constrain our understanding of the weak interaction. However, the neutron Fierz interference term b_n, which is particularly sensitive to beyond-standard-model tensor currents at the TeV scale, has thus far eluded measurement. Here we report the first direct constraints on this term, finding b_n = 0.067 ± 0.005 (stat) −0.061/+0.090 (sys), consistent with the standard model. The uncertainty is dominated by absolute energy reconstruction and the linearity of the β spectrometer energy response.
Stochastic models for atomic clocks
NASA Technical Reports Server (NTRS)
Barnes, J. A.; Jones, R. H.; Tryon, P. V.; Allan, D. W.
1983-01-01
For the atomic clocks used in the National Bureau of Standards Time Scales, an adequate model is the superposition of white FM, random walk FM, and linear frequency drift for times longer than about one minute. The model was tested on several clocks using maximum likelihood techniques for parameter estimation and the residuals were acceptably random. Conventional diagnostics indicate that additional model elements contribute no significant improvement to the model even at the expense of the added model complexity.
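The clock model can be sketched as a simple simulation, white FM plus random-walk FM plus linear frequency drift, integrated to a time error; the noise amplitudes and drift rate below are illustrative assumptions, not fitted NBS clock parameters.

```python
# Hedged sketch of the clock model described above: fractional-frequency noise as
# white FM plus random-walk FM plus a linear frequency drift, integrated to time error.
import numpy as np

rng = np.random.default_rng(8)
tau0 = 60.0                      # basic sampling interval, s (about one minute)
n = 10000
white_fm = 1e-12 * rng.standard_normal(n)
random_walk_fm = 1e-14 * np.cumsum(rng.standard_normal(n))
drift = 1e-16 * np.arange(n)     # linear frequency drift per sample

y = white_fm + random_walk_fm + drift        # fractional frequency offsets
phase_error = np.cumsum(y) * tau0            # accumulated time error, s
print(f"time error after {n*tau0/86400:.1f} days: {phase_error[-1]*1e6:.3f} microseconds")
```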
Croft, Stephen; Burr, Thomas Lee; Favalli, Andrea; ...
2015-12-10
We report that the declared linear density of 238U and 235U in fresh low enriched uranium light water reactor fuel assemblies can be verified for nuclear safeguards purposes using a neutron coincidence counter collar in passive and active mode, respectively. The active mode calibration of the Uranium Neutron Collar – Light water reactor fuel (UNCL) instrument is normally performed using a non-linear fitting technique. The fitting technique relates the measured neutron coincidence rate (the predictor) to the linear density of 235U (the response) in order to estimate model parameters of the nonlinear Padé equation, which traditionally is used to model the calibration data. Alternatively, following a simple data transformation, the fitting can also be performed using standard linear fitting methods. This paper compares performance of the nonlinear technique to the linear technique, using a range of possible error variance magnitudes in the measured neutron coincidence rate. We develop the required formalism and then apply the traditional (nonlinear) and alternative (linear) approaches to the same experimental and corresponding simulated representative datasets. Lastly, we find that, in this context, because of the magnitude of the errors in the predictor, it is preferable not to transform to a linear model, and it is preferable not to adjust for the errors in the predictor when inferring the model parameters.
Yang, Xiaowei; Nie, Kun
2008-03-15
Longitudinal data sets in biomedical research often consist of large numbers of repeated measures. In many cases, the trajectories do not look globally linear or polynomial, making it difficult to summarize the data or test hypotheses using standard longitudinal data analysis based on various linear models. An alternative approach is to apply the approaches of functional data analysis, which directly target the continuous nonlinear curves underlying discretely sampled repeated measures. For the purposes of data exploration, many functional data analysis strategies have been developed based on various schemes of smoothing, but fewer options are available for making causal inferences regarding predictor-outcome relationships, a common task seen in hypothesis-driven medical studies. To compare groups of curves, two testing strategies with good power have been proposed for high-dimensional analysis of variance: the Fourier-based adaptive Neyman test and the wavelet-based thresholding test. Using a smoking cessation clinical trial data set, this paper demonstrates how to extend the strategies for hypothesis testing into the framework of functional linear regression models (FLRMs) with continuous functional responses and categorical or continuous scalar predictors. The analysis procedure consists of three steps: first, apply the Fourier or wavelet transform to the original repeated measures; then fit a multivariate linear model in the transformed domain; and finally, test the regression coefficients using either adaptive Neyman or thresholding statistics. Since a FLRM can be viewed as a natural extension of the traditional multiple linear regression model, the development of this model and computational tools should enhance the capacity of medical statistics for longitudinal data.
Human Language Technology: Opportunities and Challenges
2005-01-01
because of the connections to and reliance on signal processing. Audio diarization critically includes indexing of speakers [12], since speaker ...to reduce inter-speaker variability in training. Standard techniques include vocal-tract length normalization, adaptation of acoustic models using...maximum likelihood linear regression (MLLR), and speaker-adaptive training based on MLLR. The acoustic models are mixtures of Gaussians, typically with
Nallikuzhy, Jiss J; Dandapat, S
2017-06-01
In this work, a new patient-specific approach to enhance the spatial resolution of ECG is proposed and evaluated. The proposed model transforms a three-lead ECG into a standard twelve-lead ECG, thereby enhancing its spatial resolution. The three leads used for prediction are obtained from the standard twelve-lead ECG. The proposed model takes advantage of the improved inter-lead correlation in the wavelet domain. Since the model is patient-specific, it also selects the optimal predictor leads for a given patient using a lead selection algorithm. The lead selection algorithm is based on a new diagnostic similarity score which computes the diagnostic closeness between the original and the spatially enhanced leads. Standard closeness measures are used to assess the performance of the model. The similarity in diagnostic information between the original and the spatially enhanced leads is evaluated using various diagnostic measures. Repeatability and diagnosability analyses are performed to quantify the applicability of the model. A comparison of the proposed model is performed with existing models that transform a subset of the standard twelve-lead ECG into the standard twelve-lead ECG. From the analysis of the results, it is evident that the proposed model preserves diagnostic information better compared to other models. Copyright © 2017 Elsevier Ltd. All rights reserved.
Al-Ekrish, Asma'a A; Al-Shawaf, Reema; Schullian, Peter; Al-Sadhan, Ra'ed; Hörmann, Romed; Widmann, Gerlig
2016-10-01
To assess the comparability of linear measurements of dental implant sites recorded from multidetector computed tomography (MDCT) images obtained using standard-dose filtered backprojection (FBP) technique with those from various ultralow doses combined with FBP, adaptive statistical iterative reconstruction (ASIR), and model-based iterative reconstruction (MBIR) techniques. The results of the study may contribute to MDCT dose optimization for dental implant site imaging. MDCT scans of two cadavers were acquired using a standard reference protocol and four ultralow-dose test protocols (TP). The volume CT dose index of the different dose protocols ranged from a maximum of 30.48-36.71 mGy to a minimum of 0.44-0.53 mGy. All scans were reconstructed using FBP, ASIR-50, ASIR-100, and MBIR, and either a bone or standard reconstruction kernel. Linear measurements were recorded from standardized images of the jaws by two examiners. Intra- and inter-examiner reliability of the measurements were analyzed using Cronbach's alpha and inter-item correlation. Agreement between the measurements obtained with the reference-dose/FBP protocol and each of the test protocols was determined with Bland-Altman plots and linear regression. Statistical significance was set at a P-value of 0.05. No systematic variation was found between the linear measurements obtained with the reference protocol and the other imaging protocols. The only exceptions were TP3/ASIR-50 (bone kernel) and TP4/ASIR-100 (bone and standard kernels). The mean measurement differences between these three protocols and the reference protocol were within ±0.1 mm, with the 95 % confidence interval limits being within the range of ±1.15 mm. A nearly 97.5 % reduction in dose did not significantly affect the height and width measurements of edentulous jaws regardless of the reconstruction algorithm used.
Single-phase power distribution system power flow and fault analysis
NASA Technical Reports Server (NTRS)
Halpin, S. M.; Grigsby, L. L.
1992-01-01
Alternative methods for power flow and fault analysis of single-phase distribution systems are presented. The algorithms for both power flow and fault analysis utilize a generalized approach to network modeling. The generalized admittance matrix, formed using elements of linear graph theory, is an accurate network model for all possible single-phase network configurations. Unlike the standard nodal admittance matrix formulation algorithms, the generalized approach uses generalized component models for the transmission line and transformer. The standard assumption of a common node voltage reference point is not required to construct the generalized admittance matrix. Therefore, truly accurate simulation results can be obtained for networks that cannot be modeled using traditional techniques.
Alonso, Rodrigo; Jenkins, Elizabeth E.; Manohar, Aneesh V.
2016-08-17
The S-matrix of a quantum field theory is unchanged by field redefinitions, and so it only depends on geometric quantities such as the curvature of field space. Whether the Higgs multiplet transforms linearly or non-linearly under electroweak symmetry is a subtle question since one can make a coordinate change to convert a field that transforms linearly into one that transforms non-linearly. Renormalizability of the Standard Model (SM) does not depend on the choice of scalar fields or whether the scalar fields transform linearly or non-linearly under the gauge group, but only on the geometric requirement that the scalar field manifold M is flat. Standard Model Effective Field Theory (SMEFT) and Higgs Effective Field Theory (HEFT) have curved M, since they parametrize deviations from the flat SM case. We show that the HEFT Lagrangian can be written in SMEFT form if and only if M has an SU(2)_L × U(1)_Y invariant fixed point. Experimental observables in HEFT depend on local geometric invariants of M such as sectional curvatures, which are of order 1/Λ², where Λ is the EFT scale. We give explicit expressions for these quantities in terms of the structure constants for a general G → H symmetry breaking pattern. The one-loop radiative correction in HEFT is determined using a covariant expansion which preserves manifest invariance of M under coordinate redefinitions. The formula for the radiative correction is simple when written in terms of the curvature of M and the gauge curvature field strengths. We also extend the CCWZ formalism to non-compact groups, and generalize the HEFT curvature computation to the case of multiple singlet scalar fields.
NASA Astrophysics Data System (ADS)
Mimasu, Ken; Sanz, Verónica; Williams, Ciaran
2016-08-01
We present predictions for the associated production of a Higgs boson at NLO+PS accuracy, including the effect of anomalous interactions between the Higgs and gauge bosons. We present our results in different frameworks, one in which the interaction vertex between the Higgs boson and Standard Model W and Z bosons is parameterized in terms of general Lorentz structures, and one in which Electroweak symmetry breaking is manifestly linear and the resulting operators arise through a six-dimensional effective field theory framework. We present analytic calculations of the Standard Model and Beyond the Standard Model contributions, and discuss the phenomenological impact of the higher order pieces. Our results are implemented in the NLO Monte Carlo program MCFM, and interfaced to shower Monte Carlos through the Powheg box framework.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Casarini, L.; Bonometto, S.A.; Tessarotto, E.
2016-08-01
We discuss an extension of the Coyote emulator to predict non-linear matter power spectra of dark energy (DE) models with a scale factor dependent equation of state of the form w = w_0 + (1 - a) w_a. The extension is based on the mapping rule between non-linear spectra of DE models with a constant equation of state and those with a time varying one, originally introduced in ref. [40]. Using a series of N-body simulations we show that the spectral equivalence is accurate to sub-percent level across the same range of modes and redshift covered by the Coyote suite. Thus, the extended emulator provides a very efficient and accurate tool to predict non-linear power spectra for DE models with the w_0-w_a parametrization. According to the same criteria we have developed a numerical code that we have implemented in a dedicated module for the CAMB code, which can be used in combination with the Coyote Emulator in likelihood analyses of non-linear matter power spectrum measurements. All codes can be found at https://github.com/luciano-casarini/pkequal.
Growth rate in the dynamical dark energy models.
Avsajanishvili, Olga; Arkhipova, Natalia A; Samushia, Lado; Kahniashvili, Tina
Dark energy models with a slowly rolling cosmological scalar field provide a popular alternative to the standard, time-independent cosmological constant model. We study the simultaneous evolution of background expansion and growth in the scalar field model with the Ratra-Peebles self-interaction potential. We use recent measurements of the linear growth rate and the baryon acoustic oscillation peak positions to constrain the model parameter α that describes the steepness of the scalar field potential.
Statistical approach to Higgs boson couplings in the standard model effective field theory
NASA Astrophysics Data System (ADS)
Murphy, Christopher W.
2018-01-01
We perform a parameter fit in the standard model effective field theory (SMEFT) with an emphasis on using regularized linear regression to tackle the issue of the large number of parameters in the SMEFT. In regularized linear regression, a positive definite function of the parameters of interest is added to the usual cost function. A cross-validation is performed to try to determine the optimal value of the regularization parameter to use, but it selects the standard model (SM) as the best model to explain the measurements. Nevertheless, as proof of principle of this technique, we apply it to fitting Higgs boson signal strengths in the SMEFT, including the latest Run-2 results. Results are presented in terms of the eigensystem of the covariance matrix of the least squares estimators, as it has a degree of model independence to it. We find several results in this initial work: the SMEFT predicts the total width of the Higgs boson to be consistent with the SM prediction; the ATLAS and CMS experiments at the LHC are currently sensitive to non-resonant double Higgs boson production. Constraints are derived on the viable parameter space for electroweak baryogenesis in the SMEFT, reinforcing the notion that a first order phase transition requires fairly low-scale beyond-the-SM physics. Finally, we study which future experimental measurements would give the most improvement on the global constraints on the Higgs sector of the SMEFT.
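A hedged sketch of the fitting strategy, regularized (ridge) linear regression with cross-validation over the regularization strength, is given below with scikit-learn; the design matrix standing in for the linearized dependence of signal strengths on Wilson coefficients is synthetic, so only the mechanics, not the physics conclusions, are illustrated.

```python
# Minimal sketch of regularized linear regression with cross-validation to pick the
# regularization strength, as in the fitting strategy described above.  The design
# matrix and measurements are synthetic stand-ins, not LHC data.
import numpy as np
from sklearn.linear_model import RidgeCV

rng = np.random.default_rng(9)
n_meas, n_coeffs = 30, 60                       # more Wilson coefficients than measurements
X = rng.standard_normal((n_meas, n_coeffs))     # linearized dependence of observables on coefficients
true = np.zeros(n_coeffs)                       # SM point: all coefficients vanish
y = X @ true + 0.1 * rng.standard_normal(n_meas)

fit = RidgeCV(alphas=np.logspace(-3, 3, 25), cv=5).fit(X, y)
print("selected regularization strength:", fit.alpha_)
print("largest fitted |coefficient|:", np.abs(fit.coef_).max())
# With data consistent with the SM, cross-validation favours strong shrinkage toward zero.
```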
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gearhart, Jared Lee; Adair, Kristin Lynn; Durfee, Justin David.
When developing linear programming models, issues such as budget limitations, customer requirements, or licensing may preclude the use of commercial linear programming solvers. In such cases, one option is to use an open-source linear programming solver. A survey of linear programming tools was conducted to identify potential open-source solvers. From this survey, four open-source solvers were tested using a collection of linear programming test problems and the results were compared to IBM ILOG CPLEX Optimizer (CPLEX) [1], an industry standard. The solvers considered were: COIN-OR Linear Programming (CLP) [2], [3], GNU Linear Programming Kit (GLPK) [4], lp_solve [5] and Modular In-core Nonlinear Optimization System (MINOS) [6]. As no open-source solver outperforms CPLEX, this study demonstrates the power of commercial linear programming software. CLP was found to be the top performing open-source solver considered in terms of capability and speed. GLPK also performed well but cannot match the speed of CLP or CPLEX. lp_solve and MINOS were considerably slower and encountered issues when solving several test problems.
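As a purely illustrative example of the kind of problem such solvers handle, the sketch below solves a small LP through SciPy's built-in interface; the objective and constraints are made up, and SciPy's HiGHS backend merely stands in for the open-source solvers surveyed above.

```python
from scipy.optimize import linprog

# Minimize c @ x subject to A_ub @ x <= b_ub and x >= 0.
# Toy problem (assumed for illustration): two products, two resource limits.
c = [-3.0, -5.0]                 # maximize 3*x1 + 5*x2 -> minimize the negative
A_ub = [[1.0, 2.0],              # resource 1 usage
        [3.0, 1.0]]              # resource 2 usage
b_ub = [14.0, 15.0]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)],
              method="highs")
print(res.x, -res.fun)           # optimal plan and objective value
```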
NASA Astrophysics Data System (ADS)
Made Tirta, I.; Anggraeni, Dian
2018-04-01
Statistical models have been developed rapidly in various directions to accommodate various types of data. Data collected from longitudinal, repeated-measures or clustered designs (whether continuous, binary, count or ordinal) are likely to be correlated. Therefore, statistical models for independent responses, such as the Generalized Linear Model (GLM) and the Generalized Additive Model (GAM), are not appropriate. Several models are available for correlated responses, including GEEs (Generalized Estimating Equations) for marginal models and various mixed-effects models, such as GLMMs (Generalized Linear Mixed Models) and HGLMs (Hierarchical Generalized Linear Models), for subject-specific models. These models are available in the free open-source software R, but they can only be accessed through a command-line interface (using scripts). On the other hand, most applied researchers rely heavily on menu-based or Graphical User Interfaces (GUIs). Using the Shiny framework, we develop a standard pull-down-menu Web-GUI that unifies most models for correlated responses. The Web-GUI accommodates almost all needed features. It enables users to fit and compare various models for repeated-measures data (GEE, GLMM, HGLM, GEE for nominal responses) much more easily through online menus. This paper discusses the features of the Web-GUI and illustrates their use. In general, we find that GEE, GLMM and HGLM gave very similar results.
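The Web-GUI itself is R/Shiny-based and is not reproduced here; as a language-agnostic illustration of one of the underlying marginal models (a GEE with an exchangeable working correlation for repeated measures), a minimal Python sketch using statsmodels is given below. The variable names and the simulated data frame are assumptions for illustration.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)

# Assumed long-format repeated-measures data: 50 subjects x 4 visits.
n_subj, n_visit = 50, 4
df = pd.DataFrame({
    "subject": np.repeat(np.arange(n_subj), n_visit),
    "visit": np.tile(np.arange(n_visit), n_subj),
})
df["y"] = rng.poisson(lam=np.exp(0.2 + 0.1 * df["visit"]))

# Marginal (population-averaged) model for correlated count responses.
model = sm.GEE.from_formula(
    "y ~ visit", groups="subject", data=df,
    family=sm.families.Poisson(),
    cov_struct=sm.cov_struct.Exchangeable(),
)
print(model.fit().summary())
```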
A generalized linear integrate-and-fire neural model produces diverse spiking behaviors.
Mihalaş, Stefan; Niebur, Ernst
2009-03-01
For simulations of neural networks, there is a trade-off between the size of the network that can be simulated and the complexity of the model used for individual neurons. In this study, we describe a generalization of the leaky integrate-and-fire model that produces a wide variety of spiking behaviors while still being analytically solvable between firings. For different parameter values, the model produces spiking or bursting, tonic, phasic or adapting responses, depolarizing or hyperpolarizing after potentials and so forth. The model consists of a diagonalizable set of linear differential equations describing the time evolution of membrane potential, a variable threshold, and an arbitrary number of firing-induced currents. Each of these variables is modified by an update rule when the potential reaches threshold. The variables used are intuitive and have biological significance. The model's rich behavior does not come from the differential equations, which are linear, but rather from complex update rules. This single-neuron model can be implemented using algorithms similar to the standard integrate-and-fire model. It is a natural match with event-driven algorithms for which the firing times are obtained as a solution of a polynomial equation.
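A minimal numerical sketch of this class of models is given below: linear dynamics for the membrane potential, an adaptive threshold and one spike-induced current, with discrete update rules applied when threshold is crossed. All parameter values are assumptions chosen only to produce spiking and adapting behaviour; they are not the published parameter sets.

```python
import numpy as np

# Illustrative parameters (assumed, not the published sets).
dt, T = 1e-4, 0.5                    # time step, total time [s]
C, G, E_L = 200e-12, 10e-9, -0.070   # capacitance [F], leak conductance [S], rest [V]
a, b, Th_inf = 0.5, 10.0, -0.050     # threshold coupling [1/s], decay [1/s], baseline [V]
k1, R1, A1 = 20.0, 0.0, -50e-12      # induced current: decay rate, reset factor, jump [A]
V_r, Th_r = -0.065, -0.045           # reset potential and minimum post-spike threshold [V]
I_ext = 300e-12                      # constant input current [A]

V, Th, I1 = E_L, Th_inf, 0.0
spikes = []
for step in range(int(T / dt)):
    # Linear subthreshold dynamics (forward Euler).
    dV = (I_ext + I1 - G * (V - E_L)) / C
    dTh = a * (V - E_L) - b * (Th - Th_inf)
    dI1 = -k1 * I1
    V, Th, I1 = V + dt * dV, Th + dt * dTh, I1 + dt * dI1
    # The nonlinearity lives entirely in the update rules at threshold crossing.
    if V >= Th:
        spikes.append(step * dt)
        I1 = R1 * I1 + A1        # spike-induced (adaptation) current
        V = V_r                  # reset potential
        Th = max(Th, Th_r)       # threshold is reset to at least Th_r

print(f"{len(spikes)} spikes, first few times: {spikes[:5]}")
```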
Wind Characterization for the Assessment of Collision Risk During Flight Level Changes
NASA Technical Reports Server (NTRS)
Carreno, Victor; Chartrand, Ryan
2009-01-01
A model of vertical wind gradient is presented based on National Oceanic and Atmospheric Administration (NOAA) wind data. The objective is to have an accurate representation of wind to be used in Collision Risk Models (CRM) of aircraft procedures. Depending on how an aircraft procedure is defined, wind and the different characteristics of the wind will have a more severe or less severe impact on distances between aircraft. For the In-Trail Procedure, the non-linearity of the vertical wind gradient has the greatest impact on longitudinal distance. The analysis in this paper extracts standard deviation, mean, maximum, and linearity characteristics from the NOAA data.
Connecting dark matter annihilation to the vertex functions of Standard Model fermions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kumar, Jason; Light, Christopher, E-mail: jkumar@hawaii.edu, E-mail: lightc@hawaii.edu
We consider scenarios in which dark matter is a Majorana fermion which couples to Standard Model fermions through the exchange of charged mediating particles. The matrix elements for various dark matter annihilation processes are then related to one-loop corrections to the fermion-photon vertex, where dark matter and the charged mediators run in the loop. In particular, in the limit where Standard Model fermion helicity mixing is suppressed, the cross section for dark matter annihilation to various final states is related to corrections to the Standard Model fermion charge form factor. These corrections can be extracted in a gauge-invariant manner from collider cross sections. Although current measurements from colliders are not precise enough to provide useful constraints on dark matter annihilation, improved measurements at future experiments, such as the International Linear Collider, could improve these constraints by several orders of magnitude, allowing them to surpass the limits obtainable by direct observation.
Multicollinearity in hierarchical linear models.
Yu, Han; Jiang, Shanhe; Land, Kenneth C
2015-09-01
This study investigates an ill-posed problem (multicollinearity) in Hierarchical Linear Models from both the data and the model perspectives. We propose an intuitive, effective approach to diagnosing the presence of multicollinearity and its remedies in this class of models. A simulation study demonstrates the impacts of multicollinearity on coefficient estimates, associated standard errors, and variance components at various levels of multicollinearity for finite sample sizes typical in social science studies. We further investigate the role multicollinearity plays at each level for estimation of coefficient parameters in terms of shrinkage. Based on these analyses, we recommend a top-down method for assessing multicollinearity in HLMs that first examines the contextual predictors (Level-2 in a two-level model) and then the individual predictors (Level-1) and uses the results for data collection, research problem redefinition, model re-specification, variable selection and estimation of a final model. Copyright © 2015 Elsevier Inc. All rights reserved.
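As a complement to the top-down HLM procedure described above, a common single-level screening diagnostic for multicollinearity is the variance inflation factor (VIF); the sketch below shows how it can be computed for a design matrix in Python. This is a generic diagnostic rather than the authors' HLM-specific method, and the data are simulated for illustration.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

rng = np.random.default_rng(2)

# Simulated predictors in which x2 is nearly a linear combination of x1.
n = 500
x1 = rng.normal(size=n)
x2 = 0.95 * x1 + 0.05 * rng.normal(size=n)
x3 = rng.normal(size=n)
X = sm.add_constant(pd.DataFrame({"x1": x1, "x2": x2, "x3": x3}))

# VIF_j = 1 / (1 - R_j^2), where R_j^2 comes from regressing predictor j on the others;
# values well above ~5-10 usually flag problematic collinearity.
for j, name in enumerate(X.columns):
    if name == "const":
        continue
    print(name, variance_inflation_factor(X.values, j))
```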
Extracting falsifiable predictions from sloppy models.
Gutenkunst, Ryan N; Casey, Fergal P; Waterfall, Joshua J; Myers, Christopher R; Sethna, James P
2007-12-01
Successful predictions are among the most compelling validations of any model. Extracting falsifiable predictions from nonlinear multiparameter models is complicated by the fact that such models are commonly sloppy, possessing sensitivities to different parameter combinations that range over many decades. Here we discuss how sloppiness affects the sorts of data that best constrain model predictions, makes linear uncertainty approximations dangerous, and introduces computational difficulties in Monte-Carlo uncertainty analysis. We also present a useful test problem and suggest refinements to the standards by which models are communicated.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Amai, W.; Espinoza, J. Jr.; Fletcher, D.R.
1997-06-01
This Software Requirements Specification (SRS) describes the features to be provided by the software for the GIS-T/ISTEA Pooled Fund Study Phase C Linear Referencing Engine project. This document conforms to the recommendations of IEEE Standard 830-1984, IEEE Guide to Software Requirements Specification (Institute of Electrical and Electronics Engineers, Inc., 1984). The software specified in this SRS is a proof-of-concept implementation of the Linear Referencing Engine as described in the GIS-T/ISTEA Pooled Fund Study Phase B Summary, specifically Sheet 13 of the Phase B object model. The software allows an operator to convert between two linear referencing methods and a datum network.
Bowen, Stephen R; Chappell, Richard J; Bentzen, Søren M; Deveau, Michael A; Forrest, Lisa J; Jeraj, Robert
2012-01-01
Purpose To quantify associations between pre-radiotherapy and post-radiotherapy PET parameters via spatially resolved regression. Materials and methods Ten canine sinonasal cancer patients underwent PET/CT scans of [18F]FDG (FDGpre), [18F]FLT (FLTpre), and [61Cu]Cu-ATSM (Cu-ATSMpre). Following radiotherapy regimens of 50 Gy in 10 fractions, veterinary patients underwent FDG PET/CT scans at three months (FDGpost). Regression of standardized uptake values in baseline FDGpre, FLTpre and Cu-ATSMpre tumour voxels to those in FDGpost images was performed for linear, log-linear, generalized-linear and mixed-fit linear models. Goodness-of-fit in regression coefficients was assessed by R^2. Hypothesis testing of coefficients over the patient population was performed. Results Multivariate linear model fits of FDGpre to FDGpost were significantly positive over the population (FDGpost ~ 0.17 FDGpre, p=0.03), and classified slopes of RECIST non-responders and responders to be different (0.37 vs. 0.07, p=0.01). Generalized-linear model fits related FDGpre to FDGpost by a linear power law (FDGpost ~ FDGpre^0.93, p<0.001). Univariate mixture model fits of FDGpre improved R^2 from 0.17 to 0.52. Neither baseline FLT PET nor Cu-ATSM PET uptake contributed statistically significant multivariate regression coefficients. Conclusions Spatially resolved regression analysis indicates that pre-treatment FDG PET uptake is most strongly associated with three-month post-treatment FDG PET uptake in this patient population, though associations are histopathology-dependent. PMID:22682748
A Three-Dimensional Linearized Unsteady Euler Analysis for Turbomachinery Blade Rows
NASA Technical Reports Server (NTRS)
Montgomery, Matthew D.; Verdon, Joseph M.
1997-01-01
A three-dimensional, linearized, Euler analysis is being developed to provide an efficient unsteady aerodynamic analysis that can be used to predict the aeroelastic and aeroacoustic responses of axial-flow turbomachinery blading. The field equations and boundary conditions needed to describe nonlinear and linearized inviscid unsteady flows through a blade row operating within a cylindrical annular duct are presented. A numerical model for linearized inviscid unsteady flows, which couples a near-field, implicit, wave-split, finite volume analysis to a far-field eigenanalysis, is also described. The linearized aerodynamic and numerical models have been implemented into a three-dimensional linearized unsteady flow code, called LINFLUX. This code has been applied to selected, benchmark, unsteady, subsonic flows to establish its accuracy and to demonstrate its current capabilities. The unsteady flows considered have been chosen to allow convenient comparisons between the LINFLUX results and those of well-known, two-dimensional, unsteady flow codes. Detailed numerical results for a helical fan and a three-dimensional version of the 10th Standard Cascade indicate that important progress has been made towards the development of a reliable and useful, three-dimensional, prediction capability that can be used in aeroelastic and aeroacoustic design studies.
Røislien, Jo; Lossius, Hans Morten; Kristiansen, Thomas
2015-01-01
Background Trauma is a leading global cause of death. Trauma mortality rates are higher in rural areas, constituting a challenge for quality and equality in trauma care. The aim of the study was to explore population density and transport time to hospital care as possible predictors of geographical differences in mortality rates, and to what extent choice of statistical method might affect the analytical results and accompanying clinical conclusions. Methods Using data from the Norwegian Cause of Death registry, deaths from external causes 1998–2007 were analysed. Norway consists of 434 municipalities, and municipality population density and travel time to hospital care were entered as predictors of municipality mortality rates in univariate and multiple regression models of increasing model complexity. We fitted linear regression models with continuous and categorised predictors, as well as piecewise linear and generalised additive models (GAMs). Models were compared using Akaike's information criterion (AIC). Results Population density was an independent predictor of trauma mortality rates, while the contribution of transport time to hospital care was highly dependent on choice of statistical model. A multiple GAM or piecewise linear model was superior, and similar, in terms of AIC. However, while transport time was statistically significant in multiple models with piecewise linear or categorised predictors, it was not in GAM or standard linear regression. Conclusions Population density is an independent predictor of trauma mortality rates. The added explanatory value of transport time to hospital care is marginal and model-dependent, highlighting the importance of exploring several statistical models when studying complex associations in observational data. PMID:25972600
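The modelling strategy above, comparing a standard linear fit against more flexible alternatives with AIC, can be sketched generically in Python; the regression-spline term below stands in for the piecewise-linear and GAM fits used in the study, and the data are simulated rather than the Norwegian registry data.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)

# Simulated municipality-level data with a non-linear effect of population density.
n = 400
density = rng.uniform(1, 1000, size=n)
rate = 30 / np.sqrt(density) + rng.normal(scale=1.0, size=n)
df = pd.DataFrame({"rate": rate, "density": density})

linear = smf.ols("rate ~ density", data=df).fit()
spline = smf.ols("rate ~ bs(density, df=4)", data=df).fit()  # patsy B-spline basis

# Lower AIC indicates the preferred model.
print("linear AIC:", linear.aic)
print("spline AIC:", spline.aic)
```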
Genetics-based control of a mimo boiler-turbine plant
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dimeo, R.M.; Lee, K.Y.
1994-12-31
A genetic algorithm is used to develop an optimal controller for a non-linear, multi-input/multi-output boiler-turbine plant. The algorithm is used to train a control system for the plant over a wide operating range in an effort to obtain better performance. The results of the genetic algorithm's controller are compared with those of a controller designed from the linearized plant model at a nominal operating point. Because the genetic algorithm is well-suited to solving traditionally difficult optimization problems, it is found that the algorithm is capable of developing the controller based on input/output information only. This controller achieves a performance comparable to the standard linear quadratic regulator.
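The genetic-algorithm machinery referred to above can be illustrated with a deliberately small sketch: a population of candidate gain vectors is evolved against a quadratic surrogate cost. The cost function, gain parameterization and GA settings are assumptions made only for illustration; the actual boiler-turbine plant model and fitness evaluation are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(4)

def cost(gains):
    # Stand-in fitness: distance from an (assumed) optimal gain vector.
    target = np.array([1.5, -0.8, 0.3])
    return np.sum((gains - target) ** 2)

pop_size, n_genes, n_gen = 40, 3, 100
pop = rng.uniform(-2, 2, size=(pop_size, n_genes))

for _ in range(n_gen):
    fitness = np.array([cost(ind) for ind in pop])
    order = np.argsort(fitness)
    parents = pop[order[: pop_size // 2]]            # truncation selection
    # One-point crossover between randomly paired parents.
    idx = rng.integers(len(parents), size=(pop_size, 2))
    cut = rng.integers(1, n_genes, size=pop_size)
    children = np.where(np.arange(n_genes) < cut[:, None],
                        parents[idx[:, 0]], parents[idx[:, 1]])
    children += rng.normal(scale=0.05, size=children.shape)  # mutation
    children[0] = parents[0]                         # elitism: keep the best unchanged
    pop = children

best = pop[np.argmin([cost(ind) for ind in pop])]
print("best gains:", best)
```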
Higgs, SUSY and the standard model at γγ colliders
NASA Astrophysics Data System (ADS)
Hagiwara, Kaoru
2001-10-01
In this report, I surveyed the physics potential of the γγ option of a linear e+e- collider with the following question in mind: what new discovery can be expected at a γγ collider in addition to what will be learned at its 'parent' e+e- linear collider? By taking account of the hard energy spectrum and polarization of the colliding photons, produced by Compton back-scattering of laser light off the incoming e- beams, we find that a γγ collider is most powerful when new physics appears in the neutral spin-zero channel at an invariant mass below about 80% of the c.m. energy of the colliding e-e- system. If a light Higgs boson exists, its properties can be studied in detail, and if its heavier partners or a heavy Higgs boson exists in the above mass range, they may be discovered at a γγ collider. The CP property of the scalar sector can be explored in detail by making use of the linear polarization of the colliding photons, decay angular correlations of final-state particles, and the pattern of interference with the Standard Model amplitudes. A few comments are given for SUSY particle studies at a γγ collider, where a pair of charged spinless particles is produced in the s-wave near the threshold. Squark-onium may be discovered. An e±γ collision mode may measure the Higgs-Z-γ coupling accurately and probe flavor oscillations in the slepton sector. As a general remark, all the Standard Model background simulation tools should be prepared at the helicity amplitude level, so that simulations can be performed for an arbitrary set of Stokes parameters of the incoming photon beams.
ERIC Educational Resources Information Center
Chu, Man-Wai; Babenko, Oksana; Cui, Ying; Leighton, Jacqueline P.
2014-01-01
The study examines the role that perceptions or impressions of learning environments and assessments play in students' performance on a large-scale standardized test. Hierarchical linear modeling (HLM) was used to test aspects of the Learning Errors and Formative Feedback model to determine how much variation in students' performance was explained…
Experimental demonstration of nonbilocal quantum correlations.
Saunders, Dylan J; Bennet, Adam J; Branciard, Cyril; Pryde, Geoff J
2017-04-01
Quantum mechanics admits correlations that cannot be explained by local realistic models. The most studied models are the standard local hidden variable models, which satisfy the well-known Bell inequalities. To date, most works have focused on bipartite entangled systems. We consider correlations between three parties connected via two independent entangled states. We investigate the new type of so-called "bilocal" models, which correspondingly involve two independent hidden variables. These models describe scenarios that naturally arise in quantum networks, where several independent entanglement sources are used. Using photonic qubits, we build such a linear three-node quantum network and demonstrate nonbilocal correlations by violating a Bell-like inequality tailored for bilocal models. Furthermore, we show that the demonstration of nonbilocality is more noise-tolerant than that of standard Bell nonlocality in our three-party quantum network.
Influence of wave modelling on the prediction of fatigue for offshore wind turbines
NASA Astrophysics Data System (ADS)
Veldkamp, H. F.; van der Tempel, J.
2005-01-01
Currently, it is standard practice to use Airy linear wave theory combined with Morison's formula for the calculation of fatigue loads for offshore wind turbines. However, offshore wind turbines are typically placed in relatively shallow water depths of 5-25 m, where linear wave theory has limited accuracy and where, ideally, waves generated with the Navier-Stokes approach should be used. This article examines the differences in fatigue that are found for some representative offshore wind turbines if first-order, second-order and fully non-linear waves are used. The offshore wind turbines near Blyth are located in an area where non-linear wave effects are common. Measurements of these waves from the OWTES project are used to compare the different wave models with the real world in spectral form. Some attention is paid to whether the shape of a higher-order wave height spectrum (modified JONSWAP) corresponds to reality for other places in the North Sea, and which values for the drag and inertia coefficients should be used.
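The Morison load model mentioned above combines a drag term and an inertia term per unit length of a slender member; the sketch below evaluates it for assumed regular (Airy) wave kinematics. The coefficient values, wave parameters and pile diameter are illustrative assumptions, not the Blyth site data.

```python
import numpy as np

# Assumed parameters for illustration.
rho, D = 1025.0, 4.0          # sea water density [kg/m^3], pile diameter [m]
Cd, Cm = 1.0, 2.0             # drag and inertia coefficients
H, Tp, depth, z = 3.0, 8.0, 20.0, -5.0   # wave height, period, water depth, evaluation depth [m]

omega = 2 * np.pi / Tp
# Linear-wave dispersion omega^2 = g*k*tanh(k*depth), solved by fixed-point iteration.
k = omega**2 / 9.81
for _ in range(50):
    k = omega**2 / (9.81 * np.tanh(k * depth))

t = np.linspace(0, 2 * Tp, 400)
# Horizontal particle velocity and acceleration under a linear wave at x = 0.
u = (H / 2) * omega * np.cosh(k * (z + depth)) / np.sinh(k * depth) * np.cos(omega * t)
a = -(H / 2) * omega**2 * np.cosh(k * (z + depth)) / np.sinh(k * depth) * np.sin(omega * t)

# Morison force per unit length: drag + inertia.
f = 0.5 * rho * Cd * D * u * np.abs(u) + rho * Cm * (np.pi * D**2 / 4) * a
print("peak force per unit length [N/m]:", f.max())
```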
NASA Astrophysics Data System (ADS)
Lion, Alexander; Mittermeier, Christoph; Johlitz, Michael
2017-09-01
A novel approach to represent the glass transition is proposed. It is based on a physically motivated extension of the linear viscoelastic Poynting-Thomson model. In addition to a temperature-dependent damping element and two linear springs, two thermal strain elements are introduced. In order to take the process dependence of the specific heat into account and to model its characteristic behaviour below and above the glass transition, the Helmholtz free energy contains an additional contribution which depends on the temperature history and on the current temperature. The model describes the process-dependent volumetric and caloric behaviour of glass-forming materials, and defines a functional relationship between pressure, volumetric strain, and temperature. If a model for the isochoric part of the material behaviour is already available, for example a model of finite viscoelasticity, the caloric and volumetric behaviour can be represented with the current approach. The proposed model allows computing the isobaric and isochoric heat capacities in closed form. The difference c_p -c_v is process-dependent and tends towards the classical expression in the glassy and equilibrium ranges. Simulations and theoretical studies demonstrate the physical significance of the model.
Solution Methods for 3D Tomographic Inversion Using A Highly Non-Linear Ray Tracer
NASA Astrophysics Data System (ADS)
Hipp, J. R.; Ballard, S.; Young, C. J.; Chang, M.
2008-12-01
To develop 3D velocity models to improve nuclear explosion monitoring capability, we have developed a 3D tomographic modeling system that traces rays using an implementation of the Um and Thurber ray pseudo-bending approach, with full enforcement of Snell's Law in 3D at the major discontinuities. Due to the highly non-linear nature of the ray tracer, however, we are forced to substantially damp the inversion in order to converge on a reasonable model. Unfortunately, the amount of damping is not known a priori and can significantly extend the number of calls of the computationally expensive ray-tracer and the least squares matrix solver. If the damping term is too small, the solution step-size produces either an unrealistic model velocity change or places the solution in or near a local minimum from which extrication is nearly impossible. If the damping term is too large, convergence can be very slow or premature convergence can occur. Standard approaches involve running inversions with a suite of damping parameters to find the best model. A better solution methodology is to take advantage of existing non-linear solution techniques such as Levenberg-Marquardt (LM) or quasi-Newton iterative solvers. In particular, the LM algorithm was specifically designed to find the minimum of a multi-variate function that is expressed as the sum of squares of non-linear real-valued functions. It has become a standard technique for solving non-linear least squares problems, and is widely adopted in a broad spectrum of disciplines, including the geosciences. At each iteration, the LM approach dynamically varies the level of damping to optimize convergence. When the current estimate of the solution is far from the ultimate solution, LM behaves as a steepest descent method, but transitions to Gauss-Newton behavior, with near quadratic convergence, as the estimate approaches the final solution. We show typical linear solution techniques and how they can lead to local minima if the damping is set too low. We also describe the LM technique and show how it automatically determines the appropriate damping factor as it iteratively converges on the best solution. Sandia is a multiprogram laboratory operated by Sandia Corporation, a Lockheed Martin Company, for the United States Department of Energy's National Nuclear Security Administration under Contract DE-AC04-94AL85000.
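The adaptive damping behaviour described above is exactly what Levenberg-Marquardt automates; a compact way to see it in practice is SciPy's least-squares interface, shown below on a toy non-linear residual. The residual function, data and starting point are assumptions for illustration; the tomographic forward model itself is of course far more expensive.

```python
import numpy as np
from scipy.optimize import least_squares

# Toy non-linear model: predicted travel time t = x0 * (1 - exp(-x1 * d)).
d = np.linspace(0.1, 4.0, 25)
t_obs = 2.0 * (1 - np.exp(-1.3 * d)) + 0.01 * np.random.default_rng(5).normal(size=d.size)

def residuals(x):
    return x[0] * (1 - np.exp(-x[1] * d)) - t_obs

# method="lm" uses MINPACK's Levenberg-Marquardt, which adapts the damping factor
# at every iteration: steepest-descent-like far from the solution,
# Gauss-Newton-like (near quadratic convergence) close to it.
sol = least_squares(residuals, x0=[1.0, 1.0], method="lm")
print(sol.x)
```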
NASA Astrophysics Data System (ADS)
Chen, Buxin; Zhang, Zheng; Sidky, Emil Y.; Xia, Dan; Pan, Xiaochuan
2017-11-01
Optimization-based algorithms for image reconstruction in multispectral (or photon-counting) computed tomography (MCT) remain a topic of active research. The challenge of optimization-based image reconstruction in MCT stems from the inherently non-linear data model that can lead to a non-convex optimization program for which no mathematically exact solver seems to exist for achieving globally optimal solutions. In this work, based upon a non-linear data model, we design a non-convex optimization program, derive its first-order-optimality conditions, and propose an algorithm to solve the program for image reconstruction in MCT. In addition to consideration of image reconstruction for the standard scan configuration, the emphasis is on investigating the algorithm's potential for enabling non-standard scan configurations with no or minimal hardware modification to existing CT systems, which has potential practical implications for lowered hardware cost, enhanced scanning flexibility, and reduced imaging dose/time in MCT. Numerical studies are carried out for verification of the algorithm and its implementation, and for a preliminary demonstration and characterization of the algorithm in reconstructing images and in enabling non-standard configurations with varying scanning angular range and/or x-ray illumination coverage in MCT.
NASA Technical Reports Server (NTRS)
Howard, Joseph M.; Ha, Kong Q.
2004-01-01
This is part two of a series on the optical modeling activities for JWST. Starting with the linear optical model discussed in part one, we develop centroid and wavefront error sensitivities for the special case of a segmented optical system such as JWST, where the primary mirror consists of 18 individual segments. Our approach extends standard sensitivity matrix methods used for systems consisting of monolithic optics, where the image motion is approximated by averaging ray coordinates at the image and residual wavefront error is determined with global tip/tilt removed. We develop an exact formulation using the linear optical model, and extend it to cover multiple field points for performance prediction at each instrument aboard JWST. This optical model is then driven by thermal and dynamic structural perturbations in an integrated modeling environment. Results are presented.
Redshift-space distortions with the halo occupation distribution - II. Analytic model
NASA Astrophysics Data System (ADS)
Tinker, Jeremy L.
2007-01-01
We present an analytic model for the galaxy two-point correlation function in redshift space. The cosmological parameters of the model are the matter density Ω_m, power spectrum normalization σ_8, and velocity bias of galaxies α_v, circumventing the linear theory distortion parameter β and eliminating nuisance parameters for non-linearities. The model is constructed within the framework of the halo occupation distribution (HOD), which quantifies galaxy bias on linear and non-linear scales. We model one-halo pairwise velocities by assuming that satellite galaxy velocities follow a Gaussian distribution with dispersion proportional to the virial dispersion of the host halo. Two-halo velocity statistics are a combination of virial motions and host halo motions. The velocity distribution function (DF) of halo pairs is a complex function with skewness and kurtosis that vary substantially with scale. Using a series of collisionless N-body simulations, we demonstrate that the shape of the velocity DF is determined primarily by the distribution of local densities around a halo pair, and at fixed density the velocity DF is close to Gaussian and nearly independent of halo mass. We calibrate a model for the conditional probability function of densities around halo pairs on these simulations. With this model, the full shape of the halo velocity DF can be accurately calculated as a function of halo mass, radial separation, angle and cosmology. The HOD approach to redshift-space distortions utilizes clustering data from linear to non-linear scales to break the standard degeneracies inherent in previous models of redshift-space clustering. The parameters of the occupation function are well constrained by real-space clustering alone, separating constraints on bias and cosmology. We demonstrate the ability of the model to separately constrain Ω_m, σ_8 and α_v in models that are constructed to have the same value of β at large scales as well as the same finger-of-god distortions at small scales.
Nguyen, N H; Whatmore, P; Miller, A; Knibb, W
2016-02-01
The main aim of this study was to estimate the heritability for four measures of deformity and their genetic associations with growth (body weight and length), carcass (fillet weight and yield) and flesh-quality (fillet fat content) traits in yellowtail kingfish Seriola lalandi. The observed major deformities included lower jaw, nasal erosion, deformed operculum and skinny fish on 480 individuals from 22 families at Clean Seas Tuna Ltd. They were typically recorded as binary traits (presence or absence) and were analysed separately by both threshold generalized models and standard animal mixed models. Consistency of the models was evaluated by calculating the simple Pearson correlation of breeding values of full-sib families for jaw deformity. Genetic and phenotypic correlations among traits were estimated using a multitrait linear mixed model in ASReml. Both threshold and linear mixed model analysis showed that there is additive genetic variation in the four measures of deformity, with the estimates of heritability obtained from the former (threshold) models on the liability scale ranging from 0.14 to 0.66 (SE 0.32-0.56) and from the latter (linear animal and sire) models on the original (observed) scale, 0.01-0.23 (SE 0.03-0.16). When the estimates on the underlying liability were transformed to the observed scale (0, 1), they were generally consistent between threshold and linear mixed models. Phenotypic correlations among deformity traits were weak (close to zero). The genetic correlations among deformity traits were not significantly different from zero. Body weight and fillet carcass showed significant positive genetic correlations with jaw deformity (0.75 and 0.95, respectively). The genetic correlation between body weight and operculum deformity was negative (-0.51, P < 0.05). The estimated genetic correlations of body and carcass traits with the other deformity traits were not significant due to their relatively high standard errors. Our results showed that there are prospects for genetic selection to reduce deformity in yellowtail kingfish and that measures of deformity should be included in the recording scheme, breeding objectives and selection index in practical selective breeding programmes due to the antagonistic genetic correlations of deformed jaws with body and carcass performance. © 2015 John Wiley & Sons Ltd.
Baqué, Michèle; Amendt, Jens
2013-01-01
Developmental data of juvenile blow flies (Diptera: Calliphoridae) are typically used to calculate the age of immature stages found on or around a corpse and thus to estimate a minimum post-mortem interval (PMI(min)). However, many of those data sets do not take into account that immature blow flies grow in a non-linear fashion. Linear models do not provide sufficiently reliable age estimates and may even lead to an erroneous determination of the PMI(min). According to the Daubert standard and the need for improvements in forensic science, new statistical tools such as smoothing methods and mixed models allow the modelling of non-linear relationships and expand the field of statistical analyses. The present study introduces the background and application of these statistical techniques by analysing a model which describes the development of the forensically important blow fly Calliphora vicina at different temperatures. The comparison of three statistical methods (linear regression, generalised additive modelling and generalised additive mixed modelling) clearly demonstrates that only the latter provided regression parameters that reflect the data adequately. We focus explicitly on both the exploration of the data--to assure their quality and to show the importance of checking them carefully prior to conducting the statistical tests--and the validation of the resulting models. Hence, we present a common method for evaluating and testing forensic entomological data sets by using for the first time generalised additive mixed models.
Modeling for CO poisoning of a fuel cell anode
NASA Technical Reports Server (NTRS)
Dhar, H. P.; Kush, A. K.; Patel, D. N.; Christner, L. G.
1986-01-01
Poisoning losses in a half-cell in the 110-190 C temperature range have been measured in 100 wt pct H3PO4 for various mixtures of H2, CO, and CO2 gases in order to investigate the polarization loss due to poisoning by CO of a porous fuel cell Pt anode. At a fixed current density, the poisoning loss was found to vary linearly with ln of the CO/H2 concentration ratio, although deviations from linearity were noted at lower temperatures and higher current densities for high CO/H2 concentration ratios. The surface coverages of CO were also found to vary linearly with ln of the CO/H2 concentration ratio. A general adsorption relationship is derived. Standard free energies for CO adsorption were found to vary from -14.5 to -12.1 kcal/mol in the 130-190 C temperature range. The standard entropy for CO adsorption was found to be -39 cal/mol per deg K.
Real-Time Detector of Human Fatigue: Detecting Lapses in Alertness
2008-02-15
These coefficients and their variances, covariances and standard errors were computed simultaneously using HLM 6 (Raudenbush, Bryk, Cheong, & Congdon, 2004).
Per capita invasion probabilities: A linear model to predict rates of invasion via ballast water
Ballast water discharges are a major source of species introductions into marine, estuarine, and freshwater ecosystems. To mitigate the introduction of new invaders into these ecosystems, many agencies are proposing discharge standards that establish upper concentration limits f...
NASA Astrophysics Data System (ADS)
Ramírez-Sánchez, F.; Gutierrez-Rodríguez, A.; Hernández-Ruiz, M. A.
2017-10-01
We study the phenomenology of the light h and heavy H Higgs boson production and decay in the context of a U(1)_{B-L} extension of the standard model with an additional Z' boson at future e+e- linear colliders with center-of-mass energies of √s = 500-3000 GeV and integrated luminosities of L = 500-2000 fb^-1. The study includes the processes e+e- → (Z, Z') → Zh and e+e- → (Z, Z') → ZH, considering both the resonant and non-resonant effects. We find that the total number of expected Zh and ZH events can reach 10^6 and 10^5, respectively, which is a very optimistic scenario allowing us to perform precision measurements for both Higgs bosons h and H, as well as for the Z' boson, in future high-energy and high-luminosity e+e- colliders.
Hou, Zhifei; Sun, Guoxiang; Guo, Yong
2016-01-01
The present study demonstrated the use of the Linear Quantitative Profiling Method (LQPM) to evaluate the quality of Alkaloids of Sophora flavescens (ASF) based on chromatographic fingerprints in an accurate, economical and fast way. Both linear qualitative and quantitative similarities were calculated in order to monitor the consistency of the samples. The results indicate that the linear qualitative similarity (LQLS) is not sufficiently discriminating due to the predominant presence of three alkaloid compounds (matrine, sophoridine and oxymatrine) in the test samples; however, the linear quantitative similarity (LQTS) was shown to clearly distinguish the samples based on the difference in the quantitative content of all the chemical components. In addition, the fingerprint analysis was also supported by the quantitative analysis of three marker compounds. The LQTS was found to be highly correlated with the contents of the marker compounds, indicating that quantitative analysis of the marker compounds may be substituted with the LQPM based on the chromatographic fingerprints for the purpose of quantifying all chemical components of a complex sample system. Furthermore, once a reference fingerprint (RFP) has been developed directly from a standard preparation and the composition similarities have been calculated, the LQPM can employ the classical mathematical model to effectively quantify the multiple components of ASF samples without any chemical standards.
Admassu, Bitiya; Ritz, Christian; Wells, Jonathan C K; Girma, Tsinuel; Andersen, Gregers S; Belachew, Tefera; Owino, Victor; Michaelsen, Kim F; Abera, Mubarek; Wibaek, Rasmus; Friis, Henrik; Kæstel, Pernille
2018-04-01
We have previously shown that fat-free mass (FFM) at birth is associated with height at 2 y of age in Ethiopian children. However, to our knowledge, the relation between changes in body composition during early infancy and later linear growth has not been studied. This study examined the associations of early infancy fat mass (FM) and FFM accretion with linear growth from 1 to 5 y of age in Ethiopian children. In the infant Anthropometry and Body Composition (iABC) study, a prospective cohort study was carried out in children in Jimma, Ethiopia, followed from birth to 5 y of age. FM and FFM were measured ≤6 times from birth to 6 mo by using air-displacement plethysmography. Linear mixed-effects models were used to identify associations between standardized FM and FFM accretion rates during early infancy and linear growth from 1 to 5 y of age. Standardized accretion rates were obtained by dividing FM and FFM accretion by their respective SD. FFM accretion from 0 to 6 mo of age was positively associated with length at 1 y (β = 0.64; 95% CI: 0.19, 1.09; P = 0.005) and linear growth from 1 to 5 y (β = 0.63; 95% CI: 0.19, 1.07; P = 0.005). The strongest association with FFM accretion was observed at 1 y. The association with linear growth from 1 to 5 y was mainly engendered by the 1-y association. FM accretion from 0 to 4 mo was positively associated with linear growth from 1 to 5 y (β = 0.45; 95% CI: 0.02, 0.88; P = 0.038) in the fully adjusted model. In Ethiopian children, FFM accretion was associated with linear growth at 1 y and no clear additional longitudinal effect from 1 to 5 y was observed. FM accretion showed a weak association from 1 to 5 y. This trial was registered at www.controlled-trials.com as ISRCTN46718296.
Negeri, Zelalem F; Shaikh, Mateen; Beyene, Joseph
2018-05-11
Diagnostic or screening tests are widely used in medical fields to classify patients according to their disease status. Several statistical models for meta-analysis of diagnostic test accuracy studies have been developed to synthesize test sensitivity and specificity of a diagnostic test of interest. Because of the correlation between test sensitivity and specificity, modeling the two measures using a bivariate model is recommended. In this paper, we extend the current standard bivariate linear mixed model (LMM) by proposing two variance-stabilizing transformations: the arcsine square root and the Freeman-Tukey double arcsine transformation. We compared the performance of the proposed methods with the standard method through simulations using several performance measures. The simulation results showed that our proposed methods performed better than the standard LMM in terms of bias, root mean square error, and coverage probability in most of the scenarios, even when data were generated assuming the standard LMM. We also illustrated the methods using two real data sets. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
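For concreteness, the two variance-stabilizing transformations discussed above can be written as the short helper functions below, applied to event counts x out of n (for example, true positives out of diseased subjects). This is a generic sketch of the transformations themselves, not the authors' full bivariate LMM.

```python
import numpy as np

def arcsine_sqrt(x, n):
    """Arcsine square-root transform of a proportion x/n."""
    return np.arcsin(np.sqrt(x / n))

def freeman_tukey(x, n):
    """Freeman-Tukey double arcsine transform of x events out of n."""
    return 0.5 * (np.arcsin(np.sqrt(x / (n + 1))) + np.arcsin(np.sqrt((x + 1) / (n + 1))))

# Example: sensitivity of 45 true positives out of 50 diseased subjects.
print(arcsine_sqrt(45, 50), freeman_tukey(45, 50))
```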
Development of a Linearized Unsteady Euler Analysis with Application to Wake/Blade-Row Interactions
NASA Technical Reports Server (NTRS)
Verdon, Joseph M.; Montgomery, Matthew D.; Chuang, H. Andrew
1999-01-01
A three-dimensional, linearized, Euler analysis is being developed to provide a comprehensive and efficient unsteady aerodynamic analysis for predicting the aeroacoustic and aeroelastic responses of axial-flow turbomachinery blading. The mathematical models needed to describe nonlinear and linearized, inviscid, unsteady flows through a blade row operating within a cylindrical annular duct are presented in this report. A numerical model for linearized inviscid unsteady flows, which couples a near-field, implicit, wave-split, finite volume analysis to far-field eigen analyses, is also described. The linearized aerodynamic and numerical models have been implemented into the three-dimensional unsteady flow code, LINFLUX. This code is applied herein to predict unsteady subsonic flows driven by wake or vortical excitations. The intent is to validate the LINFLUX analysis via numerical results for simple benchmark unsteady flows and to demonstrate this analysis via application to a realistic wake/blade-row interaction. Detailed numerical results for a three-dimensional version of the 10th Standard Cascade and a fan exit guide vane indicate that LINFLUX is becoming a reliable and useful unsteady aerodynamic prediction capability that can be applied, in the future, to assess the three-dimensional flow physics important to blade-row, aeroacoustic and aeroelastic responses.
Linear regression metamodeling as a tool to summarize and present simulation model results.
Jalal, Hawre; Dowd, Bryan; Sainfort, François; Kuntz, Karen M
2013-10-01
Modelers lack a tool to systematically and clearly present complex model results, including those from sensitivity analyses. The objective was to propose linear regression metamodeling as a tool to increase transparency of decision analytic models and better communicate their results. We used a simplified cancer cure model to demonstrate our approach. The model computed the lifetime cost and benefit of 3 treatment options for cancer patients. We simulated 10,000 cohorts in a probabilistic sensitivity analysis (PSA) and regressed the model outcomes on the standardized input parameter values in a set of regression analyses. We used the regression coefficients to describe measures of sensitivity analyses, including threshold and parameter sensitivity analyses. We also compared the results of the PSA to deterministic full-factorial and one-factor-at-a-time designs. The regression intercept represented the estimated base-case outcome, and the other coefficients described the relative parameter uncertainty in the model. We defined simple relationships that compute the average and incremental net benefit of each intervention. Metamodeling produced outputs similar to traditional deterministic 1-way or 2-way sensitivity analyses but was more reliable since it used all parameter values. Linear regression metamodeling is a simple, yet powerful, tool that can assist modelers in communicating model characteristics and sensitivity analyses.
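The metamodeling recipe above reduces to a few lines in practice: draw PSA samples, run the decision model, standardize the inputs, and regress the outcome on them. The toy "decision model" below is a made-up function standing in for a real model; with standardized inputs, the intercept estimates the base-case outcome and each coefficient the outcome change per standard deviation of that parameter.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(6)

# Probabilistic sensitivity analysis: sample the uncertain inputs.
n = 10_000
p_cure = rng.beta(20, 80, size=n)          # probability of cure
cost_tx = rng.gamma(100, 50, size=n)       # treatment cost
utility = rng.beta(60, 40, size=n)         # post-cure utility

def net_benefit(p_cure, cost_tx, utility, wtp=50_000):
    # Stand-in decision model (assumed), returning incremental net monetary benefit.
    return wtp * p_cure * utility - cost_tx

y = net_benefit(p_cure, cost_tx, utility)

X = np.column_stack([p_cure, cost_tx, utility])
Z = (X - X.mean(axis=0)) / X.std(axis=0)   # standardize the inputs

meta = LinearRegression().fit(Z, y)
print("base-case (intercept):", meta.intercept_)
print("per-SD sensitivities:", meta.coef_)
```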
Rice, Stephen B; Chan, Christopher; Brown, Scott C; Eschbach, Peter; Han, Li; Ensor, David S; Stefaniak, Aleksandr B; Bonevich, John; Vladár, András E; Hight Walker, Angela R; Zheng, Jiwen; Starnes, Catherine; Stromberg, Arnold; Ye, Jia; Grulke, Eric A
2015-01-01
This paper reports an interlaboratory comparison that evaluated a protocol for measuring and analysing the particle size distribution of discrete, metallic, spheroidal nanoparticles using transmission electron microscopy (TEM). The study was focused on automated image capture and automated particle analysis. NIST RM8012 gold nanoparticles (30 nm nominal diameter) were measured for area-equivalent diameter distributions by eight laboratories. Statistical analysis was used to (1) assess the data quality without using size distribution reference models, (2) determine reference model parameters for different size distribution reference models and non-linear regression fitting methods and (3) assess the measurement uncertainty of a size distribution parameter by using its coefficient of variation. The interlaboratory area-equivalent diameter mean, 27.6 nm ± 2.4 nm (computed based on a normal distribution), was quite similar to the area-equivalent diameter, 27.6 nm, assigned to NIST RM8012. The lognormal reference model was the preferred choice for these particle size distributions as, for all laboratories, its parameters had lower relative standard errors (RSEs) than the other size distribution reference models tested (normal, Weibull and Rosin–Rammler–Bennett). The RSEs for the fitted standard deviations were two orders of magnitude higher than those for the fitted means, suggesting that most of the parameter estimate errors were associated with estimating the breadth of the distributions. The coefficients of variation for the interlaboratory statistics also confirmed the lognormal reference model as the preferred choice. From quasi-linear plots, the typical range for good fits between the model and cumulative number-based distributions was 1.9 fitted standard deviations less than the mean to 2.3 fitted standard deviations above the mean. Automated image capture, automated particle analysis and statistical evaluation of the data and fitting coefficients provide a framework for assessing nanoparticle size distributions using TEM for image acquisition. PMID:26361398
Tensor-GMRES method for large sparse systems of nonlinear equations
NASA Technical Reports Server (NTRS)
Feng, Dan; Pulliam, Thomas H.
1994-01-01
This paper introduces a tensor-Krylov method, the tensor-GMRES method, for large sparse systems of nonlinear equations. This method is a coupling of tensor model formation and solution techniques for nonlinear equations with Krylov subspace projection techniques for unsymmetric systems of linear equations. Traditional tensor methods for nonlinear equations are based on a quadratic model of the nonlinear function, a standard linear model augmented by a simple second order term. These methods are shown to be significantly more efficient than standard methods both on nonsingular problems and on problems where the Jacobian matrix at the solution is singular. A major disadvantage of the traditional tensor methods is that the solution of the tensor model requires the factorization of the Jacobian matrix, which may not be suitable for problems where the Jacobian matrix is large and has a 'bad' sparsity structure for an efficient factorization. We overcome this difficulty by forming and solving the tensor model using an extension of a Newton-GMRES scheme. Like traditional tensor methods, we show that the new tensor method has significant computational advantages over the analogous Newton counterpart. Consistent with Krylov subspace based methods, the new tensor method does not depend on the factorization of the Jacobian matrix. As a matter of fact, the Jacobian matrix is never needed explicitly.
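The coupling of a Newton-type outer iteration with a Krylov (GMRES-like) linear solver that the tensor-GMRES method builds on is available off the shelf; the sketch below solves a small nonlinear system with SciPy's Newton-Krylov driver. It illustrates only the standard Newton-Krylov building block, under an assumed toy system, not the tensor (second-order) model augmentation described above.

```python
import numpy as np
from scipy.optimize import newton_krylov

def F(x):
    # Toy nonlinear system: a cubic reaction term on 50 unknowns.
    return x**3 + x - np.linspace(0.0, 1.0, x.size)

x0 = np.zeros(50)
# newton_krylov forms Jacobian-vector products by finite differences and solves
# each linear Newton step with a Krylov method (LGMRES by default), so the
# Jacobian never needs to be formed or factorized explicitly.
sol = newton_krylov(F, x0, f_tol=1e-10)
print(np.max(np.abs(F(sol))))
```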
A methodology for design of a linear referencing system for surface transportation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vonderohe, A.; Hepworth, T.
1997-06-01
The transportation community has recently placed significant emphasis on development of data models, procedural standards, and policies for management of linearly-referenced data. There is an Intelligent Transportation Systems initiative underway to create a spatial datum for location referencing in one, two, and three dimensions. Most recently, a call was made for development of a unified linear reference system to support public, private, and military surface transportation needs. A methodology for design of the linear referencing system was developed from geodetic engineering principles and techniques used for designing geodetic control networks. The method is founded upon the law of propagation of random error and the statistical analysis of systems of redundant measurements, used to produce best estimates for unknown parameters. A complete mathematical development is provided. Example adjustments of linear distance measurement systems are included. The classical orders of design are discussed with regard to the linear referencing system. A simple design example is provided. A linear referencing system designed and analyzed with this method will not only be assured of meeting the accuracy requirements of users, it will have the potential for supporting delivery of error estimates along with the results of spatial analytical queries. Modeling considerations, alternative measurement methods, implementation strategies, maintenance issues, and further research needs are discussed. Recommendations are made for further advancement of the unified linear referencing system concept.
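The adjustment machinery referred to above, producing best estimates from redundant measurements by least squares together with propagated uncertainties, can be illustrated for a one-dimensional chain of linearly referenced points. The observation set and standard deviations below are invented for the example and are not from the report.

```python
import numpy as np

# Unknowns: positions of points B and C along a route (A fixed at 0.0 km).
# Observations (assumed): A-B, B-C and A-C distances with individual std devs.
A = np.array([[1.0, 0.0],    # A-B  measures  xB
              [-1.0, 1.0],   # B-C  measures  xC - xB
              [0.0, 1.0]])   # A-C  measures  xC
l = np.array([4.02, 6.01, 9.99])          # observed distances [km]
sigma = np.array([0.03, 0.04, 0.05])      # measurement std devs [km]

W = np.diag(1.0 / sigma**2)               # weight matrix
N = A.T @ W @ A                           # normal equations
x_hat = np.linalg.solve(N, A.T @ W @ l)   # best estimates of xB, xC
cov = np.linalg.inv(N)                    # propagated covariance of the estimates

print("estimated positions:", x_hat)
print("std devs of estimates:", np.sqrt(np.diag(cov)))
```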
Efficient model learning methods for actor-critic control.
Grondman, Ivo; Vaandrager, Maarten; Buşoniu, Lucian; Babuska, Robert; Schuitema, Erik
2012-06-01
We propose two new actor-critic algorithms for reinforcement learning. Both algorithms use local linear regression (LLR) to learn approximations of the functions involved. A crucial feature of the algorithms is that they also learn a process model, and this, in combination with LLR, provides an efficient policy update for faster learning. The first algorithm uses a novel model-based update rule for the actor parameters. The second algorithm does not use an explicit actor but learns a reference model which represents a desired behavior, from which desired control actions can be calculated using the inverse of the learned process model. The two novel methods and a standard actor-critic algorithm are applied to the pendulum swing-up problem, in which the novel methods achieve faster learning than the standard algorithm.
Small area estimation for semicontinuous data.
Chandra, Hukum; Chambers, Ray
2016-03-01
Survey data often contain measurements for variables that are semicontinuous in nature, i.e. they either take a single fixed value (we assume this is zero) or they have a continuous, often skewed, distribution on the positive real line. Standard methods for small area estimation (SAE) based on the use of linear mixed models can be inefficient for such variables. We discuss SAE techniques for semicontinuous variables under a two part random effects model that allows for the presence of excess zeros as well as the skewed nature of the nonzero values of the response variable. In particular, we first model the excess zeros via a generalized linear mixed model fitted to the probability of a nonzero, i.e. strictly positive, value being observed, and then model the response, given that it is strictly positive, using a linear mixed model fitted on the logarithmic scale. Empirical results suggest that the proposed method leads to efficient small area estimates for semicontinuous data of this type. We also propose a parametric bootstrap method to estimate the MSE of the proposed small area estimator. These bootstrap estimates of the MSE are compared to the true MSE in a simulation study. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
2013-01-01
Background Genetic variation for environmental sensitivity indicates that animals are genetically different in their response to environmental factors. Environmental factors are either identifiable (e.g. temperature) and called macro-environmental or unknown and called micro-environmental. The objectives of this study were to develop a statistical method to estimate genetic parameters for macro- and micro-environmental sensitivities simultaneously, to investigate bias and precision of resulting estimates of genetic parameters and to develop and evaluate use of Akaike’s information criterion using h-likelihood to select the best fitting model. Methods We assumed that genetic variation in macro- and micro-environmental sensitivities is expressed as genetic variance in the slope of a linear reaction norm and environmental variance, respectively. A reaction norm model to estimate genetic variance for macro-environmental sensitivity was combined with a structural model for residual variance to estimate genetic variance for micro-environmental sensitivity using a double hierarchical generalized linear model in ASReml. Akaike’s information criterion was constructed as model selection criterion using approximated h-likelihood. Populations of sires with large half-sib offspring groups were simulated to investigate bias and precision of estimated genetic parameters. Results Designs with 100 sires, each with at least 100 offspring, are required to have standard deviations of estimated variances lower than 50% of the true value. When the number of offspring increased, standard deviations of estimates across replicates decreased substantially, especially for genetic variances of macro- and micro-environmental sensitivities. Standard deviations of estimated genetic correlations across replicates were quite large (between 0.1 and 0.4), especially when sires had few offspring. Practically, no bias was observed for estimates of any of the parameters. Using Akaike’s information criterion the true genetic model was selected as the best statistical model in at least 90% of 100 replicates when the number of offspring per sire was 100. Application of the model to lactation milk yield in dairy cattle showed that genetic variance for micro- and macro-environmental sensitivities existed. Conclusion The algorithm and model selection criterion presented here can contribute to better understand genetic control of macro- and micro-environmental sensitivities. Designs or datasets should have at least 100 sires each with 100 offspring. PMID:23827014
Data Combination and Instrumental Variables in Linear Models
ERIC Educational Resources Information Center
Khawand, Christopher
2012-01-01
Instrumental variables (IV) methods allow for consistent estimation of causal effects, but suffer from poor finite-sample properties and data availability constraints. IV estimates also tend to have relatively large standard errors, often inhibiting the interpretability of differences between IV and non-IV point estimates. Lastly, instrumental…
Suppression Situations in Multiple Linear Regression
ERIC Educational Resources Information Center
Shieh, Gwowen
2006-01-01
This article proposes alternative expressions for the two most prevailing definitions of suppression without resorting to the standardized regression modeling. The formulation provides a simple basis for the examination of their relationship. For the two-predictor regression, the author demonstrates that the previous results in the literature are…
Optimal design of focused experiments and surveys
NASA Astrophysics Data System (ADS)
Curtis, Andrew
1999-10-01
Experiments and surveys are often performed to obtain data that constrain some previously underconstrained model. Often, constraints are most desired in a particular subspace of model space. Experiment design optimization requires that the quality of any particular design can be both quantified and then maximized. This study shows how the quality can be defined such that it depends on the amount of information that is focused in the particular subspace of interest. In addition, algorithms are presented which allow one particular focused quality measure (from the class of focused measures) to be evaluated efficiently. A subclass of focused quality measures is also related to the standard variance and resolution measures from linearized inverse theory. The theory presented here requires that the relationship between model parameters and data can be linearized around a reference model without significant loss of information. Physical and financial constraints define the space of possible experiment designs. Cross-well tomographic examples are presented, plus a strategy for survey design to maximize information about linear combinations of parameters such as bulk modulus, κ =λ+ 2μ/3.
SAMPA: A free software tool for skin and membrane permeation data analysis.
Bezrouk, Aleš; Fiala, Zdeněk; Kotingová, Lenka; Krulichová, Iva Selke; Kopečná, Monika; Vávrová, Kateřina
2017-10-01
Skin and membrane permeation experiments comprise an important step in the development of a transdermal or topical formulation or toxicological risk assessment. The standard method for analyzing these data relies on the linear part of a permeation profile. However, it is difficult to objectively determine when the profile becomes linear, or the experiment duration may be insufficient to reach a maximum or steady state. Here, we present a software tool for Skin And Membrane Permeation data Analysis, SAMPA, that is easy to use and overcomes several of these difficulties. The SAMPA method and software have been validated on in vitro and in vivo permeation data on human, pig and rat skin and model stratum corneum lipid membranes using compounds that range from highly lipophilic polycyclic aromatic hydrocarbons to a highly hydrophilic antiviral drug, with and without two permeation enhancers. The SAMPA performance was compared with the standard method using a linear part of the permeation profile and a complex mathematical model. SAMPA is a user-friendly, open-source software tool for analyzing the data obtained from skin and membrane permeation experiments. It runs on a Microsoft Windows platform and is freely available as a Supporting file to this article. Copyright © 2017 Elsevier Ltd. All rights reserved.
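As a point of reference for the "standard method" SAMPA is compared against, the sketch below fits a straight line to the later, visually linear part of a cumulative permeation profile to obtain a steady-state flux and lag time. The data values and the choice of which points count as "linear" are illustrative assumptions, and that subjective choice is exactly the difficulty the abstract notes.

```python
import numpy as np

# Illustrative data: cumulative amount permeated Q (ug/cm2) vs time t (h).
t = np.array([0, 1, 2, 4, 6, 8, 10, 12], dtype=float)
Q = np.array([0.0, 0.1, 0.5, 2.0, 4.1, 6.2, 8.3, 10.4])

# Standard analysis: fit a straight line to the part of the profile judged
# linear (here the last five points, an arbitrary choice).
slope, intercept = np.polyfit(t[3:], Q[3:], 1)
flux = slope                       # steady-state flux, ug/cm2/h
lag_time = -intercept / slope      # x-intercept of the linear part, h
print(f"J = {flux:.2f} ug/cm2/h, lag time = {lag_time:.2f} h")
```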
Conceptual problems in detecting the evolution of dark energy when using distance measurements
NASA Astrophysics Data System (ADS)
Bolejko, K.
2011-01-01
Context. Dark energy is now one of the most important and topical problems in cosmology. The first step to reveal its nature is to detect the evolution of dark energy or to prove beyond doubt that the cosmological constant is indeed constant. However, in the standard approach to cosmology, the Universe is described by the homogeneous and isotropic Friedmann models. Aims: We aim to show that in the perturbed universe (even if perturbations vanish when averaged over sufficiently large scales) the distance-redshift relation is not the same as in the unperturbed universe. This has a serious consequence when studying the nature of dark energy and, as shown here, can impair the analysis and studies of dark energy. Methods: The analysis is based on two methods: the linear lensing approximation and the non-linear Szekeres Swiss-Cheese model. The inhomogeneity scale is ~50 Mpc, and both models have the same density fluctuations along the line of sight. Results: The comparison between linear and non-linear methods shows that non-linear corrections are not negligible. When inhomogeneities are present the distance changes by several percent. To show how this change influences the measurements of dark energy, ten future observations with 2% uncertainties are generated. It is shown that, using the standard methods (i.e. under the assumption of homogeneity), the systematics due to inhomogeneities can distort our analysis and may lead to a conclusion that dark energy evolves when in fact it is constant (or vice versa). Conclusions: Therefore, if future observations are analysed only within the homogeneous framework, then the impact of inhomogeneities (such as voids and superclusters) can be mistaken for evolving dark energy. Since the robust distinction between the evolution and non-evolution of dark energy is the first step to understanding the nature of dark energy, a proper handling of inhomogeneities is essential.
Fukuda, Hiromu; Maunder, Mark N.
2017-01-01
Catch-per-unit-effort (CPUE) is often the main piece of information used in fisheries stock assessment; however, the catch and effort data that are traditionally compiled from commercial logbooks can be incomplete or unreliable due to many reasons. Pacific bluefin tuna (PBF) is a seasonal target species in the Taiwanese longline fishery. Since 2010, detailed catch information for each PBF has been made available through a catch documentation scheme. However, previously, only market landing data with a low coverage of logbooks were available. Therefore, several nontraditional procedures were performed to reconstruct catch and effort data from many alternative data sources not directly obtained from fishers for 2001–2015: (1) Estimating the catch number from the landing weight for 2001–2003, for which the catch number information was incomplete, based on Monte Carlo simulation; (2) deriving fishing days for 2007–2009 from voyage data recorder data, based on a newly developed algorithm; and (3) deriving fishing days for 2001–2006 from vessel trip information, based on linear relationships between fishing and at-sea days. Subsequently, generalized linear mixed models were developed with the delta-lognormal assumption for standardizing the CPUE calculated from the reconstructed data, and three-stage model evaluation was performed using (1) Akaike and Bayesian information criteria to determine the most favorable variable composition of standardization models, (2) overall R2 via cross-validation to compare fitting performance between area-separated and area-combined standardizations, and (3) system-based testing to explore the consistency of the standardized CPUEs with auxiliary data in the PBF stock assessment model. The last stage of evaluation revealed high consistency among the data, thus demonstrating improvements in data reconstruction for estimating the abundance index, and consequently the stock assessment. PMID:28968434
Repopulation Kinetics and the Linear-Quadratic Model
NASA Astrophysics Data System (ADS)
O'Rourke, S. F. C.; McAneney, H.; Starrett, C.; O'Sullivan, J. M.
2009-08-01
The standard Linear-Quadratic (LQ) survival model for radiotherapy is used to investigate different schedules of radiation treatment planning for advanced head and neck cancer. We explore how these treatment protocols may be affected by different tumour repopulation kinetics between treatments. The laws for tumour cell repopulation include the logistic and Gompertz models, and this extends the work of Wheldon et al. [1], which was concerned with the case of exponential repopulation between treatments. Treatment schedules investigated include standardized and accelerated fractionation. Calculations based on the present work show that, even with growth laws scaled to ensure that the repopulation kinetics for advanced head and neck cancer are comparable, variation in the survival fraction of up to orders of magnitude emerged. Calculations show that application of the Gompertz model results in a significantly poorer prognosis for tumour eradication. Gaps in treatment also highlight the differences in the LQ model when the effect of repopulation kinetics is included.
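A minimal sketch of the calculation described above: per-fraction cell kill from the standard LQ model combined with exponential, logistic or Gompertz repopulation between fractions. The α, β, growth rate, carrying capacity and fractionation schedule are illustrative assumptions, not the values used in the study.

```python
import numpy as np

# Illustrative parameters: LQ cell kill with 2 Gy per fraction, 30 daily fractions.
alpha, beta = 0.3, 0.03           # Gy^-1, Gy^-2
d, n_frac, dt = 2.0, 30, 1.0      # dose per fraction (Gy), fractions, days between

def grow(N, dt, law="exponential", rate=0.05, K=1e9):
    """Repopulation between fractions under different growth laws."""
    if law == "exponential":
        return N * np.exp(rate * dt)
    if law == "logistic":
        return K * N / (N + (K - N) * np.exp(-rate * dt))
    if law == "gompertz":
        return K * (N / K) ** np.exp(-rate * dt)
    raise ValueError(law)

for law in ("exponential", "logistic", "gompertz"):
    N = 1e7                                          # initial clonogen number
    for _ in range(n_frac):
        N *= np.exp(-(alpha * d + beta * d ** 2))    # LQ survival per fraction
        N = grow(N, dt, law)                         # repopulation before next fraction
    print(f"{law:12s} surviving clonogens ~ {N:.3g}")
```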
ERIC Educational Resources Information Center
Malloch, Douglas C.; Michael, William B.
1981-01-01
This study was designed to determine whether an unweighted linear combination of community college students' scores on standardized achievement tests and a measure of motivational constructs derived from Vroom's expectance theory model of motivation was predictive of academic success (grade point average earned during one quarter of an academic…
NASA Astrophysics Data System (ADS)
Lu, Yanrong; Liao, Fucheng; Deng, Jiamei; Liu, Huiyang
2017-09-01
This paper investigates the cooperative global optimal preview tracking problem of linear multi-agent systems under the assumption that the output of a leader is a previewable periodic signal and the topology graph contains a directed spanning tree. First, a type of distributed internal model is introduced, and the cooperative preview tracking problem is converted to a global optimal regulation problem of an augmented system. Second, an optimal controller, which can guarantee the asymptotic stability of the augmented system, is obtained by means of the standard linear quadratic optimal preview control theory. Third, on the basis of proving the existence conditions of the controller, sufficient conditions are given for the original problem to be solvable, meanwhile a cooperative global optimal controller with error integral and preview compensation is derived. Finally, the validity of theoretical results is demonstrated by a numerical simulation.
New insights into soil temperature time series modeling: linear or nonlinear?
NASA Astrophysics Data System (ADS)
Bonakdari, Hossein; Moeeni, Hamid; Ebtehaj, Isa; Zeynoddin, Mohammad; Mahoammadian, Abdolmajid; Gharabaghi, Bahram
2018-03-01
Soil temperature (ST) is an important dynamic parameter, whose prediction is a major research topic in various fields, including agriculture, because ST has a critical role in hydrological processes at the soil surface. In this study, a new linear methodology is proposed based on stochastic methods for modeling daily soil temperature (DST). With this approach, the ST series components are determined to carry out modeling and spectral analysis. The results of this process are compared with two linear methods based on seasonal standardization and seasonal differencing in terms of four DST series. The series used in this study were measured at two stations, Champaign and Springfield, at depths of 10 and 20 cm. The results indicate that in all ST series reviewed, the periodic term is the most robust among all components. According to a comparison of the three methods applied to analyze the various series components, it appears that spectral analysis combined with stochastic methods outperformed the seasonal standardization and seasonal differencing methods. In addition to comparing the proposed methodology with linear methods, the ST modeling results were compared with two nonlinear methods in two forms: considering hydrological variables (HV) as input variables and DST modeling as a time series. In a previous study at the mentioned sites, Kim and Singh (Theor Appl Climatol 118:465-479, 2014) applied the popular Multilayer Perceptron (MLP) neural network and Adaptive Neuro-Fuzzy Inference System (ANFIS) nonlinear methods and considered HV as input variables. The comparison results signify that the relative error projected in estimating DST by the proposed methodology was about 6%, while this value with MLP and ANFIS was over 15%. Moreover, MLP and ANFIS models were employed for DST time series modeling. Due to these models' relatively inferior performance to the proposed methodology, two hybrid models were implemented: the weights and membership function of MLP and ANFIS (respectively) were optimized with the particle swarm optimization (PSO) algorithm in conjunction with the wavelet transform and nonlinear methods (Wavelet-MLP & Wavelet-ANFIS). A comparison of the proposed methodology with individual and hybrid nonlinear models in predicting DST time series indicates that the proposed methodology achieves the lowest Akaike Information Criterion (AIC) value, a criterion that considers model simplicity and accuracy simultaneously, at different depths and stations. The methodology presented in this study can thus serve as an excellent alternative to complex nonlinear methods that are normally employed to examine DST.
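For readers unfamiliar with the two linear benchmarks mentioned above, the sketch below applies seasonal standardization (removing the day-of-year mean and scaling by the day-of-year standard deviation) and seasonal differencing (lag-365 differences) to a synthetic daily soil-temperature series. The series itself is an illustrative assumption, not the Champaign or Springfield data.

```python
import numpy as np
import pandas as pd

# Synthetic daily soil-temperature series (stand-in for a measured DST series).
idx = pd.date_range("2000-01-01", periods=5 * 365, freq="D")
doy = idx.dayofyear.values
st = (15 + 10 * np.sin(2 * np.pi * (doy - 100) / 365)
      + np.random.default_rng(0).normal(0, 1.5, len(idx)))
s = pd.Series(st, index=idx)

# Seasonal standardization: remove day-of-year mean, divide by day-of-year std.
clim_mean = s.groupby(s.index.dayofyear).transform("mean")
clim_std = s.groupby(s.index.dayofyear).transform("std")
standardized = (s - clim_mean) / clim_std

# Seasonal differencing: subtract the value one year (365 days) earlier.
differenced = s.diff(365).dropna()

print(standardized.describe(), differenced.describe(), sep="\n")
```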
Mapping the Dark Matter with 6dFGS
NASA Astrophysics Data System (ADS)
Mould, Jeremy R.; Magoulas, C.; Springob, C.; Colless, M.; Jones, H.; Lucey, J.; Erdogdu, P.; Campbell, L.
2012-05-01
Fundamental plane distances from the 6dF Galaxy Redshift Survey are fitted to a model of the density field within 200/h Mpc. Likelihood is maximized for a single value of the local galaxy density, as expected in linear theory for the relation between overdensity and peculiar velocity. The dipole of the inferred southern hemisphere early type galaxy peculiar velocities is calculated within 150/h Mpc, before and after correction for the individual galaxy velocities predicted by the model. The former agrees with that obtained by other peculiar velocity studies (e.g. SFI++). The latter is only of order 150 km/sec and consistent with the expectations of the standard cosmological model and recent forecasts of the cosmic Mach number, which show a linearly declining bulk flow with increasing scale.
Mirror Instability: Quasi-linear Effects
NASA Astrophysics Data System (ADS)
Hellinger, P.; Travnicek, P. M.; Passot, T.; Sulem, P.; Kuznetsov, E. A.
2008-12-01
Nonlinear properties of the mirror instability are investigated by direct integration of the quasi-linear diffusion equation [Shapiro and Shevchenko, 1964] near threshold. The simulation results are compared to the results of standard hybrid simulations [Califano et al., 2008] and discussed in the context of the nonlinear dynamical model by Kuznetsov et al. [2007]. References: Califano, F., P. Hellinger, E. Kuznetsov, T. Passot, P. L. Sulem, and P. M. Travnicek (2008), Nonlinear mirror mode dynamics: Simulations and modeling, J. Geophys. Res., 113, A08219, doi:10.1029/2007JA012898. Kuznetsov, E., T. Passot and P. L. Sulem (2007), Dynamical model for nonlinear mirror modes near threshold, Phys. Rev. Lett., 98, 235003 . Shapiro, V. D., and V. I. Shevchenko (1964), Quasilinear theory of instability of a plasma with an anisotropic ion velocity distribution, Sov. JETP, 18, 1109.
Experimental demonstration of nonbilocal quantum correlations
Saunders, Dylan J.; Bennet, Adam J.; Branciard, Cyril; Pryde, Geoff J.
2017-01-01
Quantum mechanics admits correlations that cannot be explained by local realistic models. The most studied models are the standard local hidden variable models, which satisfy the well-known Bell inequalities. To date, most works have focused on bipartite entangled systems. We consider correlations between three parties connected via two independent entangled states. We investigate the new type of so-called “bilocal” models, which correspondingly involve two independent hidden variables. These models describe scenarios that naturally arise in quantum networks, where several independent entanglement sources are used. Using photonic qubits, we build such a linear three-node quantum network and demonstrate nonbilocal correlations by violating a Bell-like inequality tailored for bilocal models. Furthermore, we show that the demonstration of nonbilocality is more noise-tolerant than that of standard Bell nonlocality in our three-party quantum network. PMID:28508045
Low dose radiation risks for women surviving the a-bombs in Japan: generalized additive model.
Dropkin, Greg
2016-11-24
Analyses of cancer mortality and incidence in Japanese A-bomb survivors have been used to estimate radiation risks, which are generally higher for women. Relative Risk (RR) is usually modelled as a linear function of dose. Extrapolation from data including high doses predicts small risks at low doses. Generalized Additive Models (GAMs) are flexible methods for modelling non-linear behaviour. GAMs are applied to cancer incidence in female low dose subcohorts, using anonymous public data for the 1958 - 1998 Life Span Study, to test for linearity, explore interactions, adjust for the skewed dose distribution, examine significance below 100 mGy, and estimate risks at 10 mGy. For all solid cancer incidence, RR estimated from 0 - 100 mGy and 0 - 20 mGy subcohorts is significantly raised. The response tapers above 150 mGy. At low doses, RR increases with age-at-exposure and decreases with time-since-exposure, the preferred covariate. Using the empirical cumulative distribution of dose improves model fit, and capacity to detect non-linear responses. RR is elevated over wide ranges of covariate values. Results are stable under simulation, or when removing exceptional data cells, or adjusting neutron RBE. Estimates of Excess RR at 10 mGy using the cumulative dose distribution are 10 - 45 times higher than extrapolations from a linear model fitted to the full cohort. Below 100 mGy, quasipoisson models find significant effects for all solid, squamous, uterus, corpus, and thyroid cancers, and for respiratory cancers when age-at-exposure > 35 yrs. Results for the thyroid are compatible with studies of children treated for tinea capitis, and Chernobyl survivors. Results for the uterus are compatible with studies of UK nuclear workers and the Techa River cohort. Non-linear models find large, significant cancer risks for Japanese women exposed to low dose radiation from the atomic bombings. The risks should be reflected in protection standards.
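A hedged sketch of the kind of dose-response fit described above: the dose is replaced by its empirical cumulative distribution (a rank transform) and a spline-based quasi-Poisson GLM with a person-year offset is fitted. The grouped data, spline degrees of freedom and omitted covariates (age-at-exposure, time-since-exposure) are illustrative assumptions; the study itself uses generalized additive models rather than this simplified GLM.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Hypothetical grouped data: dose (mGy), person-years, solid-cancer cases.
df = pd.DataFrame({
    "dose":  [0, 5, 10, 20, 40, 60, 80, 100],
    "pyr":   [9e4, 6e4, 5e4, 4e4, 3e4, 2e4, 1.5e4, 1e4],
    "cases": [900, 610, 520, 430, 340, 235, 180, 130],
})

# Replace dose by its empirical cumulative distribution (rank transform),
# which spreads out the highly skewed dose values before smoothing.
df["dose_ecdf"] = df["dose"].rank(pct=True)

# Poisson GLM with a B-spline (patsy bs()) in the transformed dose and a
# person-year offset; scale='X2' gives a quasi-Poisson-style dispersion.
model = smf.glm("cases ~ bs(dose_ecdf, df=3)", data=df,
                family=sm.families.Poisson(),
                offset=np.log(df["pyr"])).fit(scale="X2")
print(model.summary())
```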
Harper, Nicol S; Schoppe, Oliver; Willmore, Ben D B; Cui, Zhanfeng; Schnupp, Jan W H; King, Andrew J
2016-11-01
Cortical sensory neurons are commonly characterized using the receptive field, the linear dependence of their response on the stimulus. In primary auditory cortex neurons can be characterized by their spectrotemporal receptive fields, the spectral and temporal features of a sound that linearly drive a neuron. However, receptive fields do not capture the fact that the response of a cortical neuron results from the complex nonlinear network in which it is embedded. By fitting a nonlinear feedforward network model (a network receptive field) to cortical responses to natural sounds, we reveal that primary auditory cortical neurons are sensitive over a substantially larger spectrotemporal domain than is seen in their standard spectrotemporal receptive fields. Furthermore, the network receptive field, a parsimonious network consisting of 1-7 sub-receptive fields that interact nonlinearly, consistently better predicts neural responses to auditory stimuli than the standard receptive fields. The network receptive field reveals separate excitatory and inhibitory sub-fields with different nonlinear properties, and interaction of the sub-fields gives rise to important operations such as gain control and conjunctive feature detection. The conjunctive effects, where neurons respond only if several specific features are present together, enable increased selectivity for particular complex spectrotemporal structures, and may constitute an important stage in sound recognition. In conclusion, we demonstrate that fitting auditory cortical neural responses with feedforward network models expands on simple linear receptive field models in a manner that yields substantially improved predictive power and reveals key nonlinear aspects of cortical processing, while remaining easy to interpret in a physiological context.
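The sketch below shows the forward structure of a network receptive field of the kind described above: a small number of linear-nonlinear sub-receptive fields whose sigmoidal outputs are combined by a second set of weights. Dimensions, weights and the fitting strategy noted in the comments are illustrative assumptions, not the architecture or training procedure of the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions: F frequency channels x T time lags per stimulus window.
F, T, n_hidden, n_samples = 16, 10, 3, 500
X = rng.normal(size=(n_samples, F * T))           # flattened spectrotemporal stimuli
W_in = rng.normal(size=(n_hidden, F * T)) * 0.1   # sub-receptive fields
w_out = rng.normal(size=n_hidden)                 # output weights

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def nrf_predict(X, W_in, w_out, b_hidden=0.0, b_out=0.0):
    """Network receptive field: a sum of nonlinear sub-receptive-field units."""
    hidden = sigmoid(X @ W_in.T + b_hidden)       # each unit is a linear-nonlinear sub-field
    return hidden @ w_out + b_out

rate = nrf_predict(X, W_in, w_out)
# In practice W_in, w_out and the biases would be fitted to recorded firing
# rates, e.g. by gradient descent on mean squared error with regularization.
print(rate.shape)
```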
Ernst, Anja F; Albers, Casper J
2017-01-01
Misconceptions about the assumptions behind the standard linear regression model are widespread and dangerous. These lead to using linear regression when inappropriate, and to employing alternative procedures with less statistical power when unnecessary. Our systematic literature review investigated employment and reporting of assumption checks in twelve clinical psychology journals. Findings indicate that normality of the variables themselves, rather than of the errors, was wrongfully held for a necessary assumption in 4% of papers that use regression. Furthermore, 92% of all papers using linear regression were unclear about their assumption checks, violating APA-recommendations. This paper appeals for a heightened awareness for and increased transparency in the reporting of statistical assumption checking.
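The point that normality applies to the errors rather than to the variables can be illustrated with a short sketch (the variable names and data below are assumptions): a skewed predictor is perfectly compatible with the linear regression model, and it is the residuals that should be examined.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
x = rng.exponential(size=200)              # predictor may be arbitrarily skewed
y = 1.0 + 2.0 * x + rng.normal(0, 1, 200)  # errors are normal, as the model assumes

slope, intercept = np.polyfit(x, y, 1)
residuals = y - (intercept + slope * x)

# The normality assumption concerns the errors, not the variables themselves.
print("Shapiro-Wilk on predictor x :", stats.shapiro(x).pvalue)          # typically tiny
print("Shapiro-Wilk on residuals   :", stats.shapiro(residuals).pvalue)  # typically large
```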
Lifting primordial non-Gaussianity above the noise
DOE Office of Scientific and Technical Information (OSTI.GOV)
Welling, Yvette; Woude, Drian van der; Pajer, Enrico, E-mail: welling@strw.leidenuniv.nl, E-mail: D.C.vanderWoude@uu.nl, E-mail: enrico.pajer@gmail.com
2016-08-01
Primordial non-Gaussianity (PNG) in Large Scale Structures is obfuscated by the many additional sources of non-linearity. Within the Effective Field Theory approach to Standard Perturbation Theory, we show that matter non-linearities in the bispectrum can be modeled sufficiently well to strengthen current bounds with near future surveys, such as Euclid. We find that the EFT corrections are crucial to this improvement in sensitivity. Yet, our understanding of non-linearities is still insufficient to reach important theoretical benchmarks for equilateral PNG, while, for local PNG, our forecast is more optimistic. We consistently account for the theoretical error intrinsic to the perturbative approach and discuss the details of its implementation in Fisher forecasts.
Certification of a hybrid parameter model of the fully flexible Shuttle Remote Manipulator System
NASA Technical Reports Server (NTRS)
Barhorst, Alan A.
1995-01-01
The development of high fidelity models of mechanical systems with flexible components is in flux. Many working models of these devices assume that the elastic motion is small and can be superimposed on the overall rigid body motion. A drawback of this type of modeling technique is that the linear modal model of the device must be regenerated if the elastic motion is sufficiently far from the base rigid motion. An advantage of this type of modeling is that it uses NASTRAN modal data, which is the NASA standard means of modal information exchange. A disadvantage of the linear modeling is that it fails to accurately represent large motion of the system unless constant modal updates are performed. In this study, which is a continuation of a project started last year, the drawback of the currently used modal snapshot modeling technique is addressed in a rigorous fashion by novel and easily applied means.
Data Modeling Using Finite Differences
ERIC Educational Resources Information Center
Rhoads, Kathryn; Mendoza Epperson, James A.
2017-01-01
The Common Core State Standards for Mathematics (CCSSM) states that high school students should be able to recognize patterns of growth in linear, quadratic, and exponential functions and construct such functions from tables of data (CCSSI 2010). In their work with practicing secondary teachers, the authors found that teachers may make some tacit…
Social Inequality and Labor Force Participation.
ERIC Educational Resources Information Center
King, Jonathan
The labor force participation rates of whites, blacks, and Spanish-Americans, grouped by sex, are explained in a linear regression model fitted with 1970 U.S. Census data on Standard Metropolitan Statistical Areas (SMSAs). The explanatory variables are: average age, average years of education, vocational training rate, disabled rate, unemployment…
Decipipes: Helping Students to "Get the Point"
ERIC Educational Resources Information Center
Moody, Bruce
2011-01-01
Decipipes are a representational model that can be used to help students develop conceptual understanding of decimal place value. They provide a non-standard tool for representing length, which in turn can be represented using conventional decimal notation. They are conceptually identical to Linear Arithmetic Blocks. This article reviews theory…
Aggression and Adaptive Functioning: The Bright Side to Bad Behavior.
ERIC Educational Resources Information Center
Hawley, Patricia H.; Vaughn, Brian E.
2003-01-01
Asserts that effective children and adolescents can engage in socially undesirable behavior to attain personal goals at relatively little personal or interpersonal cost, implying that relations between adjustment and aggression may not be optimally described by standard linear models. Suggests that if researchers recognize that some aggression…
The Jukes-Cantor Model of Molecular Evolution
ERIC Educational Resources Information Center
Erickson, Keith
2010-01-01
The material in this module introduces students to some of the mathematical tools used to examine molecular evolution. This topic is standard fare in many mathematical biology or bioinformatics classes, but could also be suitable for classes in linear algebra or probability. While coursework in matrix algebra, Markov processes, Monte Carlo…
Non-LTE profiles of strong solar lines
NASA Technical Reports Server (NTRS)
Schneeberger, T. J.; Beebe, H. A.
1976-01-01
The complete linearization method is applied to the formation of strong lines in the solar atmosphere. Transitions in Na(I), Mg(I), Ca(I), Mg(II), and Ca(II) are computed with a standard atmosphere and microturbulent velocity model. The computed profiles are compared to observations at disk center.
Accelerating Recovery from Poverty: Prevention Effects for Recently Separated Mothers
ERIC Educational Resources Information Center
Forgatch, Marion S.; DeGarmo, David S.
2007-01-01
This study evaluated benefits of a preventive intervention to the living standards of recently separated mothers. In the Oregon Divorce Study's randomized experimental design, data were collected 5 times over 30 months and evaluated with Hierarchical Linear Growth Models. Relative to their no-intervention control counterparts, experimental mothers…
A Graphical Approach to the Standard Principal-Agent Model.
ERIC Educational Resources Information Center
Zhou, Xianming
2002-01-01
States the principal-agent theory is difficult to teach because of its technical complexity and intractability. Indicates the equilibrium in the contract space is defined by the incentive parameter and insurance component of pay under a linear contract. Describes a graphical approach that students with basic knowledge of algebra and…
NASA Astrophysics Data System (ADS)
Landry, Brian R.; Subotnik, Joseph E.
2011-11-01
We evaluate the accuracy of Tully's surface hopping algorithm for the spin-boson model for the case of a small diabatic coupling parameter (V). We calculate the transition rates between diabatic surfaces, and we compare our results to the expected Marcus rates. We show that standard surface hopping yields an incorrect scaling with diabatic coupling (linear in V), which we demonstrate is due to an incorrect treatment of decoherence. By modifying standard surface hopping to include decoherence events, we recover the correct scaling (~V²).
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aldaya, V.; Lopez-Ruiz, F. F.; Sanchez-Sastre, E.
2006-11-03
We reformulate the gauge theory of interactions by introducing the gauge group parameters into the model. The dynamics of the new 'Goldstone-like' bosons is accomplished through a non-linear σ-model Lagrangian. They are minimally coupled according to a proper prescription which provides mass terms to the intermediate vector bosons without spoiling gauge invariance. The present formalism is explicitly applied to the Standard Model of electroweak interactions.
Optimization of detectors for the ILC
NASA Astrophysics Data System (ADS)
Suehara, Taikan; ILD Group; SID Group
2016-04-01
The International Linear Collider (ILC) is a next-generation e+e- linear collider to explore Higgs, Beyond-Standard-Model, top and electroweak physics with great precision. We are optimizing our two detectors, the International Large Detector (ILD) and the Silicon Detector (SiD), to maximize the physics reach expected at the ILC with reasonable detector cost and good reliability. The optimization study on vertex detectors, main trackers and calorimeters is underway. We aim to conclude the optimization and establish final designs in a few years, and to finish the detector TDR and proposal in reply to the expected 'green sign' of the ILC project.
Robust Computation of Linear Models, or How to Find a Needle in a Haystack
2012-02-17
robustly, project it onto a sphere, and then apply standard PCA. This approach is due to [LMS+99]. Maronna et al. [MMY06] recommend it as a preferred...of this form is due to Chandrasekaran et al. [CSPW11]. Given an observed matrix X, they propose to solve the semidefinite problem minimize ‖P‖_S1 + γ...regularization parameter γ negotiates a tradeoff between the two goals. Candès et al. [CLMW11] study the performance of (2.1) for robust linear
Dynamics of DNA/intercalator complexes
NASA Astrophysics Data System (ADS)
Schurr, J. M.; Wu, Pengguang; Fujimoto, Bryant S.
1990-05-01
Complexes of linear and supercoiled DNAs with different intercalating dyes are studied by time-resolved fluorescence polarization anisotropy using intercalated ethidium as the probe. Existing theory is generalized to take account of excitation transfer between intercalated ethidiums, and Förster theory is shown to be valid in this context. The effects of intercalated ethidium, 9-aminoacridine, and proflavine on the torsional rigidity of linear and supercoiled DNAs are studied up to rather high binding ratios. Evidence is presented that metastable secondary structure persists in dye-relaxed supercoiled DNAs, which contradicts the standard model of supercoiled DNAs.
Assessing the performance of eight real-time updating models and procedures for the Brosna River
NASA Astrophysics Data System (ADS)
Goswami, M.; O'Connor, K. M.; Bhattarai, K. P.; Shamseldin, A. Y.
2005-10-01
The flow forecasting performance of eight updating models, incorporated in the Galway River Flow Modelling and Forecasting System (GFMFS), was assessed using daily data (rainfall, evaporation and discharge) of the Irish Brosna catchment (1207 km2), considering their one to six days lead-time discharge forecasts. The Perfect Forecast of Input over the Forecast Lead-time scenario was adopted, where required, in place of actual rainfall forecasts. The eight updating models were: (i) the standard linear Auto-Regressive (AR) model, applied to the forecast errors (residuals) of a simulation (non-updating) rainfall-runoff model; (ii) the Neural Network Updating (NNU) model, also using such residuals as input; (iii) the Linear Transfer Function (LTF) model, applied to the simulated and the recently observed discharges; (iv) the Non-linear Auto-Regressive eXogenous-Input Model (NARXM), also a neural network-type structure, but having wide options of using recently observed values of one or more of the three data series, together with non-updated simulated outflows, as inputs; (v) the Parametric Simple Linear Model (PSLM), of LTF-type, using recent rainfall and observed discharge data; (vi) the Parametric Linear perturbation Model (PLPM), also of LTF-type, using recent rainfall and observed discharge data, (vii) n-AR, an AR model applied to the observed discharge series only, as a naïve updating model; and (viii) n-NARXM, a naive form of the NARXM, using only the observed discharge data, excluding exogenous inputs. The five GFMFS simulation (non-updating) models used were the non-parametric and parametric forms of the Simple Linear Model and of the Linear Perturbation Model, the Linearly-Varying Gain Factor Model, the Artificial Neural Network Model, and the conceptual Soil Moisture Accounting and Routing (SMAR) model. As the SMAR model performance was found to be the best among these models, in terms of the Nash-Sutcliffe R2 value, both in calibration and in verification, the simulated outflows of this model only were selected for the subsequent exercise of producing updated discharge forecasts. All the eight forms of updating models for producing lead-time discharge forecasts were found to be capable of producing relatively good lead-1 (1-day ahead) forecasts, with R2 values almost 90% or above. However, for higher lead time forecasts, only three updating models, viz., NARXM, LTF, and NNU, were found to be suitable, with lead-6 values of R2 about 90% or higher. Graphical comparisons were made of the lead-time forecasts for the two largest floods, one in the calibration period and the other in the verification period.
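As an illustration of the simplest of the updating schemes above (the AR error-updating model), the sketch below fits an autoregressive model to the residuals of a simulation model and adds the forecast errors back onto the simulated discharge to produce updated lead-time forecasts. The synthetic series, AR order and six-step horizon are illustrative assumptions, not the GFMFS implementation.

```python
import numpy as np
from statsmodels.tsa.ar_model import AutoReg

rng = np.random.default_rng(4)

# Illustrative series: observed discharge and a (non-updating) simulation.
n = 1000
obs = 50 + 0.1 * np.cumsum(rng.normal(0, 1, n)) + rng.normal(0, 2, n)
sim = obs + 3 * np.sin(np.arange(n) / 30) + rng.normal(0, 2, n)   # biased model output
resid = obs - sim

# AR updating model fitted to the simulation errors (model (i) in the abstract),
# holding back the last six days for forecasting.
ar = AutoReg(resid[:-6], lags=2).fit()

# Lead-1..6 forecasts of the error, added back to the simulated discharge.
err_fc = ar.predict(start=len(resid) - 6, end=len(resid) - 1)
updated_fc = sim[-6:] + err_fc
print(np.round(updated_fc, 2))
```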
NASA Technical Reports Server (NTRS)
Campbell, J. W.
1973-01-01
A stochastic model of the atmosphere between 30 and 90 km was developed for use in Monte Carlo space shuttle entry studies. The model is actually a family of models, one for each latitude-season category as defined in the 1966 U.S. Standard Atmosphere Supplements. Each latitude-season model generates a pseudo-random temperature profile whose mean is the appropriate temperature profile from the Standard Atmosphere Supplements. The standard deviation of temperature at each altitude for a given latitude-season model was estimated from sounding-rocket data. Departures from the mean temperature at each altitude were produced by assuming a linear regression of temperature on the solar heating rate of ozone. A profile of random ozone concentrations was first generated using an auxiliary stochastic ozone model, also developed as part of this study, and then solar heating rates were computed for the random ozone concentrations.
Vocal fold tissue failure: preliminary data and constitutive modeling.
Chan, Roger W; Siegmund, Thomas
2004-08-01
In human voice production (phonation), linear small-amplitude vocal fold oscillation occurs only under restricted conditions. Physiologically, phonation more often involves large-amplitude oscillation associated with tissue stresses and strains beyond their linear viscoelastic limits, particularly in the lamina propria extracellular matrix (ECM). This study reports some preliminary measurements of the tissue deformation and failure response of the vocal fold ECM under large-strain shear. The primary goal was to formulate and test a novel constitutive model for vocal fold tissue failure, based on a standard-linear cohesive-zone (SL-CZ) approach. Tissue specimens of the sheep vocal fold mucosa were subjected to torsional deformation in vitro, at constant strain rates corresponding to twist rates of 0.01, 0.1, and 1.0 rad/s. The vocal fold ECM demonstrated a nonlinear stress-strain and rate-dependent failure response, with a failure strain as low as 0.40 rad. A finite-element implementation of the SL-CZ model was capable of capturing the rate dependence in these preliminary data, demonstrating the model's potential for describing tissue failure. Further studies with additional tissue specimens and model improvements are needed to better understand vocal fold tissue failure.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Allanach, B
2004-03-01
The work contained herein constitutes a report of the "Beyond the Standard Model" working group for the Workshop "Physics at TeV Colliders", Les Houches, France, 26 May-6 June, 2003. The research presented is original, and was performed specifically for the workshop. Tools for calculations in the minimal supersymmetric standard model are presented, including a comparison of the dark matter relic density predicted by public codes. Reconstruction of supersymmetric particle masses at the LHC and a future linear collider facility is examined. Less orthodox supersymmetric signals such as non-pointing photons and R-parity violating signals are studied. Features of extra dimensional models are examined next, including measurement strategies for radions and Higgs', as well as the virtual effects of Kaluza Klein modes of gluons. Finally, there is an update on LHC Z' studies.
Casero-Alonso, V; López-Fidalgo, J; Torsney, B
2017-01-01
Binary response models are used in many real applications. For these models the Fisher information matrix (FIM) is proportional to the FIM of a weighted simple linear regression model. The same is also true when the weight function has a finite integral. Thus, optimal designs for one binary model are also optimal for the corresponding weighted linear regression model. The main objective of this paper is to provide a tool for the construction of MV-optimal designs, minimizing the maximum of the variances of the estimates, for a general design space. MV-optimality is a potentially difficult criterion because of its nondifferentiability at equal variance designs. A methodology for obtaining MV-optimal designs where the design space is a compact interval [a, b] will be given for several standard weight functions. The methodology will allow us to build a user-friendly computer tool based on Mathematica to compute MV-optimal designs. Some illustrative examples will show a representation of MV-optimal designs in the Euclidean plane, taking a and b as the axes. The applet will be explained using two relevant models. In the first one the case of a weighted linear regression model is considered, where the weight function is directly chosen from a typical family. In the second example a binary response model is assumed, where the probability of the outcome is given by a typical probability distribution. Practitioners can use the provided applet to identify the solution and to know the exact support points and design weights. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
Wu, Liejun; Chen, Maoxue; Chen, Yongli; Li, Qing X.
2013-01-01
The gas holdup time (tM) is a dominant parameter in gas chromatographic retention models. The difference equation (DE) model proposed by Wu et al. (J. Chromatogr. A 2012, http://dx.doi.org/10.1016/j.chroma.2012.07.077) excluded tM. In the present paper, we propose that the relationship between the adjusted retention time t′Rz and the carbon number z of n-alkanes follows a quadratic equation (QE) when an accurate tM is obtained. This QE model is the same as or better than the DE model for an accurate expression of the retention behavior of n-alkanes and model applications. The QE model covers a larger range of n-alkanes with better curve fittings than the linear model. The accuracy of the QE model was approximately 2–6 times better than the DE model and 18–540 times better than the LE model. Standard deviations of the QE model were approximately 2–3 times smaller than those of the DE model. PMID:22989489
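A minimal sketch of fitting a quadratic retention relation for n-alkanes once an accurate hold-up time tM is available. The retention data are made up, and the particular functional form used here (a quadratic in z fitted to ln t′R, as in Kováts-type retention relations) is an assumption for illustration; the exact QE formulation is the one defined in the paper.

```python
import numpy as np

# Illustrative n-alkane retention data (min): carbon number z, raw retention
# time tR, and an assumed accurate hold-up time tM.
z  = np.array([8, 9, 10, 11, 12, 13, 14])
tR = np.array([2.10, 2.95, 4.15, 5.80, 8.05, 11.10, 15.20])
tM = 1.20

t_adj = tR - tM                                 # adjusted retention time t'R

# Quadratic fit of ln(t'R) against z (assumed form, for illustration only).
coef = np.polyfit(z, np.log(t_adj), 2)
pred = np.exp(np.polyval(coef, z))
print("quadratic coefficients:", coef)
print("relative error (%):", np.round(100 * (pred - t_adj) / t_adj, 2))
```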
Heating and Acceleration of Charged Particles by Weakly Compressible Magnetohydrodynamic Turbulence
NASA Astrophysics Data System (ADS)
Lynn, Jacob William
We investigate the interaction between low-frequency magnetohydrodynamic (MHD) turbulence and a distribution of charged particles. Understanding this physics is central to understanding the heating of the solar wind, as well as the heating and acceleration of other collisionless plasmas. Our central method is to simulate weakly compressible MHD turbulence using the Athena code, along with a distribution of test particles which feel the electromagnetic fields of the turbulence. We also construct analytic models of transit-time damping (TTD), which results from the mirror force caused by compressible (fast or slow) MHD waves. Standard linear-theory models in the literature require an exact resonance between particle and wave velocities to accelerate particles. The models developed in this thesis go beyond standard linear theory to account for the fact that wave-particle interactions decorrelate over a short time, which allows particles with velocities off resonance to undergo acceleration and velocity diffusion. We use the test particle simulation results to calibrate and distinguish between different models for this velocity diffusion. Test particle heating is larger than the linear theory prediction, due to continued acceleration of particles with velocities off-resonance. We also include an artificial pitch-angle scattering to the test particle motion, representing the effect of high-frequency waves or velocity-space instabilities. For low scattering rates, we find that the scattering enforces isotropy and enhances heating by a modest factor. For much higher scattering rates, the acceleration is instead due to a non-resonant effect, as particles "frozen" into the fluid adiabatically gain and lose energy as eddies expand and contract. Lastly, we generalize our calculations to allow for relativistic test particles. Linear theory predicts that relativistic particles with velocities much higher than the speed of waves comprising the turbulence would undergo no acceleration; resonance-broadening modifies this conclusion and allows for a continued Fermi-like acceleration process. This may affect the observed spectra of black hole accretion disks by accelerating relativistic particles into a quasi-powerlaw tail.
Klamt, Steffen; Müller, Stefan; Regensburger, Georg; Zanghellini, Jürgen
2018-05-01
The optimization of metabolic rates (as linear objective functions) represents the methodical core of flux-balance analysis techniques which have become a standard tool for the study of genome-scale metabolic models. Besides (growth and synthesis) rates, metabolic yields are key parameters for the characterization of biochemical transformation processes, especially in the context of biotechnological applications. However, yields are ratios of rates, and hence the optimization of yields (as nonlinear objective functions) under arbitrary linear constraints is not possible with current flux-balance analysis techniques. Despite the fundamental importance of yields in constraint-based modeling, a comprehensive mathematical framework for yield optimization is still missing. We present a mathematical theory that allows one to systematically compute and analyze yield-optimal solutions of metabolic models under arbitrary linear constraints. In particular, we formulate yield optimization as a linear-fractional program. For practical computations, we transform the linear-fractional yield optimization problem to a (higher-dimensional) linear problem. Its solutions determine the solutions of the original problem and can be used to predict yield-optimal flux distributions in genome-scale metabolic models. For the theoretical analysis, we consider the linear-fractional problem directly. Most importantly, we show that the yield-optimal solution set (like the rate-optimal solution set) is determined by (yield-optimal) elementary flux vectors of the underlying metabolic model. However, yield- and rate-optimal solutions may differ from each other, and hence optimal (biomass or product) yields are not necessarily obtained at solutions with optimal (growth or synthesis) rates. Moreover, we discuss phase planes/production envelopes and yield spaces, in particular, we prove that yield spaces are convex and provide algorithms for their computation. We illustrate our findings by a small example and demonstrate their relevance for metabolic engineering with realistic models of E. coli. We develop a comprehensive mathematical framework for yield optimization in metabolic models. Our theory is particularly useful for the study and rational modification of cell factories designed under given yield and/or rate requirements. Copyright © 2018 The Authors. Published by Elsevier Inc. All rights reserved.
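The core computational step described above, turning the linear-fractional yield optimization into an ordinary linear program, can be sketched with a Charnes-Cooper-type substitution y = v / (d^T v), t = 1 / (d^T v). The toy network, bounds and objective below are illustrative assumptions rather than a genome-scale model, and this is a sketch of that general transformation, not necessarily the authors' exact formulation.

```python
import numpy as np
from scipy.optimize import linprog

# Tiny illustrative network: maximize yield c.v / d.v subject to S v = 0, lb <= v <= ub.
S = np.array([[1, -1, -1,  0],     # metabolite A: uptake -> conversion or waste
              [0,  1,  0, -1]])    # metabolite B: conversion -> product export
lb = np.array([0.0, 0.0, 0.0, 0.0])
ub = np.array([10.0, 10.0, 10.0, 10.0])
c = np.array([0.0, 0.0, 0.0, 1.0])  # numerator: product synthesis rate
d = np.array([1.0, 0.0, 0.0, 0.0])  # denominator: substrate uptake rate

# Charnes-Cooper-type LP in (y, t):
#   max c.y  s.t.  S y = 0,  d.y = 1,  lb*t <= y <= ub*t,  t >= 0.
n = S.shape[1]
A_eq = np.block([[S, np.zeros((S.shape[0], 1))],
                 [np.append(d, 0.0)[None, :]]])
b_eq = np.append(np.zeros(S.shape[0]), 1.0)
A_ub = np.block([[ np.eye(n), -ub[:, None]],   #  y - ub*t <= 0
                 [-np.eye(n),  lb[:, None]]])  # -y + lb*t <= 0
b_ub = np.zeros(2 * n)
res = linprog(c=-np.append(c, 0.0), A_ub=A_ub, b_ub=b_ub,
              A_eq=A_eq, b_eq=b_eq,
              bounds=[(None, None)] * n + [(0, None)], method="highs")

t = res.x[-1]
v_opt = res.x[:n] / t                # recover a yield-optimal flux distribution
print("optimal yield:", -res.fun, "fluxes:", np.round(v_opt, 3))
```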
Sneutrino dark matter in gauged inverse seesaw models for neutrinos.
An, Haipeng; Dev, P S Bhupal; Cai, Yi; Mohapatra, R N
2012-02-24
Extending the minimal supersymmetric standard model to explain small neutrino masses via the inverse seesaw mechanism can lead to a new light supersymmetric scalar partner which can play the role of inelastic dark matter (IDM). It is a linear combination of the superpartners of the neutral fermions in the theory (the light left-handed neutrino and two heavy standard model singlet neutrinos) which can be very light with mass in ~5-20 GeV range, as suggested by some current direct detection experiments. The IDM in this class of models has keV-scale mass splitting, which is intimately connected to the small Majorana masses of neutrinos. We predict the differential scattering rate and annual modulation of the IDM signal which can be testable at future germanium- and xenon-based detectors.
NASA standard: Trend analysis techniques
NASA Technical Reports Server (NTRS)
1988-01-01
This Standard presents descriptive and analytical techniques for NASA trend analysis applications. Trend analysis is applicable in all organizational elements of NASA connected with, or supporting, developmental/operational programs. Use of this Standard is not mandatory; however, it should be consulted for any data analysis activity requiring the identification or interpretation of trends. Trend Analysis is neither a precise term nor a circumscribed methodology, but rather connotes, generally, quantitative analysis of time-series data. For NASA activities, the appropriate and applicable techniques include descriptive and graphical statistics, and the fitting or modeling of data by linear, quadratic, and exponential models. Usually, but not always, the data is time-series in nature. Concepts such as autocorrelation and techniques such as Box-Jenkins time-series analysis would only rarely apply and are not included in this Standard. The document presents the basic ideas needed for qualitative and quantitative assessment of trends, together with relevant examples. A list of references provides additional sources of information.
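A brief sketch of the kind of trend fitting the Standard describes, comparing linear, quadratic and exponential models on a time series; the data and the use of a sum-of-squared-errors comparison are illustrative assumptions, not prescriptions from the Standard.

```python
import numpy as np
from scipy.optimize import curve_fit

# Illustrative time-series data (e.g., a monthly problem-report count).
t = np.arange(24, dtype=float)
y = np.array([ 5,  6,  6,  8,  9, 11, 12, 15, 16, 19, 22, 25,
              27, 31, 35, 40, 44, 50, 55, 62, 68, 76, 84, 93], dtype=float)

lin  = np.polyfit(t, y, 1)                        # linear trend model
quad = np.polyfit(t, y, 2)                        # quadratic trend model
exp_pars, _ = curve_fit(lambda t, a, b: a * np.exp(b * t), t, y, p0=(5, 0.1))

for name, pred in [("linear",      np.polyval(lin, t)),
                   ("quadratic",   np.polyval(quad, t)),
                   ("exponential", exp_pars[0] * np.exp(exp_pars[1] * t))]:
    sse = np.sum((y - pred) ** 2)
    print(f"{name:12s} SSE = {sse:8.1f}")
```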
1994-05-18
Control System Architecture: The Standard and Non-Standard Models (Invited Paper) - M. E. Thuot, L. R. Dalesio, LANL
Estimating seasonal evapotranspiration from temporal satellite images
Singh, Ramesh K.; Liu, Shu-Guang; Tieszen, Larry L.; Suyker, Andrew E.; Verma, Shashi B.
2012-01-01
Estimating seasonal evapotranspiration (ET) has many applications in water resources planning and management, including hydrological and ecological modeling. The availability of satellite remote sensing images is limited by the satellite repeat cycle and cloud cover. This study was conducted to determine the suitability of different methods, namely cubic spline, fixed, and linear, for estimating seasonal ET from temporal remotely sensed images. The Mapping Evapotranspiration at high Resolution with Internalized Calibration (METRIC) model, in conjunction with the wet METRIC (wMETRIC), a modified version of the METRIC model, was used to estimate ET on the days of satellite overpass using eight Landsat images during the 2001 crop growing season in the Midwestern USA. The model-estimated daily ET was in good agreement (R2 = 0.91) with the eddy covariance tower-measured daily ET. The standard error of daily ET was 0.6 mm (20%) at three validation sites in Nebraska, USA. There was no statistically significant difference (P > 0.05) among the cubic spline, fixed, and linear methods for computing seasonal (July–December) ET from temporal ET estimates. Overall, the cubic spline resulted in the lowest standard error of 6 mm (1.67%) for seasonal ET. However, further testing of this method for multiple years is necessary to determine its suitability.
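The three interpolation methods compared above can be sketched as follows for a single season of image-day ET values. The dates and ET values are made up, and for simplicity daily ET is interpolated directly rather than via a reference-ET fraction as METRIC applications usually do.

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Illustrative daily ET (mm/day) estimated on eight Landsat overpass days
# (day-of-year); values are made up.
doy_img = np.array([182, 198, 214, 230, 262, 294, 326, 358])
et_img  = np.array([6.1, 5.8, 5.2, 4.5, 3.0, 1.8, 1.0, 0.7])
days = np.arange(182, 359)     # days covered by the images (July-December here)

et_spline = CubicSpline(doy_img, et_img)(days)            # cubic spline method
et_linear = np.interp(days, doy_img, et_img)              # linear method
# "Fixed" method: each image's ET is held constant until the next overpass.
et_fixed = et_img[np.searchsorted(doy_img, days, side="right") - 1]

for name, et in [("spline", et_spline), ("linear", et_linear), ("fixed", et_fixed)]:
    print(f"{name:6s} seasonal ET = {et.sum():6.1f} mm")
```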
Techniques for the Enhancement of Linear Predictive Speech Coding in Adverse Conditions
NASA Astrophysics Data System (ADS)
Wrench, Alan A.
Available from UMI in association with The British Library. Requires signed TDF. The Linear Prediction model was first applied to speech two and a half decades ago. Since then it has been the subject of intense research and continues to be one of the principal tools in the analysis of speech. Its mathematical tractability makes it a suitable subject for study and its proven success in practical applications makes the study worthwhile. The model is known to be unsuited to speech corrupted by background noise. This has led many researchers to investigate ways of enhancing the speech signal prior to Linear Predictive analysis. In this thesis this body of work is extended. The chosen application is low bit-rate (2.4 kbits/sec) speech coding. For this task the performance of the Linear Prediction algorithm is crucial because there is insufficient bandwidth to encode the error between the modelled speech and the original input. A review of the fundamentals of Linear Prediction and an independent assessment of the relative performance of methods of Linear Prediction modelling are presented. A new method is proposed which is fast and facilitates stability checking, however, its stability is shown to be unacceptably poorer than existing methods. A novel supposition governing the positioning of the analysis frame relative to a voiced speech signal is proposed and supported by observation. The problem of coding noisy speech is examined. Four frequency domain speech processing techniques are developed and tested. These are: (i) Combined Order Linear Prediction Spectral Estimation; (ii) Frequency Scaling According to an Aural Model; (iii) Amplitude Weighting Based on Perceived Loudness; (iv) Power Spectrum Squaring. These methods are compared with the Recursive Linearised Maximum a Posteriori method. Following on from work done in the frequency domain, a time domain implementation of spectrum squaring is developed. In addition, a new method of power spectrum estimation is developed based on the Minimum Variance approach. This new algorithm is shown to be closely related to Linear Prediction but produces slightly broader spectral peaks. Spectrum squaring is applied to both the new algorithm and standard Linear Prediction and their relative performance is assessed. (Abstract shortened by UMI.).
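For context, a minimal sketch of the Linear Prediction analysis underlying such a coder: LPC coefficients obtained by the autocorrelation method, solving the Toeplitz normal equations. The synthetic frame, window choice and model order are illustrative assumptions, not the 2.4 kbit/s coder configuration studied in the thesis.

```python
import numpy as np
from scipy.linalg import solve_toeplitz

def lpc(frame, order=10):
    """Linear Prediction coefficients via the autocorrelation method."""
    frame = frame * np.hamming(len(frame))            # analysis window
    r = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    a = solve_toeplitz(r[:order], r[1:order + 1])     # normal equations R a = r
    return a                                          # predictor coefficients

# Illustrative synthetic voiced-like frame (two tones plus a little noise).
fs, n = 8000, 240
t = np.arange(n) / fs
frame = (np.sin(2 * np.pi * 500 * t) + 0.5 * np.sin(2 * np.pi * 1500 * t)
         + 0.01 * np.random.default_rng(0).normal(size=n))
print(np.round(lpc(frame, order=10), 3))
```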
Modeling turbidity and flow at daily steps in karst using ARIMA/ARFIMA-GARCH error models
NASA Astrophysics Data System (ADS)
Massei, N.
2013-12-01
Hydrological and physico-chemical variations recorded at karst springs usually reflect highly non-linear processes, and the corresponding time series are then very often also highly non-linear. Among others, turbidity, as an important parameter regarding water quality and management, is a very complex response of karst systems to rain events, involving direct transfer of particles from point-source recharge as well as resuspension of particles previously deposited and stored within the system. For those reasons, turbidity has so far not been well represented in karst hydrological models. Most of the time, the modeling approaches would involve stochastic linear models such as ARIMA-type models and their derivatives (ARMA, ARMAX, ARIMAX, ARFIMA...). Yet, linear models usually fail to represent well the whole (stochastic) process variability, and their residuals still contain useful information that can be used either to understand the whole variability or to enhance short-term predictability and forecasting. Model residuals are actually not i.i.d., which can be identified by the fact that squared residuals still present clear and significant serial correlation. Indeed, high (low) amplitudes are followed in time by high (low) amplitudes, which can be seen on residual time series as periods of time during which amplitudes are higher (lower) than the mean amplitude. This is known as the ARCH effect (AutoRegressive Conditional Heteroskedasticity), and the corresponding non-linear process affecting the residuals of a linear model can be modeled using ARCH or generalized ARCH (GARCH) non-linear modeling, approaches that are very well known in econometrics. Here we investigated the capability of ARIMA-GARCH error models to represent a ~20-yr daily turbidity time series recorded at a karst spring used for water supply of the city of Le Havre (Upper Normandy, France). ARIMA and ARFIMA models were used to represent the mean behavior of the time series, and the residuals clearly appeared to present a pronounced ARCH effect, as confirmed by Ljung-Box and McLeod-Li tests. We then identified and fitted GARCH models to the residuals of the ARIMA and ARFIMA models in order to model the conditional variance and volatility of the turbidity time series. The results eventually showed that serial correlation was successfully removed from the standardized residuals of the GARCH model, and hence that the ARIMA-GARCH error model appeared consistent for modeling such time series. The approach finally improved short-term (e.g. a few steps ahead) turbidity forecasting.
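A compact sketch of the two-stage modeling described above, assuming the statsmodels and arch packages are available: an ARIMA model for the conditional mean followed by a GARCH(1,1) model fitted to its residuals for the conditional variance. The synthetic series stands in for the ~20-yr turbidity record, and all orders and coefficients are illustrative assumptions.

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA
from arch import arch_model          # the 'arch' package is assumed to be installed

rng = np.random.default_rng(5)

# Synthetic daily turbidity-like series: AR(1) mean with GARCH(1,1) noise.
n = 7000
y = np.zeros(n)
sigma2, e = 0.5, 0.0
for i in range(1, n):
    sigma2 = 0.05 + 0.1 * e**2 + 0.85 * sigma2    # GARCH(1,1) variance recursion
    e = np.sqrt(sigma2) * rng.normal()
    y[i] = 0.8 * y[i - 1] + e

# Step 1: ARIMA for the conditional mean; its residuals still show ARCH effects.
mean_fit = ARIMA(y, order=(1, 0, 1)).fit()
resid = mean_fit.resid

# Step 2: GARCH(1,1) on the residuals for the conditional variance / volatility.
vol_fit = arch_model(resid, mean="Zero", vol="GARCH", p=1, q=1).fit(disp="off")
print(vol_fit.summary())
```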
NASA Technical Reports Server (NTRS)
Melott, A. L.; Buchert, T.; Weib, A. G.
1995-01-01
We present results showing an improvement of the accuracy of perturbation theory as applied to cosmological structure formation for a useful range of scales. The Lagrangian theory of gravitational instability of Friedmann-Lemaitre cosmogonies is compared with numerical simulations. We study the dynamics of hierarchical models as a second step. In the first step we analyzed the performance of the Lagrangian schemes for pancake models, the difference being that in the latter models the initial power spectrum is truncated. This work probed the quasi-linear and weakly non-linear regimes. We here explore whether the results found for pancake models carry over to hierarchical models which are evolved deeply into the non-linear regime. We smooth the initial data by using a variety of filter types and filter scales in order to determine the optimal performance of the analytical models, as has been done for the 'Zel'dovich-approximation' - hereafter TZA - in previous work. We find that for spectra with negative power-index the second-order scheme performs considerably better than TZA in terms of statistics which probe the dynamics, and slightly better in terms of low-order statistics like the power-spectrum. However, in contrast to the results found for pancake models, where the higher-order schemes get worse than TZA at late non-linear stages and on small scales, we here find that the second-order model is as robust as TZA, retaining the improvement at later stages and on smaller scales. In view of these results we expect that the second-order truncated Lagrangian model is especially useful for the modelling of standard dark matter models such as Hot-, Cold-, and Mixed-Dark-Matter.
Hernández Alava, Mónica; Wailoo, Allan; Wolfe, Fred; Michaud, Kaleb
2014-10-01
Analysts frequently estimate health state utility values from other outcomes. Utility values like EQ-5D have characteristics that make standard statistical methods inappropriate. We have developed a bespoke, mixture model approach to directly estimate EQ-5D. An indirect method, "response mapping," first estimates the level on each of the 5 dimensions of the EQ-5D and then calculates the expected tariff score. These methods have never previously been compared. We use a large observational database from patients with rheumatoid arthritis (N = 100,398). Direct estimation of UK EQ-5D scores as a function of the Health Assessment Questionnaire (HAQ), pain, and age was performed with a limited dependent variable mixture model. Indirect modeling was undertaken with a set of generalized ordered probit models with expected tariff scores calculated mathematically. Linear regression was reported for comparison purposes. Impact on cost-effectiveness was demonstrated with an existing model. The linear model fits poorly, particularly at the extremes of the distribution. The bespoke mixture model and the indirect approaches improve fit over the entire range of EQ-5D. Mean average error is 10% and 5% lower compared with the linear model, respectively. Root mean squared error is 3% and 2% lower. The mixture model demonstrates superior performance to the indirect method across almost the entire range of pain and HAQ. These lead to differences in cost-effectiveness of up to 20%. There are limited data from patients in the most severe HAQ health states. Modeling of EQ-5D from clinical measures is best performed directly using the bespoke mixture model. This substantially outperforms the indirect method in this example. Linear models are inappropriate, suffer from systematic bias, and generate values outside the feasible range. © The Author(s) 2013.
Hou, Zhifei; Sun, Guoxiang; Guo, Yong
2016-01-01
The present study demonstrated the use of the Linear Quantitative Profiling Method (LQPM) to evaluate the quality of Alkaloids of Sophora flavescens (ASF) based on chromatographic fingerprints in an accurate, economical and fast way. Both linear qualitative and quantitative similarities were calculated in order to monitor the consistency of the samples. The results indicate that the linear qualitative similarity (LQLS) is not sufficiently discriminating due to the predominant presence of three alkaloid compounds (matrine, sophoridine and oxymatrine) in the test samples; however, the linear quantitative similarity (LQTS) was shown to clearly discriminate among the samples based on differences in the quantitative content of all the chemical components. In addition, the fingerprint analysis was supported by quantitative analysis of three marker compounds. The LQTS was found to be highly correlated with the contents of the marker compounds, indicating that quantitative analysis of the marker compounds may be substituted with the LQPM based on the chromatographic fingerprints for the purpose of quantifying all chemicals of a complex sample system. Furthermore, once a reference fingerprint (RFP) has been developed from a standard preparation by direct detection and the composition similarities have been calculated, LQPM can employ the classical mathematical model to quantify the multiple components of ASF samples effectively without any chemical standards. PMID:27529425
Bardhan, Jaydeep P; Knepley, Matthew G
2014-10-07
We show that charge-sign-dependent asymmetric hydration can be modeled accurately using linear Poisson theory after replacing the standard electric-displacement boundary condition with a simple nonlinear boundary condition. Using a single multiplicative scaling factor to determine atomic radii from molecular dynamics Lennard-Jones parameters, the new model accurately reproduces MD free-energy calculations of hydration asymmetries for: (i) monatomic ions, (ii) titratable amino acids in both their protonated and unprotonated states, and (iii) the Mobley "bracelet" and "rod" test problems [D. L. Mobley, A. E. Barber II, C. J. Fennell, and K. A. Dill, "Charge asymmetries in hydration of polar solutes," J. Phys. Chem. B 112, 2405-2414 (2008)]. Remarkably, the model also justifies the use of linear response expressions for charging free energies. Our boundary-element method implementation demonstrates the ease with which other continuum-electrostatic solvers can be extended to include asymmetry.
Donkin, Chris; Averell, Lee; Brown, Scott; Heathcote, Andrew
2009-11-01
Cognitive models of the decision process provide greater insight into response time and accuracy than do standard ANOVA techniques. However, such models can be mathematically and computationally difficult to apply. We provide instructions and computer code for three methods for estimating the parameters of the linear ballistic accumulator (LBA), a new and computationally tractable model of decisions between two or more choices. These methods-a Microsoft Excel worksheet, scripts for the statistical program R, and code for implementation of the LBA into the Bayesian sampling software WinBUGS-vary in their flexibility and user accessibility. We also provide scripts in R that produce a graphical summary of the data and model predictions. In a simulation study, we explored the effect of sample size on parameter recovery for each method. The materials discussed in this article may be downloaded as a supplement from http://brm.psychonomic-journals.org/content/supplemental.
NASA Astrophysics Data System (ADS)
Lahaie, Sébastien; Parkes, David C.
We consider the problem of fair allocation in the package assignment model, where a set of indivisible items, held by a single seller, must be efficiently allocated to agents with quasi-linear utilities. A fair assignment is one that is efficient and envy-free. We consider a model where bidders have superadditive valuations, meaning that items are pure complements. Our central result is that core outcomes are fair and even coalition-fair over this domain, while fair distributions may not even exist for general valuations. Of relevance to auction design, we also establish that the core is equivalent to the set of anonymous-price competitive equilibria, and that superadditive valuations are a maximal domain that guarantees the existence of anonymous-price competitive equilibrium. Our results are analogs of core equivalence results for linear prices in the standard assignment model, and for nonlinear, non-anonymous prices in the package assignment model with general valuations.
Spatial generalised linear mixed models based on distances.
Melo, Oscar O; Mateu, Jorge; Melo, Carlos E
2016-10-01
Risk models derived from environmental data have been widely shown to be effective in delineating geographical areas of risk because they are intuitively easy to understand. We present a new method based on distances, which allows the modelling of continuous and non-continuous random variables through distance-based spatial generalised linear mixed models. The parameters are estimated using Markov chain Monte Carlo maximum likelihood, which is a feasible and a useful technique. The proposed method depends on a detrending step built from continuous or categorical explanatory variables, or a mixture among them, by using an appropriate Euclidean distance. The method is illustrated through the analysis of the variation in the prevalence of Loa loa among a sample of village residents in Cameroon, where the explanatory variables included elevation, together with maximum normalised-difference vegetation index and the standard deviation of normalised-difference vegetation index calculated from repeated satellite scans over time. © The Author(s) 2013.
Liu, Y; Allen, R
2002-09-01
The study aimed to model the cerebrovascular system, using a linear ARX model based on data simulated by a comprehensive physiological model, and to assess the range of applicability of linear parametric models. Arterial blood pressure (ABP) and middle cerebral arterial blood flow velocity (MCAV) were measured from 11 subjects non-invasively, following step changes in ABP, using the thigh cuff technique. By optimising parameters associated with autoregulation, using a non-linear optimisation technique, the physiological model showed a good performance (r=0.83+/-0.14) in fitting MCAV. An additional five sets of measured ABP of length 236+/-154 s were acquired from a subject at rest. These were normalised and rescaled to coefficients of variation (CV=SD/mean) of 2% and 10% for model comparisons. Randomly generated Gaussian noise with standard deviation (SD) from 1% to 5% was added to both ABP and physiologically simulated MCAV (SMCAV), with 'normal' and 'impaired' cerebral autoregulation, to simulate the real measurement conditions. ABP and SMCAV were fitted by ARX modelling, and cerebral autoregulation was quantified by a 5 s recovery percentage R5% of the step responses of the ARX models. The study suggests that cerebral autoregulation can be assessed by computing the R5% of the step response of an ARX model of appropriate order, even when measurement noise is considerable.
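A minimal numpy sketch of the kind of ARX fitting and step-response analysis described above, assuming a single pressure input and a velocity output sampled at a fixed rate; the model order, the identification by ordinary least squares, and the exact normalisation of the 5 s recovery index are illustrative assumptions, not the authors' implementation.

import numpy as np

def fit_arx(u, y, na=2, nb=2):
    # Least-squares fit of y[t] = sum_i a_i*y[t-i] + sum_j b_j*u[t-j] (ARX, no noise model)
    n = max(na, nb)
    rows = [np.concatenate([y[t-na:t][::-1], u[t-nb:t][::-1]]) for t in range(n, len(y))]
    theta, *_ = np.linalg.lstsq(np.array(rows), y[n:], rcond=None)
    return theta[:na], theta[na:]

def step_response(a, b, n_steps):
    # Deterministic response of the fitted ARX model to a unit step in the input
    y, u = np.zeros(n_steps), np.ones(n_steps)
    na, nb = len(a), len(b)
    for t in range(max(na, nb), n_steps):
        y[t] = a @ y[t-na:t][::-1] + b @ u[t-nb:t][::-1]
    return y

def recovery_r5(step, fs, t_eval=5.0):
    # One plausible reading of R5%: how far the response has returned from its peak
    # toward its final value 5 s after the step (0% = no recovery, 100% = full recovery)
    peak, y5, y_inf = step.max(), step[int(t_eval * fs)], step[-1]
    return 100.0 * (peak - y5) / (peak - y_inf) if peak != y_inf else 0.0

# usage (with measured, normalised ABP in `abp` and flow velocity in `mcav` at fs Hz):
# a, b = fit_arx(abp, mcav); r5 = recovery_r5(step_response(a, b, 30 * fs), fs)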
Generalized t-statistic for two-group classification.
Komori, Osamu; Eguchi, Shinto; Copas, John B
2015-06-01
In the classic discriminant model of two multivariate normal distributions with equal variance matrices, the linear discriminant function is optimal both in terms of the log likelihood ratio and in terms of maximizing the standardized difference (the t-statistic) between the means of the two distributions. In a typical case-control study, normality may be sensible for the control sample but heterogeneity and uncertainty in diagnosis may suggest that a more flexible model is needed for the cases. We generalize the t-statistic approach by finding the linear function which maximizes a standardized difference but with data from one of the groups (the cases) filtered by a possibly nonlinear function U. We study conditions for consistency of the method and find the function U which is optimal in the sense of asymptotic efficiency. Optimality may also extend to other measures of discriminatory efficiency such as the area under the receiver operating characteristic curve. The optimal function U depends on a scalar probability density function which can be estimated non-parametrically using a standard numerical algorithm. A lasso-like version for variable selection is implemented by adding L1-regularization to the generalized t-statistic. Two microarray data sets in the study of asthma and various cancers are used as motivating examples. © 2014, The International Biometric Society.
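As a point of reference, a short numpy sketch of the unfiltered special case (U equal to the identity), in which the linear function maximising the standardized mean difference reduces to the familiar pooled-covariance discriminant direction; the optimal filter U and the L1-regularised variant discussed above are not reproduced here.

import numpy as np

def t_statistic_direction(X_cases, X_controls, ridge=1e-6):
    # w maximising the standardized difference between group means:
    # w proportional to S_pooled^{-1} (mean_cases - mean_controls)
    mu1, mu0 = X_cases.mean(axis=0), X_controls.mean(axis=0)
    n1, n0 = len(X_cases), len(X_controls)
    S = ((n1 - 1) * np.cov(X_cases, rowvar=False) +
         (n0 - 1) * np.cov(X_controls, rowvar=False)) / (n1 + n0 - 2)
    S += ridge * np.eye(S.shape[0])          # small ridge for numerical stability
    w = np.linalg.solve(S, mu1 - mu0)
    return w / np.linalg.norm(w)

rng = np.random.default_rng(0)
controls = rng.normal(0.0, 1.0, size=(200, 10))
cases = rng.normal(0.3, 1.0, size=(150, 10))      # shifted means, equal covariance
w = t_statistic_direction(cases, controls)
print("discriminant scores of new samples:", (rng.normal(size=(3, 10)) @ w).round(2))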
A Criterion to Control Nonlinear Error in the Mixed-Mode Bending Test
NASA Technical Reports Server (NTRS)
Reeder, James R.
2002-01-01
The mixed-mode bending test has been widely used to measure delamination toughness and was recently standardized by ASTM as Standard Test Method D6671-01. This simple test is a combination of the standard Mode I (opening) test and a Mode II (sliding) test. This test uses a unidirectional composite test specimen with an artificial delamination subjected to bending loads to characterize when a delamination will extend. When the displacements become large, the linear theory used to analyze the results of the test yields errors in the calculated toughness values. The current standard places no limit on the specimen loading, and therefore test data that are significantly in error can be generated while still following the standard. A method of limiting the error that can be incurred in the calculated toughness values is needed. In this paper, nonlinear models of the MMB test are refined. One of the nonlinear models is then used to develop a simple criterion for prescribing conditions under which the nonlinear error will remain below 5%.
Intervertebral disc response to cyclic loading--an animal model.
Ekström, L; Kaigle, A; Hult, E; Holm, S; Rostedt, M; Hansson, T
1996-01-01
The viscoelastic response of a lumbar motion segment loaded in cyclic compression was studied in an in vivo porcine model (N = 7). Using surgical techniques, a miniaturized servohydraulic exciter was attached to the L2-L3 motion segment via pedicle fixation. A dynamic loading scheme was implemented, which consisted of one hour of sinusoidal vibration at 5 Hz, 50 N peak load, followed by one hour of restitution at zero load and one hour of sinusoidal vibration at 5 Hz, 100 N peak load. The force and displacement responses of the motion segment were sampled at 25 Hz. The experimental data were used for evaluating the parameters of two viscoelastic models: a standard linear solid model (three-parameter) and a linear Burger's fluid model (four-parameter). In this study, the creep behaviour under sinusoidal vibration at 5 Hz closely resembled the creep behaviour under static loading observed in previous studies. Expanding the three-parameter solid model into a four-parameter fluid model made it possible to separate out a progressive linear displacement term. This deformation was not fully recovered during restitution and is therefore an indication of a specific effect caused by the cyclic loading. High variability was observed in the parameters determined from the 50 N experimental data, particularly for the elastic modulus E1. However, at the 100 N load level, significant differences between the models were found. Both models accurately predicted the creep response under the first 800 s of 100 N loading, as displayed by mean absolute errors for the calculated deformation data from the experimental data of 1.26 and 0.97 percent for the solid and fluid models respectively. The linear Burger's fluid model, however, yielded superior predictions particularly for the initial elastic response.
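A small scipy sketch of fitting a four-parameter Burger's (fluid) creep model to a constant-load creep record; the creep-compliance expression is the textbook form, and the load, units, and starting values are placeholders rather than the experimental values above.

import numpy as np
from scipy.optimize import curve_fit

LOAD = 100.0   # assumed constant peak load for the creep phase (placeholder units)

def burgers_creep(t, E1, eta1, E2, eta2):
    # Textbook Burger's fluid creep: instantaneous elastic term + steady viscous flow
    # + delayed (Kelvin-Voigt) term; deformation = LOAD * compliance(t)
    return LOAD * (1.0 / E1 + t / eta1 + (1.0 / E2) * (1.0 - np.exp(-E2 * t / eta2)))

# synthetic stand-in for the first 800 s of the 100 N loading phase
t = np.linspace(0.0, 800.0, 400)
rng = np.random.default_rng(1)
d_obs = burgers_creep(t, 900.0, 8e5, 400.0, 4e4) + rng.normal(0.0, 0.002, t.size)

popt, _ = curve_fit(burgers_creep, t, d_obs, p0=[1e3, 1e6, 1e3, 1e5],
                    bounds=(1e-6, np.inf))
print("fitted E1, eta1, E2, eta2:", popt.round(1))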
Poisson Mixture Regression Models for Heart Disease Prediction.
Mufudza, Chipo; Erol, Hamza
2016-01-01
Early heart disease control can be achieved by high disease prediction and diagnosis efficiency. This paper focuses on the use of model-based clustering techniques to predict and diagnose heart disease via Poisson mixture regression models. Analysis and application of Poisson mixture regression models are here addressed under two different classes: standard and concomitant variable mixture regression models. Results show that a two-component concomitant variable Poisson mixture regression model predicts heart disease better than both the standard Poisson mixture regression model and the ordinary general linear Poisson regression model due to its low Bayesian Information Criterion value. Furthermore, a zero-inflated Poisson mixture regression model turned out to be the best model overall for heart disease prediction, as it both clusters individuals into high- or low-risk categories and predicts the rate of heart disease component-wise given the available clusters. It is deduced that heart disease prediction can be done effectively by identifying the major risks component-wise using Poisson mixture regression models.
Estimating energy expenditure from heart rate in older adults: a case for calibration.
Schrack, Jennifer A; Zipunnikov, Vadim; Goldsmith, Jeff; Bandeen-Roche, Karen; Crainiceanu, Ciprian M; Ferrucci, Luigi
2014-01-01
Accurate measurement of free-living energy expenditure is vital to understanding changes in energy metabolism with aging. The efficacy of heart rate as a surrogate for energy expenditure is rooted in the assumption of a linear function between heart rate and energy expenditure, but its validity and reliability in older adults remains unclear. To assess the validity and reliability of the linear function between heart rate and energy expenditure in older adults using different levels of calibration. Heart rate and energy expenditure were assessed across five levels of exertion in 290 adults participating in the Baltimore Longitudinal Study of Aging. Correlation and random effects regression analyses assessed the linearity of the relationship between heart rate and energy expenditure and cross-validation models assessed predictive performance. Heart rate and energy expenditure were highly correlated (r=0.98) and linear regardless of age or sex. Intra-person variability was low but inter-person variability was high, with substantial heterogeneity of the random intercept (s.d. =0.372) despite similar slopes. Cross-validation models indicated individual calibration data substantially improves accuracy predictions of energy expenditure from heart rate, reducing the potential for considerable measurement bias. Although using five calibration measures provided the greatest reduction in the standard deviation of prediction errors (1.08 kcals/min), substantial improvement was also noted with two (0.75 kcals/min). These findings indicate standard regression equations may be used to make population-level inferences when estimating energy expenditure from heart rate in older adults but caution should be exercised when making inferences at the individual level without proper calibration.
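A brief statsmodels sketch of the random-intercept regression of energy expenditure on heart rate that underlies this kind of calibration argument; the data frame is synthetic stand-in data and the variable names are hypothetical, not taken from the study.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n_subj, n_levels = 40, 5
subj = np.repeat(np.arange(n_subj), n_levels)
hr = rng.normal(90.0, 20.0, subj.size)                        # heart rate, bpm
offset = rng.normal(-4.0, 0.4, n_subj)                        # person-specific intercepts
ee = offset[subj] + 0.08 * hr + rng.normal(0.0, 0.3, subj.size)   # kcal/min
df = pd.DataFrame({"subject": subj, "heart_rate": hr, "ee": ee})

# Common slope, subject-specific random intercept: the population line supports
# group-level inference, while the estimated random effects act as the individual
# calibration needed for person-level predictions.
fit = smf.mixedlm("ee ~ heart_rate", data=df, groups=df["subject"]).fit()
print(fit.params)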
Monotone Approximation for a Nonlinear Size and Class Age Structured Epidemic Model
2006-02-22
…follows from standard results, given that they are all linear problems with local boundary conditions for Sinko-Streifer type systems.
Phenomenology of TeV little string theory from holography.
Antoniadis, Ignatios; Arvanitaki, Asimina; Dimopoulos, Savas; Giveon, Amit
2012-02-24
We study the graviton phenomenology of TeV little string theory by exploiting its holographic gravity dual five-dimensional theory. This dual corresponds to a linear dilaton background with a large bulk that confines the standard model fields to the boundary of space. The linear dilaton geometry produces a unique Kaluza-Klein graviton spectrum that exhibits a ~TeV mass gap followed by a near continuum of narrow resonances that are separated from each other by only ~30 GeV. Resonant production of these particles at the LHC is the signature of this framework that distinguishes it from large extra dimensions, where the Kaluza-Klein states are almost a continuum with no mass gap, and warped models, where the states are separated by a TeV.
On use of image quality metrics for perceptual blur modeling: image/video compression case
NASA Astrophysics Data System (ADS)
Cha, Jae H.; Olson, Jeffrey T.; Preece, Bradley L.; Espinola, Richard L.; Abbott, A. Lynn
2018-02-01
Linear system theory is employed to make target acquisition performance predictions for electro-optical/infrared imaging systems where the modulation transfer function (MTF) may be imposed from a nonlinear degradation process. Previous research relying on image quality metrics (IQM) methods, which heuristically estimate the perceived MTF, has suggested that an average perceived MTF can be used to model some types of degradation such as image compression. Here, we discuss the validity of the IQM approach by mathematically analyzing the associated heuristics from the perspective of reliability, robustness, and tractability. Experiments with standard images compressed by x.264 encoding suggest that the compression degradation can be estimated by a perceived MTF within boundaries defined by well-behaved curves with marginal error. Our results confirm that the IQM linearizer methodology provides a credible tool for sensor performance modeling.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Watson, R.
Waterflooding is the most commonly used secondary oil recovery technique. One of the requirements for understanding waterflood performance is a good knowledge of the basic properties of the reservoir rocks. This study is aimed at correlating rock-pore characteristics to oil recovery from various reservoir rock types and incorporating these properties into empirical models for predicting oil recovery. For that reason, this report deals with the analysis and interpretation of experimental data collected from core floods and correlated against measurements of absolute permeability, porosity, wettability index, mercury porosimetry properties and irreducible water saturation. The results of the radial-core and linear-core flow investigations and the other associated experimental analyses are presented and incorporated into empirical models to improve the predictions of oil recovery resulting from waterflooding for sandstone and limestone reservoirs. For the radial-core case, the standardized regression model selected, based on a subset of the variables, predicted oil recovery by waterflooding with a standard deviation of 7%. For the linear-core case, separate models are developed using common, uncommon and a combination of both types of rock properties. It was observed that residual oil saturation and oil recovery are better predicted with the inclusion of both common and uncommon rock/fluid properties in the predictive models.
Solares, Santiago D
2016-01-01
Significant progress has been accomplished in the development of experimental contact-mode and dynamic-mode atomic force microscopy (AFM) methods designed to measure surface material properties. However, current methods are based on one-dimensional (1D) descriptions of the tip-sample interaction forces, thus neglecting the intricacies involved in the material behavior of complex samples (such as soft viscoelastic materials) as well as the differences in material response between the surface and the bulk. In order to begin to address this gap, a computational study is presented where the sample is simulated using an enhanced version of a recently introduced model that treats the surface as a collection of standard-linear-solid viscoelastic elements. The enhanced model introduces in-plane surface elastic forces that can be approximately related to a two-dimensional (2D) Young's modulus. Relevant cases are discussed for single- and multifrequency intermittent-contact AFM imaging, with focus on the calculated surface indentation profiles and tip-sample interaction force curves, as well as their implications with regards to experimental interpretation. A variety of phenomena are examined in detail, which highlight the need for further development of more physically accurate sample models that are specifically designed for AFM simulation. A multifrequency AFM simulation tool based on the above sample model is provided as supporting information.
Patterns of Growth after Kidney Transplantation among Children with ESRD
Franke, Doris; Thomas, Lena; Steffens, Rena; Pavičić, Leo; Gellermann, Jutta; Froede, Kerstin; Querfeld, Uwe; Haffner, Dieter; Živičnjak, Miroslav
2015-01-01
Background and objectives Poor linear growth is a frequent complication of CKD. This study evaluated the effect of kidney transplantation on age-related growth of linear body segments in pediatric renal transplant recipients who were enrolled from May 1998 until August 2013 in the CKD Growth and Development observational cohort study. Design, setting, participants, & measurements Linear growth (height, sitting height, arm and leg lengths) was prospectively investigated during 1639 annual visits in a cohort of 389 pediatric renal transplant recipients ages 2–18 years with a median follow-up of 3.4 years (interquartile range, 1.9–5.9 years). Linear mixed-effects models were used to assess age-related changes and predictors of linear body segments. Results During early childhood, patients showed lower mean SD scores (SDS) for height (−1.7) and a markedly elevated sitting height index (ratio of sitting height to total body height) compared with healthy children (1.6 SDS), indicating disproportionate stunting (each P<0.001). After early childhood a sustained increase in standardized leg length and a constant decrease in standardized sitting height were noted (each P<0.001), resulting in significant catch-up growth and almost complete normalization of sitting height index by adult age (0.4 SDS; P<0.01 versus age 2–4 years). Time after transplantation, congenital renal disease, bone maturation, steroid exposure, degree of metabolic acidosis and anemia, intrauterine growth restriction, and parental height were significant predictors of linear body dimensions and body proportions (each P<0.05). Conclusions Children with ESRD present with disproportionate stunting. In pediatric renal transplant recipients, a sustained increase in standardized leg length and total body height is observed from preschool until adult age, resulting in restoration of body proportions in most patients. Reduction of steroid exposure and optimal metabolic control before and after transplantation are promising measures to further improve growth outcome. PMID:25352379
Laboratory measurements of the millimeter-wave spectra of calcium isocyanide
NASA Astrophysics Data System (ADS)
Steimle, Timothy C.; Saito, Shuji; Takano, Shuro
1993-06-01
The ground state of CaNC is presently characterized by mm-wave spectroscopy, using a standard Hamiltonian linear molecule model to analyze the spectrum. The resulting spectroscopic parameters were used to predict the transition frequencies and Einstein A-coefficients, which should make possible a quantitative astrophysical search for CaNC.
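For reference, a tiny sketch of the rigid-rotor-plus-centrifugal-distortion transition frequencies used for linear molecules, nu(J -> J+1) = 2B(J+1) - 4D(J+1)^3; the constants below are order-of-magnitude placeholders, not the fitted CaNC parameters, and the spin-rotation structure of the actual ground state is omitted.

def linear_rotor_frequency(J, B, D):
    # J -> J+1 rotational transition of a linear molecule (MHz), ignoring fine structure
    return 2.0 * B * (J + 1) - 4.0 * D * (J + 1) ** 3

B, D = 4000.0, 0.003   # placeholder rotational and distortion constants in MHz
for J in range(20, 24):
    print(f"J = {J} -> {J + 1}: {linear_rotor_frequency(J, B, D):.3f} MHz")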
Least Squares Metric, Unidimensional Scaling of Multivariate Linear Models.
ERIC Educational Resources Information Center
Poole, Keith T.
1990-01-01
A general approach to least-squares unidimensional scaling is presented. Ordering information contained in the parameters is used to transform the standard squared error loss function into a discrete rather than continuous form. Monte Carlo tests with 38,094 ratings of 261 senators, and 1,258 representatives demonstrate the procedure's…
Modelling Problem-Solving Situations into Number Theory Tasks: The Route towards Generalisation
ERIC Educational Resources Information Center
Papadopoulos, Ioannis; Iatridou, Maria
2010-01-01
This paper examines the way two 10th graders cope with a non-standard generalisation problem that involves elementary concepts of number theory (more specifically linear Diophantine equations) in the geometrical context of a rectangle's area. Emphasis is given on how the students' past experience of problem solving (expressed through interplay…
ERIC Educational Resources Information Center
Bryan, Kurt
2011-01-01
This article presents an application of standard undergraduate ODE techniques to a modern engineering problem, that of using a tuned mass damper to control the vibration of a skyscraper. This material can be used in any ODE course in which the students have been familiarized with basic spring-mass models, resonance, and linear systems of ODEs.…
Low-sensitivity H ∞ filter design for linear delta operator systems with sampling time jitter
NASA Astrophysics Data System (ADS)
Guo, Xiang-Gui; Yang, Guang-Hong
2012-04-01
This article is concerned with the problem of designing H ∞ filters for a class of linear discrete-time systems with low-sensitivity to sampling time jitter via delta operator approach. Delta-domain model is used to avoid the inherent numerical ill-condition resulting from the use of the standard shift-domain model at high sampling rates. Based on projection lemma in combination with the descriptor system approach often used to solve problems related to delay, a novel bounded real lemma with three slack variables for delta operator systems is presented. A sensitivity approach based on this novel lemma is proposed to mitigate the effects of sampling time jitter on system performance. Then, the problem of designing a low-sensitivity filter can be reduced to a convex optimisation problem. An important consideration in the design of correlation filters is the optimal trade-off between the standard H ∞ criterion and the sensitivity of the transfer function with respect to sampling time jitter. Finally, a numerical example demonstrating the validity of the proposed design method is given.
Microwave-field-driven acoustic modes in DNA.
Edwards, G S; Davis, C C; Saffer, J D; Swicord, M L
1985-01-01
The direct coupling of a microwave field to selected DNA molecules is demonstrated using standard dielectrometry. The absorption is resonant with a typical lifetime of 300 ps. Such a long lifetime is unexpected for DNA in aqueous solution at room temperature. Resonant absorption at fundamental and harmonic frequencies for both supercoiled circular and linear DNA agrees with an acoustic mode model. Our associated acoustic velocities for linear DNA are very close to the acoustic velocity of the longitudinal acoustic mode independently observed on DNA fibers using Brillouin spectroscopy. The difference in acoustic velocities for supercoiled circular and linear DNA is discussed in terms of solvent shielding of the nonbonded potentials in DNA. PMID:3893557
Imprints of non-standard dark energy and dark matter models on the 21cm intensity map power spectrum
NASA Astrophysics Data System (ADS)
Carucci, Isabella P.; Corasaniti, Pier-Stefano; Viel, Matteo
2017-12-01
We study the imprint of non-standard dark energy (DE) and dark matter (DM) models on the 21cm intensity map power spectra from high-redshift neutral hydrogen (HI) gas. To this purpose we use halo catalogs from N-body simulations of dynamical DE models and DM scenarios which are as successful as the standard Cold Dark Matter model with Cosmological Constant (ΛCDM) at interpreting available cosmological observations. We limit our analysis to halo catalogs at redshift z=1 and 2.3 which are common to all simulations. For each catalog we model the HI distribution by using a simple prescription to associate the HI gas mass to N-body halos. We find that the DE models leave a distinct signature on the HI spectra across a wide range of scales, which correlates with differences in the halo mass function and the onset of the non-linear regime of clustering. In the case of the non-standard DM model significant differences of the HI spectra with respect to the ΛCDM model only arise from the suppressed abundance of low mass halos. These cosmological model dependent features also appear in the 21cm spectra. In particular, we find that future SKA measurements can distinguish the imprints of DE and DM models at high statistical significance.
Ying, Wenjun; Henriquez, Craig S
2007-04-01
A novel hybrid finite element method (FEM) for modeling the response of passive and active biological membranes to external stimuli is presented. The method is based on the differential equations that describe the conservation of electric flux and membrane currents. By introducing the electric flux through the cell membrane as an additional variable, the algorithm decouples the linear partial differential equation part from the nonlinear ordinary differential equation part that defines the membrane dynamics of interest. This conveniently results in two subproblems: a linear interface problem and a nonlinear initial value problem. The linear interface problem is solved with a hybrid FEM. The initial value problem is integrated by a standard ordinary differential equation solver such as the Euler and Runge-Kutta methods. During time integration, these two subproblems are solved alternatively. The algorithm can be used to model the interaction of stimuli with multiple cells of almost arbitrary geometries and complex ion-channel gating at the plasma membrane. Numerical experiments are presented demonstrating the uses of the method for modeling field stimulation and action potential propagation.
da Silva, Claudia Pereira; Emídio, Elissandro Soares; de Marchi, Mary Rosa Rodrigues
2015-01-01
This paper describes the validation of a method consisting of solid-phase extraction followed by gas chromatography-tandem mass spectrometry for the analysis of the ultraviolet (UV) filters benzophenone-3, ethylhexyl salicylate, ethylhexyl methoxycinnamate and octocrylene. The method validation criteria included evaluation of selectivity, analytical curve, trueness, precision, limits of detection and limits of quantification. The non-weighted linear regression model has traditionally been used for calibration, but it is not necessarily the optimal model in all cases. Because the assumption of homoscedasticity was not met for the analytical data in this work, a weighted least squares linear regression was used for the calibration method. The evaluated analytical parameters were satisfactory for the analytes and showed recoveries at four fortification levels between 62% and 107%, with relative standard deviations less than 14%. The detection limits ranged from 7.6 to 24.1 ng L(-1). The proposed method was used to determine the amount of UV filters in water samples from water treatment plants in Araraquara and Jau in São Paulo, Brazil. Copyright © 2014 Elsevier B.V. All rights reserved.
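A short statsmodels sketch of the weighted least-squares calibration step, using 1/x^2 weights as one common choice when the variance grows with concentration; the numbers are synthetic and the weighting scheme actually adopted in the paper is not assumed.

import numpy as np
import statsmodels.api as sm

conc = np.array([10., 25., 50., 100., 250., 500., 1000.])     # ng/L, synthetic levels
resp = np.array([0.9, 2.3, 4.7, 9.6, 24.8, 49.1, 101.3])      # instrument response

X = sm.add_constant(conc)
wls = sm.WLS(resp, X, weights=1.0 / conc**2).fit()   # down-weights the high levels
ols = sm.OLS(resp, X).fit()
print("WLS intercept/slope:", wls.params.round(4))
print("OLS intercept/slope:", ols.params.round(4))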
Log-linear model based behavior selection method for artificial fish swarm algorithm.
Huang, Zhehuang; Chen, Yidong
2015-01-01
Artificial fish swarm algorithm (AFSA) is a population-based optimization technique inspired by the social behavior of fishes. In the past several years, AFSA has been successfully applied in many research and application areas. The behavior of the fishes has a crucial impact on the performance of AFSA, such as its global exploration ability and convergence speed, so how behaviors are constructed and selected is an important task. To address this, an improved artificial fish swarm algorithm based on a log-linear model is proposed and implemented in this paper. There are three main contributions. Firstly, we propose a new behavior selection algorithm based on a log-linear model, which enhances the decision-making ability of behavior selection. Secondly, an adaptive movement behavior based on adaptive weights is presented, which can adjust dynamically according to the diversity of the fishes. Finally, some new behaviors are defined and introduced into the artificial fish swarm algorithm for the first time to improve its global optimization capability. Experiments on high-dimensional function optimization showed that the improved algorithm has more powerful global exploration ability and reasonable convergence speed compared with the standard artificial fish swarm algorithm.
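A minimal numpy sketch of log-linear (softmax) behavior selection of the kind described above: each candidate behavior gets a linear score from its features and is chosen with probability proportional to the exponentiated score. The features and weights are illustrative assumptions, not the paper's model.

import numpy as np

def select_behavior(features, weights, rng):
    # Log-linear selection: P(behavior k) proportional to exp(weights . features_k)
    scores = features @ weights
    probs = np.exp(scores - scores.max())     # subtract max for numerical stability
    probs /= probs.sum()
    return rng.choice(len(probs), p=probs), probs

rng = np.random.default_rng(3)
# rows = candidate behaviors (prey, swarm, follow, random move); columns = illustrative
# state features such as expected fitness gain and local crowding
features = np.array([[0.9, 0.2],
                     [0.5, 0.8],
                     [0.6, 0.6],
                     [0.1, 0.1]])
weights = np.array([2.0, -1.0])
chosen, probs = select_behavior(features, weights, rng)
print("selection probabilities:", probs.round(3), "-> chosen behavior:", chosen)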
The XXth International Workshop High Energy Physics and Quantum Field Theory
NASA Astrophysics Data System (ADS)
The Workshop continues a series of workshops started by the Skobeltsyn Institute of Nuclear Physics of Lomonosov Moscow State University (SINP MSU) in 1985 and conceived with the purpose of presenting topics of current interest and providing a stimulating environment for scientific discussion on new developments in theoretical and experimental high energy physics and physics programs for future colliders. Traditionally the list of workshop attendees includes a great number of active young scientists and students from Russia and other countries. This year's Workshop is organized jointly by the SINP MSU and the Southern Federal University (SFedU) and will take place in the holiday hotel "Luchezarniy" (Effulgent) situated on the Black Sea shore in a picturesque natural park in the suburb of the largest Russian resort city, Sochi, the host city of the XXII Olympic Winter Games to be held in 2014. The main topics to be covered are: Experimental results from the LHC. Tevatron summary: the status of the Standard Model and the boundaries on BSM physics. Future physics at Linear Colliders and super B-factories. Extensions of the Standard Model and their phenomenological consequences at the LHC and Linear Colliders: SUSY extensions of the Standard Model; particle interactions in space-time with extra dimensions; strings, quantum groups and new ideas from modern algebra and geometry. Higher order corrections and resummations for collider phenomenology. Automatic calculations of Feynman diagrams and Monte Carlo simulations. LHC/LC and astroparticle/cosmology connections. Modern nuclear physics and relativistic nucleus-nucleus collisions.
Effect of step width manipulation on tibial stress during running.
Meardon, Stacey A; Derrick, Timothy R
2014-08-22
Narrow step width has been linked to variables associated with tibial stress fracture. The purpose of this study was to evaluate the effect of step width on bone stresses using a standardized model of the tibia. Fifteen runners ran at their preferred 5k running velocity in three conditions: preferred step width (PSW) and PSW ± 5% of leg length. Ten successful trials of force and 3-D motion data were collected. A combination of inverse dynamics, musculoskeletal modeling and beam theory was used to estimate stresses applied to the tibia using subject-specific anthropometrics and motion data. The tibia was modeled as a hollow ellipse. Multivariate analysis revealed that tibial stresses at the distal 1/3 of the tibia differed with step width manipulation (p=0.002). Compression on the posterior and medial aspect of the tibia was inversely related to step width such that as step width increased, compression on the surface of the tibia decreased (linear trend p=0.036 and 0.003). Similarly, tension on the anterior surface of the tibia decreased as step width increased (linear trend p=0.029). Widening step width linearly reduced shear stress at all 4 sites (p<0.001 for all). The data from this study suggest that the stresses experienced by the tibia during running were influenced by step width when using a standardized model of the tibia. Wider step widths were generally associated with reduced loading of the tibia and may benefit runners at risk of or experiencing stress injury at the tibia, especially if they present with a crossover running style. Copyright © 2014 Elsevier Ltd. All rights reserved.
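A small numpy sketch of the beam-theory stress at a point on a hollow elliptical cross-section, combining an axial term with bending about the two principal axes; the geometry, loads, and sign conventions below are placeholders, not subject-specific values from the study.

import numpy as np

def hollow_ellipse_stress(F_axial, M_ap, M_ml, a_o, b_o, a_i, b_i, x, y):
    # Normal stress at surface point (x, y): axial force over area plus bending terms,
    # with second moments of area for a hollow ellipse (outer minus inner)
    A = np.pi * (a_o * b_o - a_i * b_i)
    I_x = np.pi * (a_o * b_o**3 - a_i * b_i**3) / 4.0    # bending about the x (ML) axis
    I_y = np.pi * (a_o**3 * b_o - a_i**3 * b_i) / 4.0    # bending about the y (AP) axis
    return F_axial / A + M_ml * y / I_x + M_ap * x / I_y

# placeholder distal-tibia geometry (m) and loads (N, N*m)
a_o, b_o, a_i, b_i = 0.013, 0.011, 0.008, 0.006
sigma = hollow_ellipse_stress(-2500.0, 40.0, 30.0, a_o, b_o, a_i, b_i, 0.0, -0.011)
print(f"normal stress at (0, -b_o): {sigma / 1e6:.1f} MPa")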
Grajeda, Laura M; Ivanescu, Andrada; Saito, Mayuko; Crainiceanu, Ciprian; Jaganath, Devan; Gilman, Robert H; Crabtree, Jean E; Kelleher, Dermott; Cabrera, Lilia; Cama, Vitaliano; Checkley, William
2016-01-01
Childhood growth is a cornerstone of pediatric research. Statistical models need to consider individual trajectories to adequately describe growth outcomes. Specifically, well-defined longitudinal models are essential to characterize both population and subject-specific growth. Linear mixed-effect models with cubic regression splines can account for the nonlinearity of growth curves and provide reasonable estimators of population and subject-specific growth, velocity and acceleration. We provide a stepwise approach that builds from simple to complex models, and account for the intrinsic complexity of the data. We start with standard cubic splines regression models and build up to a model that includes subject-specific random intercepts and slopes and residual autocorrelation. We then compared cubic regression splines vis-à-vis linear piecewise splines, and with varying number of knots and positions. Statistical code is provided to ensure reproducibility and improve dissemination of methods. Models are applied to longitudinal height measurements in a cohort of 215 Peruvian children followed from birth until their fourth year of life. Unexplained variability, as measured by the variance of the regression model, was reduced from 7.34 when using ordinary least squares to 0.81 (p < 0.001) when using a linear mixed-effect models with random slopes and a first order continuous autoregressive error term. There was substantial heterogeneity in both the intercept (p < 0.001) and slopes (p < 0.001) of the individual growth trajectories. We also identified important serial correlation within the structure of the data (ρ = 0.66; 95 % CI 0.64 to 0.68; p < 0.001), which we modeled with a first order continuous autoregressive error term as evidenced by the variogram of the residuals and by a lack of association among residuals. The final model provides a parametric linear regression equation for both estimation and prediction of population- and individual-level growth in height. We show that cubic regression splines are superior to linear regression splines for the case of a small number of knots in both estimation and prediction with the full linear mixed effect model (AIC 19,352 vs. 19,598, respectively). While the regression parameters are more complex to interpret in the former, we argue that inference for any problem depends more on the estimated curve or differences in curves rather than the coefficients. Moreover, use of cubic regression splines provides biological meaningful growth velocity and acceleration curves despite increased complexity in coefficient interpretation. Through this stepwise approach, we provide a set of tools to model longitudinal childhood data for non-statisticians using linear mixed-effect models.
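A compact statsmodels sketch in the spirit of the stepwise approach above: a linear mixed-effects model with a cubic regression spline basis for age (patsy's cr()) and child-specific random intercepts and slopes, fitted to synthetic stand-in data; the first-order continuous autoregressive error term described in the abstract is not included because MixedLM does not model serial correlation.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(4)
n_children, n_visits = 50, 8
child = np.repeat(np.arange(n_children), n_visits)
age = np.tile(np.linspace(0.1, 4.0, n_visits), n_children)       # years
b0 = rng.normal(0.0, 2.0, n_children)                             # child-specific intercepts
b1 = rng.normal(0.0, 0.8, n_children)                             # child-specific slopes
height = 50 + 28*np.log1p(age) + b0[child] + b1[child]*age + rng.normal(0, 0.7, child.size)
df = pd.DataFrame({"id": child, "age": age, "height": height})

# Population curve via a cubic regression spline in age; random intercept and slope per child
model = smf.mixedlm("height ~ cr(age, df=5)", data=df, groups=df["id"], re_formula="~age")
fit = model.fit()
print(fit.params)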
Improved formalism for precision Higgs coupling fits
NASA Astrophysics Data System (ADS)
Barklow, Tim; Fujii, Keisuke; Jung, Sunghoon; Karl, Robert; List, Jenny; Ogawa, Tomohisa; Peskin, Michael E.; Tian, Junping
2018-03-01
Future e+e- colliders give the promise of model-independent determinations of the couplings of the Higgs boson. In this paper, we present an improved formalism for extracting Higgs boson couplings from e+e- data, based on the effective field theory description of corrections to the Standard Model. We apply this formalism to give projections of Higgs coupling accuracies for stages of the International Linear Collider and for other proposed e+e- colliders.
Battery Life Estimator Manual Linear Modeling and Simulation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jon P. Christophersen; Ira Bloom; Ed Thomas
2009-08-01
The Battery Life Estimator (BLE) Manual has been prepared to assist developers in their efforts to estimate the calendar life of advanced batteries for automotive applications. Testing requirements and procedures are defined by the various manuals previously published under the United States Advanced Battery Consortium (USABC). The purpose of this manual is to describe and standardize a method for estimating calendar life based on statistical models and degradation data acquired from typical USABC battery testing.
NASA Astrophysics Data System (ADS)
Squiers, John J.; Li, Weizhi; King, Darlene R.; Mo, Weirong; Zhang, Xu; Lu, Yang; Sellke, Eric W.; Fan, Wensheng; DiMaio, J. Michael; Thatcher, Jeffrey E.
2016-03-01
The clinical judgment of expert burn surgeons is currently the standard on which diagnostic and therapeutic decision-making regarding burn injuries is based. Multispectral imaging (MSI) has the potential to increase the accuracy of burn depth assessment and the intraoperative identification of viable wound bed during surgical debridement of burn injuries. A highly accurate classification model must be developed using machine-learning techniques in order to translate MSI data into clinically relevant information. An animal burn model was developed to build an MSI training database and to study the burn tissue classification ability of several models trained via common machine-learning algorithms. The algorithms tested, from least to most complex, were: K-nearest neighbors (KNN), decision tree (DT), linear discriminant analysis (LDA), weighted linear discriminant analysis (W-LDA), quadratic discriminant analysis (QDA), ensemble linear discriminant analysis (EN-LDA), ensemble K-nearest neighbors (EN-KNN), and ensemble decision tree (EN-DT). After the ground-truth database of six tissue types (healthy skin, wound bed, blood, hyperemia, partial injury, full injury) was generated by histopathological analysis, we used 10-fold cross validation to compare the algorithms' performances based on their accuracies in classifying data against the ground truth, and each algorithm was tested 100 times. The mean test accuracies of the algorithms were KNN 68.3%, DT 61.5%, LDA 70.5%, W-LDA 68.1%, QDA 68.9%, EN-LDA 56.8%, EN-KNN 49.7%, and EN-DT 36.5%. LDA had the highest test accuracy, reflecting the bias-variance tradeoff over the range of complexities inherent to the algorithms tested. Several algorithms were able to match the current standard in burn tissue classification, the clinical judgment of expert burn surgeons. These results will guide further development of an MSI burn tissue classification system. Given that there are few surgeons and facilities specializing in burn care, this technology may improve the standard of burn care for patients without access to specialized facilities.
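A scikit-learn sketch of the kind of 10-fold cross-validated comparison reported above, restricted to four of the listed algorithms; the synthetic data stand in for the MSI ground-truth database and the accuracies here have no connection to the reported ones.

import numpy as np
from sklearn.datasets import make_classification
from sklearn.discriminant_analysis import (LinearDiscriminantAnalysis,
                                            QuadraticDiscriminantAnalysis)
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

# synthetic stand-in: 6 tissue classes, 8 spectral bands per pixel
X, y = make_classification(n_samples=3000, n_features=8, n_informative=6,
                           n_classes=6, n_clusters_per_class=1, random_state=0)

models = {"KNN": KNeighborsClassifier(n_neighbors=5),
          "DT": DecisionTreeClassifier(random_state=0),
          "LDA": LinearDiscriminantAnalysis(),
          "QDA": QuadraticDiscriminantAnalysis()}

cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
for name, clf in models.items():
    scores = cross_val_score(clf, X, y, cv=cv)
    print(f"{name}: mean accuracy {scores.mean():.3f} (sd {scores.std():.3f})")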
Fovargue, Daniel E; Mitran, Sorin; Smith, Nathan B; Sankin, Georgy N; Simmons, Walter N; Zhong, Pei
2013-08-01
A multiphysics computational model of the focusing of an acoustic pulse and subsequent shock wave formation that occurs during extracorporeal shock wave lithotripsy is presented. In the electromagnetic lithotripter modeled in this work the focusing is achieved via a polystyrene acoustic lens. The transition of the acoustic pulse through the solid lens is modeled by the linear elasticity equations and the subsequent shock wave formation in water is modeled by the Euler equations with a Tait equation of state. Both sets of equations are solved simultaneously in subsets of a single computational domain within the BEARCLAW framework which uses a finite-volume Riemann solver approach. This model is first validated against experimental measurements with a standard (or original) lens design. The model is then used to successfully predict the effects of a lens modification in the form of an annular ring cut. A second model which includes a kidney stone simulant in the domain is also presented. Within the stone the linear elasticity equations incorporate a simple damage model.
Jin, Xiaochen; Fu, Zhiqiang; Li, Xuehua; Chen, Jingwen
2017-03-22
The octanol-air partition coefficient (KOA) is a key parameter describing the partition behavior of organic chemicals between air and environmental organic phases. As the experimental determination of KOA is costly, time-consuming and sometimes limited by the availability of authentic chemical standards for the compounds to be determined, it becomes necessary to develop credible predictive models for KOA. In this study, a polyparameter linear free energy relationship (pp-LFER) model for predicting KOA at 298.15 K and a novel model incorporating pp-LFERs with temperature (pp-LFER-T model) were developed from 795 log KOA values for 367 chemicals at different temperatures (263.15-323.15 K), and were evaluated with the OECD guidelines on QSAR model validation and applicability domain description. Statistical results show that both models are well-fitted, robust and have good predictive capabilities. Particularly, the pp-LFER model shows a strong predictive ability for polyfluoroalkyl substances and organosilicon compounds, and the pp-LFER-T model maintains a high predictive accuracy within a wide temperature range (263.15-323.15 K).
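A short statsmodels sketch of fitting a pp-LFER of the usual Abraham form, log KOA = c + eE + sS + aA + bB + lL, with and without a temperature covariate; the descriptor set, the inverse-temperature term, and the synthetic training data are assumptions for illustration, not the descriptors or coefficients of the published models.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(5)
n = 200
df = pd.DataFrame({"E": rng.uniform(0, 2, n), "S": rng.uniform(0, 2, n),
                   "A": rng.uniform(0, 1, n), "B": rng.uniform(0, 1, n),
                   "L": rng.uniform(2, 10, n),
                   "inv_T": 1.0 / rng.uniform(263.15, 323.15, n)})
df["logKOA"] = (-0.2 + 0.6*df.E + 1.6*df.S + 3.5*df.A + 0.9*df.B + 0.75*df.L
                + 800.0*(df.inv_T - 1.0/298.15) + rng.normal(0, 0.15, n))

pp_lfer = smf.ols("logKOA ~ E + S + A + B + L", data=df).fit()            # fixed-temperature model
pp_lfer_T = smf.ols("logKOA ~ E + S + A + B + L + inv_T", data=df).fit()  # temperature-dependent
print(round(pp_lfer.rsquared_adj, 3), round(pp_lfer_T.rsquared_adj, 3))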
Changes in Clavicle Length and Maturation in Americans: 1840-1980.
Langley, Natalie R; Cridlin, Sandra
2016-01-01
Secular changes refer to short-term biological changes ostensibly due to environmental factors. Two well-documented secular trends in many populations are earlier age of menarche and increasing stature. This study synthesizes data on maximum clavicle length and fusion of the medial epiphysis in 1840-1980 American birth cohorts to provide a comprehensive assessment of developmental and morphological change in the clavicle. Clavicles from the Hamann-Todd Human Osteological Collection (n = 354), McKern and Stewart Korean War males (n = 341), Forensic Anthropology Data Bank (n = 1,239), and the McCormick Clavicle Collection (n = 1,137) were used in the analysis. Transition analysis was used to evaluate fusion of the medial epiphysis (scored as unfused, fusing, or fused). Several statistical treatments were used to assess fluctuations in maximum clavicle length. First, Durbin-Watson tests were used to evaluate autocorrelation, and a local regression (LOESS) was used to identify visual shifts in the regression slope. Next, piecewise regression was used to fit linear regression models before and after the estimated breakpoints. Multiple starting parameters were tested in the range determined to contain the breakpoint, and the model with the smallest mean squared error was chosen as the best fit. The parameters from the best-fit models were then used to derive the piecewise models, which were compared with the initial simple linear regression models to determine which model provided the best fit for the secular change data. The epiphyseal union data indicate a decline in the age at onset of fusion since the early twentieth century. Fusion commences approximately four years earlier in mid- to late twentieth-century birth cohorts than in late nineteenth- and early twentieth-century birth cohorts. However, fusion is completed at roughly the same age across cohorts. The most significant decline in age at onset of epiphyseal union appears to have occurred since the mid-twentieth century. LOESS plots show a breakpoint in the clavicle length data around the mid-twentieth century in both sexes, and piecewise regression models indicate a significant decrease in clavicle length in the American population after 1940. The piecewise model provides a slightly better fit than the simple linear model. Since the model standard error is not substantially different from the piecewise model, an argument could be made to select the less complex linear model. However, we chose the piecewise model to detect changes in clavicle length that are overfitted with a linear model. The decrease in maximum clavicle length is in line with a documented narrowing of the American skeletal form, as shown by analyses of cranial and facial breadth and bi-iliac breadth of the pelvis. Environmental influences on skeletal form include increases in body mass index, health improvements, improved socioeconomic status, and elimination of infectious diseases. Secular changes in bony dimensions and skeletal maturation stipulate that medical and forensic standards used to deduce information about growth, health, and biological traits must be derived from modern populations.
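A numpy sketch of the piecewise-regression idea used above: fit a continuous two-segment line by grid search over candidate breakpoints and compare its fit with a single straight line. The data below are synthetic stand-ins for the clavicle-length series, with an artificial slope change around 1940.

import numpy as np

def piecewise_fit(x, y, candidates):
    # Continuous two-segment linear fit: basis = [1, x, max(0, x - bp)]; the hinge term
    # lets the slope change at the breakpoint bp; keep the bp with the smallest SSE
    best = None
    for bp in candidates:
        X = np.column_stack([np.ones_like(x), x, np.clip(x - bp, 0.0, None)])
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        sse = float(np.sum((y - X @ beta) ** 2))
        if best is None or sse < best[0]:
            best = (sse, bp, beta)
    return best

rng = np.random.default_rng(6)
year = rng.uniform(1840, 1980, 400)
clav = 152 + 0.02*(year - 1840) - 0.06*np.clip(year - 1940, 0, None) + rng.normal(0, 3, 400)

sse_pw, bp, beta = piecewise_fit(year, clav, np.arange(1900, 1971, 5))
X_lin = np.column_stack([np.ones_like(year), year])
beta_lin, *_ = np.linalg.lstsq(X_lin, clav, rcond=None)
sse_lin = float(np.sum((clav - X_lin @ beta_lin) ** 2))
print(f"estimated breakpoint ~{bp}, piecewise SSE {sse_pw:.0f} vs straight-line SSE {sse_lin:.0f}")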
Understanding pyrotechnic shock dynamics and response attenuation over distance
NASA Astrophysics Data System (ADS)
Ott, Richard J.
Pyrotechnic shock events used during stage separation on rocket vehicles produce high amplitude, short duration structural response that can lead to malfunction or degradation of electronic components, cracks and fractures in brittle materials, and local plastic deformation, and can cause materials to experience accelerated fatigue life. These transient loads propagate as waves through the structural media, losing energy as they travel outward from the source. This work assessed available test data in an effort to better understand attenuation characteristics associated with wave propagation and attempted to update a historical standard defined by the Martin Marietta Corporation in the late 1960s using data acquisition systems that are now out of date. Two data sets were available for consideration. The first data set came from a test that used a flight-like cylinder from NASA's Ares I-X program, and the second from a test conducted with a flat plate. Both data sets suggested that the historical standard was not a conservative estimate of shock attenuation with distance; however, the variation in the test data did not support recommending an update to the standard. Beyond considering attenuation with distance, an effort was made to model the flat plate configuration using finite element analysis. The available flat plate data consisted of three groups of tests, each using a linear shaped charge (LSC) of a different charge density to cut an aluminum plate. The model was tuned to a representative test using the lowest charge density LSC as input. The correlated model was then used to predict the other two cases by linearly scaling the input load based on the relative difference in charge density. The resulting model predictions were then compared with available empirical data. Aside from differences in amplitude due to nonlinearities associated with scaling the charge density of the LSC, the model predictions matched the available test data reasonably well. Finally, modeling best practices were recommended for using industry standard software to predict shock response on structures. As part of the best practices documented, a frequency dependent damping schedule is provided that can be used in model development when no test data are available.
Model identification of new heavy Z‧ bosons at ILC with polarized beams
NASA Astrophysics Data System (ADS)
Pankov, A. A.; Tsytrinov, A. V.
2017-12-01
Extra neutral gauge bosons, Z‧s, are predicted by many theoretical scenarios of physics beyond the Standard Model, and intensive searches for their signatures will be performed at present and future high energy colliders. It is quite possible that Z‧s are heavy enough to lie beyond the discovery reach expected at the CERN Large Hadron Collider LHC, in which case only indirect signatures of Z‧ exchanges may occur at future colliders, through deviations of the measured cross sections from the Standard Model predictions. We here discuss in this context the expected sensitivity to Z‧ parameters of fermion-pair production cross sections at the planned International Linear Collider (ILC), especially as regards the potential of distinguishing different Z‧ models once such deviations are observed. Specifically, we evaluate the discovery and identification reaches on Z‧ gauge bosons pertinent to the E 6, LR, ALR, and SSM classes of models at the ILC.
Infrared weak corrections to strongly interacting gauge boson scattering
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ciafaloni, Paolo; Urbano, Alfredo
2010-04-15
We evaluate the impact of electroweak corrections of infrared origin on strongly interacting longitudinal gauge boson scattering, calculating all-order resummed expressions at the double log level. As a working example, we consider the standard model with a heavy Higgs. At energies typical of forthcoming experiments (LHC, International Linear Collider, Compact Linear Collider), the corrections are in the 10%-40% range, with the relative sign depending on the initial state considered and on whether or not additional gauge boson emission is included. We conclude that the effect of radiative electroweak corrections should be included in the analysis of longitudinal gauge boson scattering.
Lorenzo-Seva, Urbano; Ferrando, Pere J
2011-03-01
We provide an SPSS program that implements currently recommended techniques and recent developments for selecting variables in multiple linear regression analysis via the relative importance of predictors. The approach consists of: (1) optimally splitting the data for cross-validation, (2) selecting the final set of predictors to be retained in the regression equation, and (3) assessing the behavior of the chosen model using standard indices and procedures. The SPSS syntax, a short manual, and data files related to this article are available as supplemental materials from brm.psychonomic-journals.org/content/supplemental.
Dose Rate Effects in Linear Bipolar Transistors
NASA Technical Reports Server (NTRS)
Johnston, Allan; Swimm, Randall; Harris, R. D.; Thorbourn, Dennis
2011-01-01
Dose rate effects are examined in linear bipolar transistors at high and low dose rates. At high dose rates, approximately 50% of the damage anneals at room temperature, even though these devices exhibit enhanced damage at low dose rate. The unexpected recovery of a significant fraction of the damage after tests at high dose rate requires changes in existing test standards. Tests at low temperature with a one-second radiation pulse width show that damage continues to increase for more than 3000 seconds afterward, consistent with predictions of the CTRW model for oxides with a thickness of 700 nm.
Central Limit Theorems for Linear Statistics of Heavy Tailed Random Matrices
NASA Astrophysics Data System (ADS)
Benaych-Georges, Florent; Guionnet, Alice; Male, Camille
2014-07-01
We show central limit theorems (CLT) for the linear statistics of symmetric matrices with independent heavy-tailed entries, including entries in the domain of attraction of α-stable laws and entries with moments exploding with the dimension, as in the adjacency matrices of Erdős-Rényi graphs. For the second model, we also prove a central limit theorem for the moments of its empirical eigenvalue distribution. The limit laws are Gaussian, but unlike the case of standard Wigner matrices, the normalization is that of the classical CLT for independent random variables.
Physics with e+e- Linear Colliders
DOE Office of Scientific and Technical Information (OSTI.GOV)
Barklow, Timothy L
2003-05-05
We describe the physics potential of e+e- linear colliders in this report. These machines are planned to operate in the first phase at a center-of-mass energy of 500 GeV, before being scaled up to about 1 TeV. In the second phase of the operation, a final energy of about 2 TeV is expected. The machines will allow us to perform precision tests of the heavy particles in the Standard Model, the top quark and the electroweak bosons. They are ideal facilities for exploring the properties of Higgs particles, in particular in the intermediate mass range. New vector bosons and novel matter particles in extended gauge theories can be searched for and studied thoroughly. The machines provide unique opportunities for the discovery of particles in supersymmetric extensions of the Standard Model, the spectrum of Higgs particles, the supersymmetric partners of the electroweak gauge and Higgs bosons, and of the matter particles. High precision analyses of their properties and interactions will allow for extrapolations to energy scales close to the Planck scale where gravity becomes significant. In alternative scenarios, like compositeness models, novel matter particles and interactions can be discovered and investigated in the energy range above the existing colliders up to the TeV scale. Whatever scenario is realized in Nature, the discovery potential of e+e- linear colliders and the high precision with which the properties of particles and their interactions can be analyzed define an exciting physics programme complementary to hadron machines.
Probabilistic dual heuristic programming-based adaptive critic
NASA Astrophysics Data System (ADS)
Herzallah, Randa
2010-02-01
Adaptive critic (AC) methods have common roots as generalisations of dynamic programming for neural reinforcement learning approaches. Since they approximate the dynamic programming solutions, they are potentially suitable for learning in noisy, non-linear and non-stationary environments. In this study, a novel probabilistic dual heuristic programming (DHP)-based AC controller is proposed. In contrast to current approaches, the proposed probabilistic DHP AC method takes uncertainties of the forward model and the inverse controller into consideration. Therefore, it is suitable for deterministic and stochastic control problems characterised by functional uncertainty. Theoretical development of the proposed method is validated by analytically evaluating the correct value of the cost function which satisfies the Bellman equation in a linear quadratic control problem. The target value of the probabilistic critic network is then calculated and shown to be equal to the analytically derived correct value. Full derivation of the Riccati solution for this non-standard stochastic linear quadratic control problem is also provided. Moreover, the performance of the proposed probabilistic controller is demonstrated on linear and non-linear control examples.
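For orientation, the sketch below solves a deterministic discrete-time linear quadratic problem via the algebraic Riccati equation, the kind of analytically known cost-to-go against which such critics can be validated. It is not the paper's probabilistic DHP derivation; the plant matrices are invented.

```python
# Reference LQR solution: the critic has to learn the quadratic value x'Px.
import numpy as np
from scipy.linalg import solve_discrete_are

A = np.array([[1.0, 0.1], [0.0, 1.0]])   # illustrative plant
B = np.array([[0.0], [0.1]])
Q = np.eye(2)
R = np.array([[0.1]])

P = solve_discrete_are(A, B, Q, R)                     # Bellman-optimal value matrix
K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)      # optimal feedback gain

x0 = np.array([1.0, 0.0])
print("optimal cost-to-go:", x0 @ P @ x0)
print("feedback gain:", K)
```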
Christman, Stephen D; Weaver, Ryan
2008-05-01
The nature of temporal variability during speeded finger tapping was examined using linear (standard deviation) and non-linear (Lyapunov exponent) measures. Experiment 1 found that right hand tapping was characterised by lower amounts of both linear and non-linear measures of variability than left hand tapping, and that linear and non-linear measures of variability were often negatively correlated with one another. Experiment 2 found that increased non-linear variability was associated with relatively enhanced performance on a closed-loop motor task (mirror tracing) and relatively impaired performance on an open-loop motor task (pointing in a dark room), especially for left hand performance. The potential uses and significance of measures of non-linear variability are discussed.
The development of global GRAPES 4DVAR
NASA Astrophysics Data System (ADS)
Liu, Yongzhu
2017-04-01
Four-dimensional variational data assimilation (4DVAR) has contributed greatly to the improvement of NWP systems over the past twenty years. Our strategy has therefore been to develop an operational global 4D-Var system from the outset. The aim of this paper is to introduce the development of the global GRAPES four-dimensional variational data assimilation (4DVAR) system using incremental analysis schemes and to present results of a comparison between 4DVAR, using a 6-hour assimilation window and simplified physics during the minimization, and three-dimensional variational data assimilation (3DVAR). The dynamical cores of the tangent-linear and adjoint models are developed directly from the non-hydrostatic forecast model, and the standard correctness checks have been performed. Besides the development of the adjoint code, most of our work has focused on improving computational efficiency, since the bulk of the computational cost of 4D-Var lies in the integration of the tangent-linear and adjoint models. For the tangent-linear model, the wall-clock time has been reduced to about 1.2 times that of the nonlinear model through optimization of the software framework. The significant computational savings in the adjoint model result from removing redundant recomputations of model trajectories, and its wall-clock time is now less than 1.5 times that of the nonlinear model. The current difficulty is that the numerical scheme used within the linear model is based strategically on the numerics of the corresponding nonlinear model, so further acceleration is expected to come from improvements to the nonlinear numerical algorithm. A series of linearized physical parameterization schemes has been developed to improve the representation of perturbed fields in the linear model, consisting of horizontal and vertical diffusion, sub-grid-scale orographic gravity wave drag, large-scale condensation, and cumulus convection schemes. We also found that straightforward linearization of the nonlinear physical schemes can lead to significant growth of spurious unstable perturbations, so it is essential to simplify the linear physics with respect to the non-linear schemes. With the linear physics included, the improvement in the perturbed fields of the tangent-linear model is visible, especially at low levels. The GRAPES variational data assimilation system adopts the incremental approach, and work is ongoing to develop a pre-operational 4DVAR suite with 0.25° outer-loop resolution and multiple outer-loop configurations. One 4DVAR analysis using a 6-hour assimilation window can be finished within 40 minutes when using the available conventional and satellite data. In summary, the GRAPES 4DVAR analyses over the northern and southern hemispheres, the tropics, and East Asia performed better than GRAPES 3DVAR in one-month experiments. Moreover, the forecast results show that northern and southern extra-tropical scores for GRAPES 4DVAR are already better than those for GRAPES 3DVAR, but the tropical performance needs further investigation. The main subsequent improvements will therefore aim to enhance computational efficiency and accuracy in 2017, with the global GRAPES 4DVAR planned for operation in 2018.
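One common form of the "standard correctness checks" mentioned above is the adjoint dot-product test: for a tangent-linear operator M and its adjoint, <Mx, y> must equal <x, M^T y> to machine precision. The sketch below uses a toy matrix in place of the GRAPES tangent-linear model.

```python
# Generic adjoint (dot-product) consistency test for a tangent-linear operator.
import numpy as np

rng = np.random.default_rng(1)
M = rng.normal(size=(50, 50))        # stand-in for the tangent-linear operator
x = rng.normal(size=50)
y = rng.normal(size=50)

lhs = np.dot(M @ x, y)               # <Mx, y>
rhs = np.dot(x, M.T @ y)             # <x, M*y>, adjoint applied to y
print("relative mismatch:", abs(lhs - rhs) / abs(lhs))   # should be ~1e-16
```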
The capability and constraint model of recoverability: An integrated theory of continuity planning.
Lindstedt, David
2017-01-01
While there are best practices, good practices, regulations and standards for continuity planning, there is no single model to collate and sort their various recommended activities. To address this deficit, this paper presents the capability and constraint model of recoverability - a new model to provide an integrated foundation for business continuity planning. The model is non-linear in both construct and practice, thus allowing practitioners to remain adaptive in its application. The paper presents each facet of the model, outlines the model's use in both theory and practice, suggests a subsequent approach that arises from the model, and discusses some possible ramifications to the industry.
NASA Astrophysics Data System (ADS)
Alimi, J.-M.; Füzfa, A.; Boucher, V.; Rasera, Y.; Courtin, J.; Corasaniti, P.-S.
2010-01-01
Quintessence has been proposed to account for dark energy (DE) in the Universe. This component causes a typical modification of the background cosmic expansion, which, in addition to its clustering properties, can leave a potentially distinctive signature on large-scale structures. Many previous studies have investigated this topic, particularly in relation to the non-linear regime of structure formation. However, no careful pre-selection of viable quintessence models with high precision cosmological data was performed. Here we show that this has led to a misinterpretation (and underestimation) of the imprint of quintessence on the distribution of large-scale structures. To this purpose, we perform a likelihood analysis of the combined Supernova Ia UNION data set and Wilkinson Microwave Anisotropy Probe 5-yr data to identify realistic quintessence models. These are specified by different model parameter values, but still statistically indistinguishable from the vanilla Λ cold dark matter (ΛCDM). Differences are especially manifest in the predicted amplitude and shape of the linear matter power spectrum though these remain within the uncertainties of the Sloan Digital Sky Survey data. We use these models as a benchmark for studying the clustering properties of dark matter haloes by performing a series of high-resolution N-body simulations. In this first paper, we specifically focus on the non-linear matter power spectrum. We find that realistic quintessence models allow for relevant differences of the dark matter distribution with respect to the ΛCDM scenario well into the non-linear regime, with deviations of up to 40 per cent in the non-linear power spectrum. Such differences are shown to depend on the nature of DE, as well as the scale and epoch considered. At small scales (k ~ 1-5hMpc-1, depending on the redshift), the structure formation process is about 20 per cent more efficient than in ΛCDM. We show that these imprints are a specific record of the cosmic structure formation history in DE cosmologies and therefore cannot be accounted for in standard fitting functions of the non-linear matter power spectrum.
Esserman, Denise A.; Moore, Charity G.; Roth, Mary T.
2009-01-01
Older community dwelling adults often take multiple medications for numerous chronic diseases. Non-adherence to these medications can have a large public health impact. Therefore, the measurement and modeling of medication adherence in the setting of polypharmacy is an important area of research. We apply a variety of different modeling techniques (standard linear regression; weighted linear regression; adjusted linear regression; naïve logistic regression; beta-binomial (BB) regression; generalized estimating equations (GEE)) to binary medication adherence data from a study in a North Carolina based population of older adults, where each medication an individual was taking was classified as adherent or non-adherent. In addition, through simulation we compare these different methods based on Type I error rates, bias, power, empirical 95% coverage, and goodness of fit. We find that estimation and inference using GEE is robust to a wide variety of scenarios and we recommend using this in the setting of polypharmacy when adherence is dichotomously measured for multiple medications per person. PMID:20414358
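A hedged illustration of the recommended GEE approach for clustered binary adherence data (one row per medication, clustered by patient). The data, column names, and effect sizes are simulated, not taken from the North Carolina study.

```python
# GEE with exchangeable within-patient correlation for dichotomous adherence.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(2)
n_patients, meds_per_patient = 200, 5
df = pd.DataFrame({
    "patient": np.repeat(np.arange(n_patients), meds_per_patient),
    "age": np.repeat(rng.normal(75, 6, n_patients), meds_per_patient),
})
# Simulated adherence with a weak age effect and within-patient correlation
u = np.repeat(rng.normal(0, 0.5, n_patients), meds_per_patient)
p = 1 / (1 + np.exp(-(0.5 + 0.02 * (df["age"] - 75) + u)))
df["adherent"] = rng.binomial(1, p)

model = sm.GEE.from_formula(
    "adherent ~ age", groups="patient", data=df,
    family=sm.families.Binomial(), cov_struct=sm.cov_struct.Exchangeable())
print(model.fit().summary())
```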
Multivariate meta-analysis for non-linear and other multi-parameter associations
Gasparrini, A; Armstrong, B; Kenward, M G
2012-01-01
In this paper, we formalize the application of multivariate meta-analysis and meta-regression to synthesize estimates of multi-parameter associations obtained from different studies. This modelling approach extends the standard two-stage analysis used to combine results across different sub-groups or populations. The most straightforward application is for the meta-analysis of non-linear relationships, described for example by regression coefficients of splines or other functions, but the methodology easily generalizes to any setting where complex associations are described by multiple correlated parameters. The modelling framework of multivariate meta-analysis is implemented in the package mvmeta within the statistical environment R. As an illustrative example, we propose a two-stage analysis for investigating the non-linear exposure–response relationship between temperature and non-accidental mortality using time-series data from multiple cities. Multivariate meta-analysis represents a useful analytical tool for studying complex associations through a two-stage procedure. Copyright © 2012 John Wiley & Sons, Ltd. PMID:22807043
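The paper's analyses use the R package mvmeta; purely to illustrate the algebra of the second (pooling) stage, the sketch below performs a fixed-effect, inverse-variance-weighted combination of per-study coefficient vectors, such as spline coefficients of a non-linear exposure-response curve. The numbers are invented.

```python
# Fixed-effect multivariate pooling: beta = (sum W_i)^-1 * sum W_i b_i, W_i = S_i^-1.
import numpy as np

def pool_fixed(betas, covs):
    """betas: list of (p,) coefficient vectors; covs: list of (p,p) covariance matrices."""
    W = [np.linalg.inv(S) for S in covs]        # study weights
    V = np.linalg.inv(sum(W))                    # pooled covariance
    beta = V @ sum(w @ b for w, b in zip(W, betas))
    return beta, V

betas = [np.array([0.10, -0.02]), np.array([0.14, -0.01]), np.array([0.08, -0.03])]
covs = [np.diag([0.010, 0.002]), np.diag([0.020, 0.001]), np.diag([0.015, 0.003])]
beta, V = pool_fixed(betas, covs)
print("pooled coefficients:", beta, "standard errors:", np.sqrt(np.diag(V)))
```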
Monthly monsoon rainfall forecasting using artificial neural networks
NASA Astrophysics Data System (ADS)
Ganti, Ravikumar
2014-10-01
The Indian agriculture sector depends heavily on monsoon rainfall for successful harvesting. In the past, prediction of rainfall was mainly performed using regression models, which provide reasonable accuracy in the modelling and forecasting of complex physical systems. Recently, Artificial Neural Networks (ANNs) have been proposed as efficient tools for modelling and forecasting. A feed-forward multi-layer perceptron type of ANN architecture trained using the popular back-propagation algorithm was employed in this study. Other techniques investigated for modeling monthly monsoon rainfall include linear and non-linear regression models for comparison purposes. The data employed in this study include monthly rainfall and the monthly average of the daily maximum temperature in the North Central region of India. Specifically, four regression models and two ANN models were developed. The performance of the various models was evaluated using a wide variety of standard statistical parameters and scatter plots. The results obtained in this study for forecasting monsoon rainfall using ANNs are encouraging. India's economy and agricultural activities can be managed more effectively with the availability of accurate monsoon rainfall forecasts.
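A rough sketch of the comparison described above: a feed-forward multi-layer perceptron regressor versus a plain linear regression for monthly rainfall. The data are synthetic and all hyperparameters are illustrative; this is not the study's network or data.

```python
# MLP vs. linear regression on a synthetic temperature-rainfall relationship.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)
temp = rng.uniform(25, 45, 600)                      # monthly mean of daily max temperature
rain = 200 * np.exp(-0.5 * ((temp - 33) / 4) ** 2) + rng.normal(0, 10, 600)
X = temp.reshape(-1, 1)

X_tr, X_te, y_tr, y_te = train_test_split(X, rain, random_state=0)
lin = LinearRegression().fit(X_tr, y_tr)
ann = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0).fit(X_tr, y_tr)
print("linear R^2:", lin.score(X_te, y_te), " ANN R^2:", ann.score(X_te, y_te))
```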
Testing the Standard Model with the Primordial Inflation Explorer
NASA Technical Reports Server (NTRS)
Kogut, Alan J.
2011-01-01
The Primordial Inflation Explorer is an Explorer-class mission to measure the gravity-wave signature of primordial inflation through its distinctive imprint on the linear polarization of the cosmic microwave background. PIXIE uses an innovative optical design to achieve background-limited sensitivity in 400 spectral channels spanning 2.5 decades in frequency from 30 GHz to 6 THz (1 cm to 50 micron wavelength). The principal science goal is the detection and characterization of linear polarization from an inflationary epoch in the early universe, with tensor-to-scalar ratio r < 10^-3 at 5 standard deviations. The rich PIXIE data set will also constrain physical processes ranging from Big Bang cosmology to the nature of the first stars to physical conditions within the interstellar medium of the Galaxy. I describe the PIXIE instrument and mission architecture needed to detect the inflationary signature using only 4 semiconductor bolometers.
Spinning Rocket Simulator Turntable Design
NASA Technical Reports Server (NTRS)
Miles, Robert W.
2001-01-01
Contained herein is the research and data acquired from the Turntable Design portion of the Spinning Rocket Simulator (SRS) project. The SRS Project studies and eliminates the effect of coning on thrust-propelled spacecraft. This design and construction of the turntable adds a structural support for the SRS model and two degrees of freedom. The two degrees of freedom, radial and circumferential, will help develop a simulated thrust force perpendicular to the plane of the spacecraft model while undergoing an unstable coning motion. The Turntable consists of a ten-foot linear track mounted to a sprocket and press-fit to a thrust bearing. A two-inch high column grounded by a Triangular Baseplate supports this bearing and houses the slip rings and pressurized, air-line swivel. The thrust bearing allows the entire system to rotate under the moment applied through the chain-driven sprocket producing a circumferential degree of freedom. The radial degree of freedom is given to the model through the helically threaded linear track. This track allows the Model Support and Counter Balance to simultaneously reposition according to the coning motion of the Model. Two design factors that hinder the linear track are bending and twist due to torsion. A Standard Aluminum "C" channel significantly reduces these two deflections. Safety considerations dictate the design of all the components involved in this project.
Constructing exact perturbations of the standard cosmological models
NASA Astrophysics Data System (ADS)
Sopuerta, Carlos F.
1999-11-01
In this paper we show a procedure to construct cosmological models which, according to a covariant criterion, can be seen as exact (nonlinear) perturbations of the standard Friedmann-Lemaître-Robertson-Walker (FLRW) cosmological models. The special properties of this procedure will allow us to select some of the characteristics of the models and also to study in depth their main geometrical and physical features. In particular, the models are conformally stationary, which means that they are compatible with the existence of isotropic radiation, and the observers that would measure this isotropy are rotating. Moreover, these models have two arbitrary functions (one of them is a complex function) which control their main properties, and in general they do not have any isometry. We study two examples, focusing on the case when the underlying FLRW models are flat dust models. In these examples we compare our results with those of the linearized theory of perturbations about a FLRW background.
NASA Technical Reports Server (NTRS)
Sheen, Jyh-Jong; Bishop, Robert H.
1992-01-01
The feedback linearization technique is applied to the problem of spacecraft attitude control and momentum management with control moment gyros (CMGs). The feedback linearization consists of a coordinate transformation, which transforms the system to a companion form, and a nonlinear feedback control law to cancel the nonlinear dynamics resulting in a linear equivalent model. Pole placement techniques are then used to place the closed-loop poles. The coordinate transformation proposed here evolves from three output functions of relative degree four, three, and two, respectively. The nonlinear feedback control law is presented. Stability in a neighborhood of a controllable torque equilibrium attitude (TEA) is guaranteed and this fact is demonstrated by the simulation results. An investigation of the nonlinear control law shows that singularities exist in the state space outside the neighborhood of the controllable TEA. The nonlinear control law is simplified by a standard linearization technique and it is shown that the linearized nonlinear controller provides a natural way to select control gains for the multiple-input, multiple-output system. Simulation results using the linearized nonlinear controller show good performance relative to the nonlinear controller in the neighborhood of the TEA.
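The final step described above, pole placement on the linear equivalent model obtained from feedback linearization, can be illustrated as follows; the double-integrator plant is a stand-in, not the spacecraft/CMG dynamics of the paper.

```python
# Pole placement on a toy companion-form linear equivalent model.
import numpy as np
from scipy.signal import place_poles

A = np.array([[0.0, 1.0], [0.0, 0.0]])    # companion-form linear equivalent (toy)
B = np.array([[0.0], [1.0]])
desired = np.array([-1.0, -2.0])           # desired closed-loop poles

K = place_poles(A, B, desired).gain_matrix
print("state-feedback gains:", K)
print("closed-loop eigenvalues:", np.linalg.eigvals(A - B @ K))
```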
Modeling Non-Linear Material Properties in Composite Materials
2016-06-28
Figure 2: Implementation of multiscale enrichment into FEA. [Remaining text is fragmentary: it refers to the standard FEA shape function N_I associated with the mth degree of freedom, notes that the presentation focuses on the FEA approach, and cites Reference [4] for complete details.]
ERIC Educational Resources Information Center
Si, Yajuan; Reiter, Jerome P.
2013-01-01
In many surveys, the data comprise a large number of categorical variables that suffer from item nonresponse. Standard methods for multiple imputation, like log-linear models or sequential regression imputation, can fail to capture complex dependencies and can be difficult to implement effectively in high dimensions. We present a fully Bayesian,…
ERIC Educational Resources Information Center
McCormick, Meghan P.; O'Connor, Erin E.
2015-01-01
Using data from the National Institute of Child Health and Human Development Study of Early Child Care and Youth Development (N = 1,364) and 2-level hierarchical linear models with site fixed effects, we examined between- and within-child associations between teacher-child relationship closeness and conflict and standardized measures of children's…
ERIC Educational Resources Information Center
Darling, Nancy; Cumsille, Patricio; Loreto Martinez, M.
2007-01-01
Adolescents' agreement with parental standards and beliefs about the legitimacy of parental authority and their own obligation to obey were used to predict adolescents' obedience, controlling for parental monitoring, rules, and rule enforcement. Hierarchical linear models were used to predict both between-adolescent and within-adolescent,…
Handling Math Expressions in Economics: Recoding Spreadsheet Teaching Tool of Growth Models
ERIC Educational Resources Information Center
Moro-Egido, Ana I.; Pedauga, Luis E.
2017-01-01
In the present paper, we develop a teaching methodology for economic theory. The main contribution of this paper relies on combining the interactive characteristics of spreadsheet programs such as Excel and Unicode plain-text linear format for mathematical expressions. The advantage of Unicode standard rests on its ease for writing and reading…
The Development of Reading Ability in First and Second Grade. Technical Report No. 516.
ERIC Educational Resources Information Center
Meyer, Linda A.; And Others
This study determined how children develop reading ability in first and second grade. Subjects, approximately 315 children from 3 school districts in the midwest, were given a series of standardized and customized measures of reading comprehension. Linear structural models were developed at both grade levels using LISREL to explain variance in…
Fusion yield: Guderley model and Tsallis statistics
NASA Astrophysics Data System (ADS)
Haubold, H. J.; Kumar, D.
2011-02-01
The reaction rate probability integral is extended from Maxwell-Boltzmann approach to a more general approach by using the pathway model introduced by Mathai in 2005 (A pathway to matrix-variate gamma and normal densities. Linear Algebr. Appl. 396, 317-328). The extended thermonuclear reaction rate is obtained in the closed form via a Meijer's G-function and the so-obtained G-function is represented as a solution of a homogeneous linear differential equation. A physical model for the hydrodynamical process in a fusion plasma-compressed and laser-driven spherical shock wave is used for evaluating the fusion energy integral by integrating the extended thermonuclear reaction rate integral over the temperature. The result obtained is compared with the standard fusion yield obtained by Haubold and John in 1981 (Analytical representation of the thermonuclear reaction rate and fusion energy production in a spherical plasma shock wave. Plasma Phys. 23, 399-411). An interpretation for the pathway parameter is also given.
Testing Linear Temporal Logic Formulae on Finite Execution Traces
NASA Technical Reports Server (NTRS)
Havelund, Klaus; Rosu, Grigore; Norvig, Peter (Technical Monitor)
2001-01-01
We present an algorithm for efficiently testing Linear Temporal Logic (LTL) formulae on finite execution traces. The standard models of LTL are infinite traces, reflecting the behavior of reactive and concurrent systems which conceptually may be continuously alive. In most past applications of LTL, theorem provers and model checkers have been used to formally prove that down-scaled models satisfy such LTL specifications. Our goal is instead to use LTL for up-scaled testing of real software applications. Such tests correspond to analyzing the conformance of finite traces against LTL formulae. We first describe what it means for a finite trace to satisfy an LTL property. We then suggest an optimized algorithm based on transforming LTL formulae. The work is done using the Maude rewriting system, which turns out to provide a perfect notation and an efficient rewriting engine for performing these experiments.
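As a sketch of the finite-trace semantics discussed above, the following naive recursive evaluator checks LTL formulae over a finite trace, with next/eventually/always/until interpreted on the remaining suffix. It is not the paper's optimized Maude-based rewriting algorithm.

```python
# Naive finite-trace LTL evaluation; a trace is a list of sets of atomic propositions.
def holds(formula, trace, i=0):
    op = formula[0]
    if op == "atom":
        return i < len(trace) and formula[1] in trace[i]
    if op == "not":
        return not holds(formula[1], trace, i)
    if op == "and":
        return holds(formula[1], trace, i) and holds(formula[2], trace, i)
    if op == "next":
        return i + 1 < len(trace) and holds(formula[1], trace, i + 1)
    if op == "eventually":
        return any(holds(formula[1], trace, j) for j in range(i, len(trace)))
    if op == "always":
        return all(holds(formula[1], trace, j) for j in range(i, len(trace)))
    if op == "until":
        return any(holds(formula[2], trace, j) and
                   all(holds(formula[1], trace, k) for k in range(i, j))
                   for j in range(i, len(trace)))
    raise ValueError(op)

trace = [{"req"}, {"req"}, {"ack"}]
spec = ("always", ("not", ("and", ("atom", "req"), ("atom", "ack"))))
print(holds(spec, trace))                                          # True
print(holds(("until", ("atom", "req"), ("atom", "ack")), trace))   # True
```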
Theory of the Lattice Boltzmann Equation: Symmetry properties of Discrete Velocity Sets
NASA Technical Reports Server (NTRS)
Rubinstein, Robert; Luo, Li-Shi
2007-01-01
In the lattice Boltzmann equation, continuous particle velocity space is replaced by a finite dimensional discrete set. The number of linearly independent velocity moments in a lattice Boltzmann model cannot exceed the number of discrete velocities. Thus, finite dimensionality introduces linear dependencies among the moments that do not exist in the exact continuous theory. Given a discrete velocity set, it is important to know to exactly what order moments are free of these dependencies. Elementary group theory is applied to the solution of this problem. It is found that by decomposing the velocity set into subsets that transform among themselves under an appropriate symmetry group, it becomes relatively straightforward to assess the behavior of moments in the theory. The construction of some standard two- and three-dimensional models is reviewed from this viewpoint, and procedures for constructing some new higher dimensional models are suggested.
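A small numerical illustration of the dependence issue: for the standard nine-velocity (D2Q9) two-dimensional lattice, at most nine velocity moments can be linearly independent, so higher-order moments are necessarily linear combinations of lower ones. The check below is generic, not taken from the paper.

```python
# Rank of the moment matrix for the D2Q9 discrete velocity set is capped at 9.
import numpy as np
from itertools import product

velocities = [(x, y) for x in (-1, 0, 1) for y in (-1, 0, 1)]   # D2Q9 set

# Rows = monomials cx^m * cy^n up to total order 4, columns = discrete velocities
monomials = [(m, n) for m, n in product(range(5), repeat=2) if m + n <= 4]
M = np.array([[cx**m * cy**n for (cx, cy) in velocities] for (m, n) in monomials])

print("number of monomials considered:", len(monomials))
print("rank of the moment matrix:", np.linalg.matrix_rank(M))   # 9
```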
Improved formalism for precision Higgs coupling fits
DOE Office of Scientific and Technical Information (OSTI.GOV)
Barklow, Tim; Fujii, Keisuke; Jung, Sunghoon
2018-03-20
Future e+e- colliders give the promise of model-independent determinations of the couplings of the Higgs boson. In this paper, we present an improved formalism for extracting Higgs boson couplings from e+e- data, based on the effective field theory description of corrections to the Standard Model. We then apply this formalism to give projections of Higgs coupling accuracies for stages of the International Linear Collider and for other proposed e+e- colliders.
Cancer mortality among coke oven workers.
Redmond, C K
1983-01-01
The OSHA standard for coke oven emissions, which went into effect in January 1977, sets a permissible exposure limit to coke oven emissions of 150 micrograms/m3 benzene-soluble fraction of total particulate matter (BSFTPM). Review of the epidemiologic evidence for the standard indicates an excess relative risk for lung cancer as high as 16-fold in topside coke oven workers with 15 years of exposure or more. There is also evidence for a consistent dose-response relationship in lung cancer mortality when duration and location of employment at the coke ovens are considered. Dose-response models fitted to these same data indicate that, while excess risks may still occur under the OSHA standard, the predicted levels of increased relative risk would be about 30-50% if a linear dose-response model is assumed and 3-7% if a quadratic model is assumed. Lung cancer mortality data for other steelworkers suggest the predicted excess risk has probably been somewhat overestimated, but lack of information on important confounding factors limits further dose-response analysis. PMID:6653539
NASA Astrophysics Data System (ADS)
Bünemann, Jörg; Seibold, Götz
2017-12-01
Pump-probe experiments have turned out as a powerful tool in order to study the dynamics of competing orders in a large variety of materials. The corresponding analysis of the data often relies on standard linear-response theory generalized to nonequilibrium situations. Here we examine the validity of such an approach for the charge and pairing response of systems with charge-density wave and (or) superconducting (SC) order. Our investigations are based on the attractive Hubbard model which we study within the time-dependent Hartree-Fock approximation. In particular, we calculate the quench and pump-probe dynamics for SC and charge order parameters in order to analyze the frequency spectra and the coupling of the probe field to the specific excitations. Our calculations reveal that the "linear-response assumption" is justified for small to moderate nonequilibrium situations (i.e., pump pulses) in the case of a purely charge-ordered ground state. However, the pump-probe dynamics on top of a superconducting ground state is determined by phase and amplitude modes which get coupled far from the equilibrium state indicating the failure of the linear-response assumption.
Standard, Random, and Optimum Array conversions from Two-Pole resistance data
Rucker, D. F.; Glaser, Danney R.
2014-09-01
We present an array evaluation of standard and nonstandard arrays over a hydrogeological target. We develop the arrays by linearly combining data from the pole-pole (or 2-pole) array. The first test shows that reconstructed resistances for the standard Schlumberger and dipole-dipole arrays are equivalent or superior to the measured arrays in terms of noise, especially at large geometric factors. The inverse models for the standard arrays also confirm what others have presented in terms of target resolvability, namely that the dipole-dipole array has the highest resolution. In the second test, we reconstruct random electrode combinations from the 2-pole data segregated into inner, outer, and overlapping dipoles. The resistance data and inverse models from these randomized arrays show those with inner dipoles to be superior in terms of noise and resolution and that overlapping dipoles can cause model instability and low resolution. Finally, we use the 2-pole data to create an optimized array that maximizes the model resolution matrix for a given electrode geometry. The optimized array produces the highest resolution and target detail. Thus, the tests demonstrate that high quality data and high model resolution can be achieved by acquiring field data from the pole-pole array.
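The linear combination underlying such reconstructions is the superposition relation R(AB,MN) = R_AM - R_AN - R_BM + R_BN; the sketch below applies it to a few invented pole-pole readings.

```python
# Reconstruct a four-electrode resistance from pole-pole (2-pole) measurements.
# `two_pole` maps a (current, potential) electrode pair to its measured resistance.
def four_electrode(two_pole, A, B, M, N):
    return (two_pole[(A, M)] - two_pole[(A, N)]
            - two_pole[(B, M)] + two_pole[(B, N)])

# Purely illustrative readings for a dipole-dipole configuration A-B ... M-N
two_pole = {(1, 3): 0.92, (1, 4): 0.71, (2, 3): 1.35, (2, 4): 0.95}
print("reconstructed dipole-dipole resistance:",
      four_electrode(two_pole, A=1, B=2, M=3, N=4))
```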
Regression Analysis of Top of Descent Location for Idle-thrust Descents
NASA Technical Reports Server (NTRS)
Stell, Laurel; Bronsvoort, Jesper; McDonald, Greg
2013-01-01
In this paper, multiple regression analysis is used to model the top of descent (TOD) location of user-preferred descent trajectories computed by the flight management system (FMS) on over 1000 commercial flights into Melbourne, Australia. The independent variables cruise altitude, final altitude, cruise Mach, descent speed, wind, and engine type were also recorded or computed post-operations. Both first-order and second-order models are considered, where cross-validation, hypothesis testing, and additional analysis are used to compare models. This identifies the models that should give the smallest errors if used to predict TOD location for new data in the future. A model that is linear in TOD altitude, final altitude, descent speed, and wind gives an estimated standard deviation of 3.9 nmi for TOD location given the trajectory parameters, which means about 80% of predictions would have error less than 5 nmi in absolute value. This accuracy is better than demonstrated by other ground automation predictions using kinetic models. Furthermore, this approach would enable online learning of the model. Additional data or further knowledge of algorithms is necessary to conclude definitively that no second-order terms are appropriate. Possible applications of the linear model are described, including enabling arriving aircraft to fly optimized descents computed by the FMS even in congested airspace. In particular, a model for TOD location that is linear in the independent variables would enable decision support tool human-machine interfaces for which a kinetic approach would be computationally too slow.
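A skeleton of the first-order regression and cross-validation comparison described above, using synthetic stand-ins for the flight data; the predictor list follows the abstract, but the coefficients and noise level are invented for illustration.

```python
# First-order linear model of TOD location with 5-fold cross-validated RMSE.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(4)
n = 1000
X = np.column_stack([
    rng.uniform(30000, 40000, n),   # cruise altitude (ft)
    rng.uniform(3000, 10000, n),    # final altitude (ft)
    rng.uniform(250, 310, n),       # descent speed (kt)
    rng.normal(0, 30, n),           # along-track wind (kt)
])
tod = 0.003 * (X[:, 0] - X[:, 1]) + 0.05 * X[:, 2] - 0.1 * X[:, 3] + rng.normal(0, 4, n)

model = LinearRegression()
rmse = -cross_val_score(model, X, tod, cv=5, scoring="neg_root_mean_squared_error")
print("cross-validated RMSE (nmi):", rmse.mean())
```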
Introducing linear functions: an alternative statistical approach
NASA Astrophysics Data System (ADS)
Nolan, Caroline; Herbert, Sandra
2015-12-01
The introduction of linear functions is the turning point where many students decide if mathematics is useful or not. This means the role of parameters and variables in linear functions could be considered to be `threshold concepts'. There is recognition that linear functions can be taught in context through the exploration of linear modelling examples, but this has its limitations. Currently, statistical data is easily attainable, and graphics or computer algebra system (CAS) calculators are common in many classrooms. The use of this technology provides ease of access to different representations of linear functions as well as the ability to fit a least-squares line for real-life data. This means these calculators could support a possible alternative approach to the introduction of linear functions. This study compares the results of an end-of-topic test for two classes of Australian middle secondary students at a regional school to determine if such an alternative approach is feasible. In this study, test questions were grouped by concept and subjected to concept by concept analysis of the means of test results of the two classes. This analysis revealed that the students following the alternative approach demonstrated greater competence with non-standard questions.
A continuous damage model based on stepwise-stress creep rupture tests
NASA Technical Reports Server (NTRS)
Robinson, D. N.
1985-01-01
A creep damage accumulation model is presented that makes use of the Kachanov damage rate concept with a provision accounting for damage that results from a variable stress history. This is accomplished through the introduction of an additional term in the Kachanov rate equation that is linear in the stress rate. Specification of the material functions and parameters in the model requires a database constituted of two types of tests: (1) standard constant-stress creep rupture tests, and (2) a sequence of two-step creep rupture tests.
Boundary-element modelling of dynamics in external poroviscoelastic problems
NASA Astrophysics Data System (ADS)
Igumnov, L. A.; Litvinchuk, S. Yu; Ipatov, A. A.; Petrov, A. N.
2018-04-01
A problem of a spherical cavity in porous media is considered. The porous media are assumed to be isotropic poroelastic or isotropic poroviscoelastic. The poroviscoelastic formulation is treated as a combination of Biot's theory of poroelasticity and the elastic-viscoelastic correspondence principle. Viscoelastic models such as Kelvin–Voigt, the standard linear solid, and a model with a weakly singular kernel are considered. The boundary fields are studied with the help of the boundary element method. The direct approach is applied. The numerical scheme is based on the collocation method, a regularized boundary integral equation, and a Radau stepped scheme.
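For reference, the standard linear solid mentioned above relaxes, in the common Zener form, as E(t) = E_inf + (E_0 - E_inf) exp(-t/tau); the sketch below evaluates this relaxation modulus with illustrative parameter values that are not taken from the paper.

```python
# Relaxation modulus of the Standard Linear Solid (Zener) model.
import numpy as np

def sls_relaxation(t, E0, Einf, tau):
    return Einf + (E0 - Einf) * np.exp(-t / tau)

t = np.linspace(0.0, 5.0, 6)
print(sls_relaxation(t, E0=10.0, Einf=4.0, tau=1.0))
# Starts at the instantaneous modulus E0 and relaxes toward the long-time modulus Einf.
```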
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dong, X; Petrongolo, M; Wang, T
Purpose: A general problem of dual-energy CT (DECT) is that the decomposition is sensitive to noise in the two sets of dual-energy projection data, resulting in severely degraded qualities of decomposed images. We have previously proposed an iterative denoising method for DECT. Using a linear decomposition function, the method does not gain the full benefits of DECT on beam-hardening correction. In this work, we expand the framework of our iterative method to include non-linear decomposition models for noise suppression in DECT. Methods: We first obtain decomposed projections, which are free of beam-hardening artifacts, using a lookup table pre-measured on a calibration phantom. First-pass material images with high noise are reconstructed from the decomposed projections using standard filtered-backprojection reconstruction. Noise on the decomposed images is then suppressed by an iterative method, which is formulated in the form of least-square estimation with smoothness regularization. Based on the design principles of a best linear unbiased estimator, we include the inverse of the estimated variance-covariance matrix of the decomposed images as the penalty weight in the least-square term. Analytical formulae are derived to compute the variance-covariance matrix from the measured decomposition lookup table. Results: We have evaluated the proposed method via phantom studies. Using non-linear decomposition, our method effectively suppresses the streaking artifacts of beam-hardening and obtains more uniform images than our previous approach based on a linear model. The proposed method reduces the average noise standard deviation of two basis materials by one order of magnitude without sacrificing the spatial resolution. Conclusion: We propose a general framework of iterative denoising for material decomposition of DECT. Preliminary phantom studies have shown the proposed method improves the image uniformity and reduces noise level without resolution loss. In the future, we will perform more phantom studies to further validate the performance of the proposed method. This work is supported by a Varian MRA grant.
ERIC Educational Resources Information Center
Wang, Tianyou
2009-01-01
Holland and colleagues derived a formula for analytical standard error of equating using the delta-method for the kernel equating method. Extending their derivation, this article derives an analytical standard error of equating procedure for the conventional percentile rank-based equipercentile equating with log-linear smoothing. This procedure is…
High Speed Civil Transport Aircraft Simulation: Reference-H Cycle 1, MATLAB Implementation
NASA Technical Reports Server (NTRS)
Sotack, Robert A.; Chowdhry, Rajiv S.; Buttrill, Carey S.
1999-01-01
The mathematical model and associated code to simulate a high speed civil transport aircraft - the Boeing Reference H configuration - are described. The simulation was constructed in support of advanced control law research. In addition to providing time histories of the dynamic response, the code includes the capabilities for calculating trim solutions and for generating linear models. The simulation relies on the nonlinear, six-degree-of-freedom equations which govern the motion of a rigid aircraft in atmospheric flight. The 1962 Standard Atmosphere Tables are used along with a turbulence model to simulate the Earth atmosphere. The aircraft model has three parts - an aerodynamic model, an engine model, and a mass model. These models use the data from the Boeing Reference H cycle 1 simulation data base. Models for the actuator dynamics, landing gear, and flight control system are not included in this aircraft model. Dynamic responses generated by the nonlinear simulation are presented and compared with results generated from alternate simulations at Boeing Commercial Aircraft Company and NASA Langley Research Center. Also, dynamic responses generated using linear models are presented and compared with dynamic responses generated using the nonlinear simulation.
Confidence limits for data mining models of options prices
NASA Astrophysics Data System (ADS)
Healy, J. V.; Dixon, M.; Read, B. J.; Cai, F. F.
2004-12-01
Non-parametric methods such as artificial neural nets can successfully model prices of financial options, out-performing the Black-Scholes analytic model (Eur. Phys. J. B 27 (2002) 219). However, the accuracy of such approaches is usually expressed only by a global fitting/error measure. This paper describes a robust method for determining prediction intervals for models derived by non-linear regression. We have demonstrated it by application to a standard synthetic example (29th Annual Conference of the IEEE Industrial Electronics Society, Special Session on Intelligent Systems, pp. 1926-1931). The method is used here to obtain prediction intervals for option prices using market data for LIFFE “ESX” FTSE 100 index options ( http://www.liffe.com/liffedata/contracts/month_onmonth.xls). We avoid special neural net architectures and use standard regression procedures to determine local error bars. The method is appropriate for target data with non constant variance (or volatility).
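A simplified version of the idea of local error bars: regress the squared residuals of the mean model on the inputs to estimate local variance, then form approximate prediction intervals. This is only a generic sketch of that approach, not the authors' exact procedure, and the data are synthetic.

```python
# Local error bars via a second regression on squared residuals.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(5)
x = rng.uniform(0, 1, 400).reshape(-1, 1)
y = np.sin(4 * x[:, 0]) + rng.normal(0, 0.05 + 0.3 * x[:, 0])   # non-constant variance

mean_model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000, random_state=0).fit(x, y)
resid_sq = (y - mean_model.predict(x)) ** 2
var_model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000, random_state=0).fit(x, resid_sq)

x_new = np.array([[0.1], [0.9]])
pred = mean_model.predict(x_new)
sigma = np.sqrt(np.clip(var_model.predict(x_new), 1e-8, None))
for xi, p, s in zip(x_new[:, 0], pred, sigma):
    print(f"x={xi:.1f}: prediction {p:.2f} +/- {1.96 * s:.2f} (approx. 95% interval)")
```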
Higgs potential from derivative interactions
NASA Astrophysics Data System (ADS)
Quadri, A.
2017-06-01
A formulation of the linear σ model with derivative interactions is studied. The classical theory is on-shell equivalent to the σ model with the standard quartic Higgs potential. The mass of the scalar mode only appears in the quadratic part and not in the interaction vertices, unlike in the ordinary formulation of the theory. Renormalization of the model is discussed. A nonpower-counting renormalizable extension, obeying the defining functional identities of the theory, is presented. This extension is physically equivalent to the tree-level inclusion of a dimension-six effective operator ∂μ(Φ†Φ)∂μ(Φ†Φ). The resulting UV divergences are arranged in a perturbation series around the power-counting renormalizable theory. The application of the formalism to the Standard Model in the presence of the dimension-six operator ∂μ(Φ†Φ)∂μ(Φ†Φ) is discussed.
Cooley, Richard L.
1983-01-01
This paper investigates factors influencing the degree of improvement in estimates of parameters of a nonlinear regression groundwater flow model by incorporating prior information of unknown reliability. Consideration of expected behavior of the regression solutions and results of a hypothetical modeling problem lead to several general conclusions. First, if the parameters are properly scaled, linearized expressions for the mean square error (MSE) in parameter estimates of a nonlinear model will often behave very nearly as if the model were linear. Second, by using prior information, the MSE in properly scaled parameters can be reduced greatly over the MSE of ordinary least squares estimates of parameters. Third, plots of estimated MSE and the estimated standard deviation of MSE versus an auxiliary parameter (the ridge parameter) specifying the degree of influence of the prior information on regression results can help determine the potential for improvement of parameter estimates. Fourth, proposed criteria can be used to make appropriate choices for the ridge parameter and another parameter expressing degree of overall bias in the prior information. Results of a case study of Truckee Meadows, Reno-Sparks area, Washoe County, Nevada, conform closely to the results of the hypothetical problem. In the Truckee Meadows case, incorporation of prior information did not greatly change the parameter estimates from those obtained by ordinary least squares. However, the analysis showed that both sets of estimates are more reliable than suggested by the standard errors from ordinary least squares.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Solares, Santiago D.
2016-04-15
Significant progress has been accomplished in the development of experimental contact-mode and dynamic-mode atomic force microscopy (AFM) methods designed to measure surface material properties. However, current methods are based on one-dimensional (1D) descriptions of the tip-sample interaction forces, thus neglecting the intricacies involved in the material behavior of complex samples (such as soft viscoelastic materials) as well as the differences in material response between the surface and the bulk. In order to begin to address this gap, a computational study is presented where the sample is simulated using an enhanced version of a recently introduced model that treats the surface as a collection of standard-linear-solid viscoelastic elements. The enhanced model introduces in-plane surface elastic forces that can be approximately related to a two-dimensional (2D) Young's modulus. Relevant cases are discussed for single- and multifrequency intermittent-contact AFM imaging, with focus on the calculated surface indentation profiles and tip-sample interaction force curves, as well as their implications with regards to experimental interpretation. A variety of phenomena are examined in detail, which highlight the need for further development of more physically accurate sample models that are specifically designed for AFM simulation. As a result, a multifrequency AFM simulation tool based on the above sample model is provided as supporting information.
AITRAC: Augmented Interactive Transient Radiation Analysis by Computer. User's information manual
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
1977-10-01
AITRAC is a program designed for on-line, interactive, DC, and transient analysis of electronic circuits. The program solves linear and nonlinear simultaneous equations which characterize the mathematical models used to predict circuit response. The program features 100 external node--200 branch capability; conversational, free-format input language; built-in junction, FET, MOS, and switch models; sparse matrix algorithm with extended-precision H matrix and T vector calculations, for fast and accurate execution; linear transconductances: beta, GM, MU, ZM; accurate and fast radiation effects analysis; special interface for user-defined equations; selective control of multiple outputs; graphical outputs in wide and narrow formats; and on-line parameter modification capability. The user describes the problem by entering the circuit topology and part parameters. The program then automatically generates and solves the circuit equations, providing the user with printed or plotted output. The circuit topology and/or part values may then be changed by the user, and a new analysis requested. Circuit descriptions may be saved on disk files for storage and later use. The program contains built-in standard models for resistors, voltage and current sources, capacitors, inductors including mutual couplings, switches, junction diodes and transistors, FETs, and MOS devices. Nonstandard models may be constructed from standard models or by using the special equations interface. Time functions may be described by straight-line segments or by sine, damped sine, and exponential functions. 42 figures, 1 table. (RWR)
Verification of spectrophotometric method for nitrate analysis in water samples
NASA Astrophysics Data System (ADS)
Kurniawati, Puji; Gusrianti, Reny; Dwisiwi, Bledug Bernanti; Purbaningtias, Tri Esti; Wiyantoko, Bayu
2017-12-01
The aim of this research was to verify the spectrophotometric method for analyzing nitrate in water samples using the APHA 2012 Section 4500 NO3-B method. The verification parameters used were linearity, method detection limit, limit of quantitation, level of linearity, accuracy, and precision. Linearity was assessed using 0 to 50 mg/L nitrate standard solutions, and the correlation coefficient of the standard calibration linear regression equation was 0.9981. The method detection limit (MDL) was 0.1294 mg/L and the limit of quantitation (LOQ) was 0.4117 mg/L. The level of linearity (LOL) was 50 mg/L, and nitrate concentrations from 10 to 50 mg/L were linear at a 99% confidence level. The accuracy, determined as recovery, was 109.1907%. The precision, expressed as the percent relative standard deviation (%RSD) of repeatability, was 1.0886%. The tested performance criteria showed that the methodology was verified under the laboratory conditions.
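A small sketch of how verification parameters like those quoted above are typically computed: calibration linearity as a correlation coefficient, accuracy as percent recovery, and precision as the percent relative standard deviation of replicates. The readings below are invented, not the study's raw data.

```python
# Linearity (r), recovery (%), and precision (%RSD) from hypothetical readings.
import numpy as np

# Calibration: nitrate standards (mg/L) vs absorbance
conc = np.array([0, 10, 20, 30, 40, 50])
absb = np.array([0.002, 0.101, 0.205, 0.301, 0.407, 0.502])
r = np.corrcoef(conc, absb)[0, 1]

# Accuracy: recovery of a spiked sample
measured_spike, true_spike = 10.9, 10.0
recovery = 100 * measured_spike / true_spike

# Precision: repeatability of replicate measurements
reps = np.array([20.1, 20.3, 19.9, 20.4, 20.0])
rsd = 100 * reps.std(ddof=1) / reps.mean()

print(f"r = {r:.4f}, recovery = {recovery:.1f}%, %RSD = {rsd:.2f}%")
```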
Nilsson, Lars B; Skansen, Patrik
2012-06-30
The investigations in this article were triggered by two observations in the laboratory; for some liquid chromatography/tandem mass spectrometry (LC/MS/MS) systems it was possible to obtain linear calibration curves for extreme concentration ranges and for some systems seemingly linear calibration curves gave good accuracy at low concentrations only when using a quadratic regression function. The absolute and relative responses were tested for three different LC/MS/MS systems by injecting solutions of a model compound and a stable isotope labeled internal standard. The analyte concentration range for the solutions was 0.00391 to 500 μM (128,000×), giving overload of the chromatographic column at the highest concentrations. The stable isotope labeled internal standard concentration was 0.667 μM in all samples. The absolute response per concentration unit decreased rapidly as higher concentrations were injected. The relative response, the ratio for the analyte peak area to the internal standard peak area, per concentration unit was calculated. For system 1, the ionization process was found to limit the response and the relative response per concentration unit was constant. For systems 2 and 3, the ion detection process was the limiting factor resulting in decreasing relative response at increasing concentrations. For systems behaving like system 1, simple linear regression can be used for any concentration range while, for systems behaving like systems 2 and 3, non-linear regression is recommended for all concentration ranges. Another consequence is that the ionization capacity limited systems will be insensitive to matrix ion suppression when an ideal internal standard is used while the detection capacity limited systems are at risk of giving erroneous results at high concentrations if the matrix ion suppression varies for different samples in a run. Copyright © 2012 John Wiley & Sons, Ltd.
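To illustrate the calibration issue described above, the sketch below simulates a relative response that flattens at high concentration: forcing a single straight line through the whole range badly biases the back-calculated low concentration, while a quadratic fit recovers it. The saturation model and numbers are assumptions for illustration only.

```python
# Linear vs. quadratic calibration when the relative response saturates at high concentration.
import numpy as np

conc = np.array([0.004, 0.04, 0.4, 4.0, 40.0, 400.0])
ratio = conc * (1 - 0.0008 * conc)          # mildly saturating relative response

lin = np.polyfit(conc, ratio, 1)
quad = np.polyfit(conc, ratio, 2)

true_low = 0.04
resp_low = true_low * (1 - 0.0008 * true_low)
back_lin = (resp_low - lin[1]) / lin[0]
back_quad = np.roots([quad[0], quad[1], quad[2] - resp_low]).real
back_quad = back_quad[(back_quad > 0) & (back_quad < 500)].min()
print(f"back-calculated 0.04: linear {back_lin:.3f}, quadratic {back_quad:.3f}")
```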
NASA Technical Reports Server (NTRS)
DeLoach, Richard
2012-01-01
This paper reviews the derivation of an equation for scaling response surface modeling experiments. The equation represents the smallest number of data points required to fit a linear regression polynomial so as to achieve certain specified model adequacy criteria. Specific criteria are proposed which simplify an otherwise rather complex equation, generating a practical rule of thumb for the minimum volume of data required to adequately fit a polynomial with a specified number of terms in the model. This equation and the simplified rule of thumb it produces can be applied to minimize the cost of wind tunnel testing.
Jones, Andrew M; Lomas, James; Moore, Peter T; Rice, Nigel
2016-10-01
We conduct a quasi-Monte-Carlo comparison of the recent developments in parametric and semiparametric regression methods for healthcare costs, both against each other and against standard practice. The population of English National Health Service hospital in-patient episodes for the financial year 2007-2008 (summed for each patient) is randomly divided into two equally sized subpopulations to form an estimation set and a validation set. Evaluating out-of-sample using the validation set, a conditional density approximation estimator shows considerable promise in forecasting conditional means, performing best for accuracy of forecasting and among the best four for bias and goodness of fit. The best performing model for bias is linear regression with square-root-transformed dependent variables, whereas a generalized linear model with square-root link function and Poisson distribution performs best in terms of goodness of fit. Commonly used models utilizing a log-link are shown to perform badly relative to other models considered in our comparison.
Black-hole kicks from numerical-relativity surrogate models
NASA Astrophysics Data System (ADS)
Gerosa, Davide; Hébert, François; Stein, Leo C.
2018-05-01
Binary black holes radiate linear momentum in gravitational waves as they merge. Recoils imparted to the black-hole remnant can reach thousands of km/s, thus ejecting black holes from their host galaxies. We exploit recent advances in gravitational waveform modeling to quickly and reliably extract recoils imparted to generic, precessing, black-hole binaries. Our procedure uses a numerical-relativity surrogate model to obtain the gravitational waveform given a set of binary parameters; then, from this waveform we directly integrate the gravitational-wave linear momentum flux. This entirely bypasses the need for fitting formulas which are typically used to model black-hole recoils in astrophysical contexts. We provide a thorough exploration of the black-hole kick phenomenology in the parameter space, summarizing and extending previous numerical results on the topic. Our extraction procedure is made publicly available as a module for the Python programming language named surrkick. Kick evaluations take ~0.1 s on a standard off-the-shelf machine, thus making our code ideal to be ported to large-scale astrophysical studies.
Estimating Causal Effects with Ancestral Graph Markov Models
Malinsky, Daniel; Spirtes, Peter
2017-01-01
We present an algorithm for estimating bounds on causal effects from observational data which combines graphical model search with simple linear regression. We assume that the underlying system can be represented by a linear structural equation model with no feedback, and we allow for the possibility of latent variables. Under assumptions standard in the causal search literature, we use conditional independence constraints to search for an equivalence class of ancestral graphs. Then, for each model in the equivalence class, we perform the appropriate regression (using causal structure information to determine which covariates to include in the regression) to estimate a set of possible causal effects. Our approach is based on the “IDA” procedure of Maathuis et al. (2009), which assumes that all relevant variables have been measured (i.e., no unmeasured confounders). We generalize their work by relaxing this assumption, which is often violated in applied contexts. We validate the performance of our algorithm on simulated data and demonstrate improved precision over IDA when latent variables are present. PMID:28217244
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bardhan, Jaydeep P.; Knepley, Matthew G.
2014-10-07
We show that charge-sign-dependent asymmetric hydration can be modeled accurately using linear Poisson theory after replacing the standard electric-displacement boundary condition with a simple nonlinear boundary condition. Using a single multiplicative scaling factor to determine atomic radii from molecular dynamics Lennard-Jones parameters, the new model accurately reproduces MD free-energy calculations of hydration asymmetries for: (i) monatomic ions, (ii) titratable amino acids in both their protonated and unprotonated states, and (iii) the Mobley “bracelet” and “rod” test problems [D. L. Mobley, A. E. Barber II, C. J. Fennell, and K. A. Dill, “Charge asymmetries in hydration of polar solutes,” J. Phys. Chem. B 112, 2405–2414 (2008)]. Remarkably, the model also justifies the use of linear response expressions for charging free energies. Our boundary-element method implementation demonstrates the ease with which other continuum-electrostatic solvers can be extended to include asymmetry.
Akkol, Esra Küpeli; Koca, Ufuk; Pesin, Ipek; Yilmazer, Demet
2011-01-01
Achillea species are widely used for diarrhea, abdominal pain, stomachache and healing of wounds in folk medicine. To evaluate the wound healing activity of the plant, extracts were prepared with different solvents (hexane, chloroform, ethyl acetate and methanol, respectively) from the roots of Achillea biebersteinii. Linear incision (using a tensiometer) and circular excision wound models were employed on mice and rats. The wound healing effect was comparatively evaluated with the standard skin ointment Madecassol. The n-hexane extract treated groups of animals showed 84.2% contraction, which was close to the contraction value of the reference drug Madecassol (100%). On the other hand, the same extract in the incision wound model demonstrated a significant increase (40.1%) in wound tensile strength as compared to other groups. The results of histopathological examination supported the outcome of the linear incision and circular excision wound models as well. The experimental data demonstrated that A. biebersteinii displayed remarkable wound healing activity. PMID:19546149
Improved Linear-Ion-Trap Frequency Standard
NASA Technical Reports Server (NTRS)
Prestage, John D.
1995-01-01
Improved design concept for linear-ion-trap (LIT) frequency-standard apparatus proposed. Apparatus contains lengthened linear ion trap, and ions processed alternately in two regions: ions prepared in upper region of trap, then transported to lower region for exposure to microwave radiation, then returned to upper region for optical interrogation. Improved design intended to increase long-term frequency stability of apparatus while reducing size, mass, and cost.
Duality linking standard and tachyon scalar field cosmologies
DOE Office of Scientific and Technical Information (OSTI.GOV)
Avelino, P. P.; Bazeia, D.; Losano, L.
2010-09-15
In this work we investigate the duality linking standard and tachyon scalar field homogeneous and isotropic cosmologies in N+1 dimensions. We determine the transformation between standard and tachyon scalar fields and between their associated potentials, corresponding to the same background evolution. We show that, in general, the duality is broken at a perturbative level, when deviations from a homogeneous and isotropic background are taken into account. However, we find that for slow-rolling fields the duality is still preserved at a linear level. We illustrate our results with specific examples of cosmological relevance, where the correspondence between scalar and tachyon scalar field models can be calculated explicitly.
Perturbation theory for cosmologies with nonlinear structure
NASA Astrophysics Data System (ADS)
Goldberg, Sophia R.; Gallagher, Christopher S.; Clifton, Timothy
2017-11-01
The next generation of cosmological surveys will operate over unprecedented scales, and will therefore provide exciting new opportunities for testing general relativity. The standard method for modelling the structures that these surveys will observe is to use cosmological perturbation theory for linear structures on horizon-sized scales, and Newtonian gravity for nonlinear structures on much smaller scales. We propose a two-parameter formalism that generalizes this approach, thereby allowing interactions between large and small scales to be studied in a self-consistent and well-defined way. This uses both post-Newtonian gravity and cosmological perturbation theory, and can be used to model realistic cosmological scenarios including matter, radiation and a cosmological constant. We find that the resulting field equations can be written as a hierarchical set of perturbation equations. At leading-order, these equations allow us to recover a standard set of Friedmann equations, as well as a Newton-Poisson equation for the inhomogeneous part of the Newtonian energy density in an expanding background. For the perturbations in the large-scale cosmology, however, we find that the field equations are sourced by both nonlinear and mode-mixing terms, due to the existence of small-scale structures. These extra terms should be expected to give rise to new gravitational effects, through the mixing of gravitational modes on small and large scales—effects that are beyond the scope of standard linear cosmological perturbation theory. We expect our formalism to be useful for accurately modeling gravitational physics in universes that contain nonlinear structures, and for investigating the effects of nonlinear gravity in the era of ultra-large-scale surveys.
Li, Chuan; Li, Lin; Zhang, Jie; Alexov, Emil
2012-01-01
The Gauss-Seidel method is a standard iterative numerical method widely used to solve a system of equations and, in general, is more efficient than other iterative methods, such as the Jacobi method. However, standard implementation of the Gauss-Seidel method restricts its utilization in parallel computing due to its requirement of using updated neighboring values (i.e., in the current iteration) as soon as they are available. Here we report an efficient and exact (not requiring assumptions) method to parallelize iterations and to reduce the computational time as a linear/nearly linear function of the number of CPUs. In contrast to other existing solutions, our method does not require any assumptions and is equally applicable for solving linear and nonlinear equations. This approach is implemented in the DelPhi program, which is a finite difference Poisson-Boltzmann equation solver to model electrostatics in molecular biology. This development makes the iterative procedure for obtaining the electrostatic potential distribution in the parallelized DelPhi severalfold faster than that in the serial code. Further, we demonstrate the advantages of the new parallelized DelPhi by computing the electrostatic potential and the corresponding energies of large supramolecular structures. PMID:22674480
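For readers unfamiliar with the data dependency that blocks naive parallelization, a minimal sketch of the two classical sweeps on a toy 2-D Poisson problem is given below; it is textbook Jacobi/Gauss-Seidel, not the parallel DelPhi algorithm itself.

```python
import numpy as np

def jacobi_step(phi, b, h2):
    """One Jacobi sweep for a 2-D Poisson problem: every update uses only
    values from the previous iteration, so it parallelizes trivially."""
    new = phi.copy()
    new[1:-1, 1:-1] = 0.25 * (phi[:-2, 1:-1] + phi[2:, 1:-1] +
                              phi[1:-1, :-2] + phi[1:-1, 2:] - h2 * b[1:-1, 1:-1])
    return new

def gauss_seidel_step(phi, b, h2):
    """One Gauss-Seidel sweep: updated neighbours are used as soon as they are
    available, which converges faster but creates the data dependency that
    makes naive parallelization difficult."""
    nx, ny = phi.shape
    for i in range(1, nx - 1):
        for j in range(1, ny - 1):
            phi[i, j] = 0.25 * (phi[i - 1, j] + phi[i + 1, j] +
                                phi[i, j - 1] + phi[i, j + 1] - h2 * b[i, j])
    return phi

# Toy problem: point source inside a grounded box.
n, h = 33, 1.0 / 32
b = np.zeros((n, n)); b[n // 2, n // 2] = -1.0

phi_j = np.zeros((n, n))
for _ in range(200):
    phi_j = jacobi_step(phi_j, b, h * h)

phi_gs = np.zeros((n, n))
for _ in range(200):
    phi_gs = gauss_seidel_step(phi_gs, b, h * h)
# Gauss-Seidel is typically closer to the converged solution after the same
# number of sweeps, at the cost of the sequential dependency noted above.
```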
On the r-mode spectrum of relativistic stars: the inclusion of the radiation reaction
NASA Astrophysics Data System (ADS)
Ruoff, Johannes; Kokkotas, Kostas D.
2002-03-01
We consider both mode calculations and time-evolutions of axial r modes for relativistic uniformly rotating non-barotropic neutron stars, using the slow-rotation formalism, in which rotational corrections are considered up to linear order in the angular velocity Ω. We study various stellar models, such as uniform density models, polytropic models with different polytropic indices n, and some models based on realistic equations of state. For weakly relativistic uniform density models and polytropes with small values of n, we can recover the growth times predicted from Newtonian theory when standard multipole formulae for the gravitational radiation are used. However, for more compact models, we find that relativistic linear perturbation theory predicts a weakening of the instability compared to the Newtonian results. When turning to polytropic equations of state, we find that for certain ranges of the polytropic index n, the r mode disappears, and instead of a growth, the time-evolutions show a rapid decay of the amplitude. This is clearly at variance with the Newtonian predictions. It is, however, fully consistent with our previous results obtained in the low-frequency approximation.
A finite nonlinear hyper-viscoelastic model for soft biological tissues.
Panda, Satish Kumar; Buist, Martin Lindsay
2018-03-01
Soft tissues exhibit highly nonlinear, rate- and time-dependent stress-strain behaviour. Strain and strain rate dependencies are often modelled using a hyperelastic model and a discrete (standard linear solid) or continuous spectrum (quasi-linear) viscoelastic model, respectively. However, these models are unable to properly capture the material characteristics because hyperelastic models are unsuited for time-dependent events, whereas the common viscoelastic models are insufficient for the nonlinear and finite-strain viscoelastic tissue responses. The convolution integral based models can demonstrate a finite viscoelastic response; however, their derivations are not consistent with the laws of thermodynamics. The aim of this work was to develop a three-dimensional finite hyper-viscoelastic model for soft tissues using a thermodynamically consistent approach. In addition, a nonlinear function, dependent on strain and strain rate, was adopted to capture the nonlinear variation of viscosity during a loading process. To demonstrate the efficacy and versatility of this approach, the model was used to recreate the experimental results performed on different types of soft tissues. In all the cases, the simulation results matched the experimental data well (R² ≥ 0.99). Copyright © 2018 Elsevier Ltd. All rights reserved.
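For reference, the two viscoelastic ingredients named above can be written in their common textbook forms (the relaxation modulus of the standard linear solid and Fung's quasi-linear convolution); these generic expressions are illustrative and are not the finite hyper-viscoelastic model proposed in the paper.

```latex
% Standard linear solid (Zener) relaxation modulus and Fung's quasi-linear
% viscoelastic stress, written in their usual textbook forms.
\begin{align}
  E(t) &= E_\infty + (E_0 - E_\infty)\,e^{-t/\tau},\\
  \sigma(t) &= \int_{0}^{t} G(t-s)\,
               \frac{\partial \sigma^{e}\!\left[\varepsilon(s)\right]}{\partial \varepsilon}\,
               \frac{\partial \varepsilon}{\partial s}\,\mathrm{d}s .
\end{align}
```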
NASA Astrophysics Data System (ADS)
Lin, Youzuo; O'Malley, Daniel; Vesselinov, Velimir V.
2016-09-01
Inverse modeling seeks model parameters given a set of observations. However, for practical problems, because the number of measurements is often large and the model parameters are numerous, conventional methods for inverse modeling can be computationally expensive. We have developed a new, computationally efficient parallel Levenberg-Marquardt method for solving inverse modeling problems with a highly parameterized model space. Levenberg-Marquardt methods require the solution of a linear system of equations which can be prohibitively expensive to compute for moderate to large-scale problems. Our novel method projects the original linear problem down to a Krylov subspace such that the dimensionality of the problem can be significantly reduced. Furthermore, we store the Krylov subspace computed when using the first damping parameter and recycle the subspace for the subsequent damping parameters. The efficiency of our new inverse modeling algorithm is significantly improved using these computational techniques. We apply this new inverse modeling method to invert for random transmissivity fields in 2-D and a random hydraulic conductivity field in 3-D. Our algorithm is fast enough to solve for the distributed model parameters (transmissivity) in the model domain. The algorithm is coded in Julia and implemented in the MADS computational framework (http://mads.lanl.gov). By comparing with Levenberg-Marquardt methods using standard linear inversion techniques such as QR or SVD methods, our Levenberg-Marquardt method yields a speed-up ratio on the order of ~10¹ to ~10² in a multicore computational environment. Therefore, our new inverse modeling method is a powerful tool for characterizing subsurface heterogeneity for moderate to large-scale problems.
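A minimal sketch of the damped least-squares step at the heart of any Levenberg-Marquardt solver is given below, using LSQR so that only Jacobian products are needed; the Krylov-subspace recycling across damping parameters described in the abstract is not reproduced here, and the toy exponential-fitting problem is purely illustrative.

```python
import numpy as np
from scipy.sparse.linalg import lsqr

def levenberg_marquardt(residual, jacobian, x0,
                        lambdas=(1e-2, 1e-1, 1e0, 1e1), max_iter=20):
    """Minimal Levenberg-Marquardt loop.  Each damped step
    min ||J d + r||^2 + lam ||d||^2 is solved with LSQR, which needs only
    matrix-vector products and therefore scales to large parameter spaces.
    (Every damping parameter gets its own solve; no subspace recycling.)"""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        r = residual(x)
        J = jacobian(x)
        best = None
        for lam in lambdas:                      # try several damping parameters
            d = lsqr(J, -r, damp=np.sqrt(lam))[0]
            cost = np.sum(residual(x + d) ** 2)
            if best is None or cost < best[0]:
                best = (cost, d)
        if best[0] >= np.sum(r ** 2):            # no improvement -> stop
            break
        x = x + best[1]
    return x

# Tiny demonstration: fit y = a * exp(b * t).
t = np.linspace(0, 1, 50)
y = 2.0 * np.exp(-1.5 * t)
res = lambda p: p[0] * np.exp(p[1] * t) - y
jac = lambda p: np.column_stack([np.exp(p[1] * t), p[0] * t * np.exp(p[1] * t)])
print(levenberg_marquardt(res, jac, [1.0, 0.0]))   # approaches [2.0, -1.5]
```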
The performance of projective standardization for digital subtraction radiography.
Mol, André; Dunn, Stanley M
2003-09-01
We sought to test the performance and robustness of projective standardization in preserving invariant properties of subtraction images in the presence of irreversible projection errors. Study design: Twenty bone chips (1-10 mg each) were placed on dentate dry mandibles. Follow-up images were obtained without the bone chips, and irreversible projection errors of up to 6 degrees were introduced. Digitized image intensities were normalized, and follow-up images were geometrically reconstructed by 2 operators using anatomical and fiduciary landmarks. Subtraction images were analyzed by 3 observers. Regression analysis revealed a linear relationship between radiographic estimates of mineral loss and actual mineral loss (R² = 0.99; P < .05). The effect of projection error was not significant (general linear model [GLM]: P > .05). There was no difference between the radiographic estimates from images standardized with anatomical landmarks and those standardized with fiduciary landmarks (Wilcoxon signed rank test: P > .05). Operator variability was low for image analysis alone (R² = 0.99; P < .05), as well as for the entire procedure (R² = 0.98; P < .05). The predicted detection limit was smaller than 1 mg. Subtraction images registered by projective standardization yield estimates of osseous change that are invariant to irreversible projection errors of up to 6 degrees. Within these limits, operator precision is high and anatomical landmarks can be used to establish correspondence.
Non-Gaussian bias: insights from discrete density peaks
DOE Office of Scientific and Technical Information (OSTI.GOV)
Desjacques, Vincent; Riotto, Antonio; Gong, Jinn-Ouk, E-mail: Vincent.Desjacques@unige.ch, E-mail: jinn-ouk.gong@apctp.org, E-mail: Antonio.Riotto@unige.ch
2013-09-01
Corrections induced by primordial non-Gaussianity to the linear halo bias can be computed from a peak-background split or the widespread local bias model. However, numerical simulations clearly support the prediction of the former, in which the non-Gaussian amplitude is proportional to the linear halo bias. To understand better the reasons behind the failure of standard Lagrangian local bias, in which the halo overdensity is a function of the local mass overdensity only, we explore the effect of a primordial bispectrum on the 2-point correlation of discrete density peaks. We show that the effective local bias expansion to peak clustering vastly simplifies the calculation. We generalize this approach to excursion set peaks and demonstrate that the resulting non-Gaussian amplitude, which is a weighted sum of quadratic bias factors, precisely agrees with the peak-background split expectation, which is a logarithmic derivative of the halo mass function with respect to the normalisation amplitude. We point out that statistics of thresholded regions can be computed using the same formalism. Our results suggest that halo clustering statistics can be modelled consistently (in the sense that the Gaussian and non-Gaussian bias factors agree with peak-background split expectations) from a Lagrangian bias relation only if the latter is specified as a set of constraints imposed on the linear density field. This is clearly not the case of standard Lagrangian local bias. Therefore, one is led to consider additional variables beyond the local mass overdensity.
Fourth standard model family neutrino at future linear colliders
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ciftci, A.K.; Ciftci, R.; Sultansoy, S.
2005-09-01
It is known that flavor democracy favors the existence of the fourth standard model (SM) family. In order to give nonzero masses to the first three-family fermions, flavor democracy has to be slightly broken. A parametrization for democracy breaking, which gives the correct values for the fundamental fermion masses and, at the same time, predicts quark and lepton Cabibbo-Kobayashi-Maskawa (CKM) matrices in good agreement with the experimental data, is proposed. The pair production of the fourth SM family Dirac (ν₄) and Majorana (N₁) neutrinos at future linear colliders with √s = 500 GeV, 1 TeV, and 3 TeV is considered. The cross section for the process e⁺e⁻ → ν₄ν₄ (N₁N₁) and the branching ratios for possible decay modes of both neutrinos are determined. The decays of the fourth family neutrinos into muon channels (ν₄(N₁) → μ±W±) provide the cleanest signature at e⁺e⁻ colliders. Meanwhile, in our parametrization this channel is dominant. W bosons produced in decays of the fourth family neutrinos will be seen in the detector as either di-jets or isolated leptons. As an example, we consider the production of 200 GeV mass fourth family neutrinos at √s = 500 GeV linear colliders, taking into account di-muon plus four-jet events as signatures.
NASA Astrophysics Data System (ADS)
Nieto, P. J. García; del Coz Díaz, J. J.; Vilán, J. A. Vilán; Placer, C. Casqueiro
2009-08-01
In this paper, the distribution of air pressure due to the wind effect over laterally closed industrial buildings with curved metallic roofs is evaluated by the finite element method (FEM). The non-linearity is due to the Reynolds-averaged Navier-Stokes (RANS) equations that govern the turbulent flow. The Navier-Stokes equations are non-linear partial differential equations, and this non-linearity makes most problems difficult to solve and is part of the cause of turbulence. The RANS equations are time-averaged equations of motion for fluid flow, used primarily when dealing with turbulent flows. Turbulence is a highly complex physical phenomenon that is pervasive in flow problems of scientific and engineering concern like this one. In order to solve the RANS equations a two-equation model is used: the standard k-ɛ model. The calculation has been carried out under the following assumptions: turbulent flow, an exponential-like wind speed profile with a maximum velocity of 40 m/s at a 10 m reference height, and building heights ranging from 6 to 10 meters. Finally, the forces and moments on the roof cover are determined, as well as the distribution of pressures over it, and the numerical results are compared with the Spanish CTE DB SE-AE, Spanish NBE AE-88 and European standard rules, leading to the conclusions presented in the study.
Xu, Yetong; Zeng, Zhikai; Xu, Xiao; Tian, Qiyu; Ma, Xiaokang; Long, Shenfei; Piao, Meijing; Cheng, Zhibin; Piao, Xiangshu
2017-08-01
To determine the effects of standardized ileal digestible (SID) valine : lysine ratio on the performance, milk composition and plasma indices of lactating sows, 32 Large White × Landrace sows (219.78 ± 7.15 kg body weight; parity 1.82 ± 0.62) were allotted to one of four dietary treatments with eight sows per treatment based on parity, back fat thickness and body weight. The sows were fed corn-soybean meal-based diets containing 63, 83, 103 or 123% SID valine : lysine from day 107 of gestation until day 28 of lactation. The average daily feed intake of sows and daily weight gain of piglets increased linearly (P < 0.05) while back fat loss decreased linearly (P < 0.05) as the SID valine : lysine ratio increased. All of the analyzed amino acids in sow colostrum and valine concentrations of sow and piglet plasma increased linearly (P < 0.05) with the increasing SID valine : lysine ratio. In conclusion, based on a linear break-point model, dietary SID valine : lysine ratios of 88 and 113% were optimal to achieve minimum back fat loss and maximum piglet growth rate, respectively; both exceed the requirement of 85% estimated by the National Research Council (2012). © 2016 Japanese Society of Animal Science.
Quantum algorithm for linear regression
NASA Astrophysics Data System (ADS)
Wang, Guoming
2017-07-01
We present a quantum algorithm for fitting a linear regression model to a given data set using the least-squares approach. Unlike previous algorithms, which yield a quantum state encoding the optimal parameters, our algorithm outputs these numbers in classical form. So by running it once, one completely determines the fitted model and then can use it to make predictions on new data at little cost. Moreover, our algorithm works in the standard oracle model, and can handle data sets with nonsparse design matrices. It runs in time poly(log₂(N), d, κ, 1/ɛ), where N is the size of the data set, d is the number of adjustable parameters, κ is the condition number of the design matrix, and ɛ is the desired precision in the output. We also show that the polynomial dependence on d and κ is necessary. Thus, our algorithm cannot be significantly improved. Furthermore, we also give a quantum algorithm that estimates the quality of the least-squares fit (without computing its parameters explicitly). This algorithm runs faster than the one for finding this fit, and can be used to check whether the given data set qualifies for linear regression in the first place.
Log-Linear Model Based Behavior Selection Method for Artificial Fish Swarm Algorithm
Huang, Zhehuang; Chen, Yidong
2015-01-01
Artificial fish swarm algorithm (AFSA) is a population-based optimization technique inspired by the social behavior of fishes. In the past several years, AFSA has been successfully applied in many research and application areas. The behavior of the fishes has a crucial impact on the performance of AFSA, such as its global exploration ability and convergence speed. How to construct and select the behaviors of the fishes is therefore an important task. To solve these problems, an improved artificial fish swarm algorithm based on a log-linear model is proposed and implemented in this paper. There are three main contributions. First, we propose a new behavior selection algorithm based on a log-linear model, which enhances the decision-making ability of behavior selection. Second, an adaptive movement behavior based on adaptive weights is presented, which can adjust dynamically according to the diversity of the fishes. Finally, some new behaviors are defined and introduced into the artificial fish swarm algorithm for the first time to improve its global optimization capability. Experiments on high-dimensional function optimization showed that the improved algorithm has more powerful global exploration ability and reasonable convergence speed compared with the standard artificial fish swarm algorithm. PMID:25691895
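A log-linear behavior selector of the general kind described above can be sketched as a softmax over weighted state features; the behaviors, features and weights below are hypothetical placeholders, not the paper's actual formulation.

```python
import numpy as np

def select_behavior(features, weights, rng):
    """Log-linear (softmax) behavior selection: the probability of picking
    behavior k is proportional to exp(w_k . f), where f is a feature vector
    describing the current state of the fish (e.g. local crowding, candidate
    fitness improvement)."""
    scores = weights @ features                 # one score per behavior
    probs = np.exp(scores - scores.max())       # subtract max for stability
    probs /= probs.sum()
    return rng.choice(len(probs), p=probs), probs

rng = np.random.default_rng(1)
# Hypothetical setup: 4 behaviors (prey, swarm, follow, random), 3 state features.
weights = rng.normal(size=(4, 3))
state = np.array([0.2, 0.7, 0.1])
behavior, probs = select_behavior(state, weights, rng)
print(behavior, probs)
```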
Dynamic Response and Residual Helmet Liner Crush Using Cadaver Heads and Standard Headforms.
Bonin, S J; Luck, J F; Bass, C R; Gardiner, J C; Onar-Thomas, A; Asfour, S S; Siegmund, G P
2017-03-01
Biomechanical headforms are used for helmet certification testing and reconstructing helmeted head impacts; however, their biofidelity and direct applicability to human head and helmet responses remain unclear. Dynamic responses of cadaver heads and three headforms and residual foam liner deformations were compared during motorcycle helmet impacts. Instrumented, helmeted heads/headforms were dropped onto the forehead region against an instrumented flat anvil at 75, 150, and 195 J. Helmets were CT scanned to quantify maximum liner crush depth and crush volume. General linear models were used to quantify the effect of head type and impact energy on linear acceleration, head injury criterion (HIC), force, maximum liner crush depth, and liner crush volume, and regression models were used to quantify the relationship between acceleration and both maximum crush depth and crush volume. The cadaver heads generated larger peak accelerations than all three headforms, larger HICs than the International Organization for Standardization (ISO) headform, larger forces than the Hybrid III and ISO headforms, larger maximum crush depths than the ISO headform, and larger crush volumes than the DOT headform. These significant differences between the cadaver heads and headforms need to be accounted for when attempting to estimate an impact exposure using a helmet's residual crush depth or volume.
ERIC Educational Resources Information Center
British Standards Institution, London (England).
To promote interchangeability of teaching machines and programs, so that the user is not so limited in his choice of programs, the British Standards Institute has offered a standard. Part I of the standard deals with linear teaching machines and programs that make use of the roll or sheet methods of presentation. Requirements cover: spools,…
On the equivalence of case-crossover and time series methods in environmental epidemiology.
Lu, Yun; Zeger, Scott L
2007-04-01
The case-crossover design was introduced in epidemiology 15 years ago as a method for studying the effects of a risk factor on a health event using only cases. The idea is to compare a case's exposure immediately prior to or during the case-defining event with that same person's exposure at otherwise similar "reference" times. An alternative approach to the analysis of daily exposure and case-only data is time series analysis. Here, log-linear regression models express the expected total number of events on each day as a function of the exposure level and potential confounding variables. In time series analyses of air pollution, smooth functions of time and weather are the main confounders. Time series and case-crossover methods are often viewed as competing methods. In this paper, we show that case-crossover using conditional logistic regression is a special case of time series analysis when there is a common exposure such as in air pollution studies. This equivalence provides computational convenience for case-crossover analyses and a better understanding of time series models. Time series log-linear regression accounts for overdispersion of the Poisson variance, while case-crossover analyses typically do not. This equivalence also permits model checking for case-crossover data using standard log-linear model diagnostics.
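The time-series side of this equivalence is a Poisson log-linear regression of daily counts on exposure and smooth confounders; the sketch below (simulated data, statsmodels) also shows the quasi-Poisson scale estimate that accounts for the overdispersion mentioned above.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
days = 365
pollution = rng.gamma(2.0, 10.0, days)                 # hypothetical daily exposure
season = np.sin(2 * np.pi * np.arange(days) / 365)     # crude smooth function of time
mu = np.exp(0.5 + 0.01 * pollution + 0.3 * season)
counts = rng.poisson(mu)                                # daily event counts

X = sm.add_constant(np.column_stack([pollution, season]))
# Quasi-Poisson: same log-linear mean model, but the scale is estimated from
# the Pearson chi-square so that standard errors reflect overdispersion --
# the adjustment the abstract notes is typically missing from case-crossover fits.
fit = sm.GLM(counts, X, family=sm.families.Poisson()).fit(scale="X2")
print(fit.params, fit.bse)
```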
Unscented Kalman Filter for Brain-Machine Interfaces
Li, Zheng; O'Doherty, Joseph E.; Hanson, Timothy L.; Lebedev, Mikhail A.; Henriquez, Craig S.; Nicolelis, Miguel A. L.
2009-01-01
Brain machine interfaces (BMIs) are devices that convert neural signals into commands to directly control artificial actuators, such as limb prostheses. Previous real-time methods applied to decoding behavioral commands from the activity of populations of neurons have generally relied upon linear models of neural tuning and were limited in the way they used the abundant statistical information contained in the movement profiles of motor tasks. Here, we propose an n-th order unscented Kalman filter which implements two key features: (1) use of a non-linear (quadratic) model of neural tuning which describes neural activity significantly better than commonly-used linear tuning models, and (2) augmentation of the movement state variables with a history of n-1 recent states, which improves prediction of the desired command even before incorporating neural activity information and allows the tuning model to capture relationships between neural activity and movement at multiple time offsets simultaneously. This new filter was tested in BMI experiments in which rhesus monkeys used their cortical activity, recorded through chronically implanted multielectrode arrays, to directly control computer cursors. The 10th order unscented Kalman filter outperformed the standard Kalman filter and the Wiener filter in both off-line reconstruction of movement trajectories and real-time, closed-loop BMI operation. PMID:19603074
Linearity-Preserving Limiters on Irregular Grids
NASA Technical Reports Server (NTRS)
Berger, Marsha; Aftosmis, Michael; Murman, Scott
2004-01-01
This paper examines the behavior of flux and slope limiters on non-uniform grids in multiple dimensions. We note that on non-uniform grids the scalar formulation in standard use today sacrifices k-exactness, even for linear solutions, impacting both accuracy and convergence. We rewrite some well-known limiters in a new way to highlight their underlying symmetry, and use this to examine both traditional and novel limiter formulations. A consistent method of handling stretched meshes is developed, as is a new directional formulation in multiple dimensions for irregular grids. Results are presented demonstrating improved accuracy and convergence using a combination of model problems and complex three-dimensional examples.
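As a point of reference for the limiting idea, the sketch below applies the classic minmod limiter to one-sided slopes on a stretched 1-D grid; for linear data the two slopes coincide and the exact gradient is returned, which is the linearity-preserving behavior the paper seeks to retain in more general scalar and multidimensional formulations. This is a generic illustration, not the new formulation proposed here.

```python
import numpy as np

def minmod(a, b):
    """Classic minmod limiter: zero at extrema (sign change), otherwise the
    smaller-magnitude slope, which keeps the reconstruction monotone."""
    return np.where(a * b > 0.0, np.where(np.abs(a) < np.abs(b), a, b), 0.0)

def limited_slopes(u, x):
    """Limited cell slopes on a (possibly stretched) 1-D grid from one-sided
    divided differences."""
    left = (u[1:-1] - u[:-2]) / (x[1:-1] - x[:-2])
    right = (u[2:] - u[1:-1]) / (x[2:] - x[1:-1])
    return minmod(left, right)

# Linear data on a stretched grid: both one-sided slopes equal the exact
# gradient, so minmod returns 2 everywhere.
x = np.cumsum(np.r_[0.0, np.linspace(0.1, 1.0, 20)])
u = 2.0 * x + 1.0
print(limited_slopes(u, x))
```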
NASA Astrophysics Data System (ADS)
Tiunov, V. V.
2018-02-01
The report provides results of research related to the application of tubular linear induction motors. The motors’ design features, a calculation model, and a description of test specimens for the mining and electric power industries are introduced. Most attention is given to single-phase motors for high-voltage switch drives that use inexpensive standard single-phase transformers for the motors’ power supply. The method for determining the motor’s parameters when the motor is fed from a transformer working in the overload mode is described, and the results of its practical use were good enough for engineering practice.
Arnold, Suzanne V.; Masoudi, Frederick A.; Rumsfeld, John S.; Li, Yan; Jones, Philip G.; Spertus, John A.
2014-01-01
Background: Before outcomes-based measures of quality can be used to compare and improve care, they must be risk-standardized to account for variations in patient characteristics. Despite the importance of health-related quality of life (HRQL) outcomes among patients with acute myocardial infarction (AMI), no risk-standardized models have been developed. Methods and Results: We assessed disease-specific HRQL using the Seattle Angina Questionnaire at baseline and 1 year later in 2693 unselected AMI patients from 24 hospitals enrolled in the TRIUMPH registry. Using 57 candidate sociodemographic, economic, and clinical variables present on admission, we developed a parsimonious, hierarchical linear regression model to predict HRQL. Eleven variables were independently associated with poor HRQL after AMI, including younger age, prior CABG, depressive symptoms, and financial difficulties (R² = 20%). The model demonstrated excellent internal calibration and reasonable calibration in an independent sample of 1890 AMI patients in a separate registry, although the model slightly over-predicted HRQL scores in the higher deciles. Among the 24 TRIUMPH hospitals, 1-year unadjusted HRQL scores ranged from 67 to 89. After risk-standardization, the variability of HRQL scores narrowed substantially (range = 79–83), and the hospital performance group (bottom 20%/middle 60%/top 20%) changed for 14 of the 24 hospitals (58% reclassification with risk-standardization). Conclusions: In this predictive model for HRQL after AMI, we identified risk factors, including economic and psychological characteristics, associated with HRQL outcomes. Adjusting for these factors substantially altered the rankings of hospitals as compared with unadjusted comparisons. Using this model to compare risk-standardized HRQL outcomes across hospitals may identify processes of care that maximize this important patient-centered outcome. PMID:24163068
NASA Astrophysics Data System (ADS)
Müller-Schauenburg, Wolfgang; Reimold, Matthias
Positron emission tomography (PET) is a well-established technique that allows imaging and quantification of tissue properties in vivo. The goal of pharmacokinetic modelling is to estimate physiological parameters, e.g. perfusion or receptor density, from the measured time course of a radiotracer. After a brief overview of the clinical applications of PET, we summarize the fundamentals of modelling: distribution volume, Fick's principle of local balancing, extraction and perfusion, and how to calculate equilibrium data from measurements after bolus injection. Three fundamental models are considered: (i) the 1-tissue compartment model, e.g. for regional cerebral blood flow (rCBF) with the short-lived tracer [15O]water; (ii) the 2-tissue compartment model accounting for trapping (one exponential + constant), e.g. for glucose metabolism with [18F]FDG; and (iii) the reversible 2-tissue compartment model (two exponentials), e.g. for receptor binding. Arterial blood sampling is required for classical PET modelling, but can often be avoided by comparing regions with specific binding to so-called reference regions with negligible specific uptake, e.g. in receptor imaging. To estimate the model parameters, non-linear least-squares fits are the standard. Various linearizations have been proposed for rapid parameter estimation, e.g. on a pixel-by-pixel basis, at the price of a bias. Such linear approaches exist for all three models, e.g. the Patlak plot for trapping substances like FDG, and the Logan plot to obtain distribution volumes for reversibly binding tracers. The description of receptor modelling is dedicated to the approaches of the subsequent lecture (chapter) by Millet, who works in the tradition of Delforge with multiple-injection investigations.
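For orientation, the first two models can be written compactly as follows; these are standard textbook relations (one-tissue kinetics and the Patlak linearization for trapped tracers), not a derivation specific to this chapter.

```latex
% One-tissue compartment model (e.g. [15O]water rCBF): tissue concentration
% C_T follows the arterial input C_p with uptake K_1 and clearance k_2; the
% solution is a decaying exponential convolved with the input.
\begin{align}
  \frac{dC_T(t)}{dt} &= K_1\,C_p(t) - k_2\,C_T(t),
  &
  C_T(t) &= K_1\, e^{-k_2 t} \otimes C_p(t).
\end{align}
% Patlak plot for irreversibly trapped tracers such as [18F]FDG: after an
% equilibration time the transformed data become linear with slope K_i.
\begin{equation}
  \frac{C_T(t)}{C_p(t)} \;=\; K_i\,\frac{\int_0^t C_p(s)\,\mathrm{d}s}{C_p(t)} \;+\; V_0 .
\end{equation}
```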
NASA Astrophysics Data System (ADS)
Alam, N. M.; Sharma, G. C.; Moreira, Elsa; Jana, C.; Mishra, P. K.; Sharma, N. K.; Mandal, D.
2017-08-01
Markov chain and 3-dimensional log-linear models were applied to model drought class transitions derived from the newly developed drought index, the Standardized Precipitation Evapotranspiration Index (SPEI), at a 12-month time scale for six major drought-prone areas of India. A log-linear modelling approach has been used to investigate differences in drought class transitions using SPEI-12 time series derived from 48 years of monthly rainfall and temperature data. In this study, the probabilities of drought class transition, the mean residence time, the 1-, 2- or 3-months-ahead prediction of average transition time between drought classes, and the drought severity class have been derived. Seasonality of precipitation has been derived for non-homogeneous Markov chains, which could be used to explain the effect of the potential retreat of drought. Quasi-association and quasi-symmetry log-linear models have been fitted to the drought class transitions derived from the SPEI-12 time series. The estimates of odds along with their confidence intervals were obtained to explain the progression of drought and the estimation of drought class transition probabilities. For the initial months, the calculated odds are lower as the drought severity increases, and the odds decrease further for the succeeding months. This indicates that the ratio of the expected frequency of transition from a drought class to the non-drought class, compared with transition to any drought class, decreases as the drought severity of the present class increases. From the 3-dimensional log-linear model it is clear that during the last 24 years the drought probability has increased for almost all six regions. The findings from the present study will greatly help in assessing the impact of drought on gross primary production and in developing future contingency planning in similar regions worldwide.
A flexible count data regression model for risk analysis.
Guikema, Seth D; Coffelt, Jeremy P; Goffelt, Jeremy P
2008-02-01
In many cases, risk and reliability analyses involve estimating the probabilities of discrete events such as hardware failures and occurrences of disease or death. There is often additional information in the form of explanatory variables that can be used to help estimate the likelihood of different numbers of events in the future through the use of an appropriate regression model, such as a generalized linear model. However, existing generalized linear models (GLM) are limited in their ability to handle the types of variance structures often encountered when using count data in risk and reliability analysis. In particular, standard models cannot handle both underdispersed data (variance less than the mean) and overdispersed data (variance greater than the mean) in a single coherent modeling framework. This article presents a new GLM based on a reformulation of the Conway-Maxwell Poisson (COM) distribution that is useful for both underdispersed and overdispersed count data and demonstrates this model by applying it to the assessment of electric power system reliability. The results show that the proposed COM GLM can fit overdispersed data sets as well as the commonly used existing models while outperforming those models for underdispersed data sets.
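For reference, the Conway-Maxwell-Poisson probability mass function underlying the proposed GLM can be written as below; this is the standard form of the distribution, with the GLM-specific reparameterization of the paper omitted.

```latex
% Conway-Maxwell-Poisson pmf: the extra parameter \nu controls dispersion,
% with \nu = 1 recovering the Poisson, \nu > 1 underdispersion and
% \nu < 1 overdispersion.
\begin{equation}
  P(Y = y) \;=\; \frac{\lambda^{y}}{(y!)^{\nu}\, Z(\lambda,\nu)},
  \qquad
  Z(\lambda,\nu) \;=\; \sum_{j=0}^{\infty} \frac{\lambda^{j}}{(j!)^{\nu}},
  \qquad y = 0,1,2,\dots
\end{equation}
```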
Haem, Elham; Harling, Kajsa; Ayatollahi, Seyyed Mohammad Taghi; Zare, Najaf; Karlsson, Mats O
2017-02-01
One important aim in population pharmacokinetics (PK) and pharmacodynamics is identification and quantification of the relationships between the parameters and covariates. Lasso has been suggested as a technique for simultaneous estimation and covariate selection. In linear regression, it has been shown that Lasso does not possess the oracle property, by which an estimator asymptotically performs as though the true underlying model had been given in advance. Adaptive Lasso (ALasso) with appropriate initial weights is claimed to possess oracle properties; however, it can lead to poor predictive performance when there is multicollinearity between covariates. This simulation study implemented a new version of ALasso, called adjusted ALasso (AALasso), which uses the ratio of the standard error of the maximum likelihood (ML) estimator to the ML coefficient as the initial weight in ALasso, to deal with multicollinearity in non-linear mixed-effect models. The performance of AALasso was compared with that of ALasso and Lasso. PK data were simulated in four set-ups from a one-compartment bolus input model. Covariates were created by sampling from a multivariate standard normal distribution with no, low (0.2), moderate (0.5) or high (0.7) correlation. The true covariates influenced only clearance at different magnitudes. AALasso, ALasso and Lasso were compared in terms of mean absolute prediction error and error of the estimated covariate coefficient. The results show that AALasso performed better in small data sets, even in those in which a high correlation existed between covariates. This makes AALasso a promising method for covariate selection in nonlinear mixed-effect models.
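The rescaling trick that turns a weighted (adaptive) lasso into a standard lasso problem can be sketched as follows; the weights follow the described idea of SE(β̂)/|β̂| from an initial unpenalized fit, but the linear-regression demo below is only an illustration, not the nonlinear mixed-effects implementation of the study.

```python
import numpy as np
from sklearn.linear_model import Lasso

def adaptive_lasso(X, y, weights, alpha=0.1):
    """Adaptive lasso via rescaling: penalizing w_j * |beta_j| is equivalent
    to a standard lasso on X_j / w_j followed by rescaling the coefficients."""
    Xw = X / weights                      # column-wise rescaling
    fit = Lasso(alpha=alpha, max_iter=10000).fit(Xw, y)
    return fit.coef_ / weights

rng = np.random.default_rng(3)
n, p = 200, 6
X = rng.normal(size=(n, p))
beta_true = np.array([1.5, 0.0, 0.0, -2.0, 0.0, 0.0])
y = X @ beta_true + rng.normal(size=n)

# Unpenalized (here: OLS) fit supplies the AALasso-style initial weights,
# SE(beta_ML) / |beta_ML|.
beta_ml, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta_ml
se_ml = np.sqrt(resid.var(ddof=p) * np.diag(np.linalg.inv(X.T @ X)))
weights = se_ml / np.abs(beta_ml)

print(adaptive_lasso(X, y, weights))      # roughly recovers [1.5, 0, 0, -2, 0, 0]
```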
SU-E-T-186: Cloud-Based Quality Assurance Application for Linear Accelerator Commissioning
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rogers, J
2015-06-15
Purpose: To identify anomalies and safety issues during data collection and modeling for treatment planning systems. Methods: A cloud-based quality assurance system (AQUIRE - Automated QUalIty REassurance) has been developed to allow the uploading and analysis of beam data acquired during the treatment planning system commissioning process. In addition to comparing and aggregating measured data, tools have also been developed to extract dose from the treatment planning system for end-to-end testing. A gamma index is computed on the data to give a dose difference and distance-to-agreement for validation that a beam model is generating plans consistent with the beam data collection. Results: Over 20 linear accelerators have been commissioned using this platform, and a variety of errors and potential safety issues have been caught through the validation process. For example, a gamma index of 2% dose, 2 mm DTA is quite sufficient to reveal curves not corrected for the effective point of measurement. Also, data imported into the database are analyzed against an aggregate of similar linear accelerators to show data points that are outliers. The resulting curves in the database exhibit a very small standard deviation and imply that a preconfigured beam model based on aggregated linear accelerators will be sufficient in most cases. Conclusion: With the use of this new platform for beam data commissioning, errors in beam data collection and treatment planning system modeling are greatly reduced. With the reduction in errors during acquisition, the resulting beam models are quite similar, suggesting that a common beam model may be possible in the future. Development is ongoing to create routine quality assurance tools for comparison against the beam data acquired during commissioning. I am a medical physicist for Alzyen Medical Physics, and perform commissioning services.
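A bare-bones version of the gamma comparison mentioned in the results might look like the following 1-D global gamma index; the depth-dose curves are synthetic and the implementation is a generic sketch, not the AQUIRE platform's code.

```python
import numpy as np

def gamma_1d(x_ref, d_ref, x_eval, d_eval, dta_mm=2.0, dd_frac=0.02):
    """Simple global 1-D gamma index (2%/2 mm by default): for every reference
    point, search the evaluated curve for the smallest combined
    dose-difference / distance-to-agreement metric."""
    d_norm = dd_frac * d_ref.max()          # global dose-difference criterion
    gamma = np.empty_like(d_ref)
    for i, (xr, dr) in enumerate(zip(x_ref, d_ref)):
        dist = (x_eval - xr) / dta_mm
        dose = (d_eval - dr) / d_norm
        gamma[i] = np.sqrt(np.min(dist**2 + dose**2))
    return gamma

# Hypothetical depth-dose curves: the "evaluated" curve is shifted by 1 mm,
# mimicking a depth-ionization curve not corrected for the effective point
# of measurement.
depth = np.linspace(0, 300, 601)                     # mm
pdd_ref = 100 * np.exp(-depth / 120.0)
pdd_eval = 100 * np.exp(-(depth - 1.0) / 120.0)
g = gamma_1d(depth, pdd_ref, depth, pdd_eval)
print(g.max(), (g > 1).mean())                       # max gamma, failing fraction
```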
Vaeth, Michael; Skovlund, Eva
2004-06-15
For a given regression problem it is possible to identify a suitably defined equivalent two-sample problem such that the power or sample size obtained for the two-sample problem also applies to the regression problem. For a standard linear regression model the equivalent two-sample problem is easily identified, but for generalized linear models and for Cox regression models the situation is more complicated. An approximately equivalent two-sample problem may, however, also be identified here. In particular, we show that for logistic regression and Cox regression models the equivalent two-sample problem is obtained by selecting two equally sized samples for which the parameters differ by a value equal to the slope times twice the standard deviation of the independent variable and further requiring that the overall expected number of events is unchanged. In a simulation study we examine the validity of this approach to power calculations in logistic regression and Cox regression models. Several different covariate distributions are considered for selected values of the overall response probability and a range of alternatives. For the Cox regression model we consider both constant and non-constant hazard rates. The results show that in general the approach is remarkably accurate even in relatively small samples. Some discrepancies are, however, found in small samples with few events and a highly skewed covariate distribution. Comparison with results based on alternative methods for logistic regression models with a single continuous covariate indicates that the proposed method is at least as good as its competitors. The method is easy to implement and therefore provides a simple way to extend the range of problems that can be covered by the usual formulas for power and sample size determination. Copyright 2004 John Wiley & Sons, Ltd.
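A rough reading of this recipe for the logistic case can be sketched as follows: translate the slope into a two-group difference in log-odds of 2βσ_x around the overall event probability, then apply an ordinary two-proportion power formula; the symmetric split used below is an approximation introduced purely for illustration.

```python
import numpy as np
from scipy.stats import norm
from scipy.special import logit, expit

def logistic_power(beta, sd_x, p_bar, n_total, alpha=0.05):
    """Approximate power for a single continuous covariate in logistic
    regression via an equivalent two-sample problem: two equal groups whose
    log-odds differ by beta * 2 * sd_x, keeping the overall event probability
    roughly at p_bar.  This is a loose sketch, not the authors' exact recipe."""
    delta = beta * 2.0 * sd_x
    p1 = expit(logit(p_bar) - delta / 2.0)
    p2 = expit(logit(p_bar) + delta / 2.0)
    n = n_total / 2.0                                  # subjects per "group"
    se = np.sqrt(p1 * (1 - p1) / n + p2 * (1 - p2) / n)
    z_alpha = norm.ppf(1 - alpha / 2)
    return norm.cdf(abs(p1 - p2) / se - z_alpha)

# e.g. odds ratio of 1.5 per standard deviation of x, 20% overall event rate
print(logistic_power(beta=np.log(1.5), sd_x=1.0, p_bar=0.2, n_total=400))
```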
Investigation of interaction femtosecond laser pulses with skin and eyes mathematical model
NASA Astrophysics Data System (ADS)
Rogov, P. U.; Smirnov, S. V.; Semenova, V. A.; Melnik, M. V.; Bespalov, V. G.
2016-08-01
We present a mathematical model of the linear and nonlinear processes that take place under the action of femtosecond laser radiation on the cutaneous covering. An analytical solution of the set of equations describing the dynamics of the electron and atomic subsystems is obtained. We also investigate the processes of linear and nonlinear interaction of femtosecond laser pulses in the vitreous of the human eye, reveal the dependence of the pulse duration at the retina on the duration of the input pulse, and find the value of the radiation power density at which self-focusing occurs. The results of the work can be used to determine the maximum acceptable energy generated by femtosecond laser systems and to develop Russian laser safety standards for femtosecond laser systems.
Tsujimura, Akira; Hiramatsu, Ippei; Aoki, Yusuke; Shimoyama, Hirofumi; Mizuno, Taiki; Nozaki, Taiji; Shirai, Masato; Kobayashi, Kazuhiro; Kumamoto, Yoshiaki; Horie, Shigeo
2017-06-01
Atherosclerosis is a systemic disease in which plaque builds up inside the arteries, which can lead to serious problems related to quality of life (QOL). Lower urinary tract symptoms (LUTS), erectile dysfunction (ED), and late-onset hypogonadism (LOH) are highly prevalent in aging men and are significantly associated with a reduced QOL. However, few questionnaire-based studies have fully examined the relation between atherosclerosis and several urological symptoms. The study comprised 303 outpatients who visited our clinic with symptoms of LOH. Several factors influencing atherosclerosis, including serum concentrations of triglyceride, fasting blood sugar, and total testosterone measured by radioimmunoassay, were investigated. We also measured brachial-ankle pulse wave velocity (baPWV) and assessed symptoms by specific questionnaires, including the Sexual Health Inventory for Men (SHIM), Erection Hardness Score (EHS), International Prostate Symptom Score (IPSS), QOL index, and Aging Male Symptoms rating scale (AMS). Stepwise associations between the ratio of measured/age standard baPWV and clinical factors, including laboratory data and questionnaire scores, were compared using the Jonckheere-Terpstra test for trend. The associations between the ratio of measured/age standard baPWV and each IPSS score were assessed in a multivariate linear regression model after adjustment for serum triglyceride, fasting blood sugar, and total testosterone. Regarding ED, a higher level of the ratio of measured/age standard baPWV was associated with a lower EHS, whereas no association was found with SHIM. Regarding LUTS, a higher ratio of measured/age standard baPWV was associated with a higher IPSS and QOL index. However, there was no statistically significant association between the ratio of measured/age standard baPWV and AMS. A multivariate linear regression model showed only nocturia to be associated with the ratio of measured/age standard baPWV for each IPSS score. Atherosclerosis is associated with erectile function and LUTS, especially nocturia.
Palmer, Tom M; Holmes, Michael V; Keating, Brendan J; Sheehan, Nuala A
2017-01-01
Mendelian randomization studies use genotypes as instrumental variables to test for and estimate the causal effects of modifiable risk factors on outcomes. Two-stage residual inclusion (TSRI) estimators have been used when researchers are willing to make parametric assumptions. However, researchers are currently reporting uncorrected or heteroscedasticity-robust standard errors for these estimates. We compared several different forms of the standard error for linear and logistic TSRI estimates in simulations and in real-data examples. Among others, we consider standard errors modified from the approach of Newey (1987), Terza (2016), and bootstrapping. In our simulations Newey, Terza, bootstrap, and corrected 2-stage least squares (in the linear case) standard errors gave the best results in terms of coverage and type I error. In the real-data examples, the Newey standard errors were 0.5% and 2% larger than the unadjusted standard errors for the linear and logistic TSRI estimators, respectively. We show that TSRI estimators with modified standard errors have correct type I error under the null. Researchers should report TSRI estimates with modified standard errors instead of reporting unadjusted or heteroscedasticity-robust standard errors. PMID:29106476
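A minimal sketch of the TSRI estimator with a bootstrap standard error (one of the variants that performed well here) is given below for the linear case; the simulated genotype, exposure and outcome are placeholders, and in this linear setting the point estimate coincides with two-stage least squares.

```python
import numpy as np

def tsri_linear(y, x, z):
    """Two-stage residual inclusion with a continuous outcome: stage 1
    regresses the exposure x on the instrument z, stage 2 regresses y on x
    plus the stage-1 residual; the coefficient on x is the causal estimate."""
    Z = np.column_stack([np.ones_like(z), z])
    g, *_ = np.linalg.lstsq(Z, x, rcond=None)
    r1 = x - Z @ g
    X2 = np.column_stack([np.ones_like(x), x, r1])
    b, *_ = np.linalg.lstsq(X2, y, rcond=None)
    return b[1]

def bootstrap_se(y, x, z, n_boot=500, seed=0):
    """Bootstrap both stages together so the standard error reflects the
    uncertainty carried over from the first stage."""
    rng = np.random.default_rng(seed)
    n = len(y)
    est = [tsri_linear(*(a[idx] for a in (y, x, z)))
           for idx in (rng.integers(0, n, n) for _ in range(n_boot))]
    return np.std(est, ddof=1)

rng = np.random.default_rng(4)
n = 2000
z = rng.integers(0, 3, n).astype(float)        # genotype coded 0/1/2
u = rng.normal(size=n)                         # unmeasured confounder
x = 0.4 * z + u + rng.normal(size=n)           # exposure
y = 0.5 * x + u + rng.normal(size=n)           # outcome, true effect 0.5
print(tsri_linear(y, x, z), bootstrap_se(y, x, z))
```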
NASA Astrophysics Data System (ADS)
Lin, Y.; O'Malley, D.; Vesselinov, V. V.
2015-12-01
Inverse modeling seeks model parameters given a set of observed state variables. However, for many practical problems, because the observed data sets are often large and the model parameters are numerous, conventional methods for solving the inverse problem can be computationally expensive. We have developed a new, computationally efficient Levenberg-Marquardt method for solving large-scale inverse modeling problems. Levenberg-Marquardt methods require the solution of a dense linear system of equations, which can be prohibitively expensive to compute for large-scale inverse problems. Our novel method projects the original large-scale linear problem down to a Krylov subspace, such that the dimensionality of the measurements can be significantly reduced. Furthermore, instead of solving the linear system for every Levenberg-Marquardt damping parameter, we store the Krylov subspace computed when solving for the first damping parameter and recycle it for all the following damping parameters. The efficiency of our new inverse modeling algorithm is significantly improved by using these computational techniques. We apply this new inverse modeling method to invert for a random transmissivity field. Our algorithm is fast enough to solve for the distributed model parameters (transmissivity) at each computational node in the model domain. The inversion is also aided by the use of regularization techniques. The algorithm is coded in Julia and implemented in the MADS computational framework (http://mads.lanl.gov). Julia is an advanced high-level scientific programming language that allows for efficient memory management and utilization of high-performance computational resources. Compared with a Levenberg-Marquardt method using standard linear inversion techniques, our Levenberg-Marquardt method yields a speed-up ratio of 15 in a multi-core computational environment and a speed-up ratio of 45 in a single-core computational environment. Therefore, our new inverse modeling method is a powerful tool for large-scale applications.
Development of quantitative screen for 1550 chemicals with GC-MS.
Bergmann, Alan J; Points, Gary L; Scott, Richard P; Wilson, Glenn; Anderson, Kim A
2018-05-01
With hundreds of thousands of chemicals in the environment, effective monitoring requires high-throughput analytical techniques. This paper presents a quantitative screening method for 1550 chemicals based on statistical modeling of responses with identification and integration performed using deconvolution reporting software. The method was evaluated with representative environmental samples. We tested biological extracts, low-density polyethylene, and silicone passive sampling devices spiked with known concentrations of 196 representative chemicals. A multiple linear regression (R² = 0.80) was developed with molecular weight, logP, polar surface area, and fractional ion abundance to predict chemical responses within a factor of 2.5. Linearity beyond the calibration had R² > 0.97 for three orders of magnitude. Median limits of quantitation were estimated to be 201 pg/μL (1.9× standard deviation). The number of detected chemicals and the accuracy of quantitation were similar for environmental samples and standard solutions. To our knowledge, this is the most precise method for the largest number of semi-volatile organic chemicals lacking authentic standards. Accessible instrumentation and software make this method cost effective in quantifying a large, customizable list of chemicals. When paired with silicone wristband passive samplers, this quantitative screen will be very useful for epidemiology where binning of concentrations is common. Graphical abstract: A multiple linear regression of chemical responses measured with GC-MS allowed quantitation of 1550 chemicals in samples such as silicone wristbands.
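The response-prediction step can be illustrated with an ordinary multiple linear regression on the four descriptors named above; everything in the snippet (descriptor ranges, coefficients, responses) is synthetic and merely stands in for the calibration table of 196 chemicals.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(5)

# Hypothetical training table: one row per calibration chemical, with the four
# descriptors named in the abstract and a measured (log) instrument response.
n = 196
descriptors = np.column_stack([
    rng.uniform(100, 500, n),      # molecular weight
    rng.uniform(0, 8, n),          # logP
    rng.uniform(0, 150, n),        # polar surface area
    rng.uniform(0.05, 0.6, n),     # fractional ion abundance
])
log_response = (0.004 * descriptors[:, 0] + 0.2 * descriptors[:, 1]
                - 0.01 * descriptors[:, 2] + 2.0 * descriptors[:, 3]
                + rng.normal(scale=0.3, size=n))

model = LinearRegression().fit(descriptors, log_response)
print(model.score(descriptors, log_response))       # analogous to the reported R^2

# A chemical without an authentic standard gets a predicted response factor,
# which is then used to convert its measured peak area into a concentration.
new_chem = np.array([[300.0, 3.5, 40.0, 0.3]])
predicted_log_response = model.predict(new_chem)
```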
Evaluating a linearized Euler equations model for strong turbulence effects on sound propagation.
Ehrhardt, Loïc; Cheinet, Sylvain; Juvé, Daniel; Blanc-Benon, Philippe
2013-04-01
Sound propagation outdoors is strongly affected by atmospheric turbulence. Under strongly perturbed conditions or long propagation paths, the sound fluctuations reach their asymptotic behavior, e.g., the intensity variance progressively saturates. The present study evaluates the ability of a numerical propagation model based on the finite-difference time-domain solving of the linearized Euler equations in quantitatively reproducing the wave statistics under strong and saturated intensity fluctuations. It is the continuation of a previous study where weak intensity fluctuations were considered. The numerical propagation model is presented and tested with two-dimensional harmonic sound propagation over long paths and strong atmospheric perturbations. The results are compared to quantitative theoretical or numerical predictions available on the wave statistics, including the log-amplitude variance and the probability density functions of the complex acoustic pressure. The match is excellent for the evaluated source frequencies and all sound fluctuations strengths. Hence, this model captures these many aspects of strong atmospheric turbulence effects on sound propagation. Finally, the model results for the intensity probability density function are compared with a standard fit by a generalized gamma function.
Generating Linear Equations Based on Quantitative Reasoning
ERIC Educational Resources Information Center
Lee, Mi Yeon
2017-01-01
The Common Core's Standards for Mathematical Practice encourage teachers to develop their students' ability to reason abstractly and quantitatively by helping students make sense of quantities and their relationships within problem situations. The seventh-grade content standards include objectives pertaining to developing linear equations in…
Column Chromatography To Obtain Organic Cation Sorption Isotherms.
Jolin, William C; Sullivan, James; Vasudevan, Dharni; MacKay, Allison A
2016-08-02
Column chromatography was evaluated as a method to obtain organic cation sorption isotherms for environmental solids while using the peak skewness to identify the linear range of the sorption isotherm. Custom packed HPLC columns and standard batch sorption techniques were used to intercompare sorption isotherms and solid-water sorption coefficients (Kd) for four organic cations (benzylamine, 2,4-dichlorobenzylamine, phenyltrimethylammonium, oxytetracycline) with two aluminosilicate clay minerals and one soil. A comparison of Freundlich isotherm parameters revealed that isotherm linearity or nonlinearity was not significantly different between column chromatography and traditional batch experiments. Importantly, skewness analysis (a metric of eluting peak symmetry) can establish isotherm linearity, thereby enabling a less labor-intensive means to generate the extensive data sets of linear Kd values required for the development of predictive sorption models. Our findings clearly show that column chromatography can reproduce sorption measures from conventional batch experiments with the benefits of lower labor intensity, faster analysis times, and consistent sorption measures across laboratories with distinct chromatography instrumentation.
Analysis of Slope Limiters on Irregular Grids
NASA Technical Reports Server (NTRS)
Berger, Marsha; Aftosmis, Michael J.
2005-01-01
This paper examines the behavior of flux and slope limiters on non-uniform grids in multiple dimensions. Many slope limiters in standard use do not preserve linear solutions on irregular grids, impacting both accuracy and convergence. We rewrite some well-known limiters to highlight their underlying symmetry, and use this form to examine the properties of both traditional and novel limiter formulations on non-uniform meshes. A consistent method of handling stretched meshes is developed which both preserves linearity for arbitrary mesh stretchings and reduces to common limiters on uniform meshes. In multiple dimensions we analyze the monotonicity region of the gradient vector and show that the multidimensional limiting problem may be cast as the solution of a linear programming problem. For some special cases we present a new directional limiting formulation that preserves linear solutions in multiple dimensions on irregular grids. Computational results using model problems and complex three-dimensional examples are presented, demonstrating accuracy, monotonicity and robustness.
NASA Astrophysics Data System (ADS)
Nasir, Rizal E. M.; Ali, Zurriati; Kuntjoro, Wahyu; Wisnoe, Wirachman
2012-06-01
Previous wind tunnel tests have demonstrated the improved aerodynamic characteristics of the Baseline-II E-2 Blended Wing-Body (BWB) aircraft studied at Universiti Teknologi Mara. The E-2 is a version of the Baseline-II BWB with a modified outer wing and a larger canard, designed solely to gain favourable longitudinal static stability during flight. This paper highlights some results from the current investigation of the said aircraft via computational fluid dynamics simulation as a means to validate the wind tunnel test results. The simulation is conducted using the standard one-equation Spalart-Allmaras turbulence model with a polyhedral mesh. The flight conditions of the simulation are set to match those of the wind tunnel test. The simulation shows lift, drag and moment results close to the values found in the wind tunnel test, but only within the range of angles of attack where the lift varies linearly. Beyond the linear region, clear differences between the computational simulation and the wind tunnel test results are observed. It is recommended that a different type of mathematical model be used to simulate flight conditions beyond the linear lift region.
Linear prediction and single-channel recording.
Carter, A A; Oswald, R E
1995-08-01
The measurement of individual single-channel events arising from the gating of ion channels provides a detailed data set from which the kinetic mechanism of a channel can be deduced. In many cases, the pattern of dwells in the open and closed states is very complex, and the kinetic mechanism and parameters are not easily determined. Assuming a Markov model for channel kinetics, the probability density function for open and closed time dwells should consist of a sum of decaying exponentials. One method of approaching the kinetic analysis of such a system is to determine the number of exponentials and the corresponding parameters which comprise the open and closed dwell time distributions. These can then be compared to the relaxations predicted from the kinetic model to determine, where possible, the kinetic constants. We report here the use of a linear technique, linear prediction/singular value decomposition, to determine the number of exponentials and the exponential parameters. Using simulated distributions and comparing with standard maximum-likelihood analysis, the singular value decomposition techniques provide advantages in some situations and are a useful adjunct to other single-channel analysis techniques.
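The core idea of linear prediction/singular value decomposition for counting exponential components can be sketched compactly. The example below is not the authors' implementation; it builds a Hankel matrix from a uniformly sampled, synthetic sum of two exponentials and counts the singular values above an assumed noise floor.

```python
# Sketch (not the authors' code): the number of exponential components in a uniformly
# sampled sum-of-exponentials signal equals the numerical rank of a Hankel matrix
# built from it, which SVD reveals as a gap in the singular-value spectrum.
import numpy as np
from scipy.linalg import hankel, svd

t = np.arange(0, 200) * 0.1                                   # sample times (arbitrary units)
signal = 0.7 * np.exp(-t / 2.0) + 0.3 * np.exp(-t / 15.0)     # two exponential components
signal += np.random.default_rng(0).normal(0, 1e-4, t.size)    # small measurement noise

m = signal.size // 2
H = hankel(signal[:m], signal[m - 1:])       # Hankel (linear prediction) matrix
s = svd(H, compute_uv=False)
n_components = int(np.sum(s > 1e-2 * s[0]))  # singular values above an assumed noise floor
print(s[:5] / s[0], "->", n_components, "exponentials")
```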
Dinç, Erdal; Ozdemir, Abdil
2005-01-01
A multivariate chromatographic calibration technique was developed for the quantitative analysis of binary mixtures of enalapril maleate (EA) and hydrochlorothiazide (HCT) in tablets in the presence of losartan potassium (LST). The mathematical algorithm of the multivariate chromatographic calibration technique is based on linear regression equations constructed from the relationship between concentration and peak area at a five-wavelength set. The algorithm of this calibration model, which has a simple mathematical form, is briefly described. This approach is a powerful mathematical tool for optimum chromatographic multivariate calibration and for the elimination of fluctuations arising from instrumental and experimental conditions. The multivariate chromatographic calibration involves the reduction of multivariate linear regression functions to a univariate data set. The validation of the model was carried out by analyzing various synthetic binary mixtures and by using the standard addition technique. The developed calibration technique was applied to the analysis of real pharmaceutical tablets containing EA and HCT. The results obtained were compared with those obtained by a classical HPLC method. It was observed that the proposed multivariate chromatographic calibration gives better results than the classical HPLC method.
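A stripped-down sketch of the general idea (hypothetical sensitivities and concentrations, not the published calibration) is to fit per-wavelength calibration slopes for each analyte and then recover mixture concentrations from the five peak areas by least squares:

```python
# Hypothetical sketch of a multi-wavelength chromatographic calibration: calibration
# slopes (area per unit concentration) are assumed known for each analyte at each of
# five wavelengths; mixture concentrations are then recovered by least squares.
import numpy as np

wavelengths = 5
rng = np.random.default_rng(1)
slope_EA  = rng.uniform(0.8, 1.5, wavelengths)   # hypothetical sensitivities for EA
slope_HCT = rng.uniform(0.5, 1.2, wavelengths)   # hypothetical sensitivities for HCT

# Simulated peak areas of an unknown mixture (EA = 10, HCT = 25 ug/mL) plus noise
true_c = np.array([10.0, 25.0])
S = np.column_stack([slope_EA, slope_HCT])       # 5x2 sensitivity matrix
areas = S @ true_c + rng.normal(0, 0.2, wavelengths)

# Reduce the multivariate calibration to a single least-squares solve
c_hat, *_ = np.linalg.lstsq(S, areas, rcond=None)
print("estimated EA, HCT:", c_hat.round(2))
```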
Qidwai, Tabish; Yadav, Dharmendra K; Khan, Feroz; Dhawan, Sangeeta; Bhakuni, R S
2012-01-01
This work presents the development of a quantitative structure-activity relationship (QSAR) model to predict the antimalarial activity of artemisinin derivatives. The structures of the molecules are represented by chemical descriptors that encode topological, geometric, and electronic structure features. Screening through the QSAR model suggested that compounds A24, A24a, A53, A54, A62 and A64 possess significant antimalarial activity. A linear model is developed by the multiple linear regression method to link structures to their reported antimalarial activity. The correlation in terms of the regression coefficient (r(2)) was 0.90, and the prediction accuracy of the model in terms of the cross-validation regression coefficient (rCV(2)) was 0.82. This study indicates that chemical properties, viz., atom count (all atoms), connectivity index (order 1, standard), ring count (all rings), shape index (basic kappa, order 2), and solvent accessibility surface area, are well correlated with antimalarial activity. The docking study showed high binding affinity of the predicted active compounds against the antimalarial target plasmepsin-II (Plm-II). Further studies of oral bioavailability, ADMET and toxicity risk assessment suggest that compounds A24, A24a, A53, A54, A62 and A64 exhibit marked antimalarial activity comparable to standard antimalarial drugs. Later, one of the predicted active compounds, A64, was chemically synthesized, its structure elucidated by NMR, and it was tested in vivo in mice infected with a multidrug-resistant strain of Plasmodium yoelii nigeriensis. The experimental results obtained agreed well with the predicted values.
NASA Astrophysics Data System (ADS)
Lauritzen, P. H.; Ullrich, P. A.; Jablonowski, C.; Bosler, P. A.; Calhoun, D.; Conley, A. J.; Enomoto, T.; Dong, L.; Dubey, S.; Guba, O.; Hansen, A. B.; Kaas, E.; Kent, J.; Lamarque, J.-F.; Prather, M. J.; Reinert, D.; Shashkin, V. V.; Skamarock, W. C.; Sørensen, B.; Taylor, M. A.; Tolstykh, M. A.
2013-09-01
Recently, a standard test case suite for 2-D linear transport on the sphere was proposed to assess important aspects of accuracy in geophysical fluid dynamics with a "minimal" set of idealized model configurations/runs/diagnostics. Here we present results from 19 state-of-the-art transport scheme formulations based on finite-difference/finite-volume methods as well as emerging (in the context of atmospheric/oceanographic sciences) Galerkin methods. Discretization grids range from traditional regular latitude-longitude grids to more isotropic domain discretizations such as icosahedral and cubed-sphere tessellations of the sphere. The schemes are evaluated using a wide range of diagnostics in idealized flow environments. Accuracy is assessed in single- and two-tracer configurations using conventional error norms as well as novel diagnostics designed for climate and climate-chemistry applications. In addition, algorithmic considerations that may be important for computational efficiency are reported on. The latter is inevitably computing platform dependent. The ensemble of results from a wide variety of schemes presented here helps shed light on the ability of the test case suite diagnostics and flow settings to discriminate between algorithms and provide insights into accuracy in the context of global atmospheric/ocean modeling. A library of benchmark results is provided to facilitate scheme intercomparison and model development. Simple software and data sets are made available to facilitate the process of model evaluation and scheme intercomparison.
NASA Astrophysics Data System (ADS)
Lauritzen, P. H.; Ullrich, P. A.; Jablonowski, C.; Bosler, P. A.; Calhoun, D.; Conley, A. J.; Enomoto, T.; Dong, L.; Dubey, S.; Guba, O.; Hansen, A. B.; Kaas, E.; Kent, J.; Lamarque, J.-F.; Prather, M. J.; Reinert, D.; Shashkin, V. V.; Skamarock, W. C.; Sørensen, B.; Taylor, M. A.; Tolstykh, M. A.
2014-01-01
Recently, a standard test case suite for 2-D linear transport on the sphere was proposed to assess important aspects of accuracy in geophysical fluid dynamics with a "minimal" set of idealized model configurations/runs/diagnostics. Here we present results from 19 state-of-the-art transport scheme formulations based on finite-difference/finite-volume methods as well as emerging (in the context of atmospheric/oceanographic sciences) Galerkin methods. Discretization grids range from traditional regular latitude-longitude grids to more isotropic domain discretizations such as icosahedral and cubed-sphere tessellations of the sphere. The schemes are evaluated using a wide range of diagnostics in idealized flow environments. Accuracy is assessed in single- and two-tracer configurations using conventional error norms as well as novel diagnostics designed for climate and climate-chemistry applications. In addition, algorithmic considerations that may be important for computational efficiency are reported on. The latter is inevitably computing platform dependent. The ensemble of results from a wide variety of schemes presented here helps shed light on the ability of the test case suite diagnostics and flow settings to discriminate between algorithms and provide insights into accuracy in the context of global atmospheric/ocean modeling. A library of benchmark results is provided to facilitate scheme intercomparison and model development. Simple software and data sets are made available to facilitate the process of model evaluation and scheme intercomparison.
Determinants of weight gain in the action to control cardiovascular risk in diabetes trial.
Fonseca, Vivian; McDuffie, Roberta; Calles, Jorge; Cohen, Robert M; Feeney, Patricia; Feinglos, Mark; Gerstein, Hertzel C; Ismail-Beigi, Faramarz; Morgan, Timothy M; Pop-Busui, Rodica; Riddle, Matthew C
2013-08-01
Identify determinants of weight gain in people with type 2 diabetes mellitus (T2DM) allocated to intensive versus standard glycemic control in the Action to Control Cardiovascular Risk in Diabetes (ACCORD) trial. We studied determinants of weight gain over 2 years in 8,929 participants (4,425 intensive arm and 4,504 standard arm) with T2DM in the ACCORD trial. We used general linear models to examine the association between each baseline characteristic and weight change at the 2-year visit. We fit a linear regression of change in weight and A1C and used general linear models to examine the association between each medication at baseline and weight change at the 2-year visit, stratified by glycemia allocation. There was significantly more weight gain in the intensive glycemia arm of the trial compared with the standard arm (3.0 ± 7.0 vs. 0.3 ± 6.3 kg). On multivariate analysis, younger age, male sex, Asian race, no smoking history, high A1C, baseline BMI of 25-35, high waist circumference, baseline insulin use, and baseline metformin use were independently associated with weight gain over 2 years. Reduction of A1C from baseline was consistently associated with weight gain only when baseline A1C was elevated. Medication usage accounted for <15% of the variability of weight change, with initiation of thiazolidinedione (TZD) use the most prominent factor. Intensive participants who never took insulin or a TZD had an average weight loss of 2.9 kg during the first 2 years of the trial. In contrast, intensive participants who had never previously used insulin or TZD but began this combination after enrolling in the ACCORD trial had a weight gain of 4.6-5.3 kg at 2 years. Weight gain in ACCORD was greater with intensive than with standard treatment and generally associated with reduction of A1C from elevated baseline values. Initiation of TZD and/or insulin therapy was the most important medication-related factor associated with weight gain.
Gao, Xiaoli; Zhang, Qibin; Meng, Da; Issac, Giorgis; Zhao, Rui; Fillmore, Thomas L.; Chu, Rosey K.; Zhou, Jianying; Tang, Keqi; Hu, Zeping; Moore, Ronald J.; Smith, Richard D.; Katze, Michael G.; Metz, Thomas O.
2012-01-01
Lipidomics is a critical part of metabolomics and aims to study all the lipids within a living system. We present here the development and evaluation of a sensitive capillary UPLC-MS method for comprehensive top-down/bottom-up lipid profiling. Three different stationary phases were evaluated in terms of peak capacity, linearity, reproducibility, and limit of quantification (LOQ) using a mixture of lipid standards representative of the lipidome. The relative standard deviations of the retention times and peak abundances of the lipid standards were 0.29% and 7.7%, respectively, when using the optimized method. The linearity was acceptable at >0.99 over 3 orders of magnitude, and the LOQs were sub-fmol. To demonstrate the performance of the method in the analysis of complex samples, we analyzed lipids extracted from a human cell line, rat plasma, and a model human skin tissue, identifying 446, 444, and 370 unique lipids, respectively. Overall, the method provided either higher coverage of the lipidome, greater measurement sensitivity, or both, when compared to other approaches of global, untargeted lipid profiling based on chromatography coupled with MS. PMID:22354571
Phase-I monitoring of standard deviations in multistage linear profiles
NASA Astrophysics Data System (ADS)
Kalaei, Mahdiyeh; Soleimani, Paria; Niaki, Seyed Taghi Akhavan; Atashgar, Karim
2018-03-01
In most modern manufacturing systems, products are often the output of multistage processes. In these processes, the stages are dependent on each other, and the output quality of each stage depends on the output quality of the previous stages. This property is called the cascade property. Although there are many studies on multistage process monitoring, there are fewer works on profile monitoring in multistage processes, especially on the variability monitoring of a multistage profile in Phase I, for which no research is found in the literature. In this paper, a new methodology is proposed for Phase-I monitoring of the standard deviation of a simple linear profile in multistage processes with the cascade property. To this aim, an autoregressive correlation model between the stages is considered first. Then, the effect of the cascade property on the performance of three types of T² control charts in Phase I under shifts in the standard deviation is investigated. As this effect is shown to be significant, a U statistic is then used to remove the cascade effect, and the investigated control charts are modified accordingly. Simulation studies reveal good performance of the modified control charts.
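A much-simplified sketch of Phase-I profile monitoring with a cascade-type dependence is given below. It is not the paper's exact statistics (in particular it charts the profile coefficients rather than the standard deviation, and the cascade model is a generic AR-style carry-over): each simulated second-stage profile is fit by a line and a Hotelling T² value is computed for its estimated coefficients.

```python
# Simplified sketch (not the paper's exact statistics): Phase-I Hotelling T^2 for the
# (intercept, slope) estimates of second-stage linear profiles whose level is cascaded
# from the first stage through an AR(1)-type dependence.
import numpy as np

rng = np.random.default_rng(2)
m, n = 30, 10                        # Phase-I profiles, points per profile
x = np.linspace(0, 1, n)

params, stage1_prev = [], 0.0
for _ in range(m):
    stage1 = 0.5 * stage1_prev + rng.normal(0, 0.2)        # cascade carried across profiles
    y = (1.0 + stage1) + 2.0 * x + rng.normal(0, 0.1, n)   # second-stage linear profile
    b1, b0 = np.polyfit(x, y, 1)
    params.append([b0, b1])
    stage1_prev = stage1

P = np.array(params)
d = P - P.mean(axis=0)
T2 = np.einsum("ij,jk,ik->i", d, np.linalg.inv(np.cov(P.T)), d)   # one T^2 per profile
print(T2.round(2))
```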
A minimally-resolved immersed boundary model for reaction-diffusion problems
NASA Astrophysics Data System (ADS)
Pal Singh Bhalla, Amneet; Griffith, Boyce E.; Patankar, Neelesh A.; Donev, Aleksandar
2013-12-01
We develop an immersed boundary approach to modeling reaction-diffusion processes in dispersions of reactive spherical particles, from the diffusion-limited to the reaction-limited setting. We represent each reactive particle with a minimally-resolved "blob" using many fewer degrees of freedom per particle than standard discretization approaches. More complicated or more highly resolved particle shapes can be built out of a collection of reactive blobs. We demonstrate numerically that the blob model can provide an accurate representation at low to moderate packing densities of the reactive particles, at a cost not much larger than solving a Poisson equation in the same domain. Unlike multipole expansion methods, our method does not require analytically computed Green's functions, but rather, computes regularized discrete Green's functions on the fly by using a standard grid-based discretization of the Poisson equation. This allows for great flexibility in implementing different boundary conditions, coupling to fluid flow or thermal transport, and the inclusion of other effects such as temporal evolution and even nonlinearities. We develop multigrid-based preconditioners for solving the linear systems that arise when using implicit temporal discretizations or studying steady states. In the diffusion-limited case the resulting linear system is a saddle-point problem, the efficient solution of which remains a challenge for suspensions of many particles. We validate our method by comparing to published results on reaction-diffusion in ordered and disordered suspensions of reactive spheres.
Multiple Equilibria and Endogenous Cycles in a Non-Linear Harrodian Growth Model
NASA Astrophysics Data System (ADS)
Commendatore, Pasquale; Michetti, Elisabetta; Pinto, Antonio
The standard result of Harrod's growth model is that, because investors react more strongly than savers to a change in income, the long run equilibrium of the economy is unstable. We re-interpret the Harrodian instability puzzle as a local instability problem and integrate his model with a nonlinear investment function. Multiple equilibria and different types of complex behaviour emerge. Moreover, even in the presence of locally unstable equilibria, for a large set of initial conditions the time path of the economy is not diverging, providing a solution to the instability puzzle.
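The qualitative point, that a locally unstable equilibrium need not imply divergence once a bounded nonlinearity is introduced, can be illustrated with a textbook map. The snippet below is purely illustrative and uses the logistic map as a stand-in for the paper's nonlinear investment function; it is not the authors' model.

```python
# Purely illustrative (the textbook logistic map, standing in for the paper's nonlinear
# investment function): the interior fixed point is locally unstable, yet trajectories
# remain bounded and settle onto an endogenous cycle instead of diverging.
import numpy as np

r = 3.5                       # reaction strength; fixed point x* = 1 - 1/r has |f'(x*)| = |2 - r| > 1
x, traj = 0.3, []
for _ in range(400):
    x = r * x * (1.0 - x)
    traj.append(x)

print(np.round(np.unique(np.round(traj[-40:], 6)), 4))   # the attracting period-4 cycle
```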
Population pharmacokinetics model of THC used by pulmonary route in occasional cannabis smokers.
Marsot, A; Audebert, C; Attolini, L; Lacarelle, B; Micallef, J; Blin, O
Cannabis is the most widely used illegal drug in the world. Delta-9-tetrahydrocannabinol (THC) is the main source of the pharmacological effect. Previous studies have reported significant variability both in the described models and in the values of the estimated pharmacokinetic parameters. The objective of this study was to develop a population pharmacokinetic model for THC in occasional cannabis smokers. Twelve male volunteers (age: 20-28 years, body weight: 62.5-91.0 kg), occasional tobacco (3-8 cigarettes per day) and cannabis smokers, were recruited from the local community. After ad libitum smoking of a cannabis cigarette according to a standardized procedure, 16 blood samples were collected over 72 h. Population pharmacokinetic analysis was performed using a non-linear mixed effects model with the NONMEM software. Demographic and biological data were investigated as covariates. A three-compartment model with first-order elimination fitted the data. The model was parameterized in terms of micro constants and the central volume of distribution (V1). Normal ALT concentration (6.0 to 45.0 IU/l) demonstrated a statistically significant correlation with k10. The mean values (% relative standard error, RSE) for k10, k12, k21, k23, k32 and V1 were 0.408 h(-1) (48.8%), 4.070 h(-1) (21.4%), 0.022 h(-1) (27.0%), 1.070 h(-1) (14.3%), 1.060 h(-1) (16.7%) and 19.10 L (39.7%), respectively. We have developed a population pharmacokinetic model able to describe the quantitative relationship between administration of inhaled doses of THC and the observed plasma concentrations after smoking cannabis. In addition, a linear relationship between ALT concentration and the value of k10 has been described and requires further investigation. Copyright © 2017 Elsevier Inc. All rights reserved.
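A sketch of the structural model implied by the listed micro constants is shown below, assuming a catenary arrangement (central compartment exchanging with a shallow peripheral compartment, which in turn exchanges with a deep compartment) and using the published mean values. The bolus input and dose are simplifications and assumptions standing in for the inhaled administration; this is not the NONMEM model itself.

```python
# Sketch of a catenary three-compartment model consistent with the listed micro constants
# (central <-> shallow <-> deep, first-order elimination k10 from the central compartment).
# The bolus input and 5 mg dose are assumptions standing in for the inhaled dose.
import numpy as np
from scipy.integrate import solve_ivp

k10, k12, k21, k23, k32, V1 = 0.408, 4.070, 0.022, 1.070, 1.060, 19.10   # 1/h and L
dose_mg = 5.0                                                             # hypothetical dose

def rhs(t, A):
    A1, A2, A3 = A
    return [-(k10 + k12) * A1 + k21 * A2,
            k12 * A1 - (k21 + k23) * A2 + k32 * A3,
            k23 * A2 - k32 * A3]

t_eval = np.linspace(0, 72, 200)                     # hours
sol = solve_ivp(rhs, (0, 72), [dose_mg, 0.0, 0.0], t_eval=t_eval)
conc_ng_ml = sol.y[0] / V1 * 1000.0                  # mg/L -> ng/mL
print(conc_ng_ml[:5].round(1))
```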
LAMPAT and LAMPATNL User’s Manual
2012-09-01
nonlinearity. These tools are implemented as subroutines in the finite element software ABAQUS. This user’s manual provides information on the proper...model either through the General tab of the Edit Job dialog box in Abaqus/CAE or the command line with user=(subroutine filename). Table 1...Selection of software product and subroutine. Static Analysis With Abaqus/Standard Dynamic Analysis With Abaqus/Explicit Linear, uncoupled
Assessment of a novel biomechanical fracture model for distal radius fractures
2012-01-01
Background Distal radius fractures (DRF) are one of the most common fractures and often need surgical treatment, which has been validated through biomechanical tests. Currently a number of different fracture models are used, none of which resemble the in vivo fracture location. The aim of the study was to develop a new standardized fracture model for DRF (AO-23.A3) and compare its biomechanical behavior to the current gold standard. Methods Variable angle locking volar plates (ADAPTIVE, Medartis) were mounted on 10 pairs of fresh-frozen radii. The osteotomy location was alternated within each pair (New: 10 mm wedge 8 mm / 12 mm proximal to the dorsal / volar apex of the articular surface; Gold standard: 10 mm wedge 20 mm proximal to the articular surface). Each specimen was tested in cyclic axial compression (increasing load by 100 N per cycle) until failure or −3 mm displacement. Parameters assessed were stiffness, displacement and dissipated work, calculated for each cycle, as well as ultimate load. Significance was tested using a linear mixed model and Wald test as well as t-tests. Results 7 female and 3 male pairs of radii aged 74 ± 9 years were tested. In most cases (7/10), the two groups showed similar mechanical behavior at low loads with increasing differences at increasing loads. Overall the novel fracture model showed significantly different biomechanical behavior from the gold standard model (p < 0.001). The average final loads resisted were significantly lower in the novel model (860 N ± 232 N vs. 1250 N ± 341 N; p = 0.001). Conclusion The novel biomechanical fracture model for DRF more closely mimics the in vivo fracture site and shows a significantly different biomechanical behavior with increasing loads when compared to the current gold standard. PMID:23244634
NASA Technical Reports Server (NTRS)
Folta, David; Bauer, Frank H. (Technical Monitor)
2001-01-01
The autonomous formation flying control algorithm developed by the Goddard Space Flight Center (GSFC) for the New Millennium Program (NMP) Earth Observing-1 (EO-1) mission is investigated for applicability to libration point orbit formations. In the EO-1 formation-flying algorithm, control is accomplished via linearization about a reference transfer orbit with a state transition matrix (STM) computed from state inputs. The effect of libration point orbit dynamics on this algorithm architecture is explored via computation of STMs using the flight-proven code, a monodromy matrix developed from an N-body model of a libration orbit, and a standard STM developed from the gravitational and Coriolis effects as measured at the libration point. A comparison of formation flying Delta-Vs calculated from these methods is made to a standard linear quadratic regulator (LQR) method. The universal 3-D approach is optimal in the sense that it can be accommodated as an open-loop or closed-loop control using only state information.
Permafrost Hazards and Linear Infrastructure
NASA Astrophysics Data System (ADS)
Stanilovskaya, Julia; Sergeev, Dmitry
2014-05-01
The international experience of linear infrastructure planning, construction and exploitation in the permafrost zone is directly tied to permafrost hazard assessment. That procedure should also consider the factors of climate impact and infrastructure protection. The current global climate change hotspots are polar and mountain areas: temperatures rise, precipitation and land ice conditions change, and early springs occur more often. Large linear infrastructure objects cross territories with different permafrost conditions, which are sensitive to the changes in air temperature, hydrology, and snow accumulation associated with climatic dynamics. Among the most extensive linear structures built on permafrost worldwide are the Trans-Alaska Pipeline (USA), the Alaska Highway (Canada), the Qinghai-Xizang Railway (China) and the Eastern Siberia - Pacific Ocean Oil Pipeline (Russia). These are currently being influenced by regional climate change and permafrost impacts, which may act differently from place to place. Thermokarst is deemed to be the most dangerous process for linear engineering structures. Its formation and development depend on the type of linear structure: road or pipeline, elevated or buried. Zonal climate and geocryological conditions are also of determining importance here. The projects are of different ages, and some of them were implemented under different climatic conditions; the effects of permafrost thawing have been recorded every year since construction. The exploration and transportation companies of different countries protect their linear infrastructure from permafrost degradation in different ways. The highways in Alaska are in good condition due to governmental expenditure on annual reconstruction. The Chara-China Railroad in Russia is in a non-standard condition due to an intensive permafrost response. Standards for engineering and construction should be reviewed and updated to account for permafrost hazards caused by climate change. Extra maintenance activity is needed for existing infrastructure to stay operable. Engineers should run climate models under the most pessimistic scenarios when planning new infrastructure projects. That would allow reducing the potential shortcomings related to permafrost thawing.
Seafloor Topography Estimation from Gravity Gradient Using Simulated Annealing
NASA Astrophysics Data System (ADS)
Yang, J.; Jekeli, C.; Liu, L.
2017-12-01
Inferring seafloor topography from gravimetry is an indirect yet proven and efficient means to map the ocean floor. Standard techniques rely on an approximate, linear relationship (Parker's formula) between topography and gravity. It has been reported that in very rugged areas the discrepancies between predictions and ship soundings are very large, partly because the linear term of Parker's infinite series is dominant only in areas where the local topography is small compared with the regional topography. The validity of the linear approximation is therefore in need of analysis. In this study the nonlinear effects caused by terrain are quantified by both numerical tests and an algorithmic approach called coherency. It is shown that the nonlinear effects are more significant at higher frequencies, which suggests that estimation algorithms with a nonlinear approximation in the modeled relationship between gravity gradient and topography should be developed in preparation for future high-resolution gravity gradient missions. The simulated annealing (SA) method is an optimization technique that can handle nonlinear inverse problems, and it is used here to estimate the seafloor topography parameters in a forward model by minimizing the difference between the observed and forward-computed vertical gravity gradients. Careful treatments, such as choosing a suitable truncation distance, padding the vicinity of the study area with a known topography model, and using a relative cost function, are considered to improve the estimation accuracy. This study uses the gravity gradient, which is more sensitive to topography at short wavelengths than the gravity anomaly. The gravity gradient data are derived from satellite altimetry, but SA imposes no restrictions on data distribution, unlike Parker's infinite series model, thus enabling the use of airborne gravity gradient data, whose survey trajectories are irregular. The SA method is tested in an area of guyots (E 156°-158° in longitude, N 20°-22° in latitude). Comparison between the estimates and ship soundings shows that half of the discrepancies are within 110 m, a 32% improvement over the results from standard techniques.
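The overall structure of such an estimation loop can be sketched generically. The example below is not the authors' code: it uses a hypothetical linear forward operator as a placeholder for the gravity-gradient forward model and minimizes a relative cost with a standard simulated-annealing accept/reject rule and geometric cooling.

```python
# Generic simulated-annealing sketch (not the authors' code): minimize a relative misfit
# between "observed" and forward-computed gravity gradients for a toy forward model
# G(h) = A @ h. The operator A and the data are hypothetical placeholders.
import numpy as np

rng = np.random.default_rng(3)
n = 20
A = rng.normal(size=(n, n)) / np.sqrt(n)          # toy linear forward operator
h_true = rng.normal(size=n)
obs = A @ h_true + rng.normal(0, 0.01, n)

def cost(h):
    r = A @ h - obs
    return np.sum(r**2) / np.sum(obs**2)          # relative cost function

h, c, T = np.zeros(n), cost(np.zeros(n)), 1.0
for it in range(20000):
    cand = h + rng.normal(0, 0.1, n)              # random perturbation of the topography
    cc = cost(cand)
    if cc < c or rng.random() < np.exp((c - cc) / T):
        h, c = cand, cc                           # accept improving (or occasionally worse) moves
    T *= 0.9995                                   # geometric cooling schedule
print(f"final relative misfit: {c:.4f}")
```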
NASA Technical Reports Server (NTRS)
Gernhardt, Michael I.; Abercromby, Andrew; Conklin, Johnny
2007-01-01
Conventional saturation decompression protocols use linear decompression rates that become progressively slower at shallower depths, consistent with free gas phase control vs. dissolved gas elimination kinetics. If decompression is limited by control of free gas phase, linear decompression is an inefficient strategy. The NASA prebreathe reduction program demonstrated that exercise during O2 prebreathe resulted in a 50% reduction (2 h vs. 4 h) in the saturation decompression time from 14.7 to 4.3 psi and a significant reduction in decompression sickness (DCS: 0 vs. 23.7%). Combining exercise with intermittent recompression, which controls gas phase growth and eliminates supersaturation before exercising, may enable more efficient saturation decompression schedules. A tissue bubble dynamics model (TBDM) was used in conjunction with a NASA exercise prebreathe model (NEPM) that relates tissue inert gas exchange rate constants to exercise (ml O2/kg-min), to develop a schedule for decompression from helium saturation at 400 fsw. The models provide significant prediction (p < 0.001) and goodness of fit with 430 cases of DCS in 6437 laboratory dives for TBDM (p = 0.77) and with 22 cases of DCS in 159 altitude exposures for NEPM (p = 0.70). The models have also been used operationally in over 25,000 dives (TBDM) and 40 spacewalks (NEPM). The standard U.S. Navy (USN) linear saturation decompression schedule from saturation at 400 fsw required 114.5 h with a maximum Bubble Growth Index (BGI(sub max)) of 17.5. Decompression using intermittent recompression combined with two 10 min exercise periods (75% VO2 (sub peak)) per day required 54.25 h (BGI(sub max): 14.7). Combined intermittent recompression and exercise resulted in a theoretical 53% (2.5 day) reduction in decompression time and theoretically lower DCS risk compared to the standard USN decompression schedule. These results warrant future decompression trials to evaluate the efficacy of this approach.
Functionalized anatomical models for EM-neuron Interaction modeling
NASA Astrophysics Data System (ADS)
Neufeld, Esra; Cassará, Antonino Mario; Montanaro, Hazael; Kuster, Niels; Kainz, Wolfgang
2016-06-01
The understanding of interactions between electromagnetic (EM) fields and nerves is crucial in contexts ranging from therapeutic neurostimulation to low frequency EM exposure safety. To properly consider the impact of in vivo induced field inhomogeneity on non-linear neuronal dynamics, coupled EM-neuronal dynamics modeling is required. For that purpose, novel functionalized computable human phantoms have been developed. Their implementation and the systematic verification of the integrated anisotropic quasi-static EM solver and neuronal dynamics modeling functionality, based on the method of manufactured solutions and numerical reference data, are described. Electric and magnetic stimulation of the ulnar and sciatic nerve were modeled to help understand a range of controversial issues related to the magnitude and optimal determination of strength-duration (SD) time constants. The results indicate the importance of considering the stimulation-specific inhomogeneous field distributions (especially at tissue interfaces), realistic models of non-linear neuronal dynamics, very short pulses, and suitable SD extrapolation models. These results and the functionalized computable phantom will influence and support the development of safe and effective neuroprosthetic devices and novel electroceuticals. Furthermore, they will assist the evaluation of existing low frequency exposure standards for the entire population under all exposure conditions.
Higher-order Multivariable Polynomial Regression to Estimate Human Affective States
NASA Astrophysics Data System (ADS)
Wei, Jie; Chen, Tong; Liu, Guangyuan; Yang, Jiemin
2016-03-01
From direct observations of facial, vocal, gestural, physiological, and central nervous signals, computational models such as multivariate linear regression analysis, support vector regression, and artificial neural networks have been proposed over the past decade for estimating human affective states. In these models, linear models generally lack precision because they ignore intrinsic nonlinearities of complex psychophysiological processes, while nonlinear models commonly rely on complicated algorithms. To improve accuracy and simplify the model, we introduce a new computational modeling method, termed higher-order multivariable polynomial regression, to estimate human affective states. The study employs standardized pictures from the International Affective Picture System to induce thirty subjects’ affective states, and obtains pure affective patterns of skin conductance as input variables to the higher-order multivariable polynomial model for predicting affective valence and arousal. Experimental results show that our method is able to obtain correlation coefficients of 0.98 and 0.96 for the estimation of affective valence and arousal, respectively. Moreover, the method may provide certain indirect evidence that valence and arousal have their origins in the brain’s motivational circuits. Thus, the proposed method can serve as a novel approach for efficiently estimating human affective states.
Higher-order Multivariable Polynomial Regression to Estimate Human Affective States
Wei, Jie; Chen, Tong; Liu, Guangyuan; Yang, Jiemin
2016-01-01
From direct observations of facial, vocal, gestural, physiological, and central nervous signals, computational models such as multivariate linear regression analysis, support vector regression, and artificial neural networks have been proposed over the past decade for estimating human affective states. In these models, linear models generally lack precision because they ignore intrinsic nonlinearities of complex psychophysiological processes, while nonlinear models commonly rely on complicated algorithms. To improve accuracy and simplify the model, we introduce a new computational modeling method, termed higher-order multivariable polynomial regression, to estimate human affective states. The study employs standardized pictures from the International Affective Picture System to induce thirty subjects’ affective states, and obtains pure affective patterns of skin conductance as input variables to the higher-order multivariable polynomial model for predicting affective valence and arousal. Experimental results show that our method is able to obtain correlation coefficients of 0.98 and 0.96 for the estimation of affective valence and arousal, respectively. Moreover, the method may provide certain indirect evidence that valence and arousal have their origins in the brain’s motivational circuits. Thus, the proposed method can serve as a novel approach for efficiently estimating human affective states. PMID:26996254
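The general technique of higher-order multivariable polynomial regression amounts to expanding the input features into polynomial terms and fitting an ordinary linear model on the expanded basis. The sketch below uses synthetic placeholder data, not the authors' skin-conductance features or code.

```python
# Minimal sketch of higher-order multivariable polynomial regression (synthetic data):
# expand several physiological features into polynomial terms, then fit an ordinary
# linear model on the expanded basis.
import numpy as np
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(4)
X = rng.normal(size=(120, 3))                      # e.g. skin-conductance features (placeholder)
valence = (1.5 * X[:, 0] - 0.8 * X[:, 1] ** 2
           + 0.5 * X[:, 0] * X[:, 2] + rng.normal(0, 0.1, 120))

model = make_pipeline(PolynomialFeatures(degree=3, include_bias=False), LinearRegression())
model.fit(X, valence)
r = np.corrcoef(model.predict(X), valence)[0, 1]
print(f"in-sample correlation: {r:.3f}")
```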
SRS modeling in high power CW fiber lasers for component optimization
NASA Astrophysics Data System (ADS)
Brochu, G.; Villeneuve, A.; Faucher, M.; Morin, M.; Trépanier, F.; Dionne, R.
2017-02-01
A CW kilowatt fiber laser numerical model has been developed taking into account intracavity stimulated Raman scattering (SRS). It uses the split-step Fourier method, which is applied iteratively over several cavity round trips. The gain distribution is re-evaluated after each iteration with a standard CW model using an effective FBG reflectivity that quantifies the non-linear spectral leakage. This model explains why spectrally narrow output couplers produce more SRS than wider FBGs, as recently reported by other authors, and constitutes a powerful tool to design optimized and innovative fiber components to push back the onset of SRS for a given fiber core diameter.
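For readers unfamiliar with the numerical kernel, the snippet below shows one symmetric split-step Fourier step for a scalar field (dispersion applied in the frequency domain, Kerr nonlinearity in the time domain). It is only the core propagation kernel with arbitrary illustrative parameters; the laser model described above additionally couples signal and Stokes fields through the Raman gain and iterates over cavity round trips.

```python
# Minimal symmetric split-step Fourier step (dispersion in the frequency domain,
# Kerr nonlinearity in the time domain). Parameters are illustrative only.
import numpy as np

def ssfm_step(field, dz, beta2, gamma, dt):
    w = 2 * np.pi * np.fft.fftfreq(field.size, d=dt)        # angular frequency grid
    half_disp = np.exp(0.5j * beta2 * w**2 * dz / 2)         # half step of dispersion
    field = np.fft.ifft(half_disp * np.fft.fft(field))
    field = field * np.exp(1j * gamma * np.abs(field)**2 * dz)   # full nonlinear step
    return np.fft.ifft(half_disp * np.fft.fft(field))        # second half dispersion step

t = np.linspace(-50e-12, 50e-12, 2048)
pulse = np.sqrt(100.0) * np.exp(-t**2 / (2 * (5e-12) ** 2))  # 100 W Gaussian envelope
out = ssfm_step(pulse, dz=1.0, beta2=20e-27, gamma=2e-3, dt=t[1] - t[0])
print(np.max(np.abs(out)) ** 2)                              # peak power after one step
```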
EEG and MEG data analysis in SPM8.
Litvak, Vladimir; Mattout, Jérémie; Kiebel, Stefan; Phillips, Christophe; Henson, Richard; Kilner, James; Barnes, Gareth; Oostenveld, Robert; Daunizeau, Jean; Flandin, Guillaume; Penny, Will; Friston, Karl
2011-01-01
SPM is a free and open-source software package written in MATLAB (The MathWorks, Inc.). In addition to standard M/EEG preprocessing, we presently offer three main analysis tools: (i) statistical analysis of scalp-maps, time-frequency images, and volumetric 3D source reconstruction images based on the general linear model, with correction for multiple comparisons using random field theory; (ii) Bayesian M/EEG source reconstruction, including support for group studies, simultaneous EEG and MEG, and fMRI priors; (iii) dynamic causal modelling (DCM), an approach combining neural modelling with data analysis, for which there are several variants dealing with evoked responses, steady state responses (power spectra and cross-spectra), induced responses, and phase coupling. SPM8 is integrated with the FieldTrip toolbox, making it possible for users to combine a variety of standard analysis methods with new schemes implemented in SPM and build custom analysis tools using powerful graphical user interface (GUI) and batching tools.
EEG and MEG Data Analysis in SPM8
Litvak, Vladimir; Mattout, Jérémie; Kiebel, Stefan; Phillips, Christophe; Henson, Richard; Kilner, James; Barnes, Gareth; Oostenveld, Robert; Daunizeau, Jean; Flandin, Guillaume; Penny, Will; Friston, Karl
2011-01-01
SPM is a free and open-source software package written in MATLAB (The MathWorks, Inc.). In addition to standard M/EEG preprocessing, we presently offer three main analysis tools: (i) statistical analysis of scalp-maps, time-frequency images, and volumetric 3D source reconstruction images based on the general linear model, with correction for multiple comparisons using random field theory; (ii) Bayesian M/EEG source reconstruction, including support for group studies, simultaneous EEG and MEG, and fMRI priors; (iii) dynamic causal modelling (DCM), an approach combining neural modelling with data analysis, for which there are several variants dealing with evoked responses, steady state responses (power spectra and cross-spectra), induced responses, and phase coupling. SPM8 is integrated with the FieldTrip toolbox, making it possible for users to combine a variety of standard analysis methods with new schemes implemented in SPM and build custom analysis tools using powerful graphical user interface (GUI) and batching tools. PMID:21437221
Three estimates of the association between linear growth failure and cognitive ability.
Cheung, Y B; Lam, K F
2009-09-01
To compare three estimators of the association between growth stunting, as measured by height-for-age Z-score, and cognitive ability in children, and to examine the extent to which statistical adjustment for covariates is useful for removing confounding due to socio-economic status. Three estimators, namely the random-effects, within-cluster and between-cluster estimators for panel data, were used to estimate the association in a survey of 1105 pairs of siblings who were assessed for anthropometry and cognition. Furthermore, a 'combined' model was formulated to simultaneously provide the within- and between-cluster estimates. The random-effects and between-cluster estimators showed a strong association between linear growth and cognitive ability, even after adjustment for a range of socio-economic variables. In contrast, the within-cluster estimator showed a much more modest association: for every increase of one Z-score in linear growth, cognitive ability increased by about 0.08 standard deviation (P < 0.001). The combined model verified that the between-cluster estimate was significantly larger than the within-cluster estimate (P = 0.004). Residual confounding by socio-economic circumstances may explain a substantial proportion of the observed association between linear growth and cognition in studies that attempt to control the confounding by means of multivariable regression analysis. The within-cluster estimator provides more convincing and modest results about the strength of the association.
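The mechanism behind the gap between the between- and within-cluster estimates can be illustrated with synthetic sibling pairs (not the study data): when an unobserved family-level confounder drives both growth and cognition, the between-pair slope is inflated, whereas differencing within pairs removes the family effect.

```python
# Synthetic illustration (not the study data): with a family-level confounder, the
# between-pair estimate of the growth-cognition slope is inflated, while the
# within-pair (sibling-difference) estimate recovers the assumed true slope of 0.08.
import numpy as np

rng = np.random.default_rng(5)
n_fam = 1000
ses = rng.normal(size=n_fam)                        # unobserved family socio-economic status
haz = ses[:, None] + rng.normal(size=(n_fam, 2))    # height-for-age Z of two siblings
cog = 0.08 * haz + 0.5 * ses[:, None] + rng.normal(0, 1, (n_fam, 2))

b_between = np.polyfit(haz.mean(1), cog.mean(1), 1)[0]                  # family means
b_within = np.polyfit(haz[:, 0] - haz[:, 1], cog[:, 0] - cog[:, 1], 1)[0]  # sibling differences
print(f"between: {b_between:.3f}  within: {b_within:.3f}")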
Garcia-Hernandez, Alberto
2014-03-01
Although the quality-adjusted life-years (QALY) model is standard in health technology assessment, quantitative methods are less frequent but increasingly used for benefit-risk assessment (BRA) at earlier stages of drug development. A frequent challenge when implementing metrics for BRA is to weigh the importance of effects on a chronic condition against the risk of severe events during the trial. The lifetime component of the QALY model has a counterpart in the BRA context, namely, the risk of dying during the study. A new concept is presented, the hazard of death function that a subject is willing to accept instead of the baseline hazard to improve his or her chronic health status, which we have called the quality-of-life-adjusted hazard of death. It has been proven that if assumptions of the linear QALY model hold, the excess mortality rate tolerated by a subject for a chronic health improvement is inversely proportional to the mean residual life. This result leads to a new representation of the linear QALY model in terms of hazard rate functions and allows utilities obtained by using standard methods involving trade-offs of life duration to be translated into thresholds of tolerated mortality risk during a short period of time, thereby avoiding direct trade-offs using small probabilities of events during the study, which is known to lead to bias and variability. Copyright © 2014 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.
Cold dark energy constraints from the abundance of galaxy clusters
Heneka, Caroline; Rapetti, David; Cataneo, Matteo; ...
2017-10-05
We constrain cold dark energy of negligible sound speed using galaxy cluster abundance observations. In contrast to standard quasi-homogeneous dark energy, negligible sound speed implies clustering of the dark energy fluid at all scales, allowing us to measure the effects of dark energy perturbations at cluster scales. We compare those models and set the stage for using non-linear information from semi-analytical modelling in cluster growth data analyses. For this, we recalibrate the halo mass function with non-linear characteristic quantities, the spherical collapse threshold and virial overdensity, that account for model and redshift-dependent behaviours, as well as an additional mass contribution for cold dark energy. Here we present the first constraints from this cold dark matter plus cold dark energy mass function using our cluster abundance likelihood, which self-consistently accounts for selection effects, covariances and systematic uncertainties. We combine cluster growth data with cosmic microwave background, supernovae Ia and baryon acoustic oscillation data, and find a shift between cold versus quasi-homogeneous dark energy of up to 1σ. We make a Fisher matrix forecast of constraints attainable with cluster growth data from the ongoing Dark Energy Survey (DES). For DES, we predict ~ 50 percent tighter constraints on (Ωm, w) for cold dark energy versus wCDM models, with the same free parameters. Overall, we show that cluster abundance analyses are sensitive to cold dark energy, an alternative, viable model that should be routinely investigated alongside the standard dark energy scenario.
Cold dark energy constraints from the abundance of galaxy clusters
DOE Office of Scientific and Technical Information (OSTI.GOV)
Heneka, Caroline; Rapetti, David; Cataneo, Matteo
We constrain cold dark energy of negligible sound speed using galaxy cluster abundance observations. In contrast to standard quasi-homogeneous dark energy, negligible sound speed implies clustering of the dark energy fluid at all scales, allowing us to measure the effects of dark energy perturbations at cluster scales. We compare those models and set the stage for using non-linear information from semi-analytical modelling in cluster growth data analyses. For this, we recalibrate the halo mass function with non-linear characteristic quantities, the spherical collapse threshold and virial overdensity, that account for model and redshift-dependent behaviours, as well as an additional mass contribution for cold dark energy. Here we present the first constraints from this cold dark matter plus cold dark energy mass function using our cluster abundance likelihood, which self-consistently accounts for selection effects, covariances and systematic uncertainties. We combine cluster growth data with cosmic microwave background, supernovae Ia and baryon acoustic oscillation data, and find a shift between cold versus quasi-homogeneous dark energy of up to 1σ. We make a Fisher matrix forecast of constraints attainable with cluster growth data from the ongoing Dark Energy Survey (DES). For DES, we predict ~ 50 percent tighter constraints on (Ωm, w) for cold dark energy versus wCDM models, with the same free parameters. Overall, we show that cluster abundance analyses are sensitive to cold dark energy, an alternative, viable model that should be routinely investigated alongside the standard dark energy scenario.
Computer Program For Linear Algebra
NASA Technical Reports Server (NTRS)
Krogh, F. T.; Hanson, R. J.
1987-01-01
Collection of routines provided for basic vector operations. The Basic Linear Algebra Subprogram (BLAS) library is a collection of FORTRAN-callable routines employing standard techniques to perform the basic operations of numerical linear algebra.
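Although the library itself is FORTRAN-callable, the same level-1 kernels are exposed through wrappers in many environments; the tiny illustration below uses SciPy's BLAS wrappers (an assumption about the reader's environment, not part of the NASA library) to show the characteristic y := a*x + y and dot-product operations.

```python
# Tiny illustration of BLAS level-1 operations (y := a*x + y and a dot product),
# here through SciPy's wrappers rather than the FORTRAN interface described above.
import numpy as np
from scipy.linalg.blas import daxpy, ddot

x = np.array([1.0, 2.0, 3.0])
y = np.array([10.0, 20.0, 30.0])
y = daxpy(x, y, a=2.0)          # y := 2*x + y
print(y, ddot(x, y))
```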
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lin, Youzuo; O'Malley, Daniel; Vesselinov, Velimir V.
Inverse modeling seeks model parameters given a set of observations. However, for practical problems, because the number of measurements is often large and the model parameters are also numerous, conventional methods for inverse modeling can be computationally expensive. We have developed a new, computationally efficient parallel Levenberg-Marquardt method for solving inverse modeling problems with a highly parameterized model space. Levenberg-Marquardt methods require the solution of a linear system of equations which can be prohibitively expensive to compute for moderate to large-scale problems. Our novel method projects the original linear problem down to a Krylov subspace, such that the dimensionality of the problem can be significantly reduced. Furthermore, we store the Krylov subspace computed when using the first damping parameter and recycle the subspace for the subsequent damping parameters. The efficiency of our new inverse modeling algorithm is significantly improved using these computational techniques. We apply this new inverse modeling method to invert for random transmissivity fields in 2D and a random hydraulic conductivity field in 3D. Our algorithm is fast enough to solve for the distributed model parameters (transmissivity) in the model domain. The algorithm is coded in Julia and implemented in the MADS computational framework (http://mads.lanl.gov). By comparing with Levenberg-Marquardt methods using standard linear inversion techniques such as QR or SVD methods, our Levenberg-Marquardt method yields a speed-up ratio on the order of ~10^1 to ~10^2 in a multi-core computational environment. Furthermore, our new inverse modeling method is a powerful tool for characterizing subsurface heterogeneity for moderate- to large-scale problems.
Lin, Youzuo; O'Malley, Daniel; Vesselinov, Velimir V.
2016-09-01
Inverse modeling seeks model parameters given a set of observations. However, for practical problems, because the number of measurements is often large and the model parameters are also numerous, conventional methods for inverse modeling can be computationally expensive. We have developed a new, computationally efficient parallel Levenberg-Marquardt method for solving inverse modeling problems with a highly parameterized model space. Levenberg-Marquardt methods require the solution of a linear system of equations which can be prohibitively expensive to compute for moderate to large-scale problems. Our novel method projects the original linear problem down to a Krylov subspace, such that the dimensionality of the problem can be significantly reduced. Furthermore, we store the Krylov subspace computed when using the first damping parameter and recycle the subspace for the subsequent damping parameters. The efficiency of our new inverse modeling algorithm is significantly improved using these computational techniques. We apply this new inverse modeling method to invert for random transmissivity fields in 2D and a random hydraulic conductivity field in 3D. Our algorithm is fast enough to solve for the distributed model parameters (transmissivity) in the model domain. The algorithm is coded in Julia and implemented in the MADS computational framework (http://mads.lanl.gov). By comparing with Levenberg-Marquardt methods using standard linear inversion techniques such as QR or SVD methods, our Levenberg-Marquardt method yields a speed-up ratio on the order of ~10^1 to ~10^2 in a multi-core computational environment. Furthermore, our new inverse modeling method is a powerful tool for characterizing subsurface heterogeneity for moderate- to large-scale problems.
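In the spirit of this approach (though much simplified, and without the subspace recycling across damping parameters), a Levenberg-Marquardt loop can solve each damped subproblem with a Krylov iterative solver instead of a dense QR or SVD factorization. The sketch below uses a toy exponential-decay fitting problem, not the authors' transmissivity inversion.

```python
# Simplified sketch: a Levenberg-Marquardt loop in which each damped subproblem
# min ||J p + r||^2 + lam ||p||^2 is solved with the Krylov solver LSMR rather than
# dense QR/SVD. Subspace recycling across damping values is omitted.
import numpy as np
from scipy.sparse.linalg import lsmr

def residual(m, x, data):
    return data - m[0] * np.exp(-m[1] * x)          # toy exponential-decay model

def jacobian(m, x):                                  # d(residual)/dm
    return np.column_stack([-np.exp(-m[1] * x), m[0] * x * np.exp(-m[1] * x)])

rng = np.random.default_rng(6)
x = np.linspace(0, 4, 50)
data = 2.0 * np.exp(-1.3 * x) + rng.normal(0, 0.01, x.size)

m, lam = np.array([1.0, 0.5]), 1e-2
for _ in range(30):
    r, J = residual(m, x, data), jacobian(m, x)
    step = lsmr(J, -r, damp=np.sqrt(lam))[0]         # Krylov solve of the damped system
    if np.sum(residual(m + step, x, data) ** 2) < np.sum(r ** 2):
        m, lam = m + step, lam * 0.5                 # accept step, relax damping
    else:
        lam *= 2.0                                   # reject step, increase damping
print(m.round(3))                                    # should approach (2.0, 1.3)
```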
Van Looy, Stijn; Verplancke, Thierry; Benoit, Dominique; Hoste, Eric; Van Maele, Georges; De Turck, Filip; Decruyenaere, Johan
2007-01-01
Tacrolimus is an important immunosuppressive drug for organ transplantation patients. It has a narrow therapeutic range, toxic side effects, and a blood concentration with wide intra- and interindividual variability. Hence, it is of the utmost importance to monitor tacrolimus blood concentration, thereby ensuring clinical effect and avoiding toxic side effects. Prediction models for tacrolimus blood concentration can improve clinical care by optimizing monitoring of these concentrations, especially in the initial phase after transplantation during intensive care unit (ICU) stay. This is the first study in the ICU in which support vector machines, as a new data modeling technique, are investigated and tested for their ability to predict tacrolimus blood concentration. Linear support vector regression (SVR) and nonlinear radial basis function (RBF) SVR are compared with multiple linear regression (MLR). Tacrolimus blood concentrations, together with 35 other relevant variables from 50 liver transplantation patients, were extracted from our ICU database. This resulted in a dataset of 457 blood samples, on average between 9 and 10 samples per patient, and a database of more than 16,000 data values. Nonlinear RBF SVR, linear SVR, and MLR were performed after selection of clinically relevant input variables and model parameters. Differences between observed and predicted tacrolimus blood concentrations were calculated. The prediction accuracy of the three methods was compared after fivefold cross-validation (Friedman test and Wilcoxon signed rank analysis). Linear SVR and nonlinear RBF SVR had mean absolute differences between observed and predicted tacrolimus blood concentrations of 2.31 ng/ml (standard deviation [SD] 2.47) and 2.38 ng/ml (SD 2.49), respectively. MLR had a mean absolute difference of 2.73 ng/ml (SD 3.79). The difference between linear SVR and MLR was statistically significant (p < 0.001). RBF SVR had the advantage of requiring only 2 input variables to perform this prediction, in comparison to the 15 and 16 variables needed by linear SVR and MLR, respectively. This is an indication of the superior prediction capability of nonlinear SVR. Prediction of tacrolimus blood concentration with linear and nonlinear SVR was excellent, and accuracy was superior in comparison with the MLR model.
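The model comparison described above can be sketched generically with scikit-learn on synthetic data (the clinical dataset is not public, so the features and target here are placeholders): linear regression, linear SVR, and RBF-kernel SVR are compared by cross-validated mean absolute error.

```python
# Minimal sketch (synthetic placeholder data, not the clinical dataset): compare the
# cross-validated mean absolute error of MLR, linear SVR and RBF-kernel SVR.
import numpy as np
from sklearn.svm import SVR
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(7)
X = rng.normal(size=(400, 15))                       # e.g. clinical input variables (placeholder)
y = 4.0 + X[:, 0] - 0.5 * X[:, 1] + np.sin(X[:, 2]) + rng.normal(0, 0.5, 400)

models = {"MLR": LinearRegression(),
          "linear SVR": SVR(kernel="linear", C=1.0, epsilon=0.1),
          "RBF SVR": SVR(kernel="rbf", C=1.0, epsilon=0.1)}

for name, mdl in models.items():
    mae = -cross_val_score(mdl, X, y, cv=5, scoring="neg_mean_absolute_error").mean()
    print(f"{name}: MAE = {mae:.3f}")
```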
Citygml and the Streets of New York - a Proposal for Detailed Street Space Modelling
NASA Astrophysics Data System (ADS)
Beil, C.; Kolbe, T. H.
2017-10-01
Three-dimensional semantic city models are increasingly used for the analysis of large urban areas. Until now the focus has mostly been on buildings. Nonetheless, many applications could also benefit from detailed models of public street space for further analysis. However, there are only a few guidelines for representing roads within city models. Therefore, related standards dealing with street modelling are examined and discussed. Nearly all street representations are based on linear abstractions. However, there are many use cases that require or would benefit from the detailed geometrical and semantic representation of street space. A variety of potential applications for detailed street space models are presented. Subsequently, based on related standards as well as on user requirements, a concept for a CityGML-compliant representation of street space in multiple levels of detail is developed. In the course of this process, the CityGML Transportation model of the currently valid OGC standard CityGML2.0 is examined to discover possibilities for further developments. Moreover, a number of improvements are presented. Finally, based on open data sources, the proposed concept is implemented within a semantic 3D city model of New York City, generating a detailed 3D street space model for the entire city. As a result, 11 thematic classes, such as roadbeds, sidewalks or traffic islands, are generated and enriched with a large number of thematic attributes.
Linard, Joshua I.
2013-01-01
Mitigating the effects of salt and selenium on water quality in the Grand Valley and lower Gunnison River Basin in western Colorado is a major concern for land managers. Previous modeling indicated that the models could be improved by including more detailed geospatial data and by using a more rigorous method for developing the models. After evaluating all possible combinations of geospatial variables, four multiple linear regression models were developed to estimate irrigation-season salt yield, nonirrigation-season salt yield, irrigation-season selenium yield, and nonirrigation-season selenium yield. The adjusted r-squared and the residual standard error (in units of log-transformed yield) of the models were, respectively, 0.87 and 2.03 for the irrigation-season salt model, 0.90 and 1.25 for the nonirrigation-season salt model, 0.85 and 2.94 for the irrigation-season selenium model, and 0.93 and 1.75 for the nonirrigation-season selenium model. The four models were used to estimate yields and loads from contributing areas corresponding to 12-digit hydrologic unit codes in the lower Gunnison River Basin study area. Each of the 175 contributing areas was ranked according to its estimated mean seasonal yield of salt and selenium.
Adjusted variable plots for Cox's proportional hazards regression model.
Hall, C B; Zeger, S L; Bandeen-Roche, K J
1996-01-01
Adjusted variable plots are useful in linear regression for outlier detection and for qualitative evaluation of the fit of a model. In this paper, we extend adjusted variable plots to Cox's proportional hazards model for possibly censored survival data. We propose three different plots: a risk level adjusted variable (RLAV) plot in which each observation in each risk set appears, a subject level adjusted variable (SLAV) plot in which each subject is represented by one point, and an event level adjusted variable (ELAV) plot in which the entire risk set at each failure event is represented by a single point. The latter two plots are derived from the RLAV by combining multiple points. In each plot, the regression coefficient and standard error from a Cox proportional hazards regression are obtained by a simple linear regression through the origin fit to the coordinates of the pictured points. The plots are illustrated with a reanalysis of a dataset of 65 patients with multiple myeloma.
CP-violating top quark couplings at future linear e^+e^- colliders
NASA Astrophysics Data System (ADS)
Bernreuther, W.; Chen, L.; García, I.; Perelló, M.; Poeschl, R.; Richard, F.; Ros, E.; Vos, M.
2018-02-01
We study the potential of future lepton colliders to probe violation of the CP symmetry in the top quark sector. In certain extensions of the Standard Model, such as the two-Higgs-doublet model (2HDM), sizeable anomalous top quark dipole moments can arise, which may be revealed by a precise measurement of top quark pair production. We present results from detailed Monte Carlo studies for the ILC at 500 GeV and CLIC at 380 GeV and use parton-level simulations to explore the potential of high-energy operation. We find that precise measurements in e^+e^- → t\\bar{t} production with subsequent decay to lepton plus jets final states can provide sufficient sensitivity to detect Higgs-boson-induced CP violation in a viable two-Higgs-doublet model. The potential of a linear e^+e^- collider to detect CP-violating electric and weak dipole form factors of the top quark exceeds the prospects of the HL-LHC by over an order of magnitude.
Advantages and pitfalls in the application of mixed-model association methods.
Yang, Jian; Zaitlen, Noah A; Goddard, Michael E; Visscher, Peter M; Price, Alkes L
2014-02-01
Mixed linear models are emerging as a method of choice for conducting genetic association studies in humans and other organisms. The advantages of the mixed-linear-model association (MLMA) method include the prevention of false positive associations due to population or relatedness structure and an increase in power obtained through the application of a correction that is specific to this structure. An underappreciated point is that MLMA can also increase power in studies without sample structure by implicitly conditioning on associated loci other than the candidate locus. Numerous variations on the standard MLMA approach have recently been published, with a focus on reducing computational cost. These advances provide researchers applying MLMA methods with many options to choose from, but we caution that MLMA methods are still subject to potential pitfalls. Here we describe and quantify the advantages and pitfalls of MLMA methods as a function of study design and provide recommendations for the application of these methods in practical settings.
CMB and matter power spectra with non-linear dark-sector interactions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Marttens, R.F. vom; Casarini, L.; Zimdahl, W.
2017-01-01
An interaction between dark matter and dark energy, proportional to the product of their energy densities, results in a scaling behavior of the ratio of these densities with respect to the scale factor of the Robertson-Walker metric. This gives rise to a class of cosmological models which deviate from the standard model in an analytically tractable way. In particular, it becomes possible to quantify the role of potential dark-energy perturbations. We investigate the impact of this interaction on the structure formation process. Using the (modified) CAMB code we obtain the CMB spectrum as well as the linear matter power spectrum. It is shown that the strong degeneracy in the parameter space present in the background analysis is considerably reduced by considering Planck data. Our analysis is compatible with the ΛCDM model at the 2σ confidence level with a slightly preferred direction of the energy flow from dark matter to dark energy.
Paul, Sarbajit; Chang, Junghwan
2017-01-01
This paper presents a design approach for a magnetic sensor module to detect mover position using the proper orthogonal decomposition-dynamic mode decomposition (POD-DMD)-based nonlinear parametric model order reduction (PMOR). The parameterization of the sensor module is achieved by using the multipolar moment matching method. Several geometric variables of the sensor module are considered while developing the parametric study. The operation of the sensor module is based on the principle of the airgap flux density distribution detection by the Hall Effect IC. Therefore, the design objective is to achieve a peak flux density (PFD) greater than 0.1 T and total harmonic distortion (THD) less than 3%. To fulfill the constraint conditions, the specifications for the sensor module are obtained by using the POD-DMD-based reduced model. The POD-DMD-based reduced model provides a platform to analyze a large number of design models quickly, with less computational burden. Finally, with the final specifications, the experimental prototype is designed and tested. Two different modes, 90° and 120°, are used to obtain the position information of the linear motor mover. The position information thus obtained is compared with the linear scale data, used as a reference signal. The position information obtained using the 120° mode has a standard deviation of 0.10 mm from the reference linear scale signal, whereas the 90° mode position signal shows a deviation of 0.23 mm from the reference. The deviation in the output arises due to the mechanical tolerances introduced into the specification during the manufacturing process. This provides scope for incorporating reliability-based design optimization into the design process as a future extension.
New methods and results for quantification of lightning-aircraft electrodynamics
NASA Technical Reports Server (NTRS)
Pitts, Felix L.; Lee, Larry D.; Perala, Rodney A.; Rudolph, Terence H.
1987-01-01
The NASA F-106 collected data on the rates of change of electromagnetic parameters on the aircraft surface during over 700 direct lightning strikes while penetrating thunderstorms at altitudes from 15,000 to 40,000 ft (4,570 to 12,190 m). These in situ measurements provided the basis for the first statistical quantification of the lightning electromagnetic threat to aircraft appropriate for determining indirect lightning effects on aircraft. These data are used to update previous lightning criteria and standards developed over the years from ground-based measurements. The proposed standards will be the first which reflect actual aircraft responses measured at flight altitudes. Nonparametric maximum likelihood estimates of the distribution of the peak electromagnetic rates of change for consideration in the new standards are obtained based on peak recorder data for multiple-strike flights. The linear and nonlinear modeling techniques developed provide means to interpret and understand the direct-strike electromagnetic data acquired on the F-106. The reasonable results obtained with the models, compared with measured responses, provide increased confidence that the models may be credibly applied to other aircraft.
Drell-Yan process as an avenue to test a noncommutative standard model at the Large Hadron Collider
NASA Astrophysics Data System (ADS)
J, Selvaganapathy; Das, Prasanta Kumar; Konar, Partha
2016-06-01
We study the Drell-Yan process at the Large Hadron Collider in the presence of the noncommutative extension of the standard model. Using the Seiberg-Witten map, we calculate the production cross section to first order in the noncommutative parameter Θμν. Although this idea has been evolving for a long time, only a limited amount of phenomenological analysis has been completed, and this was mostly in the context of the linear collider. A notable feature of this nonminimal noncommutative standard model is that it not only modifies the couplings of the SM production channel but also allows additional nonstandard vertices, which can play a significant role. Hence, in the Drell-Yan process, as studied in the present analysis, one also needs to account for the gluon fusion process at the tree level. Some of the characteristic signatures, such as oscillatory azimuthal distributions, are an outcome of the momentum-dependent effective couplings. We explore the noncommutative scale ΛNC ≥ 0.4 TeV, considering different machine energies ranging from 7 to 13 TeV.
Xiao, Hui; Sun, Ke; Sun, Ye; Wei, Kangli; Tu, Kang; Pan, Leiqing
2017-11-22
Near-infrared (NIR) spectroscopy was applied for the determination of total soluble solid contents (SSC) of single Ruby Seedless grape berries using both benchtop Fourier transform (VECTOR 22/N) and portable grating scanning (SupNIR-1500) spectrometers in this study. The results showed that the best SSC prediction was obtained by VECTOR 22/N in the range of 12,000 to 4000 cm⁻¹ (833-2500 nm) for Ruby Seedless with determination coefficient of prediction (Rp²) of 0.918, root mean squares error of prediction (RMSEP) of 0.758% based on least squares support vector machine (LS-SVM). Calibration transfer was conducted on the same spectral range of two instruments (1000-1800 nm) based on the LS-SVM model. By conducting Kennard-Stone (KS) to divide sample sets, selecting the optimal number of standardization samples and applying Passing-Bablok regression to choose the optimal instrument as the master instrument, a modified calibration transfer method between two spectrometers was developed. When 45 samples were selected for the standardization set, the linear interpolation-piecewise direct standardization (linear interpolation-PDS) performed well for calibration transfer with Rp² of 0.857 and RMSEP of 1.099% in the spectral region of 1000-1800 nm. And it was proved that re-calculating the standardization samples into the master model could improve the performance of calibration transfer in this study. This work indicated that NIR could be used as a rapid and non-destructive method for SSC prediction, and provided a feasibility to solve the transfer difficulty between totally different NIR spectrometers.
An Occupational Performance Test Validation Program for Fire Fighters at the Kennedy Space Center
NASA Technical Reports Server (NTRS)
Schonfeld, Brian R.; Doerr, Donald F.; Convertino, Victor A.
1990-01-01
We evaluated performance of a modified Combat Task Test (CTT) and of standard fitness tests in 20 male subjects to assess the prediction of occupational performance standards for Kennedy Space Center fire fighters. The CTT consisted of stair-climbing, a chopping simulation, and a victim rescue simulation. Average CTT performance time was 3.61 ± 0.25 min (SEM) and all CTT tasks required 93% to 97% maximal heart rate. By using scores from the standard fitness tests, a multiple linear regression model was fitted to each parameter: the stairclimb (r² = .905, P < .05), the chopping performance time (r² = .582, P < .05), the victim rescue time (r² = .218, P = not significant), and the total performance time (r² = .769, P < .05). Treadmill time was the predominant variable, being the major predictor in two of four models. These results indicated that standardized fitness tests can predict performance on some CTT tasks and that test predictors were amenable to exercise training.
Stable orthogonal local discriminant embedding for linear dimensionality reduction.
Gao, Quanxue; Ma, Jingjie; Zhang, Hailin; Gao, Xinbo; Liu, Yamin
2013-07-01
Manifold learning is widely used in machine learning and pattern recognition. However, manifold learning only considers the similarity of samples belonging to the same class and ignores the within-class variation of data, which will impair the generalization and stability of the algorithms. To address this, we construct an adjacency graph to model the intraclass variation that characterizes the most important properties, such as diversity of patterns, and then incorporate the diversity into the discriminant objective function for linear dimensionality reduction. Finally, we introduce the orthogonal constraint for the basis vectors and propose an orthogonal algorithm called stable orthogonal local discriminant embedding. Experimental results on several standard image databases demonstrate the effectiveness of the proposed dimensionality reduction approach.
Palmer, Tom M; Holmes, Michael V; Keating, Brendan J; Sheehan, Nuala A
2017-11-01
Mendelian randomization studies use genotypes as instrumental variables to test for and estimate the causal effects of modifiable risk factors on outcomes. Two-stage residual inclusion (TSRI) estimators have been used when researchers are willing to make parametric assumptions. However, researchers are currently reporting uncorrected or heteroscedasticity-robust standard errors for these estimates. We compared several different forms of the standard error for linear and logistic TSRI estimates in simulations and in real-data examples. Among others, we consider standard errors modified from the approach of Newey (1987), Terza (2016), and bootstrapping. In our simulations Newey, Terza, bootstrap, and corrected 2-stage least squares (in the linear case) standard errors gave the best results in terms of coverage and type I error. In the real-data examples, the Newey standard errors were 0.5% and 2% larger than the unadjusted standard errors for the linear and logistic TSRI estimators, respectively. We show that TSRI estimators with modified standard errors have correct type I error under the null. Researchers should report TSRI estimates with modified standard errors instead of reporting unadjusted or heteroscedasticity-robust standard errors. © The Author(s) 2017. Published by Oxford University Press on behalf of the Johns Hopkins Bloomberg School of Public Health.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, J; Gao, H
2016-06-15
Purpose: Different from conventional computed tomography (CT), spectral CT based on energy-resolved photon-counting detectors is able to provide unprecedented material composition information. However, an important missing piece for accurate spectral CT is to incorporate the detector response function (DRF), which is distorted by factors such as pulse pileup and charge-sharing. In this work, we propose material reconstruction methods for spectral CT with DRF. Methods: The polyenergetic X-ray forward model takes the DRF into account for accurate material reconstruction. Two image reconstruction methods are proposed: a direct method based on the nonlinear data fidelity from the DRF-based forward model, and a linear-data-fidelity based method that relies on spectral rebinning so that the corresponding DRF matrix is invertible. Then the image reconstruction problem is regularized with the isotropic TV term and solved by alternating direction method of multipliers. Results: The simulation results suggest that the proposed methods provided more accurate material compositions than the standard method without DRF. Moreover, the proposed method with linear data fidelity had improved reconstruction quality from the proposed method with nonlinear data fidelity. Conclusion: We have proposed material reconstruction methods for spectral CT with DRF, which provided more accurate material compositions than the standard methods without DRF. Moreover, the proposed method with linear data fidelity had improved reconstruction quality from the proposed method with nonlinear data fidelity. Jiulong Liu and Hao Gao were partially supported by the NSFC (#11405105), the 973 Program (#2015CB856000), and the Shanghai Pujiang Talent Program (#14PJ1404500).
Pinto, Luciano Moreira; Costa, Elaine Fiod; Melo, Luiz Alberto S; Gross, Paula Blasco; Sato, Eduardo Toshio; Almeida, Andrea Pereira; Maia, Andre; Paranhos, Augusto
2014-04-10
We examined the structure-function relationship between two perimetric tests, the frequency doubling technology (FDT) matrix and standard automated perimetry (SAP), and two optical coherence tomography (OCT) devices (time-domain and spectral-domain). This cross-sectional study included 97 eyes from 29 healthy individuals, and 68 individuals with early, moderate, or advanced primary open-angle glaucoma. The correlations between overall and sectorial parameters of retinal nerve fiber layer thickness (RNFL) measured with Stratus and Spectralis OCT, and the visual field sensitivity obtained with FDT matrix and SAP were assessed. The relationship also was evaluated using a previously described linear model. The correlation coefficients for the threshold sensitivity measured with SAP and Stratus OCT ranged from 0.44 to 0.79, and those for Spectralis OCT ranged from 0.30 to 0.75. Regarding FDT matrix, the correlation ranged from 0.40 to 0.79 with Stratus OCT and from 0.39 to 0.79 with Spectralis OCT. Stronger correlations were found in the overall measurements and the arcuate sectors for both visual fields and OCT devices. A linear relationship was observed between FDT matrix sensitivity and the OCT devices. The previously described linear model fit the data from SAP and the OCT devices well, particularly in the inferotemporal sector. The FDT matrix and SAP visual sensitivities were related strongly to the RNFL thickness measured with the Stratus and Spectralis OCT devices, particularly in the overall and arcuate sectors. Copyright 2014 The Association for Research in Vision and Ophthalmology, Inc.
Feingold, Alan
2009-01-01
The use of growth-modeling analysis (GMA)--including Hierarchical Linear Models, Latent Growth Models, and General Estimating Equations--to evaluate interventions in psychology, psychiatry, and prevention science has grown rapidly over the last decade. However, an effect size associated with the difference between the trajectories of the intervention and control groups that captures the treatment effect is rarely reported. This article first reviews two classes of formulas for effect sizes associated with classical repeated-measures designs that use the standard deviation of either change scores or raw scores for the denominator. It then broadens the scope to subsume GMA, and demonstrates that the independent groups, within-subjects, pretest-posttest control-group, and GMA designs all estimate the same effect size when the standard deviation of raw scores is uniformly used. Finally, it is shown that the correct effect size for treatment efficacy in GMA--the difference between the estimated means of the two groups at end of study (determined from the coefficient for the slope difference and length of study) divided by the baseline standard deviation--is not reported in clinical trials.
Stability analysis of spacecraft power systems
NASA Technical Reports Server (NTRS)
Halpin, S. M.; Grigsby, L. L.; Sheble, G. B.; Nelms, R. M.
1990-01-01
The problems in applying standard electric utility models, analyses, and algorithms to the study of the stability of spacecraft power conditioning and distribution systems are discussed. Both single-phase and three-phase systems are considered. Of particular concern are the load and generator models that are used in terrestrial power system studies, as well as the standard assumptions of load and topological balance that lead to the use of the positive sequence network. The standard assumptions regarding relative speeds of subsystem dynamic responses that are made in the classical transient stability algorithm, which forms the backbone of utility-based studies, are examined. The applicability of these assumptions to a spacecraft power system stability study is discussed in detail. In addition to the classical indirect method, the applicability of Liapunov's direct methods to the stability determination of spacecraft power systems is discussed. It is pointed out that while the proposed method uses a solution process similar to the classical algorithm, the models used for the sources, loads, and networks are, in general, more accurate. Some preliminary results are given for a linear-graph, state-variable-based modeling approach to the study of the stability of space-based power distribution networks.
Standard electrode potential, Tafel equation, and the solvation thermodynamics.
Matyushov, Dmitry V
2009-06-21
Equilibrium in the electronic subsystem across the solution-metal interface is considered to connect the standard electrode potential to the statistics of localized electronic states in solution. We argue that a correct derivation of the Nernst equation for the electrode potential requires a careful separation of the relevant time scales. An equation for the standard metal potential is derived linking it to the thermodynamics of solvation. The Anderson-Newns model for electronic delocalization between the solution and the electrode is combined with a bilinear model of solute-solvent coupling introducing nonlinear solvation into the theory of heterogeneous electron transfer. We are therefore able to address the question of how nonlinear solvation affects electrochemical observables. The transfer coefficient of electrode kinetics is shown to be equal to the derivative of the free energy, or generalized force, required to shift the unoccupied electronic level in the bulk. The transfer coefficient thus directly quantifies the extent of nonlinear solvation of the redox couple. The current model allows the transfer coefficient to deviate from the value of 0.5 of the linear solvation models at zero electrode overpotential. The electrode current curves become asymmetric with respect to the change in the sign of the electrode overpotential.
Predicting birth weight with conditionally linear transformation models.
Möst, Lisa; Schmid, Matthias; Faschingbauer, Florian; Hothorn, Torsten
2016-12-01
Low and high birth weight (BW) are important risk factors for neonatal morbidity and mortality. Gynecologists must therefore accurately predict BW before delivery. Most prediction formulas for BW are based on prenatal ultrasound measurements carried out within one week prior to birth. Although successfully used in clinical practice, these formulas focus on point predictions of BW but do not systematically quantify uncertainty of the predictions, i.e. they result in estimates of the conditional mean of BW but do not deliver prediction intervals. To overcome this problem, we introduce conditionally linear transformation models (CLTMs) to predict BW. Instead of focusing only on the conditional mean, CLTMs model the whole conditional distribution function of BW given prenatal ultrasound parameters. Consequently, the CLTM approach delivers both point predictions of BW and fetus-specific prediction intervals. Prediction intervals constitute an easy-to-interpret measure of prediction accuracy and allow identification of fetuses subject to high prediction uncertainty. Using a data set of 8712 deliveries at the Perinatal Centre at the University Clinic Erlangen (Germany), we analyzed variants of CLTMs and compared them to standard linear regression estimation techniques used in the past and to quantile regression approaches. The best-performing CLTM variant was competitive with quantile regression and linear regression approaches in terms of conditional coverage and average length of the prediction intervals. We propose that CLTMs be used because they are able to account for possible heteroscedasticity, kurtosis, and skewness of the distribution of BWs. © The Author(s) 2014.
NASA Astrophysics Data System (ADS)
Vadivel, P.; Sakthivel, R.; Mathiyalagan, K.; Thangaraj, P.
2013-02-01
This paper addresses the passivity analysis problem for a class of fuzzy bidirectional associative memory (BAM) neural networks with Markovian jumping parameters and time-varying delays. A set of sufficient conditions for the passivity of the considered fuzzy BAM neural network model is derived in terms of linear matrix inequalities by using the delay fractioning technique together with the Lyapunov function approach. In addition, uncertainties are inevitable in neural networks because of the existence of modeling errors and external disturbances. Further, this result is extended to study the robust passivity criteria for uncertain fuzzy BAM neural networks with time-varying delays and uncertainties. These criteria are expressed in the form of linear matrix inequalities (LMIs), which can be efficiently solved via standard numerical software. Two numerical examples are provided to demonstrate the effectiveness of the obtained results.
Krasikova, Dina V; Le, Huy; Bachura, Eric
2018-06-01
To address a long-standing concern regarding a gap between organizational science and practice, scholars called for more intuitive and meaningful ways of communicating research results to users of academic research. In this article, we develop a common language effect size index (CLβ) that can help translate research results to practice. We demonstrate how CLβ can be computed and used to interpret the effects of continuous and categorical predictors in multiple linear regression models. We also elaborate on how the proposed CLβ index is computed and used to interpret interactions and nonlinear effects in regression models. In addition, we test the robustness of the proposed index to violations of normality and provide means for computing standard errors and constructing confidence intervals around its estimates. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
The Mach number of the cosmic flow - A critical test for current theories
NASA Technical Reports Server (NTRS)
Ostriker, Jeremiah P.; Suto, Yusushi
1990-01-01
A new cosmological, self-contained test using the ratio of mean velocity and the velocity dispersion in the mean flow frame of a group of test objects is presented. To allow comparison with linear theory, the velocity field must first be smoothed on a suitable scale. In the context of linear perturbation theory, the Mach number M(R), which measures the ratio of power on scales larger than the patch size R to power on smaller scales, is independent of the perturbation amplitude and also of bias. An apparent inconsistency is found for standard values of power-law index n = 1 and cosmological density parameter Omega = 1, when comparing values of M(R) predicted by popular models with tentative available observations. Nonstandard models based on adiabatic perturbations with either negative n or small Omega value also fail, due to creation of unacceptably large microwave background fluctuations.
Analysis of Cross-Sectional Univariate Measurements for Family Dyads Using Linear Mixed Modeling
Knafl, George J.; Dixon, Jane K.; O'Malley, Jean P.; Grey, Margaret; Deatrick, Janet A.; Gallo, Agatha M.; Knafl, Kathleen A.
2010-01-01
Outcome measurements from members of the same family are likely correlated. Such intrafamilial correlation (IFC) is an important dimension of the family as a unit but is not always accounted for in analyses of family data. This article demonstrates the use of linear mixed modeling to account for IFC in the important special case of univariate measurements for family dyads collected at a single point in time. Example analyses of data from partnered parents having a child with a chronic condition on their child's adaptation to the condition and on the family's general functioning and management of the condition are provided. Analyses of this kind are reasonably straightforward to generate with popular statistical tools. Thus, it is recommended that IFC be reported as standard practice reflecting the fact that a family dyad is more than just the aggregate of two individuals. Moreover, not accounting for IFC can affect the conclusions.
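As a rough illustration of how linear mixed modeling recovers intrafamilial correlation for dyads, the sketch below fits a random-intercept model with statsmodels on simulated (hypothetical) parent-dyad data and forms the ICC from the between-family and residual variances; it is a toy stand-in, not the authors' analysis.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical dyad data: two parents per family, correlated outcomes.
rng = np.random.default_rng(1)
n_fam = 100
fam = np.repeat(np.arange(n_fam), 2)
u = np.repeat(rng.normal(scale=1.0, size=n_fam), 2)   # shared family effect
y = 5.0 + u + rng.normal(scale=1.0, size=2 * n_fam)   # individual outcome
df = pd.DataFrame({"family": fam, "y": y})

# Random-intercept (variance components) model for the dyads
fit = smf.mixedlm("y ~ 1", df, groups=df["family"]).fit()
var_between = float(fit.cov_re.iloc[0, 0])   # family-level variance
var_within = fit.scale                       # residual variance
icc = var_between / (var_between + var_within)
print(f"intrafamilial correlation (ICC) = {icc:.2f}")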
DOE Office of Scientific and Technical Information (OSTI.GOV)
Alvarez-Ramirez, J.; Aguilar, R.; Lopez-Isunza, F.
FCC processes involve complex interactive dynamics which are difficult to operate and control as well as poorly known reaction kinetics. This work concerns the synthesis of temperature controllers for FCC units. The problem is addressed first for the case where perfect knowledge of the reaction kinetics is assumed, leading to an input-output linearizing state feedback. However, in most industrial FCC units, perfect knowledge of reaction kinetics and composition measurements is not available. To address the problem of robustness against uncertainties in the reaction kinetics, an adaptive model-based nonlinear controller with simplified reaction models is presented. The adaptive strategy makes use of estimates of uncertainties derived from calorimetric (energy) balances. The resulting controller is similar in form to standard input-output linearizing controllers and can be tuned analogously. Alternatively, the controller can be tuned using a single gain parameter and is computationally efficient. The performance of the closed-loop system and the controller design procedure are shown with simulations.
NASA Astrophysics Data System (ADS)
Hahn, S.; Machefaux, E.; Hristov, Y. V.; Albano, M.; Threadgill, R.
2016-09-01
In the present study, the combination of the standalone dynamic wake meandering (DWM) model with Reynolds-averaged Navier-Stokes (RANS) CFD solutions for ambient ABL flows is introduced, and its predictive performance for annual energy production (AEP) is evaluated against Vestas' SCADA data for six operating wind farms over semi-complex terrains under neutral conditions. The performances of conventional linear and quadratic wake superposition techniques are also compared, together with the in-house implementation of successive hierarchical merging approaches. As compared to our standard procedure based on the Jensen model in WindPRO, the overall results are promising, leading to a significant improvement in AEP accuracy for four of the six sites. While the conventional linear superposition shows the best performance for the improved four sites, the hierarchical square superposition shows the least deteriorated result for the other two sites.
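For readers unfamiliar with the two superposition rules being compared, the toy sketch below (hypothetical deficit values, not the SCADA study) contrasts linear summation of wake velocity deficits with the quadratic, root-sum-square alternative.

import numpy as np

# Fractional velocity deficits from individual upstream wakes at one point
deficits = np.array([0.12, 0.08, 0.05])

combined_linear = deficits.sum()                   # linear superposition
combined_quadratic = np.sqrt(np.sum(deficits**2))  # root-sum-square (quadratic) superposition

print(combined_linear, combined_quadratic)         # 0.25 vs roughly 0.153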
NASA Technical Reports Server (NTRS)
Pelletier, R. E.
1984-01-01
A need exists for digitized information pertaining to linear features such as roads, streams, water bodies and agricultural field boundaries as component parts of a data base. For many areas where this data may not yet exist or is in need of updating, these features may be extracted from remotely sensed digital data. This paper examines two approaches for identifying linear features, one utilizing raw data and the other classified data. Each approach uses a series of data enhancement procedures including derivation of standard deviation values, principal component analysis and filtering procedures using a high-pass window matrix. Just as certain bands better classify different land covers, so too do these bands exhibit high spectral contrast by which boundaries between land covers can be delineated. A few applications for this kind of data are briefly discussed, including its potential in a Universal Soil Loss Equation Model.
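A minimal Python sketch of the kind of enhancement steps described, local standard deviation and a 3x3 high-pass window matrix, is given below using SciPy; the image, kernel, and threshold are illustrative assumptions rather than the paper's exact procedure.

import numpy as np
from scipy import ndimage

# Hypothetical single-band image (e.g., one sensor band as a float array)
rng = np.random.default_rng(2)
band = rng.normal(loc=100.0, scale=5.0, size=(64, 64))
band[:, 32] += 40.0                      # a bright linear feature (road/stream)

# Local standard deviation in a 3x3 window highlights boundaries between covers
local_sd = ndimage.generic_filter(band, np.std, size=3)

# High-pass window matrix (3x3) emphasizes edges and linear features
hp_kernel = np.array([[-1, -1, -1],
                      [-1,  8, -1],
                      [-1, -1, -1]], dtype=float)
high_pass = ndimage.convolve(band, hp_kernel, mode="reflect")

edges = np.abs(high_pass) > 3 * high_pass.std()   # crude threshold for feature pixels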
A standardization model based on image recognition for performance evaluation of an oral scanner.
Seo, Sang-Wan; Lee, Wan-Sun; Byun, Jae-Young; Lee, Kyu-Bok
2017-12-01
Accurate information is essential in dentistry. The image information of missing teeth is used in optically based medical equipment in prosthodontic treatment. To evaluate oral scanners, a standardized model was developed from cases of image recognition errors in linear discriminant analysis (LDA), and a model combining the relevant variables with reference to ISO 12836:2015 was designed. The basic model was fabricated by applying 4 factors to the tooth profile (chamfer, groove, curve, and square) and the bottom surface. Photo-type and video-type scanners were used to analyze 3D images after image capture. The scans were performed several times according to the prescribed sequence to distinguish models that could be reconstructed from those that could not, and the results confirmed the final design to be the best. In the case of the initial basic model, a 3D shape could not be obtained by scanning even if several shots were taken. Subsequently, the recognition rate of the image improved with every variable factor, and the differences depend on the tooth profile and the pattern of the floor surface. Based on the recognition error of the LDA, the recognition rate decreases when the model has a similar pattern. Therefore, to obtain accurate 3D data, sufficient differences between classes need to be provided when developing a standardized model.
Wit, Jan M.; Himes, John H.; van Buuren, Stef; Denno, Donna M.; Suchdev, Parminder S.
2017-01-01
Background/Aims: Childhood stunting is a prevalent problem in low- and middle-income countries and is associated with long-term adverse neurodevelopment and health outcomes. In this review, we define indicators of growth, discuss key challenges in their analysis and application, and offer suggestions for indicator selection in clinical research contexts. Methods: Critical review of the literature. Results: Linear growth is commonly expressed as length-for-age or height-for-age z-score (HAZ) in comparison to normative growth standards. Conditional HAZ corrects for regression to the mean where growth changes relate to previous status. In longitudinal studies, growth can be expressed as ΔHAZ at 2 time points. Multilevel modeling is preferable when more measurements per individual child are available over time. Height velocity z-score reference standards are available for children under the age of 2 years. Adjusting for covariates or confounders (e.g., birth weight, gestational age, sex, parental height, maternal education, socioeconomic status) is recommended in growth analyses. Conclusion: The most suitable indicator(s) for linear growth can be selected based on the number of available measurements per child and the child's age. By following a step-by-step algorithm, growth analyses can be precisely and accurately performed to allow for improved comparability within and between studies.
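As a small worked illustration of the z-score indicators discussed above, the sketch below computes HAZ and the change ΔHAZ between two visits; the reference median and SD values are placeholders for illustration, not actual WHO standard entries.

# Hypothetical reference values for illustration only (not WHO tables)
ref_median_cm, ref_sd_cm = 87.1, 3.2       # e.g., for one sex/age combination

def haz(height_cm, median=ref_median_cm, sd=ref_sd_cm):
    """Height-for-age z-score relative to a normative standard."""
    return (height_cm - median) / sd

haz_t1 = haz(80.5)                          # first visit
haz_t2 = haz(83.0, 93.9, 3.5)               # later visit, that age's (hypothetical) reference
delta_haz = haz_t2 - haz_t1                 # change in HAZ between the two time points
stunted = haz_t2 < -2                       # conventional stunting cut-off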
Visual Detection Under Uncertainty Operates Via an Early Static, Not Late Dynamic, Non-Linearity
Neri, Peter
2010-01-01
Signals in the environment are rarely specified exactly: our visual system may know what to look for (e.g., a specific face), but not its exact configuration (e.g., where in the room, or in what orientation). Uncertainty, and the ability to deal with it, is a fundamental aspect of visual processing. The MAX model is the current gold standard for describing how human vision handles uncertainty: of all possible configurations for the signal, the observer chooses the one corresponding to the template associated with the largest response. We propose an alternative model in which the MAX operation, which is a dynamic non-linearity (depends on multiple inputs from several stimulus locations) and happens after the input stimulus has been matched to the possible templates, is replaced by an early static non-linearity (depends only on one input corresponding to one stimulus location) which is applied before template matching. By exploiting an integrated set of analytical and experimental tools, we show that this model is able to account for a number of empirical observations otherwise unaccounted for by the MAX model, and is more robust with respect to the realistic limitations imposed by the available neural hardware. We then discuss how these results, currently restricted to a simple visual detection task, may extend to a wider range of problems in sensory processing.
Khan, I.; Hawlader, Sophie Mohammad Delwer Hossain; Arifeen, Shams El; Moore, Sophie; Hills, Andrew P.; Wells, Jonathan C.; Persson, Lars-Åke; Kabir, Iqbal
2012-01-01
The aim of this study was to investigate the validity of the Tanita TBF 300A leg-to-leg bioimpedance analyzer for estimating fat-free mass (FFM) in Bangladeshi children aged 4-10 years and to develop novel prediction equations for use in this population, using deuterium dilution as the reference method. Two hundred Bangladeshi children were enrolled. The isotope dilution technique with deuterium oxide was used for estimation of total body water (TBW). FFM estimated by Tanita was compared with results of deuterium oxide dilution technique. Novel prediction equations were created for estimating FFM, using linear regression models, fitting child's height and impedance as predictors. There was a significant difference in FFM and percentage of body fat (BF%) between methods (p<0.01), Tanita underestimating TBW in boys (p=0.001) and underestimating BF% in girls (p<0.001). A basic linear regression model with height and impedance explained 83% of the variance in FFM estimated by deuterium oxide dilution technique. The best-fit equation to predict FFM from linear regression modelling was achieved by adding weight, sex, and age to the basic model, bringing the adjusted R2 to 89% (standard error=0.90, p<0.001). These data suggest Tanita analyzer may be a valid field-assessment technique in Bangladeshi children when using population-specific prediction equations, such as the ones developed here.
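A minimal sketch of how such a prediction equation can be fitted by ordinary least squares is shown below; the data and coefficients are simulated and hypothetical, intended only to illustrate the basic (height, impedance) model and the extended model with weight, sex, and age.

import numpy as np

# Hypothetical training data (height cm, impedance ohm, weight kg, sex, age yr)
rng = np.random.default_rng(3)
n = 120
height = rng.uniform(95, 140, n)
impedance = rng.uniform(500, 800, n)
weight = rng.uniform(13, 35, n)
sex = rng.integers(0, 2, n)               # 0 = girl, 1 = boy
age = rng.uniform(4, 10, n)
ffm = (0.4 * height - 0.01 * impedance + 0.3 * weight
       + 0.5 * sex + 0.2 * age + rng.normal(0, 0.9, n))

# Basic model: FFM ~ height + impedance; full model adds weight, sex, age
X_basic = np.column_stack([np.ones(n), height, impedance])
X_full = np.column_stack([X_basic, weight, sex, age])
coef_full, *_ = np.linalg.lstsq(X_full, ffm, rcond=None)

pred = X_full @ coef_full
r2 = 1 - np.sum((ffm - pred) ** 2) / np.sum((ffm - ffm.mean()) ** 2)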
Süntar, Ipek; Akkol, Esra Küpeli; Senol, Fatma Sezer; Keles, Hikmet; Orhan, Ilkay Erdogan
2011-04-26
Salvia L. species are widely used against wounds and skin infections in Turkish folk medicine. The aim of the present study was to evaluate the wound healing activity of the ethanol (EtOH) extracts of Salvia cryptantha and Salvia cyanescens. For the assessment of wound healing activity, linear incision and circular excision wound models were employed in rats and mice. The wound healing effect was comparatively evaluated against the standard skin ointment Madecassol(®). Inhibition of tyrosinase, a key enzyme in skin aging, was measured using an ELISA microplate reader. Antioxidant activity was evaluated by 2,2-diphenyl-1-picrylhydrazyl (DPPH) and superoxide radical scavenger effect, ferrous ion-chelating ability, and ferric-reducing antioxidant power (FRAP) tests. Groups of animals treated with the EtOH extract of Salvia cryptantha showed 56.5% contraction, whereas the reference drug Madecassol(®) showed 100% contraction. In the linear incision wound model, the same extract demonstrated a significant increase (33.2%) in wound tensile strength compared with the other groups. The results of histopathological examination supported the findings of the linear incision and circular excision wound models as well. These findings indicate that Salvia cryptantha warrants further phytochemical investigation to identify the components responsible for its wound healing activity. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.
Linear mixed model for heritability estimation that explicitly addresses environmental variation.
Heckerman, David; Gurdasani, Deepti; Kadie, Carl; Pomilla, Cristina; Carstensen, Tommy; Martin, Hilary; Ekoru, Kenneth; Nsubuga, Rebecca N; Ssenyomo, Gerald; Kamali, Anatoli; Kaleebu, Pontiano; Widmer, Christian; Sandhu, Manjinder S
2016-07-05
The linear mixed model (LMM) is now routinely used to estimate heritability. Unfortunately, as we demonstrate, LMM estimates of heritability can be inflated when using a standard model. To help reduce this inflation, we used a more general LMM with two random effects-one based on genomic variants and one based on easily measured spatial location as a proxy for environmental effects. We investigated this approach with simulated data and with data from a Uganda cohort of 4,778 individuals for 34 phenotypes including anthropometric indices, blood factors, glycemic control, blood pressure, lipid tests, and liver function tests. For the genomic random effect, we used identity-by-descent estimates from accurately phased genome-wide data. For the environmental random effect, we constructed a covariance matrix based on a Gaussian radial basis function. Across the simulated and Ugandan data, narrow-sense heritability estimates were lower using the more general model. Thus, our approach addresses, in part, the issue of "missing heritability" in the sense that much of the heritability previously thought to be missing was fictional. Software is available at https://github.com/MicrosoftGenomics/FaST-LMM.
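The environmental covariance construction described, a Gaussian radial basis function over spatial coordinates, can be sketched as follows; the coordinates and length-scale are illustrative assumptions, not the values used for the Uganda cohort.

import numpy as np

def rbf_covariance(coords, length_scale=25.0, variance=1.0):
    """Gaussian radial basis function covariance from spatial coordinates.

    coords: (n, 2) array of, e.g., projected easting/northing in km (hypothetical).
    """
    d2 = np.sum((coords[:, None, :] - coords[None, :, :]) ** 2, axis=-1)
    return variance * np.exp(-d2 / (2.0 * length_scale ** 2))

coords = np.random.default_rng(4).uniform(0, 100, size=(5, 2))
K_env = rbf_covariance(coords)   # covariance matrix for the environmental random effect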
Jafari, Masoumeh; Salimifard, Maryam; Dehghani, Maryam
2014-07-01
This paper presents an efficient method for identification of nonlinear Multi-Input Multi-Output (MIMO) systems in the presence of colored noises. The method studies the multivariable nonlinear Hammerstein and Wiener models, in which, the nonlinear memory-less block is approximated based on arbitrary vector-based basis functions. The linear time-invariant (LTI) block is modeled by an autoregressive moving average with exogenous (ARMAX) model which can effectively describe the moving average noises as well as the autoregressive and the exogenous dynamics. According to the multivariable nature of the system, a pseudo-linear-in-the-parameter model is obtained which includes two different kinds of unknown parameters, a vector and a matrix. Therefore, the standard least squares algorithm cannot be applied directly. To overcome this problem, a Hierarchical Least Squares Iterative (HLSI) algorithm is used to simultaneously estimate the vector and the matrix of unknown parameters as well as the noises. The efficiency of the proposed identification approaches are investigated through three nonlinear MIMO case studies. Copyright © 2014 ISA. Published by Elsevier Ltd. All rights reserved.
Yagi, T; Ohshima, S; Funahashi, Y
1997-09-01
A linear analogue network model is proposed to describe the neuronal circuit of the outer retina consisting of cones, horizontal cells, and bipolar cells. The model reflects previous physiological findings on the spatial response properties of these neurons to dim illumination and is expressed by physiological mechanisms, i.e., membrane conductances, gap-junctional conductances, and strengths of chemical synaptic interactions. Using the model, we characterized the spatial filtering properties of the bipolar cell receptive field with the standard regularization theory, in which the early vision problems are attributed to minimization of a cost function. The cost function accompanying the present characterization is derived from the linear analogue network model, and one can gain intuitive insights on how physiological mechanisms contribute to the spatial filtering properties of the bipolar cell receptive field. We also elucidated a quantitative relation between the Laplacian of Gaussian operator and the bipolar cell receptive field. From the computational point of view, the dopaminergic modulation of the gap-junctional conductance between horizontal cells is inferred to be a suitable neural adaptation mechanism for transition between photopic and mesopic vision.
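For reference, the Laplacian of Gaussian operator mentioned above has the standard closed form (a textbook expression, not a formula quoted from the paper):

\nabla^2 G_\sigma(x,y) \;=\; \frac{x^2 + y^2 - 2\sigma^2}{2\pi\sigma^6}\,\exp\!\left(-\frac{x^2 + y^2}{2\sigma^2}\right),
\qquad
G_\sigma(x,y) \;=\; \frac{1}{2\pi\sigma^2}\exp\!\left(-\frac{x^2 + y^2}{2\sigma^2}\right).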
Progress Toward Improving Jet Noise Predictions in Hot Jets
NASA Technical Reports Server (NTRS)
Khavaran, Abbas; Kenzakowski, Donald C.
2007-01-01
An acoustic analogy methodology for improving noise predictions in hot round jets is presented. Past approaches have often neglected the impact of temperature fluctuations on the predicted sound spectral density, which could be significant for heated jets, and this has yielded noticeable acoustic under-predictions in such cases. The governing acoustic equations adopted here are a set of linearized, inhomogeneous Euler equations. These equations are combined into a single third order linear wave operator when the base flow is considered as a locally parallel mean flow. The remaining second-order fluctuations are regarded as the equivalent sources of sound and are modeled. It is shown that the hot jet effect may be introduced primarily through a fluctuating velocity/enthalpy term. Modeling this additional source requires specialized inputs from a RANS-based flowfield simulation. The information is supplied using an extension to a baseline two equation turbulence model that predicts total enthalpy variance in addition to the standard parameters. Preliminary application of this model to a series of unheated and heated subsonic jets shows significant improvement in the acoustic predictions at the 90 degree observer angle.
An in-situ Raman study on pristane at high pressure and ambient temperature
NASA Astrophysics Data System (ADS)
Wu, Jia; Ni, Zhiyong; Wang, Shixia; Zheng, Haifei
2018-01-01
The C-H Raman spectroscopic band (2800-3000 cm-1) of pristane was measured in a diamond anvil cell at 1.1-1532 MPa and ambient temperature. Three models are used for the peak-fitting of this C-H Raman band, and the linear correlations between pressure and the corresponding peak positions are calculated as well. The results demonstrate that 1) the number of peaks that one chooses to fit the spectrum affects the results, which indicates that the application of spectroscopic barometry with a functional group of organic matter suffers significant limitations; and 2) the linear correlation between pressure and fitted peak positions from the one-peak model is superior to that from the multiple-peak model, and the standard error of the latter is much higher than that of the former. It indicates that the Raman shift of the C-H band fitted with the one-peak model, which could be treated as a spectroscopic barometer, is more realistic in mixture systems than the traditional strategy which uses the Raman characteristic shift of one functional group.
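The one-peak fitting and the pressure-position regression can be sketched as below with SciPy; the band shape, noise level, and pressure-position pairs are hypothetical stand-ins for the measured pristane spectra.

import numpy as np
from scipy.optimize import curve_fit

def gaussian(x, amp, center, width, base):
    return amp * np.exp(-(x - center) ** 2 / (2 * width ** 2)) + base

# Hypothetical C-H band (2800-3000 cm-1) at one pressure step
wavenumber = np.linspace(2800, 3000, 400)
rng = np.random.default_rng(5)
intensity = gaussian(wavenumber, 1.0, 2912.0, 18.0, 0.05) + rng.normal(0, 0.01, wavenumber.size)

# One-peak model: a single Gaussian plus baseline
popt, pcov = curve_fit(gaussian, wavenumber, intensity, p0=[1.0, 2910.0, 20.0, 0.0])
peak_position = popt[1]

# Repeating the fit at each pressure gives (pressure, position) pairs;
# the barometer is the slope of the linear fit (values below are invented).
pressures = np.array([1.1, 250.0, 500.0, 1000.0, 1532.0])        # MPa
positions = np.array([2912.0, 2913.1, 2914.3, 2916.6, 2919.0])   # cm-1
slope, intercept = np.polyfit(pressures, positions, 1)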
Rare decay of the top quark t → cll and single top quark production at the ILC
DOE Office of Scientific and Technical Information (OSTI.GOV)
Frank, Mariana; Turan, Ismail
We perform a complete and detailed analysis of the flavor changing neutral current rare top quark decays t → cl⁺l⁻ and t → cν_iν_i at one-loop level in the standard model, two Higgs doublet models (I and II), and in minimal supersymmetric standard model (MSSM). The branching ratios are very small in all models, O(10⁻¹⁴), except for the case of the unconstrained MSSM, where they can reach O(10⁻⁶) for e⁺e⁻, μ⁺μ⁻, and ν_iν_i, and O(10⁻⁵) for τ⁺τ⁻. This branching ratio is comparable to the ones for t → cV, cH. We also study the production rates of single top and the forward-backward asymmetry in e⁺e⁻ → tc and comment on the observability of such a signal at the International Linear Collider.
Procedure for the Selection and Validation of a Calibration Model I-Description and Application.
Desharnais, Brigitte; Camirand-Lemyre, Félix; Mireault, Pascal; Skinner, Cameron D
2017-05-01
Calibration model selection is required for all quantitative methods in toxicology and more broadly in bioanalysis. This typically involves selecting the equation order (quadratic or linear) and the weighting factor that correctly model the data. A mis-selection of the calibration model will generate lower quality control (QC) accuracy, with an error up to 154%. Unfortunately, simple tools to perform this selection and tests to validate the resulting model are lacking. We present a stepwise, analyst-independent scheme for selection and validation of calibration models. The success rate of this scheme is on average 40% higher than a traditional "fit and check the QCs accuracy" method of selecting the calibration model. Moreover, the process was completely automated through a script (available in Supplemental Data 3) running in RStudio (free, open-source software). The need for weighting was assessed through an F-test using the variances of the upper limit of quantification and lower limit of quantification replicate measurements. When weighting was required, the choice between 1/x and 1/x² was determined by calculating which option generated the smallest spread of weighted normalized variances. Finally, model order was selected through a partial F-test. The chosen calibration model was validated through Cramer-von Mises or Kolmogorov-Smirnov normality testing of the standardized residuals. Performance of the different tests was assessed using 50 simulated data sets per possible calibration model (e.g., linear-no weight, quadratic-no weight, linear-1/x, etc.). This first of two papers describes the tests, procedures and outcomes of the developed procedure using real LC-MS-MS results for the quantification of cocaine and naltrexone. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
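A compact Python sketch of the decision sequence described (variance F-test for weighting, partial F-test for model order, normality check of standardized residuals) is given below; the calibrator data are invented and the script is an approximation of, not a substitute for, the published R procedure.

import numpy as np
from scipy import stats

# Hypothetical replicate responses at the LLOQ and ULOQ calibrators
lloq = np.array([0.101, 0.098, 0.104, 0.097, 0.102])
uloq = np.array([9.7, 10.4, 10.1, 9.5, 10.6])

# Step 1: F-test on the replicate variances decides whether weighting is needed
f_stat = np.var(uloq, ddof=1) / np.var(lloq, ddof=1)
f_crit = stats.f.ppf(0.99, len(uloq) - 1, len(lloq) - 1)
needs_weighting = f_stat > f_crit

# Step 2 (sketch): pick 1/x vs 1/x^2 as the factor giving the smallest spread
# of weighted, normalized calibrator variances (omitted here for brevity).

# Step 3: partial F-test, quadratic vs linear fit of the calibration data
x = np.array([0.1, 0.5, 1, 2, 5, 10], dtype=float)
y = np.array([0.11, 0.52, 1.05, 2.2, 5.4, 10.9])
sse = lambda deg: np.sum((y - np.polyval(np.polyfit(x, y, deg), x)) ** 2)
partial_f = (sse(1) - sse(2)) / (sse(2) / (len(x) - 3))
use_quadratic = partial_f > stats.f.ppf(0.95, 1, len(x) - 3)

# Step 4: validate by testing normality of the standardized residuals
resid = y - np.polyval(np.polyfit(x, y, 2 if use_quadratic else 1), x)
z = (resid - resid.mean()) / resid.std(ddof=1)
ks_p = stats.kstest(z, "norm").pvalue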
Gonçalves, Marcio A D; Tokach, Mike D; Dritz, Steve S; Bello, Nora M; Touchette, Kevin J; Goodband, Robert D; DeRouchey, Joel M; Woodworth, Jason C
2018-03-06
Two experiments were conducted to estimate the standardized ileal digestible valine:lysine (SID Val:Lys) dose response effects in 25- to 45-kg pigs under commercial conditions. In experiment 1, a total of 1,134 gilts (PIC 337 × 1050), initially 31.2 kg ± 2.0 kg body weight (BW; mean ± SD) were used in a 19-d growth trial with 27 pigs per pen and seven pens per treatment. In experiment 2, a total of 2,100 gilts (PIC 327 × 1050), initially 25.4 ± 1.9 kg BW were used in a 22-d growth trial with 25 pigs per pen and 12 pens per treatment. Treatments were blocked by initial BW in a randomized complete block design. In experiment 1, there were a total of six dietary treatments with SID Val at 59.0, 62.5, 65.9, 69.6, 73.0, and 75.5% of Lys and for experiment 2 there were a total of seven dietary treatments with SID Val at 57.0, 60.6, 63.9, 67.5, 71.1, 74.4, and 78.0% of Lys. Experimental diets were formulated to ensure that Lys was the second limiting amino acid throughout the experiments. Initially, linear mixed models were fitted to data from each experiment. Then, data from the two experiments were combined to estimate dose-responses using a broken-line linear ascending (BLL) model, broken-line quadratic ascending (BLQ) model, or quadratic polynomial (QP). Model fit was compared using Bayesian information criterion (BIC). In experiment 1, ADG increased linearly (P = 0.009) with increasing SID Val:Lys with no apparent significant impact on G:F. In experiment 2, ADG and ADFI increased in a quadratic manner (P < 0.002) with increasing SID Val:Lys whereas G:F increased linearly (P < 0.001). Overall, the best-fitting model for ADG was a QP, whereby the maximum mean ADG was estimated at a 73.0% (95% CI: [69.5, >78.0%]) SID Val:Lys. For G:F, the overall best-fitting model was a QP with maximum estimated mean G:F at 69.0% (95% CI: [64.0, >78.0]) SID Val:Lys ratio. However, 99% of the maximum mean performance for ADG and G:F were achieved at 68% and 63% SID Val:Lys ratio, respectively. Therefore, the SID Val:Lys requirement ranged from 73.0% for maximum ADG to 63.2% SID Val:Lys to achieve 99% of maximum G:F in 25- to 45-kg BW pigs.
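The quadratic-polynomial dose-response step can be illustrated in a few lines; the pen means below are hypothetical, and the code simply locates the ratio maximizing the fitted QP curve and the ratio reaching 99% of that maximum.

import numpy as np

# Hypothetical pen-level means: SID Val:Lys (%) vs ADG (g/d)
val_lys = np.array([57.0, 60.6, 63.9, 67.5, 71.1, 74.4, 78.0])
adg = np.array([880, 905, 925, 940, 948, 950, 947], dtype=float)

# Quadratic polynomial (QP) dose-response: ADG = a*x^2 + b*x + c
a, b, c = np.polyfit(val_lys, adg, 2)
x_max = -b / (2 * a)                    # ratio giving maximum mean ADG
adg_max = np.polyval([a, b, c], x_max)

# Ratio reaching 99% of the maximum (smaller root of the quadratic equation)
target = 0.99 * adg_max
roots = np.roots([a, b, c - target])
x_99 = np.real(roots).min()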
RRegrs: an R package for computer-aided model selection with multiple regression models.
Tsiliki, Georgia; Munteanu, Cristian R; Seoane, Jose A; Fernandez-Lozano, Carlos; Sarimveis, Haralambos; Willighagen, Egon L
2015-01-01
Predictive regression models can be created with many different modelling approaches. Choices need to be made for data set splitting, cross-validation methods, specific regression parameters and best model criteria, as they all affect the accuracy and efficiency of the produced predictive models and therefore raise model reproducibility and comparison issues. Cheminformatics and bioinformatics are extensively using predictive modelling and exhibit a need for standardization of these methodologies in order to assist model selection and speed up the process of predictive model development. A tool accessible to all users, irrespectively of their statistical knowledge, would be valuable if it tests several simple and complex regression models and validation schemes, produces unified reports, and offers the option to be integrated into more extensive studies. Additionally, such methodology should be implemented as a free programming package, in order to be continuously adapted and redistributed by others. We propose an integrated framework for creating multiple regression models, called RRegrs. The tool offers the option of ten simple and complex regression methods combined with repeated 10-fold and leave-one-out cross-validation. Methods include Multiple Linear regression, Generalized Linear Model with Stepwise Feature Selection, Partial Least Squares regression, Lasso regression, and Support Vector Machines Recursive Feature Elimination. The new framework is an automated fully validated procedure which produces standardized reports to quickly oversee the impact of choices in modelling algorithms and assess the model and cross-validation results. The methodology was implemented as an open source R package, available at https://www.github.com/enanomapper/RRegrs, by reusing and extending on the caret package. The universality of the new methodology is demonstrated using five standard data sets from different scientific fields. Its efficiency in cheminformatics and QSAR modelling is shown with three use cases: proteomics data for surface-modified gold nanoparticles, nano-metal oxides descriptor data, and molecular descriptors for acute aquatic toxicity data. The results show that for all data sets RRegrs reports models with equal or better performance for both training and test sets than those reported in the original publications. Its good performance as well as its adaptability in terms of parameter optimization could make RRegrs a popular framework to assist the initial exploration of predictive models, and with that, the design of more comprehensive in silico screening applications. Graphical abstract: RRegrs is a computer-aided model selection framework for R multiple regression models; this is a fully validated procedure with application to QSAR modelling.
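The core idea, comparing several regression families under repeated 10-fold cross-validation, can be approximated in Python with scikit-learn as sketched below; this is not the RRegrs package itself, and the data set and model settings are placeholders.

import numpy as np
from sklearn.datasets import make_regression
from sklearn.model_selection import RepeatedKFold, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LinearRegression, Lasso
from sklearn.cross_decomposition import PLSRegression
from sklearn.svm import SVR

# Placeholder data standing in for a QSAR descriptor matrix
X, y = make_regression(n_samples=200, n_features=30, n_informative=8, noise=5.0, random_state=0)

models = {
    "MLR": LinearRegression(),
    "Lasso": Lasso(alpha=0.1),
    "PLS": PLSRegression(n_components=5),
    "SVM": SVR(kernel="rbf", C=10.0),
}

cv = RepeatedKFold(n_splits=10, n_repeats=10, random_state=1)
for name, model in models.items():
    pipe = make_pipeline(StandardScaler(), model)
    scores = cross_val_score(pipe, X, y, cv=cv, scoring="r2")
    print(f"{name}: R2 = {scores.mean():.3f} +/- {scores.std():.3f}")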
Fenske, Nora; Burns, Jacob; Hothorn, Torsten; Rehfuess, Eva A.
2013-01-01
Background: Most attempts to address undernutrition, responsible for one third of global child deaths, have fallen behind expectations. This suggests that the assumptions underlying current modelling and intervention practices should be revisited. Objective: We undertook a comprehensive analysis of the determinants of child stunting in India, and explored whether the established focus on linear effects of single risks is appropriate. Design: Using cross-sectional data for children aged 0–24 months from the Indian National Family Health Survey for 2005/2006, we populated an evidence-based diagram of immediate, intermediate and underlying determinants of stunting. We modelled linear, non-linear, spatial and age-varying effects of these determinants using additive quantile regression for four quantiles of the Z-score of standardized height-for-age and logistic regression for stunting and severe stunting. Results: At least one variable within each of eleven groups of determinants was significantly associated with height-for-age in the 35% Z-score quantile regression. The non-modifiable risk factors child age and sex, and the protective factors household wealth, maternal education and BMI showed the largest effects. Being a twin or multiple birth was associated with dramatically decreased height-for-age. Maternal age, maternal BMI, birth order and number of antenatal visits influenced child stunting in non-linear ways. Findings across the four quantile and two logistic regression models were largely comparable. Conclusions: Our analysis confirms the multifactorial nature of child stunting. It emphasizes the need to pursue a systems-based approach and to consider non-linear effects, and suggests that differential effects across the height-for-age distribution do not play a major role.
Fenske, Nora; Burns, Jacob; Hothorn, Torsten; Rehfuess, Eva A
2013-01-01
Most attempts to address undernutrition, responsible for one third of global child deaths, have fallen behind expectations. This suggests that the assumptions underlying current modelling and intervention practices should be revisited. We undertook a comprehensive analysis of the determinants of child stunting in India, and explored whether the established focus on linear effects of single risks is appropriate. Using cross-sectional data for children aged 0-24 months from the Indian National Family Health Survey for 2005/2006, we populated an evidence-based diagram of immediate, intermediate and underlying determinants of stunting. We modelled linear, non-linear, spatial and age-varying effects of these determinants using additive quantile regression for four quantiles of the Z-score of standardized height-for-age and logistic regression for stunting and severe stunting. At least one variable within each of eleven groups of determinants was significantly associated with height-for-age in the 35% Z-score quantile regression. The non-modifiable risk factors child age and sex, and the protective factors household wealth, maternal education and BMI showed the largest effects. Being a twin or multiple birth was associated with dramatically decreased height-for-age. Maternal age, maternal BMI, birth order and number of antenatal visits influenced child stunting in non-linear ways. Findings across the four quantile and two logistic regression models were largely comparable. Our analysis confirms the multifactorial nature of child stunting. It emphasizes the need to pursue a systems-based approach and to consider non-linear effects, and suggests that differential effects across the height-for-age distribution do not play a major role.
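As a simplified stand-in for the additive quantile regression used in the analysis, the sketch below fits a linear quantile regression at the 35% quantile with statsmodels on simulated (hypothetical) data; it illustrates the targeting of the lower tail of height-for-age rather than the mean.

import numpy as np
import statsmodels.api as sm

# Hypothetical data: HAZ modelled from child age (months) and maternal BMI
rng = np.random.default_rng(6)
n = 500
age = rng.uniform(0, 24, n)
bmi = rng.normal(21, 3, n)
haz = -0.04 * age + 0.05 * (bmi - 21) + rng.normal(0, 1.1, n)

X = sm.add_constant(np.column_stack([age, bmi]))
res35 = sm.QuantReg(haz, X).fit(q=0.35)   # lower-tail quantile targeted in the paper
print(res35.params)                       # intercept, age, and BMI effects at the 35% quantile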
Riviere, Marie-Karelle; Ueckert, Sebastian; Mentré, France
2016-10-01
Non-linear mixed effect models (NLMEMs) are widely used for the analysis of longitudinal data. To design these studies, optimal design based on the expected Fisher information matrix (FIM) can be used instead of performing time-consuming clinical trial simulations. In recent years, estimation algorithms for NLMEMs have transitioned from linearization toward more exact higher-order methods. Optimal design, on the other hand, has mainly relied on first-order (FO) linearization to calculate the FIM. Although efficient in general, FO cannot be applied to complex non-linear models and with difficulty in studies with discrete data. We propose an approach to evaluate the expected FIM in NLMEMs for both discrete and continuous outcomes. We used Markov Chain Monte Carlo (MCMC) to integrate the derivatives of the log-likelihood over the random effects, and Monte Carlo to evaluate its expectation w.r.t. the observations. Our method was implemented in R using Stan, which efficiently draws MCMC samples and calculates partial derivatives of the log-likelihood. Evaluated on several examples, our approach showed good performance with relative standard errors (RSEs) close to those obtained by simulations. We studied the influence of the number of MC and MCMC samples and computed the uncertainty of the FIM evaluation. We also compared our approach to Adaptive Gaussian Quadrature, Laplace approximation, and FO. Our method is available in R-package MIXFIM and can be used to evaluate the FIM, its determinant with confidence intervals (CIs), and RSEs with CIs. © The Author 2016. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
Incorporating concentration dependence in stable isotope mixing models.
Phillips, Donald L; Koch, Paul L
2002-01-01
Stable isotopes are often used as natural labels to quantify the contributions of multiple sources to a mixture. For example, C and N isotopic signatures can be used to determine the fraction of three food sources in a consumer's diet. The standard dual isotope, three source linear mixing model assumes that the proportional contribution of a source to a mixture is the same for both elements (e.g., C, N). This may be a reasonable assumption if the concentrations are similar among all sources. However, one source is often particularly rich or poor in one element (e.g., N), which logically leads to a proportionate increase or decrease in the contribution of that source to the mixture for that element relative to the other element (e.g., C). We have developed a concentration-weighted linear mixing model, which assumes that for each element, a source's contribution is proportional to the contributed mass times the elemental concentration in that source. The model is outlined for two elements and three sources, but can be generalized to n elements and n+1 sources. Sensitivity analyses for C and N in three sources indicated that varying the N concentration of just one source had large and differing effects on the estimated source contributions of mass, C, and N. The same was true for a case study of bears feeding on salmon, moose, and N-poor plants. In this example, the estimated biomass contribution of salmon from the concentration-weighted model was markedly less than the standard model estimate. Application of the model to a captive feeding study of captive mink fed on salmon, lean beef, and C-rich, N-poor beef fat reproduced very closely the known dietary proportions, whereas the standard model failed to yield a set of positive source proportions. Use of this concentration-weighted model is recommended whenever the elemental concentrations vary substantially among the sources, which may occur in a variety of ecological and geochemical applications of stable isotope analysis. Possible examples besides dietary and food web studies include stable isotope analysis of water sources in soils, plants, or water bodies; geological sources for soils or marine systems; decomposition and soil organic matter dynamics, and tracing animal migration patterns. A spreadsheet for performing the calculations for this model is available at http://www.epa.gov/wed/pages/models.htm.
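A minimal numerical sketch of the concentration-weighted mixing model for two elements (C, N) and three sources: the two isotope mass-balance equations are weighted by elemental concentration and solved together with the unit-sum constraint. All delta values, concentrations and the mixture signature below are hypothetical numbers chosen only so the system has a sensible solution; for these inputs the solver returns proportions near 0.40, 0.41 and 0.18.

```python
# Concentration-weighted mixing model, two isotopes (C, N), three sources.
# All numbers are hypothetical, for illustration only.
import numpy as np
from scipy.optimize import fsolve

dC = np.array([-18.0, -26.0, -27.0])   # delta 13C of sources (per mil)
dN = np.array([ 12.0,   3.0,   1.0])   # delta 15N of sources (per mil)
C_conc = np.array([45.0, 48.0, 42.0])  # %C of sources
N_conc = np.array([12.0,  8.0,  1.5])  # %N of sources

dC_mix, dN_mix = -23.0, 8.1            # observed mixture (hypothetical)

def equations(f):
    f = np.asarray(f)
    wc = f * C_conc                    # concentration-weighted contributions
    wn = f * N_conc
    return [np.sum(wc * dC) / np.sum(wc) - dC_mix,   # C isotope mass balance
            np.sum(wn * dN) / np.sum(wn) - dN_mix,   # N isotope mass balance
            np.sum(f) - 1.0]                         # proportions sum to one

f_hat = fsolve(equations, x0=[1/3, 1/3, 1/3])
print("biomass proportions (source 1, 2, 3):", np.round(f_hat, 3))
```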
Trägårdh, M; Lindén, D; Ploj, K; Johansson, A; Turnbull, A; Carlsson, B; Antonsson, M
2017-01-01
In this study, we present the translational modeling used in the discovery of AZD1979, a melanin‐concentrating hormone receptor 1 (MCHr1) antagonist aimed for treatment of obesity. The model quantitatively connects the relevant biomarkers and thereby closes the scaling path from rodent to man, as well as from dose to effect level. The complexity of individual modeling steps depends on the quality and quantity of data as well as the prior information; from semimechanistic body‐composition models to standard linear regression. Key predictions are obtained by standard forward simulation (e.g., predicting effect from exposure), as well as non‐parametric input estimation (e.g., predicting energy intake from longitudinal body‐weight data), across species. The work illustrates how modeling integrates data from several species, fills critical gaps between biomarkers, and supports experimental design and human dose‐prediction. We believe this approach can be of general interest for translation in the obesity field, and might inspire translational reasoning more broadly. PMID:28556607
Graphite grain-size spectrum and molecules from core-collapse supernovae
NASA Astrophysics Data System (ADS)
Clayton, Donald D.; Meyer, Bradley S.
2018-01-01
Our goal is to compute the abundances of carbon atomic complexes that emerge from the C + O cores of core-collapse supernovae. We utilize our chemical reaction network in which every atomic step of growth employs a quantum-mechanically guided reaction rate. This tool follows step-by-step the growth of linear carbon chain molecules from C atoms in the oxygen-rich C + O cores. We postulate that once linear chain molecules reach a sufficiently large size, they isomerize to ringed molecules, which serve as seeds for graphite grain growth. We demonstrate our technique for merging the molecular reaction network with a parallel program that can follow 10^17 steps of C addition onto the rare seed species. Due to radioactivity within the C + O core, abundant ambient oxygen is unable to convert C to CO, except to a limited degree that actually facilitates carbon molecular ejecta. But oxygen severely minimizes the linear-carbon-chain abundances. Despite the tiny abundances of these linear-carbon-chain molecules, they can give rise to a small abundance of ringed-carbon molecules that serve as the nucleations on which graphite grain growth builds. We expand the C + O-core gas adiabatically from 6000 K for 10^9 s when reactions have essentially stopped. These adiabatic tracks emulate the actual expansions of the supernova cores. Using a standard model of 10^56 atoms of C + O core ejecta having O/C = 3, we calculate standard ejection yields of graphite grains of all sizes produced, of the CO molecular abundance, of the abundances of linear-carbon molecules, and of Buckminsterfullerene. None of these except CO was expected from the C + O cores just a few years past.
Using linear programming to minimize the cost of nurse personnel.
Matthews, Charles H
2005-01-01
Nursing personnel costs make up a major portion of most hospital budgets. This report evaluates and optimizes the utility of the nurse personnel at the Internal Medicine Outpatient Clinic of Wake Forest University Baptist Medical Center. Linear programming (LP) was employed to determine the effective combination of nurses that would allow for all weekly clinic tasks to be covered while providing the lowest possible cost to the department. Linear programming is a standard feature of spreadsheet software that allows the operator to establish the variables to be optimized and then requires the operator to enter a series of constraints that will each have an impact on the ultimate outcome. The application is therefore able to quantify and stratify the nurses necessary to execute the tasks. With the report, a specific sensitivity analysis can be performed to assess just how sensitive the outcome is to the stress of adding or deleting a nurse to or from the payroll. The nurse employee cost structure in this study consisted of five certified nurse assistants (CNA), three licensed practical nurses (LPN), and five registered nurses (RN). The LP revealed that the outpatient clinic should staff four RNs, three LPNs, and four CNAs with 95 percent confidence of covering nurse demand on the floor. This combination of nurses would enable the clinic to: 1. Reduce annual staffing costs by 16 percent; 2. Force each level of nurse to be optimally productive by focusing on tasks specific to their expertise; 3. Assign accountability more efficiently as the nurses adhere to their specific duties; and 4. Ultimately provide a competitive advantage to the clinic as it relates to nurse employee and patient satisfaction. Linear programming can be used to solve capacity problems for just about any staffing situation, provided the model is indeed linear.
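A toy re-creation of this kind of staffing LP with scipy.optimize.linprog is sketched below. The annual costs, weekly task-hour demands, 40-hour work week and "higher tier can cover lower-tier tasks" structure are invented for illustration; they do not reproduce the clinic's actual constraint set.

```python
# Toy nurse-staffing LP: minimize annual cost subject to covering weekly
# task-hours, where RNs can cover LPN and CNA tasks and LPNs can cover CNA tasks.
import numpy as np
from scipy.optimize import linprog

cost = np.array([90_000, 60_000, 40_000])   # annual cost per RN, LPN, CNA (assumed)
hours_demand = np.array([140, 90, 120])     # weekly hours needing RN/LPN/CNA skill (assumed)
hours_per_nurse = 40.0                      # hours worked per nurse per week (assumed)

# Cumulative coverage constraints: 40*x_RN >= H_RN, 40*(x_RN+x_LPN) >= H_RN+H_LPN, ...
A_ub = -hours_per_nurse * np.tril(np.ones((3, 3)))
b_ub = -np.cumsum(hours_demand)

res = linprog(cost, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 3, method="highs")
staff = np.ceil(res.x)                      # round up to whole nurses
print("RN, LPN, CNA:", staff, " annual cost:", cost @ staff)
```

Adding or deleting a nurse type, or perturbing a demand figure, and re-solving gives the same kind of sensitivity information described in the report.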
A Dynamic Approach to Monitoring Particle Fallout in a Cleanroom Environment
NASA Technical Reports Server (NTRS)
Perry, Radford L., III
2010-01-01
This slide presentation discusses a mathematical model to monitor particle fallout in a cleanroom. "Cleanliness levels" do not increase simply with cleanroom class or exposure time because the levels are not linear. Activity level also impacts the effective cleanroom class. The numerical method presented leads to a simple Class-hour formulation that allows for dynamic monitoring of particle fallout using a standard air particle counter.
Phase field benchmark problems for dendritic growth and linear elasticity
Jokisaari, Andrea M.; Voorhees, P. W.; Guyer, Jonathan E.; ...
2018-03-26
We present the second set of benchmark problems for phase field models that are being jointly developed by the Center for Hierarchical Materials Design (CHiMaD) and the National Institute of Standards and Technology (NIST) along with input from other members in the phase field community. As the integrated computational materials engineering (ICME) approach to materials design has gained traction, there is an increasing need for quantitative phase field results. New algorithms and numerical implementations increase computational capabilities, necessitating standard problems to evaluate their impact on simulated microstructure evolution as well as their computational performance. We propose one benchmark problem for solidification and dendritic growth in a single-component system, and one problem for linear elasticity via the shape evolution of an elastically constrained precipitate. We demonstrate the utility and sensitivity of the benchmark problems by comparing the results of 1) dendritic growth simulations performed with different time integrators and 2) elastically constrained precipitate simulations with different precipitate sizes, initial conditions, and elastic moduli. As a result, these numerical benchmark problems will provide a consistent basis for evaluating different algorithms, both existing and those to be developed in the future, for accuracy and computational efficiency when applied to simulate physics often incorporated in phase field models.
Many-core graph analytics using accelerated sparse linear algebra routines
NASA Astrophysics Data System (ADS)
Kozacik, Stephen; Paolini, Aaron L.; Fox, Paul; Kelmelis, Eric
2016-05-01
Graph analytics is a key component in identifying emerging trends and threats in many real-world applications. Large-scale graph analytics frameworks provide a convenient and highly-scalable platform for developing algorithms to analyze large datasets. Although conceptually scalable, these techniques exhibit poor performance on modern computational hardware. Another model of graph computation has emerged that promises improved performance and scalability by using abstract linear algebra operations as the basis for graph analysis as laid out by the GraphBLAS standard. By using sparse linear algebra as the basis, existing highly efficient algorithms can be adapted to perform computations on the graph. This approach, however, is often less intuitive to graph analytics experts, who are accustomed to vertex-centric APIs such as Giraph, GraphX, and Tinkerpop. We are developing an implementation of the high-level operations supported by these APIs in terms of linear algebra operations. This implementation is backed by many-core implementations of the fundamental GraphBLAS operations required, and offers the advantages of both the intuitive programming model of a vertex-centric API and the performance of a sparse linear algebra implementation. This technology can reduce the number of nodes required, as well as the run-time for a graph analysis problem, enabling customers to perform more complex analysis with less hardware at lower cost. All of this can be accomplished without the requirement for the customer to make any changes to their analytics code, thanks to the compatibility with existing graph APIs.
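The sketch below illustrates the underlying idea — expressing a vertex-centric operation (breadth-first traversal) as repeated sparse matrix-vector products — using plain scipy.sparse. It is not the GraphBLAS API or the many-core implementation described above, just the linear-algebra formulation of the traversal.

```python
# Breadth-first search expressed as repeated sparse matrix-vector products,
# illustrating the "graph algorithms as sparse linear algebra" idea.
import numpy as np
import scipy.sparse as sp

# Small directed graph; edge i -> j is stored as A[j, i] = 1 so that
# A @ frontier propagates the frontier along out-edges.
edges = [(0, 1), (0, 2), (1, 3), (2, 3), (3, 4)]
n = 5
rows = [j for i, j in edges]
cols = [i for i, j in edges]
A = sp.csr_matrix((np.ones(len(edges)), (rows, cols)), shape=(n, n))

source = 0
level = np.full(n, -1)
level[source] = 0
frontier = np.zeros(n)
frontier[source] = 1.0

depth = 0
while frontier.any():
    depth += 1
    reached = A @ frontier                 # one sparse matrix-vector step
    new = (reached > 0) & (level < 0)      # mask out already-visited vertices
    level[new] = depth
    frontier = new.astype(float)

print("BFS levels from vertex 0:", level)
```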
NASA Technical Reports Server (NTRS)
Clark, William S.; Hall, Kenneth C.
1994-01-01
A linearized Euler solver for calculating unsteady flows in turbomachinery blade rows due to both incident gusts and blade motion is presented. The model accounts for blade loading, blade geometry, shock motion, and wake motion. Assuming that the unsteadiness in the flow is small relative to the nonlinear mean solution, the unsteady Euler equations can be linearized about the mean flow. This yields a set of linear variable coefficient equations that describe the small amplitude harmonic motion of the fluid. These linear equations are then discretized on a computational grid and solved using standard numerical techniques. For transonic flows, however, one must use a linear discretization which is a conservative linearization of the non-linear discretized Euler equations to ensure that shock impulse loads are accurately captured. Other important features of this analysis include a continuously deforming grid which eliminates extrapolation errors and hence, increases accuracy, and a new numerically exact, nonreflecting far-field boundary condition treatment based on an eigenanalysis of the discretized equations. Computational results are presented which demonstrate the computational accuracy and efficiency of the method and demonstrate the effectiveness of the deforming grid, far-field nonreflecting boundary conditions, and shock capturing techniques. A comparison of the present unsteady flow predictions to other numerical, semi-analytical, and experimental methods shows excellent agreement. In addition, the linearized Euler method presented requires one or two orders-of-magnitude less computational time than traditional time marching techniques making the present method a viable design tool for aeroelastic analyses.
Reppas-Chrysovitsinos, Efstathios; Sobek, Anna; MacLeod, Matthew
2016-06-15
Polymeric materials flowing through the technosphere are repositories of organic chemicals throughout their life cycle. Equilibrium partition ratios of organic chemicals between these materials and air (KMA) or water (KMW) are required for models of fate and transport, high-throughput exposure assessment and passive sampling. KMA and KMW have been measured for a growing number of chemical/material combinations, but significant data gaps still exist. We assembled a database of 363 KMA and 910 KMW measurements for 446 individual compounds and nearly 40 individual polymers and biopolymers, collected from 29 studies. We used the EPI Suite and ABSOLV software packages to estimate physicochemical properties of the compounds and we employed an empirical correlation based on Trouton's rule to adjust the measured KMA and KMW values to a standard reference temperature of 298 K. Then, we used a thermodynamic triangle with Henry's law constant to calculate a complete set of 1273 KMA and KMW values. Using simple linear regression, we developed a suite of single parameter linear free energy relationship (spLFER) models to estimate KMA from the EPI Suite-estimated octanol-air partition ratio (KOA) and KMW from the EPI Suite-estimated octanol-water (KOW) partition ratio. Similarly, using multiple linear regression, we developed a set of polyparameter linear free energy relationship (ppLFER) models to estimate KMA and KMW from ABSOLV-estimated Abraham solvation parameters. We explored the two LFER approaches to investigate (1) their performance in estimating partition ratios, and (2) uncertainties associated with treating all different polymers as a single "bulk" polymeric material compartment. The models we have developed are suitable for screening assessments of the tendency for organic chemicals to be emitted from materials, and for use in multimedia models of the fate of organic chemicals in the indoor environment. In screening applications we recommend that KMA and KMW be modeled as 0.06 ×KOA and 0.06 ×KOW respectively, with an uncertainty range of a factor of 15.
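A minimal illustration of the spLFER step and the quoted screening rule: fit log10(KMA) against log10(KOA) by ordinary least squares on synthetic data generated around the KMA ≈ 0.06 × KOA relationship. The KOA range, scatter and fitted coefficients are illustrative, not the paper's values.

```python
# Fitting a single-parameter LFER, log10(KMA) ~ log10(KOA), on synthetic data,
# and applying the screening rule KMA ≈ 0.06 * KOA quoted above.
import numpy as np

rng = np.random.default_rng(2)
log_koa = rng.uniform(4, 12, 80)                      # hypothetical KOA range
log_kma_true = log_koa + np.log10(0.06)               # "0.06 * KOA" relationship
log_kma_obs = log_kma_true + rng.normal(0, 0.6, 80)   # measurement scatter (assumed)

slope, intercept = np.polyfit(log_koa, log_kma_obs, 1)
resid = log_kma_obs - (slope * log_koa + intercept)
rmse = np.sqrt(np.mean(resid ** 2))
print(f"fitted spLFER: log KMA = {slope:.2f} * log KOA + {intercept:.2f}  (RMSE {rmse:.2f})")

# Screening estimate for a new compound with log KOA = 9 (hypothetical):
print("screening KMA ≈", 0.06 * 10 ** 9)
```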
Shen, Zaiyi; Würger, Alois; Lintuvuori, Juho S
2018-03-27
Using lattice Boltzmann simulations we study the hydrodynamics of an active spherical particle near a no-slip wall. We develop a computational model for an active Janus particle by considering different and independent mobilities on the two hemispheres and compare the behaviour to a standard squirmer model. We show that the topology of the far-field hydrodynamic nature of the active Janus particle is similar to the standard squirmer model, but in the near-field the hydrodynamics differ. In order to study how the near-field effects affect the interaction between the particle and a flat wall, we compare the behaviour of a Janus swimmer and a squirmer near a no-slip surface via extensive numerical simulations. Our results show generally a good agreement between these two models, but they reveal some key differences especially at low magnitudes of the squirming parameter. Notably the affinity of the particles to be trapped at a surface is increased for the active Janus particles when compared to standard squirmers. Finally, we find that when the particle is trapped on the surface, the velocity parallel to the surface exceeds the bulk swimming speed and scales linearly with the squirming parameter.
Tenan, Matthew S; Tweedell, Andrew J; Haynes, Courtney A
2017-01-01
The timing of muscle activity is a commonly applied analytic method to understand how the nervous system controls movement. This study systematically evaluates six classes of standard and statistical algorithms to determine muscle onset in both experimental surface electromyography (EMG) and simulated EMG with a known onset time. Eighteen participants had EMG collected from the biceps brachii and vastus lateralis while performing a biceps curl or knee extension, respectively. Three established methods and three statistical methods for EMG onset were evaluated. Linear envelope, Teager-Kaiser energy operator + linear envelope and sample entropy were the established methods evaluated while general time series mean/variance, sequential and batch processing of parametric and nonparametric tools, and Bayesian changepoint analysis were the statistical techniques used. Visual EMG onset (experimental data) and objective EMG onset (simulated data) were compared with algorithmic EMG onset via root mean square error and linear regression models for stepwise elimination of inferior algorithms. The top algorithms for both data types were analyzed for their mean agreement with the gold standard onset and evaluation of 95% confidence intervals. The top algorithms were all Bayesian changepoint analysis iterations where the parameter of the prior (p0) was zero. The best performing Bayesian algorithms were p0 = 0 and a posterior probability for onset determination at 60-90%. While existing algorithms performed reasonably, the Bayesian changepoint analysis methodology provides greater reliability and accuracy when determining the singular onset of EMG activity in a time series. Further research is needed to determine if this class of algorithms performs equally well when the time series has multiple bursts of muscle activity.
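A stripped-down changepoint sketch in the spirit of the Bayesian approach above: the EMG trace is modeled as zero-mean Gaussian noise whose variance jumps at the unknown onset sample, a uniform prior is placed over the onset location, and each segment's variance is profiled out with its maximum-likelihood estimate. This is an approximation for illustration, not the specific changepoint algorithm evaluated in the study.

```python
# Single-changepoint sketch for EMG onset: variance jump in zero-mean Gaussian
# noise, uniform prior over onset location, per-segment MLE variances.
import numpy as np

rng = np.random.default_rng(3)
fs = 1000                                   # Hz sampling rate (assumed)
true_onset = 600
emg = np.concatenate([rng.normal(0, 0.05, true_onset),   # baseline noise
                      rng.normal(0, 0.30, 400)])         # active muscle

def onset_posterior(x, min_seg=20):
    n = len(x)
    log_post = np.full(n, -np.inf)
    for t in range(min_seg, n - min_seg):
        s1 = np.var(x[:t]) + 1e-12
        s2 = np.var(x[t:]) + 1e-12
        # Profile Gaussian log-likelihood (t-dependent terms only)
        log_post[t] = -0.5 * (t * np.log(s1) + (n - t) * np.log(s2))
    log_post -= log_post.max()
    post = np.exp(log_post)
    return post / post.sum()

post = onset_posterior(emg)
map_onset = int(np.argmax(post))
print(f"MAP onset: sample {map_onset} ({map_onset / fs * 1000:.0f} ms); "
      f"posterior mass within +/-10 samples: {post[map_onset-10:map_onset+11].sum():.2f}")
```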
Astvad, Karen Marie Thyssen; Meletiadis, Joseph; Whalley, Sarah
2017-01-01
The invertebrate model organism Galleria mellonella can be used to assess the efficacy of treatment of fungal infection. The fluconazole dose best mimicking human exposure during licensed dosing is unknown. We validated a bioassay for fluconazole detection in hemolymph and determined the fluconazole pharmacokinetics and pharmacodynamics in larval hemolymph in order to estimate a humanized dose for future experiments. A bioassay using 4-mm agar wells, 20 μl hemolymph, and the hypersusceptible Candida albicans DSY2621 was established and compared to a validated liquid chromatography-tandem mass spectrometry (LC-MS/MS) method. G. mellonella larvae were injected with fluconazole (5, 10, and 20 mg/kg of larval weight), and hemolymph was harvested for 24 h for pharmacokinetics calculations. The exposure was compared to the human exposure during standard licensed dosing. The bioassay had a linear standard curve between 1 and 20 mg/liter. Accuracy and coefficients of variation (percent) values were below 10%. The Spearman coefficient between assays was 0.94. Fluconazole larval pharmacokinetics followed one-compartment linear kinetics, with the 24-h area under the hemolymph concentration-time curve (AUC24 h) being 93, 173, and 406 mg · h/liter for the three doses compared to 400 mg · h/liter in humans under licensed treatment. In conclusion, a bioassay was validated for fluconazole determination in hemolymph. The pharmacokinetics was linear. An exposure comparable to the human exposure during standard licensed dosing was obtained with 20 mg/kg. PMID:28760893
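The dose comparison above can be illustrated with a one-compartment, first-order kinetics sketch and a trapezoidal AUC0-24h. The volume of distribution and elimination rate constant below are invented so that 20 mg/kg lands near the reported ~400 mg · h/liter; they are not fitted larval parameters.

```python
# One-compartment, first-order kinetics sketch with trapezoidal AUC0-24h.
# V and k are assumed values chosen only to land near the reported exposure.
import numpy as np

V = 0.7    # L/kg, apparent volume of distribution (assumed)
k = 0.05   # 1/h, first-order elimination rate constant (assumed)
t = np.linspace(0, 24, 241)                  # hours

for dose in (5.0, 10.0, 20.0):               # mg/kg larval weight
    conc = dose / V * np.exp(-k * t)         # mg/L in hemolymph
    auc24 = np.trapz(conc, t)                # mg*h/L by the trapezoidal rule
    print(f"{dose:4.0f} mg/kg -> AUC0-24h ~ {auc24:5.0f} mg*h/L")

print("human reference exposure (licensed dosing): ~400 mg*h/L")
```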
Large scale structures in the kinetic gravity braiding model that can be unbraided
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kimura, Rampei; Yamamoto, Kazuhiro, E-mail: rampei@theo.phys.sci.hiroshima-u.ac.jp, E-mail: kazuhiro@hiroshima-u.ac.jp
2011-04-01
We study cosmological consequences of a kinetic gravity braiding model, which is proposed as an alternative to the dark energy model. The kinetic braiding model we study is characterized by a parameter n, which corresponds to the original galileon cosmological model for n = 1. We find that the background expansion of the universe of the kinetic braiding model is the same as the Dvali-Turner's model, which reduces to that of the standard cold dark matter model with a cosmological constant (ΛCDM model) for n equal to infinity. We also find that the evolution of the linear cosmological perturbation in the kinetic braiding model reduces to that of the ΛCDM model for n = ∞. Then, we focus our study on the growth history of the linear density perturbation as well as the spherical collapse in the nonlinear regime of the density perturbations, which might be important in order to distinguish between the kinetic braiding model and the ΛCDM model when n is finite. The theoretical prediction for the large scale structure is confronted with the multipole power spectrum of the luminous red galaxy sample of the Sloan Digital Sky survey. We also discuss future prospects of constraining the kinetic braiding model using a future redshift survey like the WFMOS/SuMIRe PFS survey as well as the cluster redshift distribution in the South Pole Telescope survey.
Inferring Nonlinear Neuronal Computation Based on Physiologically Plausible Inputs
McFarland, James M.; Cui, Yuwei; Butts, Daniel A.
2013-01-01
The computation represented by a sensory neuron's response to stimuli is constructed from an array of physiological processes both belonging to that neuron and inherited from its inputs. Although many of these physiological processes are known to be nonlinear, linear approximations are commonly used to describe the stimulus selectivity of sensory neurons (i.e., linear receptive fields). Here we present an approach for modeling sensory processing, termed the Nonlinear Input Model (NIM), which is based on the hypothesis that the dominant nonlinearities imposed by physiological mechanisms arise from rectification of a neuron's inputs. Incorporating such ‘upstream nonlinearities’ within the standard linear-nonlinear (LN) cascade modeling structure implicitly allows for the identification of multiple stimulus features driving a neuron's response, which become directly interpretable as either excitatory or inhibitory. Because its form is analogous to an integrate-and-fire neuron receiving excitatory and inhibitory inputs, model fitting can be guided by prior knowledge about the inputs to a given neuron, and elements of the resulting model can often result in specific physiological predictions. Furthermore, by providing an explicit probabilistic model with a relatively simple nonlinear structure, its parameters can be efficiently optimized and appropriately regularized. Parameter estimation is robust and efficient even with large numbers of model components and in the context of high-dimensional stimuli with complex statistical structure (e.g. natural stimuli). We describe detailed methods for estimating the model parameters, and illustrate the advantages of the NIM using a range of example sensory neurons in the visual and auditory systems. We thus present a modeling framework that can capture a broad range of nonlinear response functions while providing physiologically interpretable descriptions of neural computation. PMID:23874185
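A toy forward pass with the NIM structure is sketched below: each upstream input is a linear stimulus filter passed through a rectifier, the rectified inputs are combined with excitatory or inhibitory signs, and a spiking nonlinearity converts the summed drive into a firing rate. The filters, stimulus and nonlinearities are arbitrary stand-ins, not fitted model components.

```python
# Toy forward pass of a Nonlinear Input Model (NIM)-style cascade:
# rectified upstream subunits, signed (excitatory/inhibitory) summation,
# and a spiking nonlinearity driving Poisson spike counts.
import numpy as np

rng = np.random.default_rng(4)
T, D = 1000, 40                        # time bins, stimulus dimensions
stim = rng.normal(size=(T, D))         # white-noise stimulus (assumed)

filters = rng.normal(size=(3, D)) / np.sqrt(D)   # three subunit filters (stand-ins)
signs = np.array([+1.0, +1.0, -1.0])             # two excitatory, one inhibitory

relu = lambda x: np.maximum(x, 0.0)              # upstream rectification
drive = (signs * relu(stim @ filters.T)).sum(axis=1)
rate = np.log1p(np.exp(drive - 1.0))             # softplus spiking nonlinearity
spikes = rng.poisson(rate * 0.05)                # Poisson spike counts per bin

print("mean rate:", rate.mean(), " total spikes:", spikes.sum())
```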
Statistical power for detecting trends with applications to seabird monitoring
Hatch, Shyla A.
2003-01-01
Power analysis is helpful in defining goals for ecological monitoring and evaluating the performance of ongoing efforts. I examined detection standards proposed for population monitoring of seabirds using two programs (MONITOR and TRENDS) specially designed for power analysis of trend data. Neither program models within- and among-years components of variance explicitly and independently, thus an error term that incorporates both components is an essential input. Residual variation in seabird counts consisted of day-to-day variation within years and unexplained variation among years in approximately equal parts. The appropriate measure of error for power analysis is the standard error of estimation (S.E.est) from a regression of annual means against year. Replicate counts within years are helpful in minimizing S.E.est but should not be treated as independent samples for estimating power to detect trends. Other issues include a choice of assumptions about variance structure and selection of an exponential or linear model of population change. Seabird count data are characterized by strong correlations between S.D. and mean, thus a constant CV model is appropriate for power calculations. Time series were fit about equally well with exponential or linear models, but log transformation ensures equal variances over time, a basic assumption of regression analysis. Using sample data from seabird monitoring in Alaska, I computed the number of years required (with annual censusing) to detect trends of -1.4% per year (50% decline in 50 years) and -2.7% per year (50% decline in 25 years). At α = 0.05 and a desired power of 0.9, estimated study intervals ranged from 11 to 69 years depending on species, trend, software, and study design. Power to detect a negative trend of 6.7% per year (50% decline in 10 years) is suggested as an alternative standard for seabird monitoring that achieves a reasonable match between statistical and biological significance.
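A simulation-based counterpart to this power analysis can be written in a few lines: simulate an exponential decline with constant-CV (lognormal) count error, regress log counts on year, and take the fraction of replicates with a significant negative slope as the power. The CV, starting abundance and trend below are illustrative values, not the Alaskan monitoring data.

```python
# Power to detect an exponential decline by simulation: lognormal (constant-CV)
# count error, regression of log counts on year, fraction of significant
# negative slopes across replicates. Parameter values are illustrative.
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)

def power(trend=-0.027, years=25, cv=0.20, mean0=1000.0, alpha=0.05, n_sim=2000):
    yr = np.arange(years)
    expected = mean0 * np.exp(trend * yr)
    sigma = np.sqrt(np.log(1 + cv**2))          # lognormal sigma for a given CV
    hits = 0
    for _ in range(n_sim):
        counts = expected * rng.lognormal(-0.5 * sigma**2, sigma, years)
        slope, _, _, p, _ = stats.linregress(yr, np.log(counts))
        hits += (p < alpha) and (slope < 0)
    return hits / n_sim

for years in (10, 15, 20, 25):
    print(f"{years:2d} years of annual censusing -> power {power(years=years):.2f}")
```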
Parametric resonance in the early Universe—a fitting analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Figueroa, Daniel G.; Torrentí, Francisco, E-mail: daniel.figueroa@cern.ch, E-mail: f.torrenti@csic.es
Particle production via parametric resonance in the early Universe, is a non-perturbative, non-linear and out-of-equilibrium phenomenon. Although it is a well studied topic, whenever a new scenario exhibits parametric resonance, a full re-analysis is normally required. To avoid this tedious task, many works present often only a simplified linear treatment of the problem. In order to surpass this circumstance in the future, we provide a fitting analysis of parametric resonance through all its relevant stages: initial linear growth, non-linear evolution, and relaxation towards equilibrium. Using lattice simulations in an expanding grid in 3+1 dimensions, we parametrize the dynamics' outcome scanning over the relevant ingredients: role of the oscillatory field, particle coupling strength, initial conditions, and background expansion rate. We emphasize the inaccuracy of the linear calculation of the decay time of the oscillatory field, and propose a more appropriate definition of this scale based on the subsequent non-linear dynamics. We provide simple fits to the relevant time scales and particle energy fractions at each stage. Our fits can be applied to post-inflationary preheating scenarios, where the oscillatory field is the inflaton, or to spectator-field scenarios, where the oscillatory field can be e.g. a curvaton, or the Standard Model Higgs.
Groth, Kevin M; Granata, Kevin P
2008-06-01
Due to the mathematical complexity of current musculoskeletal spine models, there is a need for computationally efficient models of the intervertebral disk (IVD). The aim of this study is to develop a mathematical model that will adequately describe the motion of the IVD under axial cyclic loading as well as maintain computational efficiency for use in future musculoskeletal spine models. Several studies have successfully modeled the creep characteristics of the IVD using the three-parameter viscoelastic standard linear solid (SLS) model. However, when the SLS model is subjected to cyclic loading, it underestimates the load relaxation, the cyclic modulus, and the hysteresis of the human lumbar IVD. A viscoelastic standard nonlinear solid (SNS) model was used to predict the response of the human lumbar IVD subjected to low-frequency vibration. Nonlinear behavior of the SNS model was introduced by adding a strain-dependent elastic modulus to the SLS model. Parameters of the SNS model were estimated from experimental load deformation and stress-relaxation curves obtained from the literature. The SNS model was able to predict the cyclic modulus of the IVD at frequencies of 0.01 Hz, 0.1 Hz, and 1 Hz. Furthermore, the SNS model was able to quantitatively predict the load relaxation at a frequency of 0.01 Hz. However, model performance was unsatisfactory when predicting load relaxation and hysteresis at higher frequencies (0.1 Hz and 1 Hz). The SLS model of the lumbar IVD may require strain-dependent elastic and viscous behavior to represent the dynamic response to compressive strain.
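For reference, the three-parameter standard linear solid can be simulated in a few lines by integrating the internal Maxwell stress under a prescribed sinusoidal strain; the moduli, viscosity and strain amplitude below are arbitrary illustrative values, and the strain-dependent modulus that distinguishes the SNS variant is not included.

```python
# Standard linear solid (Zener) model under sinusoidal strain: a spring E1 in
# parallel with a Maxwell element (spring E2 + dashpot eta). Explicit Euler on
# the Maxwell stress. Parameter values are arbitrary, for illustration only.
import numpy as np

E1, E2, eta = 2.0, 8.0, 40.0        # MPa, MPa, MPa*s (assumed)
freq = 0.1                          # Hz loading frequency
amp = 0.05                          # strain amplitude

dt = 1e-3
t = np.arange(0.0, 3.0 / freq, dt)  # three load cycles
strain = amp * np.sin(2 * np.pi * freq * t)
dstrain = np.gradient(strain, dt)

sigma_maxwell = 0.0
stress = np.empty_like(t)
for i in range(t.size):
    # Maxwell element: d(sigma_m)/dt = E2 * (de/dt - sigma_m / eta)
    sigma_maxwell += dt * E2 * (dstrain[i] - sigma_maxwell / eta)
    stress[i] = E1 * strain[i] + sigma_maxwell

# Cyclic modulus over the last cycle: stress range divided by strain range
last = t > (2.0 / freq)
cyc_mod = (stress[last].max() - stress[last].min()) / (2 * amp)
print(f"cyclic modulus at {freq} Hz ~ {cyc_mod:.2f} MPa")
```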
NASA Astrophysics Data System (ADS)
Goyal, Deepak
Textile composites have a wide variety of applications in the aerospace, sports, automobile, marine and medical industries. Due to the availability of a variety of textile architectures and numerous parameters associated with each, optimal design through extensive experimental testing is not practical. Predictive tools are needed to perform virtual experiments of various options. The focus of this research is to develop a better understanding of linear elastic response, plasticity and material damage induced nonlinear behavior and mechanics of load flow in textile composites. Textile composites exhibit multiple scales of complexity. The various textile behaviors are analyzed using two-scale finite element modeling. A framework to allow use of a wide variety of damage initiation and growth models is proposed. Plasticity-induced non-linear behavior of 2x2 braided composites is investigated using a modeling approach based on Hill's yield function for orthotropic materials. The mechanics of load flow in textile composites is demonstrated using special non-standard postprocessing techniques that not only highlight the important details, but also transform the extensive amount of output data into comprehensible modes of behavior. The investigations show that the damage models differ from each other in terms of amount of degradation as well as the properties to be degraded under a particular failure mode. When compared with experimental data, predictions of some models match well for glass/epoxy composites whereas others match well for carbon/epoxy composites. However, all the models predicted very similar response when damage factors were made similar, which shows that the magnitudes of the damage factors are very important. Full 3D as well as equivalent tape laminate predictions lie within the range of the experimental data for a wide variety of braided composites with different material systems, which validated the plasticity analysis. Conclusions about the effect of fiber type on the degree of plasticity-induced non-linearity in a +/-25° braid depend on the measure of non-linearity. Investigations into the mechanics of load flow in textile composites bring new insights into textile behavior. For example, the reasons for the existence of transverse shear stress under uni-axial loading and the occurrence of stress concentrations at certain locations were explained.
National economic models of industrial water use and waste treatment. [technology transfer
NASA Technical Reports Server (NTRS)
Thompson, R. G.; Calloway, J. A.
1974-01-01
The effects of air emission and solid waste restrictions on production costs and resource use by industry are investigated. A linear program is developed to analyze how resource use, production cost, and waste discharges in different types of production may be affected by resource-limiting policies of the government. The method is applied to modeling ethylene and ammonia plants at the design stage. Results show that the effects of increasingly restrictive wastewater effluent standards on energy use were small in both plants. Plant models were developed for other industries and the program estimated the effects of wastewater discharge policies on industrial production costs.
NASA standard: Trend analysis techniques
NASA Technical Reports Server (NTRS)
1990-01-01
Descriptive and analytical techniques for NASA trend analysis applications are presented in this standard. Trend analysis is applicable in all organizational elements of NASA connected with, or supporting, developmental/operational programs. This document should be consulted for any data analysis activity requiring the identification or interpretation of trends. Trend analysis is neither a precise term nor a circumscribed methodology: it generally connotes quantitative analysis of time-series data. For NASA activities, the appropriate and applicable techniques include descriptive and graphical statistics, and the fitting or modeling of data by linear, quadratic, and exponential models. Usually, but not always, the data is time-series in nature. Concepts such as autocorrelation and techniques such as Box-Jenkins time-series analysis would only rarely apply and are not included in this document. The basic ideas needed for qualitative and quantitative assessment of trends along with relevant examples are presented.
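As an example of the fitting step named above, linear, quadratic and exponential trend models can be fit to a time series by ordinary least squares with NumPy; the data here are synthetic.

```python
# Fitting linear, quadratic, and exponential trend models to a (synthetic)
# time series with ordinary least squares, as named in the standard above.
import numpy as np

rng = np.random.default_rng(6)
t = np.arange(24)                                      # e.g. months of data
y = 50 * np.exp(0.04 * t) + rng.normal(0, 3, t.size)   # synthetic measurements

lin = np.polyfit(t, y, 1)                              # linear trend
quad = np.polyfit(t, y, 2)                             # quadratic trend
b, log_a = np.polyfit(t, np.log(y), 1)                 # exponential: y = a*exp(b*t)

def rss(pred):
    return float(np.sum((y - pred) ** 2))

print("linear      RSS:", round(rss(np.polyval(lin, t)), 1))
print("quadratic   RSS:", round(rss(np.polyval(quad, t)), 1))
print("exponential RSS:", round(rss(np.exp(log_a) * np.exp(b * t)), 1))
```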
On the LHC sensitivity for non-thermalised hidden sectors
NASA Astrophysics Data System (ADS)
Kahlhoefer, Felix
2018-04-01
We show under rather general assumptions that hidden sectors that never reach thermal equilibrium in the early Universe are also inaccessible for the LHC. In other words, any particle that can be produced at the LHC must either have been in thermal equilibrium with the Standard Model at some point or must be produced via the decays of another hidden sector particle that has been in thermal equilibrium. To reach this conclusion, we parametrise the cross section connecting the Standard Model to the hidden sector in a very general way and use methods from linear programming to calculate the largest possible number of LHC events compatible with the requirement of non-thermalisation. We find that even the HL-LHC cannot possibly produce more than a few events with energy above 10 GeV involving states from a non-thermalised hidden sector.
NASA Astrophysics Data System (ADS)
Louda, Petr; Straka, Petr; Příhoda, Jaromír
2018-06-01
The contribution deals with the numerical simulation of transonic flows through a linear turbine blade cascade. Numerical simulations were carried out partly for the standard computational domain with various outlet boundary conditions using the algebraic transition model of Straka and Příhoda [1] connected with the EARSM turbulence model of Hellsten [2], and partly for the computational domain corresponding to the geometrical arrangement in the wind tunnel using the γ-ζ transition model of Dick et al. [3] with the SST turbulence model. Numerical results were compared with experimental data. The agreement between numerical and experimental results is acceptable, given the complicated experimental configuration.
Age estimation standards for a Western Australian population using the coronal pulp cavity index.
Karkhanis, Shalmira; Mack, Peter; Franklin, Daniel
2013-09-10
Age estimation is a vital aspect in creating a biological profile and aids investigators by narrowing down potentially matching identities from the available pool. In addition to routine casework, in the present global political scenario, age estimation in living individuals is required in cases of refugees, asylum seekers, human trafficking and to ascertain age of criminal responsibility. Thus robust methods that are simple, non-invasive and ethically viable are required. The aim of the present study is, therefore, to test the reliability and applicability of the coronal pulp cavity index method, for the purpose of developing age estimation standards for an adult Western Australian population. A total of 450 orthopantomograms (220 females and 230 males) of Australian individuals were analyzed. Crown and coronal pulp chamber heights were measured in the mandibular left and right premolars, and the first and second molars. These measurements were then used to calculate the tooth coronal index. Data were analyzed using paired sample t-tests to assess bilateral asymmetry followed by simple linear and multiple regressions to develop age estimation models. The most accurate age estimate based on a simple linear regression model was obtained with the mandibular right first molar (SEE ±8.271 years). Multiple regression models improved age prediction accuracy considerably, and the most accurate model used the bilateral first and second molars (SEE ±6.692 years). This study represents the first investigation of this method in a Western Australian population and our results indicate that the method is suitable for forensic application.
An improved interfacial bonding model for material interface modeling
Lin, Liqiang; Wang, Xiaodu; Zeng, Xiaowei
2016-01-01
An improved interfacial bonding model was proposed from a potential-function point of view to investigate interfacial interactions in polycrystalline materials. It characterizes both attractive and repulsive interfacial interactions and can be applied to model different material interfaces. A study of the path dependence of the work of separation indicates that the transformation of separation work is smooth in the normal and tangential directions and that the proposed model guarantees the consistency of the cohesive constitutive model. The improved interfacial bonding model was verified through a simple compression test in a standard hexagonal structure. The error between analytical solutions and numerical results from the proposed model is reasonable in the linear elastic region. Ultimately, we investigated the mechanical behavior of the extrafibrillar matrix in bone and the simulation results agreed well with experimental observations of bone fracture. PMID:28584343
NASA Astrophysics Data System (ADS)
Preynas, M.; Goniche, M.; Hillairet, J.; Litaudon, X.; Ekedahl, A.; Colas, L.
2013-01-01
To achieve steady-state operation on future fusion devices, in particular on ITER, the coupling of the lower hybrid wave must be optimized over a wide range of edge conditions. However, under some specific conditions, deleterious effects on the lower hybrid current drive (LHCD) coupling are sometimes observed on Tore Supra. To this end, dedicated LHCD experiments have been performed using the LHCD system of Tore Supra, composed of two different conceptual designs of launcher: the fully active multi-junction (FAM) and the new passive active multi-junction (PAM) antennas. A non-linear interaction between the electron density and the electric field has been characterized in a thin plasma layer in front of the two LHCD antennas. The resulting dependence of the power reflection coefficient (RC) on the LHCD power is not predicted by the standard linear theory of the LH wave coupling. A theoretical model is suggested to describe the non-linear wave-plasma interaction induced by the ponderomotive effect and implemented in a new full wave LHCD code, PICCOLO-2D (ponderomotive effect in a coupling code of lower hybrid wave-2D). The code self-consistently treats the wave propagation in the antenna vicinity and its interaction with the local edge plasma density. The simulation reproduces very well the occurrence of a non-linear behaviour in the coupling observed in the LHCD experiments. The important differences and trends between the FAM and the PAM antennas, especially a larger increase in RC for the FAM, are also reproduced by the PICCOLO-2D simulation. The working hypothesis that the ponderomotive effect contributes to the non-linear behaviour observed in LHCD coupling is therefore validated through this comprehensive modelling for the first time on the FAM and PAM antennas on Tore Supra.
Twist Model Development and Results from the Active Aeroelastic Wing F/A-18 Aircraft
NASA Technical Reports Server (NTRS)
Lizotte, Andrew M.; Allen, Michael J.
2007-01-01
Understanding the wing twist of the active aeroelastic wing (AAW) F/A-18 aircraft is a fundamental research objective for the program and offers numerous benefits. In order to clearly understand the wing flexibility characteristics, a model was created to predict real-time wing twist. A reliable twist model allows the prediction of twist for flight simulation, provides insight into aircraft performance uncertainties, and assists with computational fluid dynamic and aeroelastic issues. The left wing of the aircraft was heavily instrumented during the first phase of the active aeroelastic wing program allowing deflection data collection. Traditional data processing steps were taken to reduce flight data, and twist predictions were made using linear regression techniques. The model predictions determined a consistent linear relationship between the measured twist and aircraft parameters, such as surface positions and aircraft state variables. Error in the original model was reduced in some cases by using a dynamic pressure-based assumption. This technique produced excellent predictions for flight between the standard test points and accounted for nonlinearities in the data. This report discusses data processing techniques and twist prediction validation, and provides illustrative and quantitative results.
Stacul, Stefano; Squeglia, Nunziante
2018-02-15
A Boundary Element Method (BEM) approach was developed for the analysis of pile groups. The proposed method includes: the non-linear behavior of the soil, represented by a hyperbolic modulus reduction curve; the non-linear response of reinforced concrete pile sections, also taking into account the influence of tension stiffening; the influence of suction, modeled with the Modified Kovacs model by increasing the stiffness of the shallow portions of soil; and the pile-group shadowing effect, modeled using an approach similar to that proposed in the Strain Wedge Model for pile-group analyses. The proposed BEM method saves computational effort compared to more sophisticated codes such as VERSAT-P3D, PLAXIS 3D and FLAC-3D, and provides reliable results using input data from a standard site investigation. The reliability of this method was verified by comparing results with data from full-scale and centrifuge tests on single piles and pile groups. A comparison is presented between measured and computed data on a laterally loaded fixed-head pile group composed of reinforced concrete bored piles. The results of the proposed method are shown to be in good agreement with those obtained in situ.