ERIC Educational Resources Information Center
Li, Deping; Oranje, Andreas
2007-01-01
Two versions of a general method for approximating the standard error of regression effect estimates within an IRT-based latent regression model are compared. The general method is based on Binder's (1983) approach, accounting for complex samples and finite populations by Taylor series linearization. In contrast, the current National Assessment of…
NASA Technical Reports Server (NTRS)
Wiggins, R. A.
1972-01-01
The discrete general linear inverse problem reduces to a set of m equations in n unknowns. There is generally no unique solution, but we can find k linear combinations of parameters for which restraints are determined. The parameter combinations are given by the eigenvectors of the coefficient matrix. The number k is determined by the ratio of the standard deviations of the observations to the allowable standard deviations in the resulting solution. Various linear combinations of the eigenvectors can be used to determine parameter resolution and information distribution among the observations. Thus we can determine where information comes from among the observations and exactly how it constrains the set of possible models. The application of such analyses to surface-wave and free-oscillation observations indicates that (1) phase, group, and amplitude observations for any particular mode provide basically the same type of information about the model; (2) observations of overtones can enhance the resolution considerably; and (3) the degree of resolution has generally been overestimated for many model determinations made from surface waves.
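A minimal numerical sketch of this kind of truncated-eigenvector (SVD) solution may help fix ideas; all names and values below (G, d, sigma_d, sigma_m_max, the 20×8 problem size) are illustrative assumptions, not the paper's data or code:

```python
# Hypothetical sketch of a truncated-SVD solution to a discrete linear
# inverse problem G m = d, in the spirit of the abstract above.
import numpy as np

rng = np.random.default_rng(0)
G = rng.normal(size=(20, 8))           # 20 observations, 8 unknowns (assumed)
m_true = rng.normal(size=8)
sigma_d = 0.1                          # observation standard deviation (assumed)
d = G @ m_true + rng.normal(scale=sigma_d, size=20)

U, s, Vt = np.linalg.svd(G, full_matrices=False)

# Keep only the k eigenvector combinations whose amplification of the data
# noise stays below an allowable standard deviation in the solution.
sigma_m_max = 1.0
keep = (sigma_d / s) < sigma_m_max     # noise-amplification test
k = int(keep.sum())

m_hat = Vt[:k].T @ ((U[:, :k].T @ d) / s[:k])
print(f"resolved combinations k = {k}")
print("estimate:", np.round(m_hat, 3))
```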
ERIC Educational Resources Information Center
Zu, Jiyun; Yuan, Ke-Hai
2012-01-01
In the nonequivalent groups with anchor test (NEAT) design, the standard error of linear observed-score equating is commonly estimated by an estimator derived assuming multivariate normality. However, real data are seldom normally distributed, causing this normal estimator to be inconsistent. A general estimator, which does not rely on the…
On the Linear Relation between the Mean and the Standard Deviation of a Response Time Distribution
ERIC Educational Resources Information Center
Wagenmakers, Eric-Jan; Brown, Scott
2007-01-01
Although it is generally accepted that the spread of a response time (RT) distribution increases with the mean, the precise nature of this relation remains relatively unexplored. The authors show that in several descriptive RT distributions, the standard deviation increases linearly with the mean. Results from a wide range of tasks from different…
A Constrained Linear Estimator for Multiple Regression
ERIC Educational Resources Information Center
Davis-Stober, Clintin P.; Dana, Jason; Budescu, David V.
2010-01-01
"Improper linear models" (see Dawes, Am. Psychol. 34:571-582, "1979"), such as equal weighting, have garnered interest as alternatives to standard regression models. We analyze the general circumstances under which these models perform well by recasting a class of "improper" linear models as "proper" statistical models with a single predictor. We…
Henry, B I; Langlands, T A M; Wearne, S L
2006-09-01
We have revisited the problem of anomalously diffusing species, modeled at the mesoscopic level using continuous time random walks, to include linear reaction dynamics. If a constant proportion of walkers are added or removed instantaneously at the start of each step then the long time asymptotic limit yields a fractional reaction-diffusion equation with a fractional order temporal derivative operating on both the standard diffusion term and a linear reaction kinetics term. If the walkers are added or removed at a constant per capita rate during the waiting time between steps then the long time asymptotic limit has a standard linear reaction kinetics term but a fractional order temporal derivative operating on a nonstandard diffusion term. Results from the above two models are compared with a phenomenological model with standard linear reaction kinetics and a fractional order temporal derivative operating on a standard diffusion term. We have also developed further extensions of the CTRW model to include more general reaction dynamics.
On the linear relation between the mean and the standard deviation of a response time distribution.
Wagenmakers, Eric-Jan; Brown, Scott
2007-07-01
Although it is generally accepted that the spread of a response time (RT) distribution increases with the mean, the precise nature of this relation remains relatively unexplored. The authors show that in several descriptive RT distributions, the standard deviation increases linearly with the mean. Results from a wide range of tasks from different experimental paradigms support a linear relation between RT mean and RT standard deviation. Both R. Ratcliff's (1978) diffusion model and G. D. Logan's (1988) instance theory of automatization provide explanations for this linear relation. The authors identify and discuss 3 specific boundary conditions for the linear law to hold. The law constrains RT models and supports the use of the coefficient of variation to (a) compare variability while controlling for differences in baseline speed of processing and (b) assess whether changes in performance with practice are due to quantitative speedup or qualitative reorganization. Copyright 2007 APA.
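As a toy illustration of the point about the coefficient of variation (not taken from the paper, and with made-up condition means and a made-up mean-SD slope), simulated RT samples whose standard deviation grows linearly with the mean have a roughly constant CV:

```python
# Minimal illustration: a linear mean-SD relation implies a stable CV.
import numpy as np

rng = np.random.default_rng(1)
for mean_rt in (400.0, 600.0, 800.0):        # hypothetical condition means (ms)
    sd_rt = 0.25 * mean_rt                   # assumed linear relation, slope 0.25
    rt = rng.gamma(shape=(mean_rt / sd_rt) ** 2,
                   scale=sd_rt ** 2 / mean_rt, size=10_000)
    cv = rt.std() / rt.mean()
    print(f"mean={rt.mean():6.1f}  sd={rt.std():6.1f}  CV={cv:.3f}")
```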
Credibility analysis of risk classes by generalized linear model
NASA Astrophysics Data System (ADS)
Erdemir, Ovgucan Karadag; Sucu, Meral
2016-06-01
In this paper, the generalized linear model (GLM) and credibility theory, which are frequently used in non-life insurance pricing, are combined for reliability analysis. Using the full credibility standard, GLM is associated with the limited fluctuation credibility approach. Comparison criteria such as asymptotic variance and credibility probability are used to analyze the credibility of risk classes. An application is performed using one-year claim frequency data of a Turkish insurance company, and the results for credible risk classes are interpreted.
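The limited fluctuation (full credibility) standard mentioned above can be sketched as follows; the probability/tolerance pair and the observed claim count are assumed values, and this is the textbook Poisson-frequency version rather than the paper's GLM-based analysis:

```python
# Hedged sketch of the classical full-credibility standard for claim frequency.
from scipy.stats import norm

p, k = 0.90, 0.05                        # coverage probability and tolerance (assumed)
z = norm.ppf((1 + p) / 2)
n_full = (z / k) ** 2                    # expected claims needed for full credibility
n_obs = 800                              # expected claim count of a risk class (assumed)
Z = min(1.0, (n_obs / n_full) ** 0.5)    # square-root partial-credibility factor
print(f"full-credibility standard = {n_full:.1f} claims, Z = {Z:.3f}")
```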
Statistical inference for template aging
NASA Astrophysics Data System (ADS)
Schuckers, Michael E.
2006-04-01
A change in classification error rates for a biometric device is often referred to as template aging. Here we offer two methods for determining whether the effect of time is statistically significant. The first of these is the use of a generalized linear model to determine if these error rates change linearly over time. This approach generalizes previous work assessing the impact of covariates using generalized linear models. The second approach uses likelihood ratio test methodology. The focus here is on statistical methods for estimation, not on the underlying cause of the change in error rates over time. These methodologies are applied to data from the National Institute of Standards and Technology Biometric Score Set Release 1. The results of these applications are discussed.
Yock, Adam D; Rao, Arvind; Dong, Lei; Beadle, Beth M; Garden, Adam S; Kudchadker, Rajat J; Court, Laurence E
2014-05-01
The purpose of this work was to develop and evaluate the accuracy of several predictive models of variation in tumor volume throughout the course of radiation therapy. Nineteen patients with oropharyngeal cancers were imaged daily with CT-on-rails for image-guided alignment per an institutional protocol. The daily volumes of 35 tumors in these 19 patients were determined and used to generate (1) a linear model in which tumor volume changed at a constant rate, (2) a general linear model that utilized the power fit relationship between the daily and initial tumor volumes, and (3) a functional general linear model that identified and exploited the primary modes of variation between time series describing the changing tumor volumes. Primary and nodal tumor volumes were examined separately. The accuracy of these models in predicting daily tumor volumes was compared with that of static and linear reference models using leave-one-out cross-validation. In predicting the daily volume of primary tumors, the general linear model and the functional general linear model were more accurate than the static reference model by 9.9% (range: -11.6% to 23.8%) and 14.6% (range: -7.3% to 27.5%), respectively, and were more accurate than the linear reference model by 14.2% (range: -6.8% to 40.3%) and 13.1% (range: -1.5% to 52.5%), respectively. In predicting the daily volume of nodal tumors, only the 14.4% (range: -11.1% to 20.5%) improvement in accuracy of the functional general linear model compared to the static reference model was statistically significant. A general linear model and a functional general linear model trained on data from a small population of patients can predict the primary tumor volume throughout the course of radiation therapy with greater accuracy than standard reference models. These more accurate models may increase the prognostic value of information about the tumor garnered from pretreatment computed tomography images and facilitate improved treatment management.
Cosmological power spectrum in a noncommutative spacetime
NASA Astrophysics Data System (ADS)
Kothari, Rahul; Rath, Pranati K.; Jain, Pankaj
2016-09-01
We propose a generalized star product that deviates from the standard one when the fields are considered at different spacetime points by introducing a form factor in the standard star product. We also introduce a recursive definition by which we calculate the explicit form of the generalized star product at any number of spacetime points. We show that our generalized star product is associative and cyclic at linear order. As a special case, we demonstrate that our recursive approach can be used to prove the associativity of standard star products for same or different spacetime points. The introduction of a form factor has no effect on the standard Lagrangian density in a noncommutative spacetime because it reduces to the standard star product when spacetime points become the same. We show that the generalized star product leads to physically consistent results and can fit the observed data on hemispherical anisotropy in the cosmic microwave background radiation.
Hoyer, Annika; Kuss, Oliver
2018-05-01
Meta-analysis of diagnostic studies is still a rapidly developing area of biostatistical research. Especially, there is an increasing interest in methods to compare different diagnostic tests to a common gold standard. Restricting to the case of two diagnostic tests, in these meta-analyses the parameters of interest are the differences of sensitivities and specificities (with their corresponding confidence intervals) between the two diagnostic tests while accounting for the various associations across single studies and between the two tests. We propose statistical models with a quadrivariate response (where sensitivity of test 1, specificity of test 1, sensitivity of test 2, and specificity of test 2 are the four responses) as a sensible approach to this task. Using a quadrivariate generalized linear mixed model naturally generalizes the common standard bivariate model of meta-analysis for a single diagnostic test. If information on several thresholds of the tests is available, the quadrivariate model can be further generalized to yield a comparison of full receiver operating characteristic (ROC) curves. We illustrate our model by an example where two screening methods for the diagnosis of type 2 diabetes are compared.
Jain, Amit; Kuhls-Gilcrist, Andrew T; Gupta, Sandesh K; Bednarek, Daniel R; Rudin, Stephen
2010-03-01
The MTF, NNPS, and DQE are standard linear system metrics used to characterize intrinsic detector performance. To evaluate total system performance for actual clinical conditions, generalized linear system metrics (GMTF, GNNPS and GDQE) that include the effect of the focal spot distribution, scattered radiation, and geometric unsharpness are more meaningful and appropriate. In this study, a two-dimensional (2D) generalized linear system analysis was carried out for a standard flat panel detector (FPD) (194-micron pixel pitch and 600-micron thick CsI) and a newly-developed, high-resolution, micro-angiographic fluoroscope (MAF) (35-micron pixel pitch and 300-micron thick CsI). Realistic clinical parameters and x-ray spectra were used. The 2D detector MTFs were calculated using the new Noise Response method and slanted edge method and 2D focal spot distribution measurements were done using a pin-hole assembly. The scatter fraction, generated for a uniform head equivalent phantom, was measured and the scatter MTF was simulated with a theoretical model. Different magnifications and scatter fractions were used to estimate the 2D GMTF, GNNPS and GDQE for both detectors. Results show spatial non-isotropy for the 2D generalized metrics which provide a quantitative description of the performance of the complete imaging system for both detectors. This generalized analysis demonstrated that the MAF and FPD have similar capabilities at lower spatial frequencies, but that the MAF has superior performance over the FPD at higher frequencies even when considering focal spot blurring and scatter. This 2D generalized performance analysis is a valuable tool to evaluate total system capabilities and to enable optimized design for specific imaging tasks.
Fast wavelet based algorithms for linear evolution equations
NASA Technical Reports Server (NTRS)
Engquist, Bjorn; Osher, Stanley; Zhong, Sifen
1992-01-01
A class of fast wavelet-based algorithms was devised for linear evolution equations whose coefficients are time independent. The method draws on the work of Beylkin, Coifman, and Rokhlin, which they applied to general Calderon-Zygmund type integral operators. A modification of their idea is applied to linear hyperbolic and parabolic equations with spatially varying coefficients. A significant speedup over standard methods is obtained when applied to hyperbolic equations in one space dimension and parabolic equations in multiple dimensions.
Critical N = (1, 1) general massive supergravity
NASA Astrophysics Data System (ADS)
Deger, Nihat Sadik; Moutsopoulos, George; Rosseel, Jan
2018-04-01
In this paper we study the supermultiplet structure of N = (1, 1) General Massive Supergravity at non-critical and critical points of its parameter space. To do this, we first linearize the theory around its maximally supersymmetric AdS3 vacuum and obtain the full linearized Lagrangian including fermionic terms. At generic values, the linearized modes can be organized as two massless and two massive multiplets, where supersymmetry relates them in the standard way. At critical points logarithmic modes appear, and we find that at three of these points some of the supersymmetry transformations are non-invertible in logarithmic multiplets. However, at the fourth critical point, there is a massive logarithmic multiplet with invertible supersymmetry transformations.
NASA Technical Reports Server (NTRS)
Zimmerle, D.; Bernhard, R. J.
1985-01-01
An alternative method for performing singular boundary element integrals for applications in linear acoustics is discussed. The method separates the integral of the characteristic solution into a singular and a nonsingular part. The singular portion is integrated with a combination of analytic and numerical techniques while the nonsingular portion is integrated with standard Gaussian quadrature. The method may be generalized to many types of subparametric elements. The integrals over elements containing the root node are considered, and the characteristic solutions for linear acoustic problems are examined. The method may be generalized to most characteristic solutions.
Single-phase power distribution system power flow and fault analysis
NASA Technical Reports Server (NTRS)
Halpin, S. M.; Grigsby, L. L.
1992-01-01
Alternative methods for power flow and fault analysis of single-phase distribution systems are presented. The algorithms for both power flow and fault analysis utilize a generalized approach to network modeling. The generalized admittance matrix, formed using elements of linear graph theory, is an accurate network model for all possible single-phase network configurations. Unlike the standard nodal admittance matrix formulation algorithms, the generalized approach uses generalized component models for the transmission line and transformer. The standard assumption of a common node voltage reference point is not required to construct the generalized admittance matrix. Therefore, truly accurate simulation results can be obtained for networks that cannot be modeled using traditional techniques.
Langenbucher, Frieder
2005-01-01
A linear system comprising n compartments is completely defined by the rate constants between any of the compartments and the initial condition specifying in which compartment(s) the drug is present at the beginning. The generalized solution is the time profiles of drug amount in each compartment, described by polyexponential equations. Based on standard matrix operations, an Excel worksheet computes the rate constants and the coefficients, and finally the full time profiles for a specified range of time values.
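A rough NumPy analogue of the matrix-based computation described above (eigendecomposition of an assumed two-compartment rate matrix yielding polyexponential time profiles); the rate constants and dose are hypothetical, and this does not reproduce the Excel worksheet itself:

```python
# Sketch: amounts a(t) = V exp(diag(lam) t) V^{-1} a0 for a linear compartment model.
import numpy as np

k12, k21, k10 = 0.5, 0.2, 0.1            # first-order rate constants, 1/h (assumed)
K = np.array([[-(k12 + k10), k21],
              [k12,          -k21]])
a0 = np.array([100.0, 0.0])              # dose initially in compartment 1 (assumed)

lam, V = np.linalg.eig(K)                # eigenvalues give the exponential rates
C = V @ np.diag(np.linalg.solve(V, a0))  # columns: coefficients of each exponential

t = np.linspace(0.0, 24.0, 7)
profiles = C @ np.exp(np.outer(lam, t))  # polyexponential time profiles per compartment
print(np.round(profiles.real, 2))
```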
Semilinear programming: applications and implementation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mohan, S.
Semilinear programming is a method of solving optimization problems with linear constraints where the non-negativity restrictions on the variables are dropped and the objective function coefficients can take on different values depending on whether the variable is positive or negative. The simplex method for linear programming is modified in this thesis to solve general semilinear and piecewise linear programs efficiently without having to transform them into equivalent standard linear programs. Several models in widely different areas of optimization such as production smoothing, facility location, goal programming and L1 estimation are presented first to demonstrate the compact formulation that arises when such problems are formulated as semilinear programs. A code SLP is constructed using the semilinear programming techniques. Problems in aggregate planning and L1 estimation are solved using SLP and as equivalent linear programs using a linear programming simplex code. Comparisons of CPU times and numbers of iterations indicate SLP to be far superior. The semilinear programming techniques are extended to piecewise linear programming in the implementation of the code PLP. Piecewise linear models in aggregate planning are solved using PLP and as equivalent standard linear programs using a simple upper-bounded linear programming code SUBLP.
ERIC Educational Resources Information Center
Fulmer, Gavin W.; Polikoff, Morgan S.
2014-01-01
An essential component in school accountability efforts is for assessments to be well-aligned with the standards or curriculum they are intended to measure. However, relatively little prior research has explored methods to determine statistical significance of alignment or misalignment. This study explores analyses of alignment as a special case…
The Use of Structure Coefficients to Address Multicollinearity in Sport and Exercise Science
ERIC Educational Resources Information Center
Yeatts, Paul E.; Barton, Mitch; Henson, Robin K.; Martin, Scott B.
2017-01-01
A common practice in general linear model (GLM) analyses is to interpret regression coefficients (e.g., standardized β weights) as indicators of variable importance. However, focusing solely on standardized beta weights may provide limited or erroneous information. For example, β weights become increasingly unreliable when predictor variables are…
Profile-Likelihood Approach for Estimating Generalized Linear Mixed Models with Factor Structures
ERIC Educational Resources Information Center
Jeon, Minjeong; Rabe-Hesketh, Sophia
2012-01-01
In this article, the authors suggest a profile-likelihood approach for estimating complex models by maximum likelihood (ML) using standard software and minimal programming. The method works whenever setting some of the parameters of the model to known constants turns the model into a standard model. An important class of models that can be…
Nikoloulopoulos, Aristidis K
2017-10-01
A bivariate copula mixed model has been recently proposed to synthesize diagnostic test accuracy studies, and it has been shown to be superior to the standard generalized linear mixed model in this context. Here, we use trivariate vine copulas to extend the bivariate meta-analysis of diagnostic test accuracy studies by accounting for disease prevalence. Our vine copula mixed model includes the trivariate generalized linear mixed model as a special case and can also operate on the original scale of sensitivity, specificity, and disease prevalence. Our general methodology is illustrated by re-analyzing the data of two published meta-analyses. Our study suggests that there can be an improvement over the trivariate generalized linear mixed model in fit to the data and makes the argument for moving to vine copula random effects models, especially because of their richness, including reflection-asymmetric tail dependence, and their computational feasibility despite their three dimensionality.
NASA Technical Reports Server (NTRS)
Cheyney, H., III; Arking, A.
1976-01-01
The equations of radiative transfer in anisotropically scattering media are reformulated as linear operator equations in a single independent variable. The resulting equations are suitable for solution by a variety of standard mathematical techniques. The operators appearing in the resulting equations are in general nonsymmetric; however, it is shown that every bounded linear operator equation can be embedded in a symmetric linear operator equation and a variational solution can be obtained in a straightforward way. For purposes of demonstration, a Rayleigh-Ritz variational method is applied to three problems involving simple phase functions. It is to be noted that the variational technique demonstrated is of general applicability and permits simple solutions for a wide range of otherwise difficult mathematical problems in physics.
Generalized t-statistic for two-group classification.
Komori, Osamu; Eguchi, Shinto; Copas, John B
2015-06-01
In the classic discriminant model of two multivariate normal distributions with equal variance matrices, the linear discriminant function is optimal both in terms of the log likelihood ratio and in terms of maximizing the standardized difference (the t-statistic) between the means of the two distributions. In a typical case-control study, normality may be sensible for the control sample but heterogeneity and uncertainty in diagnosis may suggest that a more flexible model is needed for the cases. We generalize the t-statistic approach by finding the linear function which maximizes a standardized difference but with data from one of the groups (the cases) filtered by a possibly nonlinear function U. We study conditions for consistency of the method and find the function U which is optimal in the sense of asymptotic efficiency. Optimality may also extend to other measures of discriminatory efficiency such as the area under the receiver operating characteristic curve. The optimal function U depends on a scalar probability density function which can be estimated non-parametrically using a standard numerical algorithm. A lasso-like version for variable selection is implemented by adding L1-regularization to the generalized t-statistic. Two microarray data sets in the study of asthma and various cancers are used as motivating examples. © 2014, The International Biometric Society.
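For orientation, the classical starting point of this abstract, the linear function maximizing the standardized (t-statistic) difference under a pooled normal model, can be sketched as Fisher's discriminant direction; the simulated data and names below are assumptions, and the authors' generalized, U-filtered estimator is not reproduced here:

```python
# Sketch of the classical case: w maximizing the standardized group difference.
import numpy as np

rng = np.random.default_rng(2)
controls = rng.multivariate_normal([0, 0, 0], np.eye(3), size=200)
cases = rng.multivariate_normal([1.0, 0.5, 0.0], np.eye(3), size=200)

S_pooled = (np.cov(controls, rowvar=False) + np.cov(cases, rowvar=False)) / 2
w = np.linalg.solve(S_pooled, cases.mean(0) - controls.mean(0))   # Fisher direction
w /= np.linalg.norm(w)

scores_0, scores_1 = controls @ w, cases @ w
t_like = (scores_1.mean() - scores_0.mean()) / np.sqrt(
    (scores_1.var(ddof=1) + scores_0.var(ddof=1)) / 2)
print("direction:", np.round(w, 3), " standardized difference:", round(t_like, 2))
```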
NASA Technical Reports Server (NTRS)
Allen, G.
1972-01-01
The use of the theta-operator method and generalized hypergeometric functions in obtaining solutions to nth-order linear ordinary differential equations is explained. For completeness, the analysis of the differential equation to determine whether the point of expansion is an ordinary point or a regular singular point is included. The superiority of the two methods shown over the standard method is demonstrated by using all three of the methods to work out several examples. Also included is a compendium of formulae and properties of the theta operator and generalized hypergeometric functions which is complete enough to make the report self-contained.
Validating the applicability of the GUM procedure
NASA Astrophysics Data System (ADS)
Cox, Maurice G.; Harris, Peter M.
2014-08-01
This paper is directed at practitioners seeking a degree of assurance in the quality of the results of an uncertainty evaluation when using the procedure in the Guide to the Expression of Uncertainty in Measurement (GUM) (JCGM 100:2008). Such assurance is required in adhering to general standards such as International Standard ISO/IEC 17025 or other sector-specific standards. We investigate the extent to which such assurance can be given. For many practical cases, a measurement result incorporating an evaluated uncertainty that is correct to one significant decimal digit would be acceptable. Any quantification of the numerical precision of an uncertainty statement is naturally relative to the adequacy of the measurement model and the knowledge used of the quantities in that model. For general univariate and multivariate measurement models, we emphasize the use of a Monte Carlo method, as recommended in GUM Supplements 1 and 2. One use of this method is as a benchmark in terms of which measurement results provided by the GUM can be assessed in any particular instance. We mainly consider measurement models that are linear in the input quantities, or have been linearized and the linearization process is deemed to be adequate. When the probability distributions for those quantities are independent, we indicate the use of other approaches such as convolution methods based on the fast Fourier transform and, particularly, Chebyshev polynomials as benchmarks.
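A small sketch of the comparison the paper recommends (a Monte Carlo evaluation, in the spirit of GUM Supplement 1, against the linearized law of propagation of uncertainty) for an assumed product model with made-up estimates and standard uncertainties:

```python
# Illustrative only: linearized vs Monte Carlo uncertainty for Y = X1 * X2.
import numpy as np

rng = np.random.default_rng(3)
x1, u1 = 10.0, 0.5      # best estimate and standard uncertainty of X1 (assumed)
x2, u2 = 2.0, 0.3       # best estimate and standard uncertainty of X2 (assumed)

# GUM linearization for independent inputs: u(y)^2 = (x2*u1)^2 + (x1*u2)^2
u_lin = np.hypot(x2 * u1, x1 * u2)

# Monte Carlo propagation of the same (assumed Gaussian) input distributions
M = 1_000_000
y = rng.normal(x1, u1, M) * rng.normal(x2, u2, M)
print(f"linearized u(y) = {u_lin:.3f}, Monte Carlo u(y) = {y.std(ddof=1):.3f}")
```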
Trinker, Horst
2011-10-28
We study the distribution of triples of codewords of codes and ordered codes. Schrijver [A. Schrijver, New code upper bounds from the Terwilliger algebra and semidefinite programming, IEEE Trans. Inform. Theory 51 (8) (2005) 2859-2866] used the triple distribution of a code to establish a bound on the number of codewords based on semidefinite programming. In the first part of this work, we generalize this approach for ordered codes. In the second part, we consider linear codes and linear ordered codes and present a MacWilliams-type identity for the triple distribution of their dual code. Based on the non-negativity of this linear transform, we establish a linear programming bound and conclude with a table of parameters for which this bound yields better results than the standard linear programming bound.
Speed-of-light limitations in passive linear media
NASA Astrophysics Data System (ADS)
Welters, Aaron; Avniel, Yehuda; Johnson, Steven G.
2014-08-01
We prove that well-known speed-of-light restrictions on electromagnetic energy velocity can be extended to a new level of generality, encompassing even nonlocal chiral media in periodic geometries, while at the same time weakening the underlying assumptions to only passivity and linearity of the medium (either with a transparency window or with dissipation). As was also shown by other authors under more limiting assumptions, passivity alone is sufficient to guarantee causality and positivity of the energy density (with no thermodynamic assumptions). Our proof is general enough to include a very broad range of material properties, including anisotropy, bianisotropy (chirality), nonlocality, dispersion, periodicity, and even delta functions or similar generalized functions. We also show that the "dynamical energy density" used by some previous authors in dissipative media reduces to the standard Brillouin formula for dispersive energy density in a transparency window. The results in this paper are proved by exploiting deep results from linear-response theory, harmonic analysis, and functional analysis that had previously not been brought together in the context of electrodynamics.
NASA Technical Reports Server (NTRS)
Ferencz, Donald C.; Viterna, Larry A.
1991-01-01
ALPS is a computer program which can be used to solve general linear program (optimization) problems. ALPS was designed for those who have minimal linear programming (LP) knowledge and features a menu-driven scheme to guide the user through the process of creating and solving LP formulations. Once created, the problems can be edited and stored in standard DOS ASCII files to provide portability to various word processors or even other linear programming packages. Unlike many math-oriented LP solvers, ALPS contains an LP parser that reads through the LP formulation and reports several types of errors to the user. ALPS provides a large amount of solution data which is often useful in problem solving. In addition to pure linear programs, ALPS can solve for integer, mixed integer, and binary type problems. Pure linear programs are solved with the revised simplex method. Integer or mixed integer programs are solved initially with the revised simplex, and then completed using the branch-and-bound technique. Binary programs are solved with the method of implicit enumeration. This manual describes how to use ALPS to create, edit, and solve linear programming problems. Instructions for installing ALPS on a PC compatible computer are included in the appendices along with a general introduction to linear programming. A programmers guide is also included for assistance in modifying and maintaining the program.
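As a hedged, present-day stand-in for the kind of problem ALPS solves (not ALPS itself), a tiny linear program can be posed and solved with SciPy; the objective and constraints below are invented:

```python
# Maximize 3x + 2y subject to x + y <= 4, x + 3y <= 6, x, y >= 0.
from scipy.optimize import linprog

c = [-3.0, -2.0]                      # linprog minimizes, so negate to maximize
A_ub = [[1.0, 1.0],                   # x + y <= 4
        [1.0, 3.0]]                   # x + 3y <= 6
b_ub = [4.0, 6.0]
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
print("optimal x, y:", res.x, " objective:", -res.fun)
```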
Advances in diagnostic ultrasonography.
Reef, V B
1991-08-01
A wide variety of ultrasonographic equipment currently is available for use in equine practice, but no one machine is optimal for every type of imaging. Image quality is the most important factor in equipment selection once the needs of the practitioner are ascertained. The transducer frequencies available, transducer footprints, depth of field displayed, frame rate, gray scale, simultaneous electrocardiography, Doppler, and functions to modify the image are all important considerations. The ability to make measurements off of videocassette recorder playback and future upgradability should be evaluated. Linear array and sector technology are the backbone of equine ultrasonography today. Linear array technology is most useful for a high-volume broodmare practice, whereas sector technology is ideal for a more general equine practice. The curved or convex linear scanner has more applications than the standard linear array and is equipped with the linear array rectal probe, which provides the equine practitioner with a more versatile unit for equine ultrasonographic evaluations. The annular array and phased array systems have improved image quality, but each has its own limitations. The new sector scanners still provide the most versatile affordable equipment for equine general practice.
NASA Astrophysics Data System (ADS)
Made Tirta, I.; Anggraeni, Dian
2018-04-01
Statistical models have developed rapidly in various directions to accommodate various types of data. Data collected as longitudinal, repeated-measures, or clustered observations (whether continuous, binary, count, or ordinal) are likely to be correlated. Therefore statistical models for independent responses, such as the Generalized Linear Model (GLM) and the Generalized Additive Model (GAM), are not appropriate. Several models are available for correlated responses, including GEEs (Generalized Estimating Equations) for marginal models and various mixed-effect models such as GLMM (Generalized Linear Mixed Models) and HGLM (Hierarchical Generalized Linear Models) for subject-specific models. These models are available in the free open-source software R, but they can only be accessed through a command-line interface (using scripts). On the other hand, most practical researchers rely heavily on menu-based graphical user interfaces (GUI). We develop, using the Shiny framework, a standard pull-down-menu Web-GUI that unifies most models for correlated responses. The Web-GUI accommodates almost all needed features. It enables users to run and compare various models for repeated-measures data (GEE, GLMM, HGLM, GEE for nominal responses) much more easily through online menus. This paper discusses the features of the Web-GUI and illustrates their use. In general we find that GEE, GLMM, and HGLM gave very close results.
Nonlinear and linear wave equations for propagation in media with frequency power law losses
NASA Astrophysics Data System (ADS)
Szabo, Thomas L.
2003-10-01
The Burgers, KZK, and Westervelt wave equations used for simulating wave propagation in nonlinear media are based on absorption that has a quadratic dependence on frequency. Unfortunately, most lossy media, such as tissue, follow a more general frequency power law. The authors' first research involved measurements of loss and dispersion associated with a modification to Blackstock's solution to the linear thermoviscous wave equation [J. Acoust. Soc. Am. 41, 1312 (1967)]. A second paper by Blackstock [J. Acoust. Soc. Am. 77, 2050 (1985)] showed the loss term in the Burgers equation for plane waves could be modified for other known instances of loss. The authors' work eventually led to comprehensive time-domain convolutional operators that accounted for both dispersion and general frequency power law absorption [Szabo, J. Acoust. Soc. Am. 96, 491 (1994)]. Versions of appropriate loss terms were developed to extend the standard three nonlinear wave equations to these more general losses. Extensive experimental data has verified the predicted phase velocity dispersion for different power exponents for the linear case. Other groups are now working on methods suitable for solving wave equations numerically for these types of loss directly in the time domain for both linear and nonlinear media.
Log-normal frailty models fitted as Poisson generalized linear mixed models.
Hirsch, Katharina; Wienke, Andreas; Kuss, Oliver
2016-12-01
The equivalence of a survival model with a piecewise constant baseline hazard function and a Poisson regression model has been known since decades. As shown in recent studies, this equivalence carries over to clustered survival data: A frailty model with a log-normal frailty term can be interpreted and estimated as a generalized linear mixed model with a binary response, a Poisson likelihood, and a specific offset. Proceeding this way, statistical theory and software for generalized linear mixed models are readily available for fitting frailty models. This gain in flexibility comes at the small price of (1) having to fix the number of pieces for the baseline hazard in advance and (2) having to "explode" the data set by the number of pieces. In this paper we extend the simulations of former studies by using a more realistic baseline hazard (Gompertz) and by comparing the model under consideration with competing models. Furthermore, the SAS macro %PCFrailty is introduced to apply the Poisson generalized linear mixed approach to frailty models. The simulations show good results for the shared frailty model. Our new %PCFrailty macro provides proper estimates, especially in case of 4 events per piece. The suggested Poisson generalized linear mixed approach for log-normal frailty models based on the %PCFrailty macro provides several advantages in the analysis of clustered survival data with respect to more flexible modelling of fixed and random effects, exact (in the sense of non-approximate) maximum likelihood estimation, and standard errors and different types of confidence intervals for all variance parameters. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
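A hedged sketch of the "exploded data" Poisson trick described above, using simulated survival data and a fixed-effects piecewise-exponential fit; the frailty (random-effect) part and the %PCFrailty macro are not reproduced, and all parameter values are assumptions:

```python
# Explode survival times into pieces; a Poisson GLM with log-exposure offset
# then recovers a piecewise-constant-hazard fit (true log hazard ratio = 0.7).
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(4)
n = 500
x = rng.binomial(1, 0.5, n)                                  # binary covariate
time = rng.exponential(scale=1.0 / np.exp(-1.0 + 0.7 * x))   # assumed hazards
event = (time < 3.0).astype(int)                             # censor at t = 3
time = np.minimum(time, 3.0)

cuts = np.array([0.0, 1.0, 2.0, 3.0])                        # baseline-hazard pieces
rows = []
for ti, ei, xi in zip(time, event, x):
    for j in range(len(cuts) - 1):
        lo, hi = cuts[j], cuts[j + 1]
        if ti <= lo:
            break
        exposure = min(ti, hi) - lo
        d = int(ei and ti <= hi)                             # event in this piece?
        rows.append((j, xi, exposure, d))
df = pd.DataFrame(rows, columns=["piece", "x", "exposure", "d"])

X = pd.get_dummies(df["piece"], prefix="piece", dtype=float)  # piece-wise baseline
X["x"] = df["x"].astype(float)
fit = sm.GLM(df["d"], X, family=sm.families.Poisson(),
             offset=np.log(df["exposure"])).fit()
print("estimated log hazard ratio:", round(fit.params["x"], 3))
```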
Conjugate gradient type methods for linear systems with complex symmetric coefficient matrices
NASA Technical Reports Server (NTRS)
Freund, Roland
1989-01-01
We consider conjugate gradient type methods for the solution of large sparse linear systems Ax = b with complex symmetric coefficient matrices A = A^T. Such linear systems arise in important applications, such as the numerical solution of the complex Helmholtz equation. Furthermore, most complex non-Hermitian linear systems which occur in practice are actually complex symmetric. We investigate conjugate gradient type iterations which are based on a variant of the nonsymmetric Lanczos algorithm for complex symmetric matrices. We propose a new approach with iterates defined by a quasi-minimal residual property. The resulting algorithm presents several advantages over the standard biconjugate gradient method. We also include some remarks on the obvious approach to general complex linear systems by solving equivalent real linear systems for the real and imaginary parts of x. Finally, numerical experiments for linear systems arising from the complex Helmholtz equation are reported.
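A minimal illustration of a conjugate-gradient-type iteration specialized to complex symmetric matrices (the COCG variant, which replaces the Hermitian inner product with the unconjugated bilinear form); this is not the quasi-minimal-residual method proposed in the abstract, and the dense test matrix is an assumed example:

```python
# COCG sketch for complex symmetric A (A = A^T, not Hermitian).
import numpy as np

def cocg(A, b, tol=1e-10, maxiter=200):
    x = np.zeros_like(b)
    r = b - A @ x
    p = r.copy()
    rho = r @ r                      # unconjugated bilinear form, valid since A = A^T
    for _ in range(maxiter):
        Ap = A @ p
        alpha = rho / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol * np.linalg.norm(b):
            break
        rho_new = r @ r
        p = r + (rho_new / rho) * p
        rho = rho_new
    return x

rng = np.random.default_rng(5)
R = rng.normal(size=(50, 50))
C = rng.normal(size=(50, 50))
A = R @ R.T + 50 * np.eye(50) + 0.1j * (C + C.T)   # complex symmetric test matrix
b = rng.normal(size=50) + 1j * rng.normal(size=50)
print("relative residual:", np.linalg.norm(A @ cocg(A, b) - b) / np.linalg.norm(b))
```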
Alonso, Rodrigo; Jenkins, Elizabeth E.; Manohar, Aneesh V.
2016-08-17
The S-matrix of a quantum field theory is unchanged by field redefinitions, and so it only depends on geometric quantities such as the curvature of field space. Whether the Higgs multiplet transforms linearly or non-linearly under electroweak symmetry is a subtle question since one can make a coordinate change to convert a field that transforms linearly into one that transforms non-linearly. Renormalizability of the Standard Model (SM) does not depend on the choice of scalar fields or whether the scalar fields transform linearly or non-linearly under the gauge group, but only on the geometric requirement that the scalar field manifold M is flat. Standard Model Effective Field Theory (SMEFT) and Higgs Effective Field Theory (HEFT) have curved M, since they parametrize deviations from the flat SM case. We show that the HEFT Lagrangian can be written in SMEFT form if and only if M has an SU(2)_L × U(1)_Y invariant fixed point. Experimental observables in HEFT depend on local geometric invariants of M such as sectional curvatures, which are of order 1/Λ², where Λ is the EFT scale. We give explicit expressions for these quantities in terms of the structure constants for a general G → H symmetry breaking pattern. The one-loop radiative correction in HEFT is determined using a covariant expansion which preserves manifest invariance of M under coordinate redefinitions. The formula for the radiative correction is simple when written in terms of the curvature of M and the gauge curvature field strengths. We also extend the CCWZ formalism to non-compact groups, and generalize the HEFT curvature computation to the case of multiple singlet scalar fields.
NASA Astrophysics Data System (ADS)
Al Roumi, Fosca; Buchert, Thomas; Wiegand, Alexander
2017-12-01
The relativistic generalization of the Newtonian Lagrangian perturbation theory is investigated. In previous works, the perturbation and solution schemes that are generated by the spatially projected gravitoelectric part of the Weyl tensor were given to any order of the perturbations, together with extensions and applications for accessing the nonperturbative regime. We here discuss more in detail the general first-order scheme within the Cartan formalism including and concentrating on the gravitational wave propagation in matter. We provide master equations for all parts of Lagrangian-linearized perturbations propagating in the perturbed spacetime, and we outline the solution procedure that allows one to find general solutions. Particular emphasis is given to global properties of the Lagrangian perturbation fields by employing results of Hodge-de Rham theory. We here discuss how the Hodge decomposition relates to the standard scalar-vector-tensor decomposition. Finally, we demonstrate that we obtain the known linear perturbation solutions of the standard relativistic perturbation scheme by performing two steps: first, by restricting our solutions to perturbations that propagate on a flat unperturbed background spacetime and, second, by transforming to Eulerian background coordinates with truncation of nonlinear terms.
Hang, Chao; Huang, Guoxiang; Deng, L
2006-03-01
We investigate the influence of high-order dispersion and nonlinearity on the propagation of ultraslow optical solitons in a lifetime broadened four-state atomic system under a Raman excitation. Using a standard method of multiple-scales we derive a generalized nonlinear Schrödinger equation and show that for realistic physical parameters and at the pulse duration of 10⁻⁶ s, the effects of third-order linear dispersion, nonlinear dispersion, and delay in nonlinear refractive index can be significant and may not be considered as perturbations. We provide exact soliton solutions for the generalized nonlinear Schrödinger equation and demonstrate that optical solitons obtained may still have ultraslow propagating velocity. Numerical simulations on the stability and interaction of these ultraslow optical solitons in the presence of linear and differential absorptions are also presented.
General Linearized Theory of Quantum Fluctuations around Arbitrary Limit Cycles
NASA Astrophysics Data System (ADS)
Navarrete-Benlloch, Carlos; Weiss, Talitha; Walter, Stefan; de Valcárcel, Germán J.
2017-09-01
The theory of Gaussian quantum fluctuations around classical steady states in nonlinear quantum-optical systems (also known as standard linearization) is a cornerstone for the analysis of such systems. Its simplicity, together with its accuracy far from critical points or situations where the nonlinearity reaches the strong coupling regime, has turned it into a widespread technique, being the first method of choice in most works on the subject. However, such a technique finds strong practical and conceptual complications when one tries to apply it to situations in which the classical long-time solution is time dependent, a most prominent example being spontaneous limit-cycle formation. Here, we introduce a linearization scheme adapted to such situations, using the driven Van der Pol oscillator as a test bed for the method, which allows us to compare it with full numerical simulations. On a conceptual level, the scheme relies on the connection between the emergence of limit cycles and the spontaneous breaking of the symmetry under temporal translations. On the practical side, the method keeps the simplicity and linear scaling with the size of the problem (number of modes) characteristic of standard linearization, making it applicable to large (many-body) systems.
Can you trust the parametric standard errors in nonlinear least squares? Yes, with provisos.
Tellinghuisen, Joel
2018-04-01
Questions about the reliability of parametric standard errors (SEs) from nonlinear least squares (LS) algorithms have led to a general mistrust of these precision estimators that is often unwarranted. The importance of non-Gaussian parameter distributions is illustrated by converting linear models to nonlinear by substituting e^A, ln A, and 1/A for a linear parameter a. Monte Carlo (MC) simulations characterize parameter distributions in more complex cases, including when data have varying uncertainty and should be weighted, but weights are neglected. This situation leads to loss of precision and erroneous parametric SEs, as is illustrated for the Lineweaver-Burk analysis of enzyme kinetics data and the analysis of isothermal titration calorimetry data. Non-Gaussian parameter distributions are generally asymmetric and biased. However, when the parametric SE is <10% of the magnitude of the parameter, both the bias and the asymmetry can usually be ignored. Sometimes nonlinear estimators can be redefined to give more normal distributions and better convergence properties. Variable data uncertainty, or heteroscedasticity, can sometimes be handled by data transforms but more generally requires weighted LS, which in turn requires knowledge of the data variance. Parametric SEs are rigorously correct in linear LS under the usual assumptions, and are a trustworthy approximation in nonlinear LS provided they are sufficiently small, a condition favored by the abundant, precise data routinely collected in many modern instrumental methods. Copyright © 2018 Elsevier B.V. All rights reserved.
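The Monte Carlo check described above can be sketched for a single assumed model, y = exp(A·x) with Gaussian noise, comparing the spread of repeated estimates of A with the parametric SE from one fit; the model, noise level, and sample size are illustrative choices, not the paper's examples:

```python
# Compare Monte Carlo spread of a nonlinear LS estimate with its parametric SE.
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(6)
x = np.linspace(0.0, 1.0, 20)
A_true, sigma = 1.5, 0.05                      # assumed truth and noise level
model = lambda xv, A: np.exp(A * xv)

estimates = []
for _ in range(2000):
    y = model(x, A_true) + rng.normal(scale=sigma, size=x.size)
    popt, pcov = curve_fit(model, x, y, p0=[1.0])
    estimates.append(popt[0])
estimates = np.array(estimates)

# parametric SE from the last fit vs the Monte Carlo standard deviation
print(f"MC spread of A: {estimates.std(ddof=1):.4f}, "
      f"parametric SE: {np.sqrt(pcov[0, 0]):.4f}")
```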
Hannich, M; Wallaschofski, H; Nauck, M; Reincke, M; Adolf, C; Völzke, H; Rettig, R; Hannemann, A
2018-01-01
Aldosterone and high-density lipoprotein cholesterol (HDL-C) are involved in many pathophysiological processes that contribute to the development of cardiovascular diseases. Previously, associations between the concentrations of aldosterone and certain components of the lipid metabolism in the peripheral circulation were suggested, but data from the general population are sparse. We therefore aimed to assess the associations between aldosterone and HDL-C, low-density lipoprotein cholesterol (LDL-C), total cholesterol, triglycerides, or non-HDL-C in the general adult population. Data from 793 men and 938 women aged 25-85 years who participated in the first follow-up of the Study of Health in Pomerania were obtained. The associations of aldosterone with serum lipid concentrations were assessed in multivariable linear regression models adjusted for sex, age, body mass index (BMI), estimated glomerular filtration rate (eGFR), and HbA1c. The linear regression models showed statistically significant positive associations of aldosterone with LDL-C (β-coefficient = 0.022, standard error = 0.010, p = 0.03) and non-HDL-C (β-coefficient = 0.023, standard error = 0.009, p = 0.01) as well as an inverse association of aldosterone with HDL-C (β-coefficient = -0.022, standard error = 0.011, p = 0.04). The present data show that plasma aldosterone is positively associated with LDL-C and non-HDL-C and inversely associated with HDL-C in the general population. Our data thus suggest that aldosterone concentrations within the physiological range may be related to alterations of lipid metabolism.
Development of a technique for estimating noise covariances using multiple observers
NASA Technical Reports Server (NTRS)
Bundick, W. Thomas
1988-01-01
Friedland's technique for estimating the unknown noise variances of a linear system using multiple observers has been extended by developing a general solution for the estimates of the variances, developing the statistics (mean and standard deviation) of these estimates, and demonstrating the solution on two examples.
NASA Technical Reports Server (NTRS)
Jermey, C.; Schiff, L. B.
1985-01-01
A series of wind-tunnel tests have been conducted on the Standard Dynamics Model (a simplified generic fighter-aircraft shape) undergoing coning motion at Mach 0.6. Six-component force and moment data are presented for a range of angles of attack, sideslip and coning rates. At the relatively low nondimensional coning rates employed, the lateral aerodynamic characteristics generally show a linear variation with coning rate.
Wigner functions on non-standard symplectic vector spaces
NASA Astrophysics Data System (ADS)
Dias, Nuno Costa; Prata, João Nuno
2018-01-01
We consider the Weyl quantization on a flat non-standard symplectic vector space. We focus mainly on the properties of the Wigner functions defined therein. In particular we show that the sets of Wigner functions on distinct symplectic spaces are different but have non-empty intersections. This extends previous results to arbitrary dimension and arbitrary (constant) symplectic structure. As a by-product we introduce and prove several concepts and results on non-standard symplectic spaces which generalize those on the standard symplectic space, namely, the symplectic spectrum, Williamson's theorem, and Narcowich-Wigner spectra. We also show how Wigner functions on non-standard symplectic spaces behave under the action of an arbitrary linear coordinate transformation.
Universality of Generalized Bunching and Efficient Assessment of Boson Sampling.
Shchesnovich, V S
2016-03-25
It is found that identical bosons (fermions) show a generalized bunching (antibunching) property in linear networks: the absolute maximum (minimum) of the probability that all N input particles are detected in a subset of K output modes of any nontrivial linear M-mode network is attained only by completely indistinguishable bosons (fermions). For fermions K is arbitrary; for bosons it is either (i) arbitrary for only classically correlated bosons or (ii) satisfies K≥N (or K=1) for arbitrary input states of N particles. The generalized bunching allows us to certify, in a number of runs polynomial in N, that a physical device realizing boson sampling with an arbitrary network operates in the regime of full quantum coherence compatible only with completely indistinguishable bosons. The protocol needs only polynomial classical computations for the standard boson sampling, whereas an analytic formula is available for the scattershot version.
Nonlinear resonance scattering of femtosecond X-ray pulses on atoms in plasmas
NASA Astrophysics Data System (ADS)
Rosmej, F. B.; Astapenko, V. A.; Lisitsa, V. S.; Moroz, N. N.
2017-11-01
It is shown that for sufficiently short pulses the resonance scattering probability becomes a nonlinear function of the pulse duration. For fs X-ray pulses scattered on atoms in plasmas, maxima and minima develop in the nonlinear regime, whereas in the limit of long pulses the probability becomes linear and turns over into the standard description of electromagnetic pulse scattering. Numerical calculations are carried out in terms of a generalized scattering probability for the total time of pulse duration, including fine structure splitting and ion Doppler broadening in hot plasmas. For projected X-ray monocycles, the generalized nonlinear approach differs by 1-2 orders of magnitude from the standard theory.
Modeling for CO poisoning of a fuel cell anode
NASA Technical Reports Server (NTRS)
Dhar, H. P.; Kush, A. K.; Patel, D. N.; Christner, L. G.
1986-01-01
Poisoning losses in a half-cell in the 110-190 C temperature range have been measured in 100 wt pct H3PO4 for various mixtures of H2, CO, and CO2 gases in order to investigate the polarization loss due to poisoning by CO of a porous fuel cell Pt anode. At a fixed current density, the poisoning loss was found to vary linearly with ln of the CO/H2 concentration ratio, although deviations from linearity were noted at lower temperatures and higher current densities for high CO/H2 concentration ratios. The surface coverages of CO were also found to vary linearly with ln of the CO/H2 concentration ratio. A general adsorption relationship is derived. Standard free energies for CO adsorption were found to vary from -14.5 to -12.1 kcal/mol in the 130-190 C temperature range. The standard entropy for CO adsorption was found to be -39 cal/mol per deg K.
Dynamics of DNA/intercalator complexes
NASA Astrophysics Data System (ADS)
Schurr, J. M.; Wu, Pengguang; Fujimoto, Bryant S.
1990-05-01
Complexes of linear and supercoiled DNAs with different intercalating dyes are studied by time-resolved fluorescence polarization anisotropy using intercalated ethidium as the probe. Existing theory is generalized to take account of excitation transfer between intercalated ethidiums, and Förster theory is shown to be valid in this context. The effects of intercalated ethidium, 9-aminoacridine, and proflavine on the torsional rigidity of linear and supercoiled DNAs are studied up to rather high binding ratios. Evidence is presented that metastable secondary structure persists in dye-relaxed supercoiled DNAs, which contradicts the standard model of supercoiled DNAs.
Quantitative photoacoustic imaging in the acoustic regime using SPIM
NASA Astrophysics Data System (ADS)
Beigl, Alexander; Elbau, Peter; Sadiq, Kamran; Scherzer, Otmar
2018-05-01
While in standard photoacoustic imaging the propagation of sound waves is modeled by the standard wave equation, our approach is based on a generalized wave equation with variable sound speed and material density. In this paper we present an approach for photoacoustic imaging which, in addition to the recovery of the absorption density parameter, the imaging parameter of standard photoacoustics, also allows us to reconstruct the spatially varying sound speed and density of the medium. We provide analytical reconstruction formulas for all three parameters in a linearized model based on single plane illumination microscopy (SPIM) techniques.
Systems of Inhomogeneous Linear Equations
NASA Astrophysics Data System (ADS)
Scherer, Philipp O. J.
Many problems in physics and especially computational physics involve systems of linear equations which arise e.g. from linearization of a general nonlinear problem or from discretization of differential equations. If the dimension of the system is not too large standard methods like Gaussian elimination or QR decomposition are sufficient. Systems with a tridiagonal matrix are important for cubic spline interpolation and numerical second derivatives. They can be solved very efficiently with a specialized Gaussian elimination method. Practical applications often involve very large dimensions and require iterative methods. Convergence of Jacobi and Gauss-Seidel methods is slow and can be improved by relaxation or over-relaxation. An alternative for large systems is the method of conjugate gradients.
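For the tridiagonal case mentioned above, the specialized Gaussian elimination is commonly known as the Thomas algorithm; a short sketch (clarity over performance, with an assumed second-difference test matrix) is:

```python
# Thomas algorithm: a, b, c are the sub-, main- and super-diagonals; d is the RHS.
import numpy as np

def thomas(a, b, c, d):
    n = len(d)
    cp, dp = np.empty(n), np.empty(n)
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):                       # forward sweep
        denom = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / denom if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / denom
    x = np.empty(n)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):              # back substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# quick check on a 6x6 second-difference matrix
n = 6
a = np.full(n, -1.0); a[0] = 0.0
b = np.full(n, 2.0)
c = np.full(n, -1.0); c[-1] = 0.0
d = np.ones(n)
print(thomas(a, b, c, d))
```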
Analyzing Response Times in Tests with Rank Correlation Approaches
ERIC Educational Resources Information Center
Ranger, Jochen; Kuhn, Jorg-Tobias
2013-01-01
It is common practice to log-transform response times before analyzing them with standard factor analytical methods. However, sometimes the log-transformation is not capable of linearizing the relation between the response times and the latent traits. Therefore, a more general approach to response time analysis is proposed in the current…
Least Squares Metric, Unidimensional Scaling of Multivariate Linear Models.
ERIC Educational Resources Information Center
Poole, Keith T.
1990-01-01
A general approach to least-squares unidimensional scaling is presented. Ordering information contained in the parameters is used to transform the standard squared error loss function into a discrete rather than continuous form. Monte Carlo tests with 38,094 ratings of 261 senators, and 1,258 representatives demonstrate the procedure's…
Aerodynamic characteristics of the standard dynamics model in coning motion at Mach 0.6
NASA Technical Reports Server (NTRS)
Jermey, C.; Schiff, L. B.
1985-01-01
A wind tunnel test was conducted on the Standard Dynamics Model (a simplified generic fighter aircraft shape) undergoing coning motion at Mach 0.6. Six-component force and moment data are presented for a range of angles of attack, sideslip, and coning rates. At the relatively low non-dimensional coning rate employed (ωb/2V ≤ 0.04), the lateral aerodynamic characteristics generally show a linear variation with coning rate.
Determinants of weight gain in the action to control cardiovascular risk in diabetes trial.
Fonseca, Vivian; McDuffie, Roberta; Calles, Jorge; Cohen, Robert M; Feeney, Patricia; Feinglos, Mark; Gerstein, Hertzel C; Ismail-Beigi, Faramarz; Morgan, Timothy M; Pop-Busui, Rodica; Riddle, Matthew C
2013-08-01
Identify determinants of weight gain in people with type 2 diabetes mellitus (T2DM) allocated to intensive versus standard glycemic control in the Action to Control Cardiovascular Risk in Diabetes (ACCORD) trial. We studied determinants of weight gain over 2 years in 8,929 participants (4,425 intensive arm and 4,504 standard arm) with T2DM in the ACCORD trial. We used general linear models to examine the association between each baseline characteristic and weight change at the 2-year visit. We fit a linear regression of change in weight and A1C and used general linear models to examine the association between each medication at baseline and weight change at the 2-year visit, stratified by glycemia allocation. There was significantly more weight gain in the intensive glycemia arm of the trial compared with the standard arm (3.0 ± 7.0 vs. 0.3 ± 6.3 kg). On multivariate analysis, younger age, male sex, Asian race, no smoking history, high A1C, baseline BMI of 25-35, high waist circumference, baseline insulin use, and baseline metformin use were independently associated with weight gain over 2 years. Reduction of A1C from baseline was consistently associated with weight gain only when baseline A1C was elevated. Medication usage accounted for <15% of the variability of weight change, with initiation of thiazolidinedione (TZD) use the most prominent factor. Intensive participants who never took insulin or a TZD had an average weight loss of 2.9 kg during the first 2 years of the trial. In contrast, intensive participants who had never previously used insulin or TZD but began this combination after enrolling in the ACCORD trial had a weight gain of 4.6-5.3 kg at 2 years. Weight gain in ACCORD was greater with intensive than with standard treatment and generally associated with reduction of A1C from elevated baseline values. Initiation of TZD and/or insulin therapy was the most important medication-related factor associated with weight gain.
Determination of water depth with high-resolution satellite imagery over variable bottom types
Stumpf, Richard P.; Holderied, Kristine; Sinclair, Mark
2003-01-01
A standard algorithm for determining depth in clear water from passive sensors exists; but it requires tuning of five parameters and does not retrieve depths where the bottom has an extremely low albedo. To address these issues, we developed an empirical solution using a ratio of reflectances that has only two tunable parameters and can be applied to low-albedo features. The two algorithms--the standard linear transform and the new ratio transform--were compared through analysis of IKONOS satellite imagery against lidar bathymetry. The coefficients for the ratio algorithm were tuned manually to a few depths from a nautical chart, yet performed as well as the linear algorithm tuned using multiple linear regression against the lidar. Both algorithms compensate for variable bottom type and albedo (sand, pavement, algae, coral) and retrieve bathymetry in water depths of less than 10-15 m. However, the linear transform does not distinguish depths >15 m and is more subject to variability across the studied atolls. The ratio transform can, in clear water, retrieve depths in >25 m of water and shows greater stability between different areas. It also performs slightly better in scattering turbidity than the linear transform. The ratio algorithm is somewhat noisier and cannot always adequately resolve fine morphology (structures smaller than 4-5 pixels) in water depths >15-20 m. In general, the ratio transform is more robust than the linear transform.
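A sketch of the two-parameter ratio transform described here, assuming water-leaving reflectances in a blue and a green band; the coefficient values below are placeholders that would be tuned to a few charted depths, not values from the paper.

```python
import numpy as np

def ratio_transform_depth(R_blue, R_green, m1=20.0, m0=10.0, n=1000.0):
    """Estimate water depth from a ratio of reflectances (sketch of the
    two-parameter ratio transform described in the abstract).

    R_blue, R_green : arrays of water-leaving reflectance in two bands.
    m1, m0          : the two tunable coefficients (site-specific placeholders).
    n               : fixed scaling constant keeping both log terms positive.
    """
    return m1 * np.log(n * R_blue) / np.log(n * R_green) - m0

# Tuning could be a simple least-squares fit of m1 and m0 to known depths,
# e.g. np.polyfit on x = log(n*R_blue)/log(n*R_green) against charted depths.
```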
Competing regression models for longitudinal data.
Alencar, Airlane P; Singer, Julio M; Rocha, Francisco Marcelo M
2012-03-01
The choice of an appropriate family of linear models for the analysis of longitudinal data is often a matter of concern for practitioners. To attenuate such difficulties, we discuss some issues that emerge when analyzing this type of data via a practical example involving pretest-posttest longitudinal data. In particular, we consider log-normal linear mixed models (LNLMM), generalized linear mixed models (GLMM), and models based on generalized estimating equations (GEE). We show how some special features of the data, like a nonconstant coefficient of variation, may be handled in the three approaches and evaluate their performance with respect to the magnitude of standard errors of interpretable and comparable parameters. We also show how different diagnostic tools may be employed to identify outliers and comment on available software. We conclude by noting that the results are similar, but that GEE-based models may be preferable when the goal is to compare the marginal expected responses. © 2012 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
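A minimal sketch of one of the three approaches compared, a GEE fit with an exchangeable working correlation, on synthetic pretest-posttest data; the column names and values are illustrative, not from the study.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Synthetic pretest-posttest data: 30 subjects, 2 time points each.
rng = np.random.default_rng(0)
n = 30
df = pd.DataFrame({
    "subject": np.repeat(np.arange(n), 2),
    "time": np.tile([0, 1], n),
    "group": np.repeat(rng.integers(0, 2, n), 2),
})
df["y"] = 5 + 1.5 * df["time"] * df["group"] + rng.normal(0, 1, 2 * n)

# GEE with an exchangeable working correlation; robust standard errors
# for the marginal (population-averaged) effects.
model = smf.gee("y ~ time * group", groups="subject", data=df,
                family=sm.families.Gaussian(),
                cov_struct=sm.cov_struct.Exchangeable())
print(model.fit().summary())
```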
Bayesian Correction for Misclassification in Multilevel Count Data Models.
Nelson, Tyler; Song, Joon Jin; Chin, Yoo-Mi; Stamey, James D
2018-01-01
Covariate misclassification is well known to yield biased estimates in single level regression models. The impact on hierarchical count models has been less studied. A fully Bayesian approach to modeling both the misclassified covariate and the hierarchical response is proposed. Models with a single diagnostic test and with multiple diagnostic tests are considered. Simulation studies show the ability of the proposed model to appropriately account for the misclassification by reducing bias and improving performance of interval estimators. A real data example further demonstrated the consequences of ignoring the misclassification. Ignoring misclassification yielded a model that indicated there was a significant, positive impact on the number of children of females who observed spousal abuse between their parents. When the misclassification was accounted for, the relationship switched to negative, but not significant. Ignoring misclassification in standard linear and generalized linear models is well known to lead to biased results. We provide an approach to extend misclassification modeling to the important area of hierarchical generalized linear models.
General relativistic corrections to the weak lensing convergence power spectrum
NASA Astrophysics Data System (ADS)
Giblin, John T.; Mertens, James B.; Starkman, Glenn D.; Zentner, Andrew R.
2017-11-01
We compute the weak lensing convergence power spectrum, C_ℓ^κκ, in a dust-filled universe using fully nonlinear general relativistic simulations. The spectrum is then compared to more standard, approximate calculations by computing the Bardeen (Newtonian) potentials in linearized gravity and partially utilizing the Born approximation. We find corrections to the angular power spectrum amplitude of order ten percent at very large angular scales, ℓ ~ 2-3, and percent-level corrections at intermediate angular scales of ℓ ~ 20-30.
A Complete Multimode Equivalent-Circuit Theory for Electrical Design
Williams, Dylan F.; Hayden, Leonard A.; Marks, Roger B.
1997-01-01
This work presents a complete equivalent-circuit theory for lossy multimode transmission lines. Its voltages and currents are based on general linear combinations of standard normalized modal voltages and currents. The theory includes new expressions for transmission line impedance matrices, symmetry and lossless conditions, source representations, and the thermal noise of passive multiports.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Moradi, Afshin, E-mail: a.moradi@kut.ac.ir
We develop the Maxwell-Garnett theory for the effective medium approximation of composite materials with metallic nanoparticles by taking into account the quantum spatial dispersion effects in the dielectric response of the nanoparticles. We derive a quantum nonlocal generalization of the standard Maxwell-Garnett formula by means of the linearized quantum hydrodynamic theory in conjunction with the Poisson equation as well as the appropriate additional quantum boundary conditions.
Spinning particle and gauge theories as integrability conditions
NASA Astrophysics Data System (ADS)
Eisenberg, Yeshayahu
1992-02-01
Starting from a new four dimensional spinning point particle we obtain new representations of the standard four dimensional gauge field equations in terms of a generalized space (Minkowski + light cone). In terms of this new formulation we define linear systems whose integrability conditions imply the massive Dirac-Maxwell and the Yang-Mills equations. Research supported by the Rothschild Fellowship.
NASA Astrophysics Data System (ADS)
Ryan, D. P.; Roth, G. S.
1982-04-01
Complete documentation of the 15 programs and 11 data files of the EPA Atomic Absorption Instrument Automation System is presented. The system incorporates the following major features: (1) multipoint calibration using first, second, or third degree regression or linear interpolation, (2) timely quality control assessments for spiked samples, duplicates, laboratory control standards, reagent blanks, and instrument check standards, (3) reagent blank subtraction, and (4) plotting of calibration curves and raw data peaks. The programs of this system are written in Data General Extended BASIC, Revision 4.3, as enhanced for multi-user, real-time data acquisition. They run in a Data General Nova 840 minicomputer under the operating system RDOS, Revision 6.2. There is a functional description, a symbol definitions table, a functional flowchart, a program listing, and a symbol cross reference table for each program. The structure of every data file is also detailed.
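A short sketch of feature (1), multipoint calibration by first-, second-, or third-degree regression of absorbance on standard concentrations; the values are made up, and this is a Python illustration rather than the original Data General BASIC.

```python
import numpy as np

# Calibration standards: concentration (mg/L) and measured absorbance (illustrative).
conc = np.array([0.0, 0.5, 1.0, 2.0, 4.0])
absb = np.array([0.002, 0.051, 0.104, 0.203, 0.396])

# Multipoint calibration by first-, second-, or third-degree regression,
# analogous to feature (1) of the automation system.
for degree in (1, 2, 3):
    coeffs = np.polyfit(conc, absb, degree)
    print(degree, np.poly1d(coeffs))

# An unknown sample is then quantified by inverting the fitted curve,
# e.g. with np.roots on (calibration polynomial - measured absorbance).
```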
Cooling in the single-photon strong-coupling regime of cavity optomechanics
NASA Astrophysics Data System (ADS)
Nunnenkamp, A.; Børkje, K.; Girvin, S. M.
2012-05-01
In this Rapid Communication we discuss how red-sideband cooling is modified in the single-photon strong-coupling regime of cavity optomechanics where the radiation pressure of a single photon displaces the mechanical oscillator by more than its zero-point uncertainty. Using Fermi's golden rule we calculate the transition rates induced by the optical drive without linearizing the optomechanical interaction. In the resolved-sideband limit we find multiple-phonon cooling resonances for strong single-photon coupling that lead to nonthermal steady states including the possibility of phonon antibunching. Our study generalizes the standard linear cooling theory.
Spectral Theory of Matrices. I. General Matrices.
1980-05-01
bases u_1, ..., u_m and v_1, ..., v_n. The rank of A, denoted by r(A), is defined as the size of the largest nonvanishing minor of A ... which do not ... matrix. Indeed, if U is invertible then U^{-1} exists in the division ring F. Moreover, the standard formula for U^{-1} in terms of the minors of U ... an r x r minor of A which contains the column b. Since b is a linear combination of the other columns of A, we deduce that this minor is a linear
Ding, Changfeng; Li, Xiaogang; Zhang, Taolin; Ma, Yibing; Wang, Xingxiang
2014-10-01
Soil environmental quality standards in respect of heavy metals for farmlands should be established considering both their effects on crop yield and their accumulation in the edible part. A greenhouse experiment was conducted to investigate the effects of chromium (Cr) on biomass production and Cr accumulation in carrot plants grown in a wide range of soils. The results revealed that carrot yield significantly decreased in 18 of the total 20 soils when Cr was added at the level of the soil environmental quality standard of China. The Cr content of carrot grown in the five soils with pH > 8.0 exceeded the maximum allowable level (0.5 mg kg(-1)) according to the Chinese General Standard for Contaminants in Foods. The relationship between carrot Cr concentration and soil pH could be well fitted (R(2) = 0.70, P < 0.0001) by a linear-linear segmented regression model. The addition of Cr to soil influenced carrot yield first, rather than food quality. The major soil factors controlling Cr phytotoxicity and the prediction models were further identified and developed using path analysis and stepwise multiple linear regression analysis. Soil Cr thresholds for phytotoxicity while ensuring food safety were then derived on the condition of a 10 percent yield reduction. Copyright © 2014 Elsevier Inc. All rights reserved.
Relativistic weak lensing from a fully non-linear cosmological density field
DOE Office of Scientific and Technical Information (OSTI.GOV)
Thomas, D.B.; Bruni, M.; Wands, D., E-mail: thomas.daniel@ucy.ac.cy, E-mail: marco.bruni@port.ac.uk, E-mail: david.wands@port.ac.uk
2015-09-01
In this paper we examine cosmological weak lensing on non-linear scales and show that there are Newtonian and relativistic contributions and that the latter can also be extracted from standard Newtonian simulations. We use the post-Friedmann formalism, a post-Newtonian type framework for cosmology, to derive the full weak-lensing deflection angle valid on non-linear scales for any metric theory of gravity. We show that the only contributing term that is quadratic in the first order deflection is the expected Born correction and lens-lens coupling term. We use this deflection angle to analyse the vector and tensor contributions to the E- and B-mode cosmic shear power spectra. In our approach, once the gravitational theory has been specified, the metric components are related to the matter content in a well-defined manner. Specifying General Relativity, we write down a complete set of equations for a GR+ΛCDM universe for computing all of the possible lensing terms from Newtonian N-body simulations. We illustrate this with the vector potential and show that, in a GR+ΛCDM universe, its contribution to the E-mode is negligible with respect to that of the conventional Newtonian scalar potential, even on non-linear scales. Thus, under the standard assumption that Newtonian N-body simulations give a good approximation of the matter dynamics, we show that the standard ray tracing approach gives a good description for a ΛCDM cosmology.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yoo, Jaiyul
2010-10-15
We extend the general relativistic description of galaxy clustering developed in Yoo, Fitzpatrick, and Zaldarriaga (2009). For the first time we provide a fully general relativistic description of the observed matter power spectrum and the observed galaxy power spectrum with the linear bias ansatz. It is significantly different from the standard Newtonian description on large scales and especially its measurements on large scales can be misinterpreted as the detection of the primordial non-Gaussianity even in the absence thereof. The key difference in the observed galaxy power spectrum arises from the real-space matter fluctuation defined as the matter fluctuation at the hypersurface of the observed redshift. As opposed to the standard description, the shape of the observed galaxy power spectrum evolves in redshift, providing additional cosmological information. While the systematic errors in the standard Newtonian description are negligible in the current galaxy surveys at low redshift, correct general relativistic description is essential for understanding the galaxy power spectrum measurements on large scales in future surveys with redshift depth z ≥ 3. We discuss ways to improve the detection significance in the current galaxy surveys and comment on applications of our general relativistic formalism in future surveys.
General Model of Photon-Pair Detection with an Image Sensor
NASA Astrophysics Data System (ADS)
Defienne, Hugo; Reichert, Matthew; Fleischer, Jason W.
2018-05-01
We develop an analytic model that relates intensity correlation measurements performed by an image sensor to the properties of photon pairs illuminating it. Experiments using an effective single-photon counting camera, a linear electron-multiplying charge-coupled device camera, and a standard CCD camera confirm the model. The results open the field of quantum optical sensing using conventional detectors.
Multilayer neural networks for reduced-rank approximation.
Diamantaras, K I; Kung, S Y
1994-01-01
This paper is developed in two parts. First, the authors formulate the solution to the general reduced-rank linear approximation problem relaxing the invertibility assumption of the input autocorrelation matrix used by previous authors. The authors' treatment unifies linear regression, Wiener filtering, full rank approximation, auto-association networks, SVD and principal component analysis (PCA) as special cases. The authors' analysis also shows that two-layer linear neural networks with reduced number of hidden units, trained with the least-squares error criterion, produce weights that correspond to the generalized singular value decomposition of the input-teacher cross-correlation matrix and the input data matrix. As a corollary the linear two-layer backpropagation model with reduced hidden layer extracts an arbitrary linear combination of the generalized singular vector components. Second, the authors investigate artificial neural network models for the solution of the related generalized eigenvalue problem. By introducing and utilizing the extended concept of deflation (originally proposed for the standard eigenvalue problem) the authors are able to find that a sequential version of linear BP can extract the exact generalized eigenvector components. The advantage of this approach is that it is easier to update the model structure by adding one more unit or pruning one or more units when the application requires it. An alternative approach for extracting the exact components is to use a set of lateral connections among the hidden units trained in such a way as to enforce orthogonality among the upper- and lower-layer weights. The authors call this the lateral orthogonalization network (LON) and show via theoretical analysis, and verify via simulation, that the network extracts the desired components. The advantage of the LON-based model is that it can be applied in a parallel fashion so that the components are extracted concurrently. Finally, the authors show the application of their results to the solution of the identification problem of systems whose excitation has a non-invertible autocorrelation matrix. Previous identification methods usually rely on the invertibility assumption of the input autocorrelation; therefore, they cannot be applied to this case.
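A sketch of the closed-form reduced-rank linear approximation (the full least-squares solution projected onto the leading singular directions of the fitted outputs) that the two-layer linear networks discussed here converge to; variable names and data are illustrative.

```python
import numpy as np

def reduced_rank_weights(X, T, rank):
    """Closed-form reduced-rank linear map from inputs X (n x d) to targets
    T (n x k): full least-squares solution projected onto the top singular
    directions of the fitted outputs (illustrative sketch)."""
    W_full, *_ = np.linalg.lstsq(X, T, rcond=None)        # full-rank LS weights
    U, s, Vt = np.linalg.svd(X @ W_full, full_matrices=False)
    P = Vt[:rank].T @ Vt[:rank]                            # projector onto top-rank output subspace
    return W_full @ P

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 10))
T = X @ rng.normal(size=(10, 5)) + 0.1 * rng.normal(size=(200, 5))
W2 = reduced_rank_weights(X, T, rank=2)
print(np.linalg.matrix_rank(W2))  # -> 2
```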
Equivalent circuit simulation of HPEM-induced transient responses at nonlinear loads
NASA Astrophysics Data System (ADS)
Kotzev, Miroslav; Bi, Xiaotang; Kreitlow, Matthias; Gronwald, Frank
2017-09-01
In this paper the equivalent circuit modeling of a nonlinearly loaded loop antenna and its transient responses to HPEM field excitations are investigated. For the circuit modeling the general strategy to characterize the nonlinearly loaded antenna by a linear and a nonlinear circuit part is pursued. The linear circuit part can be determined by standard methods of antenna theory and numerical field computation. The modeling of the nonlinear circuit part requires realistic circuit models of the nonlinear loads that are given by Schottky diodes. Combining both parts, appropriate circuit models are obtained and analyzed by means of a standard SPICE circuit simulator. It is the main result that in this way full-wave simulation results can be reproduced. Furthermore it is clearly seen that the equivalent circuit modeling offers considerable advantages with respect to computation speed and also leads to improved physical insights regarding the coupling between HPEM field excitation and nonlinearly loaded loop antenna.
Jean, Sonia; Hudson, Marie; Gamache, Philippe; Bessette, Louis; Fortin, Paul R; Boire, Gilles; Bernatsky, Sasha
2017-12-01
Health administrative data are a potentially efficient resource to conduct population-based research and surveillance, including trends in incidence and mortality over time. Our objective was to explore time trends in incidence and mortality for rheumatoid arthritis (RA), as well as estimating period prevalence. Our RA case definition was based on one or more hospitalizations with a RA diagnosis code, or three or more RA physician-billing codes, over 2 years, with at least one RA billing code by a rheumatologist, orthopedic surgeon, or internist. To identify incident cases, a "run-in" period of 5 years (1996-2000) was used to exclude prevalent cases. Crude age and sex-specific incidence rates were calculated (using data from 2001 to 2015), and sex-specific incidence rates were also standardized to the 2001 age structure of the Quebec population. We linked the RA cohort (both prevalent and incident patients) to the vital statistics registry, and standardized mortality rate ratios were generated. Negative binomial regression was used to test for linear change in standardized incidence rates and mortality ratios. The linear trends in standardized incidence rates did not show significant change over the study period. Mortality in RA was significantly higher than the general population and this remained true throughout the study period. Our prevalence estimate suggested 0.8% of the Quebec population may be affected by RA. RA incidence appeared relatively stable, and mortality was substantially higher in RA versus the general population and remained so over the study period. This suggests the need to optimize long-term RA outcomes.
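A minimal sketch of the trend test described, negative binomial regression of yearly case counts on calendar year with a log population offset; the counts below are synthetic and purely illustrative.

```python
import numpy as np
import statsmodels.api as sm

# Synthetic yearly case counts with the population at risk as exposure.
years = np.arange(2001, 2016)
pop = np.full(years.shape, 4.0e6)
rng = np.random.default_rng(2)
cases = rng.poisson(0.0007 * pop)            # roughly stable incidence

X = sm.add_constant(years - 2001.0)
# Negative binomial regression with a log(population) offset; the coefficient
# on year tests for a linear change in the standardized rate (sketch).
model = sm.GLM(cases, X, family=sm.families.NegativeBinomial(),
               offset=np.log(pop))
print(model.fit().summary())
```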
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dong, X; Petrongolo, M; Wang, T
Purpose: A general problem of dual-energy CT (DECT) is that the decomposition is sensitive to noise in the two sets of dual-energy projection data, resulting in severely degraded qualities of decomposed images. We have previously proposed an iterative denoising method for DECT. Using a linear decomposition function, the method does not gain the full benefits of DECT on beam-hardening correction. In this work, we expand the framework of our iterative method to include non-linear decomposition models for noise suppression in DECT. Methods: We first obtain decomposed projections, which are free of beam-hardening artifacts, using a lookup table pre-measured on a calibration phantom. First-pass material images with high noise are reconstructed from the decomposed projections using standard filtered-backprojection reconstruction. Noise on the decomposed images is then suppressed by an iterative method, which is formulated in the form of least-square estimation with smoothness regularization. Based on the design principles of a best linear unbiased estimator, we include the inverse of the estimated variance-covariance matrix of the decomposed images as the penalty weight in the least-square term. Analytical formulae are derived to compute the variance-covariance matrix from the measured decomposition lookup table. Results: We have evaluated the proposed method via phantom studies. Using non-linear decomposition, our method effectively suppresses the streaking artifacts of beam-hardening and obtains more uniform images than our previous approach based on a linear model. The proposed method reduces the average noise standard deviation of two basis materials by one order of magnitude without sacrificing the spatial resolution. Conclusion: We propose a general framework of iterative denoising for material decomposition of DECT. Preliminary phantom studies have shown the proposed method improves the image uniformity and reduces noise level without resolution loss. In the future, we will perform more phantom studies to further validate the performance of the proposed method. This work is supported by a Varian MRA grant.
Projective-Dual Method for Solving Systems of Linear Equations with Nonnegative Variables
NASA Astrophysics Data System (ADS)
Ganin, B. V.; Golikov, A. I.; Evtushenko, Yu. G.
2018-02-01
In order to solve an underdetermined system of linear equations with nonnegative variables, the projection of a given point onto its solutions set is sought. The dual of this problem—the problem of unconstrained maximization of a piecewise-quadratic function—is solved by Newton's method. The problem of unconstrained optimization dual of the regularized problem of finding the projection onto the solution set of the system is considered. A connection of duality theory and Newton's method with some known algorithms of projecting onto a standard simplex is shown. On the example of taking into account the specifics of the constraints of the transport linear programming problem, the possibility to increase the efficiency of calculating the generalized Hessian matrix is demonstrated. Some examples of numerical calculations using MATLAB are presented.
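For reference, a sketch of the classical sort-and-threshold Euclidean projection onto the standard simplex, one of the known projection algorithms to which the duality connection is drawn; this is not the authors' projective-dual Newton method.

```python
import numpy as np

def project_onto_simplex(v):
    """Euclidean projection of v onto the standard simplex
    {x : x >= 0, sum(x) = 1}, via the classical sort-and-threshold rule.
    The paper treats the more general projection onto the solution set of
    Ax = b, x >= 0 through its dual, solved with Newton's method."""
    u = np.sort(v)[::-1]                                   # sort descending
    css = np.cumsum(u) - 1.0
    rho = np.nonzero(u - css / np.arange(1, len(v) + 1) > 0)[0][-1]
    theta = css[rho] / (rho + 1)                           # shift that enforces the constraints
    return np.maximum(v - theta, 0.0)

print(project_onto_simplex(np.array([0.7, 0.3, -0.1, 0.4])))  # components sum to 1
```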
Duality linking standard and tachyon scalar field cosmologies
DOE Office of Scientific and Technical Information (OSTI.GOV)
Avelino, P. P.; Bazeia, D.; Losano, L.
2010-09-15
In this work we investigate the duality linking standard and tachyon scalar field homogeneous and isotropic cosmologies in N+1 dimensions. We determine the transformation between standard and tachyon scalar fields and between their associated potentials, corresponding to the same background evolution. We show that, in general, the duality is broken at a perturbative level, when deviations from a homogeneous and isotropic background are taken into account. However, we find that for slow-rolling fields the duality is still preserved at a linear level. We illustrate our results with specific examples of cosmological relevance, where the correspondence between scalar and tachyon scalar field models can be calculated explicitly.
Bowen, Stephen R; Chappell, Richard J; Bentzen, Søren M; Deveau, Michael A; Forrest, Lisa J; Jeraj, Robert
2012-01-01
Purpose: To quantify associations between pre-radiotherapy and post-radiotherapy PET parameters via spatially resolved regression. Materials and methods: Ten canine sinonasal cancer patients underwent PET/CT scans of [18F]FDG (FDGpre), [18F]FLT (FLTpre), and [61Cu]Cu-ATSM (Cu-ATSMpre). Following radiotherapy regimens of 50 Gy in 10 fractions, veterinary patients underwent FDG PET/CT scans at three months (FDGpost). Regression of standardized uptake values in baseline FDGpre, FLTpre and Cu-ATSMpre tumour voxels to those in FDGpost images was performed for linear, log-linear, generalized-linear and mixed-fit linear models. Goodness-of-fit in regression coefficients was assessed by R2. Hypothesis testing of coefficients over the patient population was performed. Results: Multivariate linear model fits of FDGpre to FDGpost were significantly positive over the population (FDGpost ~ 0.17 FDGpre, p=0.03), and classified slopes of RECIST non-responders and responders to be different (0.37 vs. 0.07, p=0.01). Generalized-linear model fits related FDGpre to FDGpost by a linear power law (FDGpost ~ FDGpre^0.93, p<0.001). Univariate mixture model fits of FDGpre improved R2 from 0.17 to 0.52. Neither baseline FLT PET nor Cu-ATSM PET uptake contributed statistically significant multivariate regression coefficients. Conclusions: Spatially resolved regression analysis indicates that pre-treatment FDG PET uptake is most strongly associated with three-month post-treatment FDG PET uptake in this patient population, though associations are histopathology-dependent.
The linear stability of the post-Newtonian triangular equilibrium in the three-body problem
NASA Astrophysics Data System (ADS)
Yamada, Kei; Tsuchiya, Takuya
2017-12-01
Continuing a work initiated in an earlier publication (Yamada et al. in Phys Rev D 91:124016, 2015), we reexamine the linear stability of the triangular solution in the relativistic three-body problem for general masses by standard linear algebraic analysis. In this paper, we start with the Einstein-Infeld-Hoffmann form of equations of motion for N-body systems in the uniformly rotating frame. As an extension of the previous work, we consider general perturbations to the equilibrium, i.e., we take account of perturbations orthogonal to the orbital plane, as well as perturbations lying on it. It is found that the orthogonal perturbations depend on each other through the first post-Newtonian (1PN) three-body interactions, though they are independent of the in-plane ones, as in the Newtonian case. We also show that the orthogonal perturbations do not affect the condition of stability. This is because they do not grow with time, but always precess with two frequency modes, namely, one equal to the orbital frequency and one slightly different due to the 1PN effect. The condition of stability, which is identical to that obtained in the previous work (Yamada et al. 2015) and is valid for the general perturbations, is obtained from the in-plane perturbations.
NASA Astrophysics Data System (ADS)
Lahaie, Sébastien; Parkes, David C.
We consider the problem of fair allocation in the package assignment model, where a set of indivisible items, held by a single seller, must be efficiently allocated to agents with quasi-linear utilities. A fair assignment is one that is efficient and envy-free. We consider a model where bidders have superadditive valuations, meaning that items are pure complements. Our central result is that core outcomes are fair and even coalition-fair over this domain, while fair distributions may not even exist for general valuations. Of relevance to auction design, we also establish that the core is equivalent to the set of anonymous-price competitive equilibria, and that superadditive valuations are a maximal domain that guarantees the existence of anonymous-price competitive equilibrium. Our results are analogs of core equivalence results for linear prices in the standard assignment model, and for nonlinear, non-anonymous prices in the package assignment model with general valuations.
LAMPAT and LAMPATNL User’s Manual
2012-09-01
nonlinearity. These tools are implemented as subroutines in the finite element software ABAQUS. This user's manual provides information on the proper ... model either through the General tab of the Edit Job dialog box in Abaqus/CAE or the command line with user=(subroutine filename). Table 1 ... Selection of software product and subroutine. Static Analysis With Abaqus/Standard; Dynamic Analysis With Abaqus/Explicit; Linear, uncoupled
Zeynoddin, Mohammad; Bonakdari, Hossein; Azari, Arash; Ebtehaj, Isa; Gharabaghi, Bahram; Riahi Madavar, Hossein
2018-09-15
A novel hybrid approach is presented that can more accurately predict monthly rainfall in a tropical climate by integrating a linear stochastic model with a powerful non-linear extreme learning machine method. The new hybrid method was evaluated under four general scenarios. In the first scenario, the modeling process is initiated without preprocessing the input data, as a base case, while in the other three scenarios one-step and two-step procedures are utilized to make the model predictions more precise. These scenarios are based on combinations of stationarization techniques (i.e., differencing, seasonal and non-seasonal standardization, and spectral analysis) and normality transforms (i.e., Box-Cox, John and Draper, Yeo and Johnson, Johnson, Box-Cox-Mod, log, log standard, and Manly). In scenario 2, a one-step scenario, the stationarization methods are employed as preprocessing approaches. In scenarios 3 and 4, different combinations of normality transforms and stationarization methods are considered as preprocessing techniques. In total, 61 sub-scenarios are evaluated, resulting in 11,013 models (10,785 linear models, 4 nonlinear models, and 224 hybrid models). The uncertainty of the linear, nonlinear and hybrid models is examined by a Monte Carlo technique. The best preprocessing technique is the Johnson normality transform followed by seasonal standardization (R2 = 0.99; RMSE = 0.6; MAE = 0.38; RMSRE = 0.1; MARE = 0.06; UI = 0.03 and UII = 0.05). The results of the uncertainty analysis indicated the good performance of the proposed technique (d-factor = 0.27; 95PPU = 83.57). Moreover, the results of the proposed methodology were compared with an evolutionary hybrid of an adaptive neuro fuzzy inference system with the firefly algorithm (ANFIS-FFA), demonstrating that the new hybrid methods outperformed the ANFIS-FFA method. Copyright © 2018 Elsevier Ltd. All rights reserved.
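A minimal sketch of the two-step preprocessing reported as best (a normality transform followed by seasonal standardization), assuming a synthetic monthly rainfall series; Box-Cox stands in for the Johnson transform, which is not available in scipy, and all values are illustrative.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
months = np.tile(np.arange(12), 20)                       # 20 years of monthly data
rain = rng.gamma(2.0, 50.0, size=months.size) * (1 + 0.5 * np.sin(2 * np.pi * months / 12))

# Step 1: normality transform (Box-Cox here as a stand-in for the Johnson
# transform reported as best; requires strictly positive data).
rain_t, lam = stats.boxcox(rain + 1.0)

# Step 2: seasonal standardization: remove each calendar month's mean and
# scale by its standard deviation, so the stochastic model sees a
# (near-)stationary series.
mean_m = np.array([rain_t[months == m].mean() for m in range(12)])
std_m = np.array([rain_t[months == m].std(ddof=1) for m in range(12)])
rain_std = (rain_t - mean_m[months]) / std_m[months]

# rain_std would then feed the linear stochastic model and the extreme
# learning machine component of the hybrid.
print(rain_std.mean(), rain_std.std())
```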
Marshall, Julian D; Apte, Joshua S; Coggins, Jay S; Goodkind, Andrew L
2015-12-15
The largest U.S. environmental health risk is cardiopulmonary mortality from ambient PM2.5. The concentration-response (C-R) relationship for ambient PM2.5 in the U.S. is generally assumed to be linear: from any initial baseline, a given concentration reduction would yield the same improvement in health risk. Recent evidence points to the perplexing possibility that the PM2.5 C-R for cardiopulmonary mortality and some other major endpoints might be supralinear: a given concentration reduction would yield greater improvements in health risk as the initial baseline becomes cleaner. We explore the implications of supralinearity for air policy, with emphasis on the U.S. If the C-R is supralinear, an economically efficient PM2.5 target may be substantially more stringent than under current standards. Also, if a goal of air policy is to achieve the greatest health improvement per unit of PM2.5 reduction, the optimal policy might call for greater emission reductions in already-clean locales, making "blue skies bluer", which may be at odds with environmental equity goals. Regardless of whether the C-R is linear or supralinear, the health benefits of attaining U.S. PM2.5 levels well below the current standard would be large. For the supralinear C-R considered here, attaining the current U.S. EPA standard, 12 μg m(-3), would avert only ~17% (if the C-R is linear: ~25%) of the total annual cardiopulmonary mortality attributable to PM2.5.
On the LHC sensitivity for non-thermalised hidden sectors
NASA Astrophysics Data System (ADS)
Kahlhoefer, Felix
2018-04-01
We show under rather general assumptions that hidden sectors that never reach thermal equilibrium in the early Universe are also inaccessible for the LHC. In other words, any particle that can be produced at the LHC must either have been in thermal equilibrium with the Standard Model at some point or must be produced via the decays of another hidden sector particle that has been in thermal equilibrium. To reach this conclusion, we parametrise the cross section connecting the Standard Model to the hidden sector in a very general way and use methods from linear programming to calculate the largest possible number of LHC events compatible with the requirement of non-thermalisation. We find that even the HL-LHC cannot possibly produce more than a few events with energy above 10 GeV involving states from a non-thermalised hidden sector.
Flavor non-universal gauge interactions and anomalies in B-meson decays
NASA Astrophysics Data System (ADS)
Tang, Yong; Wu, Yue-Liang
2018-02-01
Motivated by flavor non-universality and anomalies in semi-leptonic B-meson decays, we present a general and systematic discussion about how to construct anomaly-free U(1)‧ gauge theories based on an extended standard model with only three right-handed neutrinos. If all standard model fermions are vector-like under this new gauge symmetry, the most general family non-universal charge assignments, (a,b,c) for three-generation quarks and (d,e,f) for leptons, need satisfy only one condition to be anomaly-free: 3(a+b+c) = -(d+e+f). Any assignment can be written as a linear combination of five independent anomaly-free solutions. We also illustrate how such models can generally lead to flavor-changing interactions and easily resolve the anomalies in B-meson decays. Probes with B_s - B̄_s mixing, decays into τ±, and dilepton and dijet searches at colliders are also discussed. Supported by the Grant-in-Aid for Innovative Areas (16H06490)
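A tiny sketch that checks the quoted anomaly-free condition for candidate charge assignments; the example assignments are purely illustrative and not taken from the paper.

```python
def anomaly_free(quark_charges, lepton_charges):
    """Check the condition 3(a+b+c) = -(d+e+f) quoted in the abstract for
    family non-universal U(1)' charges (a,b,c) of quarks and (d,e,f) of leptons."""
    a, b, c = quark_charges
    d, e, f = lepton_charges
    return 3 * (a + b + c) == -(d + e + f)

# Example assignments (illustrative, not from the paper):
print(anomaly_free((0, 0, 1), (0, -1, -2)))   # True:  3*1 = -(-3)
print(anomaly_free((1, 1, 1), (0, 0, 0)))     # False
```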
Flühs, Dirk; Flühs, Andrea; Ebenau, Melanie; Eichmann, Marion
2015-09-01
Dosimetric measurements in small radiation fields with large gradients, such as eye plaque dosimetry with β or low-energy photon emitters, require dosimetrically almost water-equivalent detectors with volumes of <1 mm(3) and linear responses over several orders of magnitude. Polyvinyltoluene-based scintillators fulfil these conditions. Hence, they are a standard for such applications. However, they show disadvantages with regard to certain material properties and their dosimetric behaviour towards low-energy photons. Polyethylene naphthalate, recently recognized as a scintillator, offers chemical, physical and basic dosimetric properties superior to polyvinyltoluene. Its general applicability as a clinical dosimeter, however, has not been shown yet. To prove this applicability, extensive measurements at several clinical photon and electron radiation sources, ranging from ophthalmic plaques to a linear accelerator, were performed. For all radiation qualities under investigation, covering a wide range of dose rates, a linearity of the detector response to the dose was shown. Polyethylene naphthalate proved to be a suitable detector material for the dosimetry of ophthalmic plaques, including low-energy photon emitters and other small radiation fields. Due to superior properties, it has the potential to replace polyvinyltoluene as the standard scintillator for such applications.
Limits of linearity and detection for some drugs of abuse.
Needleman, S B; Romberg, R W
1990-01-01
The limits of linearity (LOL) and detection (LOD) are important factors in establishing the reliability of an analytical procedure for accurately assaying drug concentrations in urine specimens. Multiple analyses of analyte over an extended range of concentrations provide a measure of the ability of the analytical procedure to correctly identify known quantities of drug in a biofluid matrix. Each of the seven drugs of abuse gives linear analytical responses from concentrations at or near its LOD to concentrations several-fold higher than those generally encountered in the drug screening laboratory. The upper LOL exceeds the Department of Navy (DON) cutoff values by factors of approximately 2 to 160. The LOD varies from 0.4 to 5.0% of the DON cutoff value for each drug. The limit of quantitation (LOQ) is calculated as the LOD + 7 SD. The range for LOL is greater for drugs analyzed with deuterated internal standards compared with those using conventional internal standards. For THC acid, cocaine, PCP, and morphine, LOLs are 8 to 160-fold greater than the defined cutoff concentrations. For the other drugs, the LOLs are only 2 to 4-fold greater than the defined cutoff concentrations.
Li, Chuan; Li, Lin; Zhang, Jie; Alexov, Emil
2012-01-01
The Gauss-Seidel method is a standard iterative numerical method widely used to solve a system of equations and, in general, is more efficient compared to other iterative methods, such as the Jacobi method. However, the standard implementation of the Gauss-Seidel method restricts its utilization in parallel computing due to its requirement of using updated neighboring values (i.e., in the current iteration) as soon as they are available. Here we report an efficient and exact (not requiring assumptions) method to parallelize iterations and to reduce the computational time as a linear/nearly linear function of the number of CPUs. In contrast to other existing solutions, our method does not require any assumptions and is equally applicable for solving linear and nonlinear equations. This approach is implemented in the DelPhi program, which is a finite difference Poisson-Boltzmann equation solver to model electrostatics in molecular biology. This development makes the iterative procedure for obtaining the electrostatic potential distribution in the parallelized DelPhi severalfold faster than that in the serial code. Further we demonstrate the advantages of the new parallelized DelPhi by computing the electrostatic potential and the corresponding energies of large supramolecular structures.
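A minimal sketch of the standard serial Gauss-Seidel iteration for a generic linear system, shown to make the updated-neighbor dependency concrete; this is not the DelPhi finite-difference Poisson-Boltzmann code or the parallel scheme itself.

```python
import numpy as np

def gauss_seidel(A, b, x0=None, tol=1e-10, max_iter=10_000):
    """Serial Gauss-Seidel iteration for Ax = b: each component is updated
    immediately using the newest neighboring values, which is exactly what
    makes naive parallelization difficult (the issue the paper addresses)."""
    n = len(b)
    x = np.zeros(n) if x0 is None else x0.astype(float).copy()
    for _ in range(max_iter):
        x_old = x.copy()
        for i in range(n):
            s = A[i, :i] @ x[:i] + A[i, i + 1:] @ x[i + 1:]
            x[i] = (b[i] - s) / A[i, i]
        if np.linalg.norm(x - x_old, ord=np.inf) < tol:
            break
    return x

A = np.array([[4.0, -1.0, 0.0], [-1.0, 4.0, -1.0], [0.0, -1.0, 4.0]])
b = np.array([15.0, 10.0, 10.0])
print(gauss_seidel(A, b))  # should agree with np.linalg.solve(A, b)
```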
Generalized Bregman distances and convergence rates for non-convex regularization methods
NASA Astrophysics Data System (ADS)
Grasmair, Markus
2010-11-01
We generalize the notion of Bregman distance using concepts from abstract convexity in order to derive convergence rates for Tikhonov regularization with non-convex regularization terms. In particular, we study the non-convex regularization of linear operator equations on Hilbert spaces, showing that the conditions required for the application of the convergence rates results are strongly related to the standard range conditions from the convex case. Moreover, we consider the setting of sparse regularization, where we show that a rate of order δ^(1/p) holds, if the regularization term has a slightly faster growth at zero than |t|^p.
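For reference, the classical convex-case Bregman distance that is being generalized here, written for a regularization functional R and a subgradient ξ ∈ ∂R(y):

```latex
% Classical Bregman distance for a convex functional R with \xi \in \partial R(y);
% the paper generalizes this notion using abstract convexity.
D_{\xi}(x, y) \;=\; R(x) - R(y) - \langle \xi,\, x - y \rangle .
```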
The performance of projective standardization for digital subtraction radiography.
Mol, André; Dunn, Stanley M
2003-09-01
We sought to test the performance and robustness of projective standardization in preserving invariant properties of subtraction images in the presence of irreversible projection errors. Study design: Twenty bone chips (1-10 mg each) were placed on dentate dry mandibles. Follow-up images were obtained without the bone chips, and irreversible projection errors of up to 6 degrees were introduced. Digitized image intensities were normalized, and follow-up images were geometrically reconstructed by 2 operators using anatomical and fiduciary landmarks. Subtraction images were analyzed by 3 observers. Regression analysis revealed a linear relationship between radiographic estimates of mineral loss and actual mineral loss (R(2) = 0.99; P < .05). The effect of projection error was not significant (general linear model [GLM]: P > .05). There was no difference between the radiographic estimates from images standardized with anatomical landmarks and those standardized with fiduciary landmarks (Wilcoxon signed rank test: P > .05). Operator variability was low for image analysis alone (R(2) = 0.99; P < .05), as well as for the entire procedure (R(2) = 0.98; P < .05). The predicted detection limit was smaller than 1 mg. Subtraction images registered by projective standardization yield estimates of osseous change that are invariant to irreversible projection errors of up to 6 degrees. Within these limits, operator precision is high and anatomical landmarks can be used to establish correspondence.
Low dose radiation risks for women surviving the a-bombs in Japan: generalized additive model.
Dropkin, Greg
2016-11-24
Analyses of cancer mortality and incidence in Japanese A-bomb survivors have been used to estimate radiation risks, which are generally higher for women. Relative Risk (RR) is usually modelled as a linear function of dose. Extrapolation from data including high doses predicts small risks at low doses. Generalized Additive Models (GAMs) are flexible methods for modelling non-linear behaviour. GAMs are applied to cancer incidence in female low dose subcohorts, using anonymous public data for the 1958 - 1998 Life Span Study, to test for linearity, explore interactions, adjust for the skewed dose distribution, examine significance below 100 mGy, and estimate risks at 10 mGy. For all solid cancer incidence, RR estimated from 0 - 100 mGy and 0 - 20 mGy subcohorts is significantly raised. The response tapers above 150 mGy. At low doses, RR increases with age-at-exposure and decreases with time-since-exposure, the preferred covariate. Using the empirical cumulative distribution of dose improves model fit, and capacity to detect non-linear responses. RR is elevated over wide ranges of covariate values. Results are stable under simulation, or when removing exceptional data cells, or adjusting neutron RBE. Estimates of Excess RR at 10 mGy using the cumulative dose distribution are 10 - 45 times higher than extrapolations from a linear model fitted to the full cohort. Below 100 mGy, quasipoisson models find significant effects for all solid, squamous, uterus, corpus, and thyroid cancers, and for respiratory cancers when age-at-exposure > 35 yrs. Results for the thyroid are compatible with studies of children treated for tinea capitis, and Chernobyl survivors. Results for the uterus are compatible with studies of UK nuclear workers and the Techa River cohort. Non-linear models find large, significant cancer risks for Japanese women exposed to low dose radiation from the atomic bombings. The risks should be reflected in protection standards.
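A minimal sketch of a Poisson generalized additive model with a smooth dose term, in the spirit of the analyses described, using statsmodels' GLMGam with B-splines; the data are synthetic and the dose-response shape is made up for illustration only.

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.gam.api import GLMGam, BSplines

# Synthetic person-year table: dose (Gy), person-years, and Poisson case
# counts with a non-linear low-dose response (illustrative only).
rng = np.random.default_rng(4)
dose = rng.uniform(0.0, 0.1, 500)            # 0-100 mGy subcohort
py = rng.uniform(500.0, 1500.0, 500)         # person-years per cell
mu = py * 1e-3 * (1 + 3 * np.sqrt(dose))     # supra-linear toy response
cases = rng.poisson(mu)

bs = BSplines(dose, df=[6], degree=[3])      # smooth term for dose
gam = GLMGam(cases, exog=np.ones((500, 1)), smoother=bs,
             family=sm.families.Poisson(), offset=np.log(py), alpha=1.0)
print(gam.fit().summary())
```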
NASA Astrophysics Data System (ADS)
Wati, S.; Fitriana, L.; Mardiyana
2018-04-01
Linear equations are one of the topics in mathematics that students consider difficult. Students' difficulties in understanding linear equations can be caused by a lack of understanding of the concept and by the way teachers teach. TPACK is a way to understand the complex relationships between teaching and the content taught through the use of specific teaching approaches, supported by the right technology tools. This study aims to identify the TPACK of junior high school mathematics teachers in teaching linear equations. The method used in the study was descriptive. In the first phase, a survey using a questionnaire was carried out with 45 junior high school mathematics teachers who teach linear equations, while in the second phase three teachers were interviewed. The data were analyzed using quantitative and qualitative techniques. The PCK results revealed that teachers emphasized developing procedural and conceptual knowledge through reliance on traditional approaches in teaching linear equations. The TPK results revealed teachers' limited capacity to address general information and communications technology goals across the curriculum in teaching linear equations. The TCK results indicated that PowerPoint was the main technological tool used in teaching linear equations. The TPACK results suggest a low standard of teachers' technological skills across a variety of mathematics education goals in teaching linear equations. This means that teachers' TPACK in teaching linear equations still needs to be improved.
Simplified large African carnivore density estimators from track indices.
Winterbach, Christiaan W; Ferreira, Sam M; Funston, Paul J; Somers, Michael J
2016-01-01
The range, population size and trend of large carnivores are important parameters to assess their status globally and to plan conservation strategies. One can use linear models to assess population size and trends of large carnivores from track-based surveys on suitable substrates. The conventional approach of a linear model with intercept may not intercept at zero, but may fit the data better than a linear model through the origin. We assess whether a linear regression through the origin is more appropriate than a linear regression with intercept to model large African carnivore densities and track indices. We did simple linear regression with intercept analysis and simple linear regression through the origin and used the confidence interval for β in the linear model y = αx + β, the Standard Error of Estimate, Mean Squares Residual and Akaike Information Criteria to evaluate the models. The Lion on Clay and Low Density on Sand models with intercept were not significant (P > 0.05). The other four models with intercept and the six models through the origin were all significant (P < 0.05). The models using linear regression with intercept all included zero in the confidence interval for β and the null hypothesis that β = 0 could not be rejected. All models showed that the linear model through the origin provided a better fit than the linear model with intercept, as indicated by the Standard Error of Estimate and Mean Square Residuals. Akaike Information Criteria showed that linear models through the origin were better and that none of the linear models with intercept had substantial support. Our results showed that linear regression through the origin is justified over the more typical linear regression with intercept for all models we tested. A general model can be used to estimate large carnivore densities from track densities across species and study areas. The formula observed track density = 3.26 × carnivore density can be used to estimate densities of large African carnivores using track counts on sandy substrates in areas where carnivore densities are 0.27 carnivores/100 km2 or higher. To improve the current models, we need independent data to validate the models and data to test for a non-linear relationship between track indices and true density at low densities.
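A small sketch comparing a fit with intercept to a fit through the origin on synthetic track-count data generated around the reported slope of 3.26; the numbers are illustrative only.

```python
import numpy as np

# Synthetic survey data roughly following the reported relationship
# track_density ~ 3.26 * carnivore_density (illustrative values only).
rng = np.random.default_rng(5)
carnivore_density = rng.uniform(0.3, 3.0, 25)          # animals / 100 km^2
track_density = 3.26 * carnivore_density + rng.normal(0, 0.5, 25)

# Model with intercept: y = alpha*x + beta
X_int = np.column_stack([carnivore_density, np.ones(25)])
(alpha, beta), res_int, *_ = np.linalg.lstsq(X_int, track_density, rcond=None)

# Model through the origin: y = alpha*x
X_orig = carnivore_density[:, None]
(alpha0,), res_orig, *_ = np.linalg.lstsq(X_orig, track_density, rcond=None)

print(f"with intercept : slope={alpha:.2f}, intercept={beta:.2f}, SSR={res_int[0]:.2f}")
print(f"through origin : slope={alpha0:.2f}, SSR={res_orig[0]:.2f}")
# Comparing standard errors of estimate, mean square residuals, or AIC of the
# two fits mirrors the model comparison reported in the paper.
```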
On entanglement-assisted quantum codes achieving the entanglement-assisted Griesmer bound
NASA Astrophysics Data System (ADS)
Li, Ruihu; Li, Xueliang; Guo, Luobin
2015-12-01
The theory of entanglement-assisted quantum error-correcting codes (EAQECCs) is a generalization of the standard stabilizer formalism. Any quaternary (or binary) linear code can be used to construct EAQECCs under the entanglement-assisted (EA) formalism. We derive an EA-Griesmer bound for linear EAQECCs, which is a quantum analog of the Griesmer bound for classical codes. This EA-Griesmer bound is tighter than known bounds for EAQECCs in the literature. For a given quaternary linear code C, we show that the parameters of the EAQECC that is EA-stabilized by the dual of C can be determined by a zero radical quaternary code induced from C, and a necessary condition under which a linear EAQECC may achieve the EA-Griesmer bound is also presented. We construct four families of optimal EAQECCs and then show the necessary condition for existence of EAQECCs is also sufficient for some low-dimensional linear EAQECCs. The four families of optimal EAQECCs are degenerate codes and go beyond earlier constructions. What is more, except for four codes, our [[n, k, d_ea; c…
A penalized framework for distributed lag non-linear models.
Gasparrini, Antonio; Scheipl, Fabian; Armstrong, Ben; Kenward, Michael G
2017-09-01
Distributed lag non-linear models (DLNMs) are a modelling tool for describing potentially non-linear and delayed dependencies. Here, we illustrate an extension of the DLNM framework through the use of penalized splines within generalized additive models (GAM). This extension offers built-in model selection procedures and the possibility of accommodating assumptions on the shape of the lag structure through specific penalties. In addition, this framework includes, as special cases, simpler models previously proposed for linear relationships (DLMs). Alternative versions of penalized DLNMs are compared with each other and with the standard unpenalized version in a simulation study. Results show that this penalized extension to the DLNM class provides greater flexibility and improved inferential properties. The framework exploits recent theoretical developments of GAMs and is implemented using efficient routines within freely available software. Real-data applications are illustrated through two reproducible examples in time series and survival analysis. © 2017 The Authors Biometrics published by Wiley Periodicals, Inc. on behalf of International Biometric Society.
Stable orthogonal local discriminant embedding for linear dimensionality reduction.
Gao, Quanxue; Ma, Jingjie; Zhang, Hailin; Gao, Xinbo; Liu, Yamin
2013-07-01
Manifold learning is widely used in machine learning and pattern recognition. However, manifold learning only considers the similarity of samples belonging to the same class and ignores the within-class variation of data, which will impair the generalization and stability of the algorithms. For this purpose, we construct an adjacency graph to model the intraclass variation that characterizes the most important properties, such as diversity of patterns, and then incorporate the diversity into the discriminant objective function for linear dimensionality reduction. Finally, we introduce the orthogonal constraint for the basis vectors and propose an orthogonal algorithm called stable orthogonal local discriminant embedding. Experimental results on several standard image databases demonstrate the effectiveness of the proposed dimensionality reduction approach.
A generalized linear integrate-and-fire neural model produces diverse spiking behaviors.
Mihalaş, Stefan; Niebur, Ernst
2009-03-01
For simulations of neural networks, there is a trade-off between the size of the network that can be simulated and the complexity of the model used for individual neurons. In this study, we describe a generalization of the leaky integrate-and-fire model that produces a wide variety of spiking behaviors while still being analytically solvable between firings. For different parameter values, the model produces spiking or bursting, tonic, phasic or adapting responses, depolarizing or hyperpolarizing after potentials and so forth. The model consists of a diagonalizable set of linear differential equations describing the time evolution of membrane potential, a variable threshold, and an arbitrary number of firing-induced currents. Each of these variables is modified by an update rule when the potential reaches threshold. The variables used are intuitive and have biological significance. The model's rich behavior does not come from the differential equations, which are linear, but rather from complex update rules. This single-neuron model can be implemented using algorithms similar to the standard integrate-and-fire model. It is a natural match with event-driven algorithms for which the firing times are obtained as a solution of a polynomial equation.
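A minimal sketch of this class of model, assuming illustrative parameter values: linear subthreshold dynamics for the membrane potential, an adapting threshold, and one spike-induced current, with simple update rules applied at each threshold crossing (integrated with Euler steps here rather than the analytical between-spike solution).

```python
import numpy as np

# Minimal generalized leaky integrate-and-fire sketch: linear dynamics for
# membrane potential V, adaptive threshold Theta, and one spike-induced
# current I1, with update rules applied at threshold crossing.
dt, T = 0.1, 500.0                  # ms
C, g, E_L = 1.0, 0.05, -70.0        # capacitance, leak conductance, rest (illustrative)
b_theta, Theta_inf = 0.01, -50.0    # threshold adaptation rate / baseline
k1 = 0.1                            # decay rate of the spike-induced current
I_ext = 1.2                         # constant external drive

V, Theta, I1 = E_L, Theta_inf, 0.0
spikes = []
for step in range(int(T / dt)):
    # Linear (analytically solvable) subthreshold dynamics, Euler-integrated here.
    V += dt / C * (I_ext + I1 - g * (V - E_L))
    Theta += dt * b_theta * (Theta_inf - Theta)
    I1 += dt * (-k1 * I1)
    if V >= Theta:                  # update rules applied at threshold
        spikes.append(step * dt)
        V = E_L                     # reset potential
        Theta += 2.0                # raise threshold (adaptation)
        I1 += 0.5                   # add spike-induced current
print(f"{len(spikes)} spikes, first few at {spikes[:5]} ms")
```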
Proof of the quantitative potential of immunofluorescence by mass spectrometry.
Toki, Maria I; Cecchi, Fabiola; Hembrough, Todd; Syrigos, Konstantinos N; Rimm, David L
2017-03-01
Protein expression in formalin-fixed, paraffin-embedded patient tissue is routinely measured by Immunohistochemistry (IHC). However, IHC has been shown to be subject to variability in sensitivity, specificity and reproducibility, and is generally, at best, considered semi-quantitative. Mass spectrometry (MS) is considered by many to be the criterion standard for protein measurement, offering high sensitivity, specificity, and objective molecular quantification. Here, we seek to show that quantitative immunofluorescence (QIF) with standardization can achieve quantitative results comparable to MS. Epidermal growth factor receptor (EGFR) was measured by quantitative immunofluorescence in 15 cell lines with a wide range of EGFR expression, using different primary antibody concentrations, including the optimal signal-to-noise concentration after quantitative titration. QIF target measurement was then compared to the absolute EGFR concentration measured by Liquid Tissue-selected reaction monitoring mass spectrometry. The best agreement between the two assays was found when the EGFR primary antibody was used at the optimal signal-to-noise concentration, revealing a strong linear regression (R2 = 0.88). This demonstrates that quantitative optimization of titration by calculation of signal-to-noise ratio allows QIF to be standardized to MS and can therefore be used to assess absolute protein concentration in a linear and reproducible manner.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bose, Benjamin; Koyama, Kazuya, E-mail: benjamin.bose@port.ac.uk, E-mail: kazuya.koyama@port.ac.uk
We develop a code to produce the power spectrum in redshift space based on standard perturbation theory (SPT) at 1-loop order. The code can be applied to a wide range of modified gravity and dark energy models using a recently proposed numerical method by A. Taruya to find the SPT kernels. This includes Horndeski's theory with a general potential, which accommodates both chameleon and Vainshtein screening mechanisms and provides a non-linear extension of the effective theory of dark energy up to the third order. Focus is on a recent non-linear model of the redshift space power spectrum which has been shown to model the anisotropy very well at relevant scales for the SPT framework, as well as capturing relevant non-linear effects typical of modified gravity theories. We provide consistency checks of the code against established results and elucidate its application within the light of upcoming high precision RSD data.
Investigating Integer Restrictions in Linear Programming
ERIC Educational Resources Information Center
Edwards, Thomas G.; Chelst, Kenneth R.; Principato, Angela M.; Wilhelm, Thad L.
2015-01-01
Linear programming (LP) is an application of graphing linear systems that appears in many Algebra 2 textbooks. Although not explicitly mentioned in the Common Core State Standards for Mathematics, linear programming blends seamlessly into modeling with mathematics, the fourth Standard for Mathematical Practice (CCSSI 2010, p. 7). In solving a…
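A small two-variable example of the kind of LP that appears in Algebra 2 texts, solved with scipy; the coefficients are made up, and adding integer restrictions, the article's focus, would turn it into an integer program whose optimum can move off the LP corner point.

```python
from scipy.optimize import linprog

# Maximize profit 3x + 5y subject to resource limits (illustrative numbers):
#   2x + 4y <= 40,  3x + 2y <= 30,  x, y >= 0.
# linprog minimizes, so the objective is negated.
res = linprog(c=[-3, -5],
              A_ub=[[2, 4], [3, 2]],
              b_ub=[40, 30],
              bounds=[(0, None), (0, None)])
print(res.x, -res.fun)   # optimal corner point (5, 7.5) and maximum profit
# The fractional optimum y = 7.5 illustrates why integer restrictions change
# the problem, which is the issue the article investigates.
```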
Perturbation theory for cosmologies with nonlinear structure
NASA Astrophysics Data System (ADS)
Goldberg, Sophia R.; Gallagher, Christopher S.; Clifton, Timothy
2017-11-01
The next generation of cosmological surveys will operate over unprecedented scales, and will therefore provide exciting new opportunities for testing general relativity. The standard method for modelling the structures that these surveys will observe is to use cosmological perturbation theory for linear structures on horizon-sized scales, and Newtonian gravity for nonlinear structures on much smaller scales. We propose a two-parameter formalism that generalizes this approach, thereby allowing interactions between large and small scales to be studied in a self-consistent and well-defined way. This uses both post-Newtonian gravity and cosmological perturbation theory, and can be used to model realistic cosmological scenarios including matter, radiation and a cosmological constant. We find that the resulting field equations can be written as a hierarchical set of perturbation equations. At leading-order, these equations allow us to recover a standard set of Friedmann equations, as well as a Newton-Poisson equation for the inhomogeneous part of the Newtonian energy density in an expanding background. For the perturbations in the large-scale cosmology, however, we find that the field equations are sourced by both nonlinear and mode-mixing terms, due to the existence of small-scale structures. These extra terms should be expected to give rise to new gravitational effects, through the mixing of gravitational modes on small and large scales—effects that are beyond the scope of standard linear cosmological perturbation theory. We expect our formalism to be useful for accurately modeling gravitational physics in universes that contain nonlinear structures, and for investigating the effects of nonlinear gravity in the era of ultra-large-scale surveys.
Inconsistent Investment and Consumption Problems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kronborg, Morten Tolver, E-mail: mtk@atp.dk; Steffensen, Mogens, E-mail: mogens@math.ku.dk
In a traditional Black–Scholes market we develop a verification theorem for a general class of investment and consumption problems where the standard dynamic programming principle does not hold. The theorem is an extension of the standard Hamilton–Jacobi–Bellman equation in the form of a system of non-linear differential equations. We derive the optimal investment and consumption strategy for a mean-variance investor without pre-commitment endowed with labor income. In the case of constant risk aversion it turns out that the optimal amount of money to invest in stocks is independent of wealth. The optimal consumption strategy is given as a deterministic bang-bang strategy. In order to have a more realistic model we allow the risk aversion to be time and state dependent. Of special interest is the case where the risk aversion is inversely proportional to present wealth plus the financial value of future labor income net of consumption. Using the verification theorem we give a detailed analysis of this problem. It turns out that the optimal amount of money to invest in stocks is given by a linear function of wealth plus the financial value of future labor income net of consumption. The optimal consumption strategy is again given as a deterministic bang-bang strategy. We also calculate, for a general time and state dependent risk aversion function, the optimal investment and consumption strategy for a mean-standard deviation investor without pre-commitment. In that case, it turns out that it is optimal to take no risk at all.
NASA Astrophysics Data System (ADS)
Majumdar, Paulami; Greeley, Jeffrey
2018-04-01
Linear scaling relations of adsorbate energies across a range of catalytic surfaces have emerged as a central interpretive paradigm in heterogeneous catalysis. They are, however, typically developed for low adsorbate coverages which are not always representative of realistic heterogeneous catalytic environments. Herein, we present generalized linear scaling relations on transition metals that explicitly consider adsorbate-coadsorbate interactions at variable coverages. The slopes of these scaling relations do not follow the simple bond counting principles that govern scaling on transition metals at lower coverages. The deviations from bond counting are explained using a pairwise interaction model wherein the interaction parameter determines the slope of the scaling relationship on a given metal at variable coadsorbate coverages, and the slope across different metals at fixed coadsorbate coverage is approximated by adding a coverage-dependent correction to the standard bond counting contribution. The analysis provides a compact explanation for coverage-dependent deviations from bond counting in scaling relationships and suggests a useful strategy for incorporation of coverage effects into catalytic trends studies.
NASA Astrophysics Data System (ADS)
Mimasu, Ken; Sanz, Verónica; Williams, Ciaran
2016-08-01
We present predictions for the associated production of a Higgs boson at NLO+PS accuracy, including the effect of anomalous interactions between the Higgs and gauge bosons. We present our results in different frameworks, one in which the interaction vertex between the Higgs boson and Standard Model W and Z bosons is parameterized in terms of general Lorentz structures, and one in which electroweak symmetry breaking is manifestly linear and the resulting operators arise through a dimension-six effective field theory framework. We present analytic calculations of the Standard Model and Beyond the Standard Model contributions, and discuss the phenomenological impact of the higher order pieces. Our results are implemented in the NLO Monte Carlo program MCFM, and interfaced to shower Monte Carlos through the Powheg box framework.
General methodology for simultaneous representation and discrimination of multiple object classes
NASA Astrophysics Data System (ADS)
Talukder, Ashit; Casasent, David P.
1998-03-01
We address a new general method for linear and nonlinear feature extraction for simultaneous representation and classification. We call this approach the maximum representation and discrimination feature (MRDF) method. We develop a novel nonlinear eigenfeature extraction technique to represent data with closed-form solutions and use it to derive a nonlinear MRDF algorithm. Results of the MRDF method on synthetic databases are shown and compared with results from standard Fukunaga-Koontz transform and Fisher discriminant function methods. The method is also applied to an automated product inspection problem and for classification and pose estimation of two similar objects under 3D aspect angle variations.
NASA standard: Trend analysis techniques
NASA Technical Reports Server (NTRS)
1988-01-01
This Standard presents descriptive and analytical techniques for NASA trend analysis applications. Trend analysis is applicable in all organizational elements of NASA connected with, or supporting, developmental/operational programs. Use of this Standard is not mandatory; however, it should be consulted for any data analysis activity requiring the identification or interpretation of trends. Trend Analysis is neither a precise term nor a circumscribed methodology, but rather connotes, generally, quantitative analysis of time-series data. For NASA activities, the appropriate and applicable techniques include descriptive and graphical statistics, and the fitting or modeling of data by linear, quadratic, and exponential models. Usually, but not always, the data is time-series in nature. Concepts such as autocorrelation and techniques such as Box-Jenkins time-series analysis would only rarely apply and are not included in this Standard. The document presents the basic ideas needed for qualitative and quantitative assessment of trends, together with relevant examples. A list of references provides additional sources of information.
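As a rough illustration of the curve-fitting techniques the Standard describes (linear, quadratic, and exponential trend models), the following sketch fits all three to a synthetic time series; the data and variable names are illustrative only and are not drawn from the Standard itself.

```python
import numpy as np

# Hypothetical time-series data (e.g., a monthly anomaly count).
t = np.arange(24, dtype=float)
y = 5.0 + 0.8 * t + np.random.default_rng(0).normal(0.0, 1.5, t.size)

# Linear and quadratic trends via polynomial least squares.
lin_coef = np.polyfit(t, y, 1)    # [slope, intercept]
quad_coef = np.polyfit(t, y, 2)   # [a, b, c] for a*t**2 + b*t + c

# Exponential trend y = A * exp(b*t), fitted by regressing log(y) on t
# (valid only when all observations are positive).
b, log_a = np.polyfit(t, np.log(y), 1)
exp_trend = np.exp(log_a) * np.exp(b * t)

print("linear:", lin_coef, "quadratic:", quad_coef, "exponential rate:", b)
```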
Flühs, Dirk; Flühs, Andrea; Ebenau, Melanie; Eichmann, Marion
2015-01-01
Background Dosimetric measurements in small radiation fields with large gradients, such as eye plaque dosimetry with β or low-energy photon emitters, require dosimetrically almost water-equivalent detectors with volumes of <1 mm3 and linear responses over several orders of magnitude. Polyvinyltoluene-based scintillators fulfil these conditions. Hence, they are a standard for such applications. However, they show disadvantages with regard to certain material properties and their dosimetric behaviour towards low-energy photons. Purpose, Materials and Methods Polyethylene naphthalate, recently recognized as a scintillator, offers chemical, physical and basic dosimetric properties superior to polyvinyltoluene. Its general applicability as a clinical dosimeter, however, has not been shown yet. To prove this applicability, extensive measurements at several clinical photon and electron radiation sources, ranging from ophthalmic plaques to a linear accelerator, were performed. Results For all radiation qualities under investigation, covering a wide range of dose rates, a linearity of the detector response to the dose was shown. Conclusion Polyethylene naphthalate proved to be a suitable detector material for the dosimetry of ophthalmic plaques, including low-energy photon emitters and other small radiation fields. Due to superior properties, it has the potential to replace polyvinyltoluene as the standard scintillator for such applications. PMID:27171681
Schneiderman, Eva; Colón, Ellen L; White, Donald J; Schemehorn, Bruce; Ganovsky, Tara; Haider, Amir; Garcia-Godoy, Franklin; Morrow, Brian R; Srimaneepong, Viritpon; Chumprasert, Sujin
2017-09-01
We have previously reported on progress toward the refinement of profilometry-based abrasivity testing of dentifrices using a V8 brushing machine and tactile or optical measurement of dentin wear. The general application of this technique may be advanced by demonstration of successful inter-laboratory confirmation of the method. The objective of this study was to explore the capability of different laboratories in the assessment of dentifrice abrasivity using a profilometry-based evaluation technique developed in our Mason laboratories. In addition, we wanted to assess the interchangeability of human and bovine specimens. Participating laboratories were instructed in methods associated with Radioactive Dentin Abrasivity-Profilometry Equivalent (RDA-PE) evaluation, including site visits to discuss critical elements of specimen preparation, masking, profilometry scanning, and procedures. Laboratories were likewise instructed on the requirement for demonstration of proportional linearity as a key condition for validation of the technique. Laboratories were provided with four test dentifrices, blinded for testing, with a broad range of abrasivity. In each laboratory, a calibration curve was developed for varying V8 brushing strokes (0, 4,000, and 10,000 strokes) with the ISO abrasive standard. Proportional linearity was determined as the ratio of standard abrasion mean depths created with 4,000 and 10,000 strokes (a 2.5-fold difference). The criterion for successful calibration within the method (established in our Mason laboratory) was set at proportional linearity = 2.5 ± 0.3. RDA-PE was compared to Radiotracer RDA for the four test dentifrices, with the latter obtained by averaging results from three independent Radiotracer RDA sites. Individual laboratories and their results were compared by 1) proportional linearity and 2) acquired RDA-PE values for test pastes. Five sites participated in the study. One site did not pass the proportional linearity objectives; data for this site are not reported at the request of the researchers. Three of the remaining four sites reported herein tested human dentin and all three met the proportional linearity objectives for human dentin. Three of the four sites participated in testing bovine dentin and all three met the proportional linearity objectives for bovine dentin. RDA-PE values for the test dentifrices were similar between sites. All four sites that met the proportional linearity requirement successfully identified the dentifrice formulated above the industry standard of 250 RDA (as RDA-PE). The profilometry method showed at least as good reproducibility and differentiation as the Radiotracer assessments. It was demonstrated that human and bovine specimens could be used interchangeably. The standardized RDA-PE method was reproduced in multiple laboratories in this inter-laboratory study. The evidence supports that this method is a suitable technique for ISO method 11609 Annex B.
The Use of Non-Standard Devices in Finite Element Analysis
NASA Technical Reports Server (NTRS)
Schur, Willi W.; Broduer, Steve (Technical Monitor)
2001-01-01
A general mathematical description of the response behavior of thin-skin pneumatic envelopes and many other membrane and cable structures produces under-constrained systems that pose severe difficulties to analysis. These systems are mobile, and the general mathematical description exposes the mobility. Yet the response behavior of special under-constrained structures under special loadings can be accurately predicted using a constrained mathematical description. The static response behavior of systems that are infinitesimally mobile, such as a non-slack membrane subtended from a rigid or elastic boundary frame, can be easily analyzed using such a general mathematical description as that afforded by the non-linear finite element method with an implicit solution scheme, provided the incremental loading is guided through a suitable path. Similarly, if such structures are assembled with structural lack of fit that provides suitable self-stress, then dynamic response behavior can be predicted by the non-linear finite element method and an implicit solution scheme. An explicit solution scheme is available for evolution problems. Such a scheme can be used, via the method of dynamic relaxation, to obtain the solution to a static problem. In some sense, pneumatic envelopes and many other compliant structures can be said to have a destiny under a specified loading system. What that means to the analyst is that what happens on the evolution path of the solution is irrelevant as long as equilibrium is achieved at destiny under full load and that equilibrium is stable in the vicinity of that load. The purpose of this paper is to alert practitioners to the fact that non-standard procedures in finite element analysis are useful and can be legitimate, although they burden their users with the requirement to use special caution. Some interesting findings that are useful to the US Scientific Balloon Program and that could not be obtained without non-standard techniques are presented.
A general framework of noise suppression in material decomposition for dual-energy CT
DOE Office of Scientific and Technical Information (OSTI.GOV)
Petrongolo, Michael; Dong, Xue; Zhu, Lei, E-mail: leizhu@gatech.edu
Purpose: As a general problem of dual-energy CT (DECT), noise amplification in material decomposition severely reduces the signal-to-noise ratio on the decomposed images compared to that on the original CT images. In this work, the authors propose a general framework of noise suppression in material decomposition for DECT. The method is based on an iterative algorithm recently developed in their group for image-domain decomposition of DECT, with an extension to include nonlinear decomposition models. The generalized framework of iterative DECT decomposition enables beam-hardening correction with simultaneous noise suppression, which improves the clinical benefits of DECT. Methods: The authors propose to suppress noise on the decomposed images of DECT using convex optimization, which is formulated in the form of least-squares estimation with smoothness regularization. Based on the design principles of a best linear unbiased estimator, the authors include the inverse of the estimated variance–covariance matrix of the decomposed images as the penalty weight in the least-squares term. Analytical formulas are derived to compute the variance–covariance matrix for decomposed images with general-form numerical or analytical decomposition. As a demonstration, the authors implement the proposed algorithm on phantom data using an empirical polynomial function of decomposition measured on a calibration scan. The polynomial coefficients are determined from the projection data acquired on a wedge phantom, and the signal decomposition is performed in the projection domain. Results: On the Catphan®600 phantom, the proposed noise suppression method reduces the average noise standard deviation of basis material images by one to two orders of magnitude, with a superior performance on spatial resolution as shown in comparisons of line-pair images and modulation transfer function measurements. On the synthesized monoenergetic CT images, the noise standard deviation is reduced by a factor of 2–3. By using nonlinear decomposition on projections, the authors' method effectively suppresses the streaking artifacts of beam hardening and obtains more uniform images than their previous approach based on a linear model. Similar performance of noise suppression is observed in the results of an anthropomorphic head phantom and a pediatric chest phantom generated by the proposed method. With beam-hardening correction enabled by their approach, the image spatial nonuniformity on the head phantom is reduced from around 10% on the original CT images to 4.9% on the synthesized monoenergetic CT image. On the pediatric chest phantom, their method suppresses image noise standard deviation by a factor of around 7.5, and compared with linear decomposition, it reduces the estimation error of electron densities from 33.3% to 8.6%. Conclusions: The authors propose a general framework of noise suppression in material decomposition for DECT. Phantom studies have shown the proposed method improves the image uniformity and the accuracy of electron density measurements by effective beam-hardening correction and reduces noise level without noticeable resolution loss.
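The least-squares-with-smoothness-regularization formulation described in the Methods section can be sketched, in highly simplified form, as a penalized weighted least-squares solve; the matrices below are placeholders standing in for the decomposition model, inverse variance–covariance weights, and smoothing operator, and this is not the authors' implementation.

```python
import numpy as np

def penalized_decomposition(A, y, W, lam, D):
    """Least-squares estimate with inverse-variance weighting and a
    smoothness penalty:  min_x (Ax - y)^T W (Ax - y) + lam * ||D x||^2.
    A, y, W, D are placeholders for the decomposition model, measured
    values, inverse variance-covariance weights, and a smoothing
    operator, respectively."""
    lhs = A.T @ W @ A + lam * (D.T @ D)
    rhs = A.T @ W @ y
    return np.linalg.solve(lhs, rhs)

# Tiny synthetic example: two basis materials mixed by a known 2x2 model.
A = np.array([[1.0, 0.5], [0.3, 1.0]])
x_true = np.array([2.0, 1.0])
y = A @ x_true + np.random.default_rng(1).normal(0, 0.05, 2)
W = np.eye(2)   # stand-in for the inverse variance-covariance matrix
D = np.eye(2)   # stand-in for a smoothness operator
print(penalized_decomposition(A, y, W, lam=0.1, D=D))
```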
Principal Curves on Riemannian Manifolds.
Hauberg, Soren
2016-09-01
Euclidean statistics are often generalized to Riemannian manifolds by replacing straight-line interpolations with geodesic ones. While these Riemannian models are familiar-looking, they are restricted by the inflexibility of geodesics, and they rely on constructions which are optimal only in Euclidean domains. We consider extensions of Principal Component Analysis (PCA) to Riemannian manifolds. Classic Riemannian approaches seek a geodesic curve passing through the mean that optimizes a criterion of interest. The requirements that the solution both be geodesic and pass through the mean tend to imply that the methods only work well when the manifold is mostly flat within the support of the generating distribution. We argue that instead of generalizing linear Euclidean models, it is more fruitful to generalize non-linear Euclidean models. Specifically, we extend the classic Principal Curves from Hastie & Stuetzle to data residing on a complete Riemannian manifold. We show that for elliptical distributions in the tangent space of spaces of constant curvature, the standard principal geodesic is a principal curve. The proposed model is simple to compute and avoids many of the pitfalls of traditional geodesic approaches. We empirically demonstrate the effectiveness of the Riemannian principal curves on several manifolds and datasets.
TESSIM: a simulator for the Athena-X-IFU
NASA Astrophysics Data System (ADS)
Wilms, J.; Smith, S. J.; Peille, P.; Ceballos, M. T.; Cobo, B.; Dauser, T.; Brand, T.; den Hartog, R. H.; Bandler, S. R.; de Plaa, J.; den Herder, J.-W. A.
2016-07-01
We present the design of tessim, a simulator for the physics of transition edge sensors developed in the framework of the Athena end to end simulation effort. Designed to represent the general behavior of transition edge sensors and to provide input for engineering and science studies for Athena, tessim implements a numerical solution of the linearized equations describing these devices. The simulation includes a model for the relevant noise sources and several implementations of possible trigger algorithms. Input and output of the software are standard FITS files which can be visualized and processed using standard X-ray astronomical tool packages. Tessim is freely available as part of the SIXTE package (http://www.sternwarte.uni-erlangen.de/research/sixte/).
Mafusire, Cosmas; Krüger, Tjaart P J
2018-06-01
The concept of orthonormal vector circle polynomials is revisited by deriving a set from the Cartesian gradient of Zernike polynomials in a unit circle using a matrix-based approach. The heart of this model is a closed-form matrix equation of the gradient of Zernike circle polynomials expressed as a linear combination of lower-order Zernike circle polynomials related through a gradient matrix. This is a sparse matrix whose elements are two-dimensional standard basis transverse Euclidean vectors. Using the outer product form of the Cholesky decomposition, the gradient matrix is used to calculate a new matrix, which we used to express the Cartesian gradient of the Zernike circle polynomials as a linear combination of orthonormal vector circle polynomials. Since this new matrix is singular, the orthonormal vector polynomials are recovered by reducing the matrix to its row echelon form using the Gauss-Jordan elimination method. We extend the model to derive orthonormal vector general polynomials, which are orthonormal in a general pupil by performing a similarity transformation on the gradient matrix to give its equivalent in the general pupil. The outer form of the Gram-Schmidt procedure and the Gauss-Jordan elimination method are then applied to the general pupil to generate the orthonormal vector general polynomials from the gradient of the orthonormal Zernike-based polynomials. The performance of the model is demonstrated with a simulated wavefront in a square pupil inscribed in a unit circle.
Recommendations for fluorescence instrument qualification: the new ASTM Standard Guide.
DeRose, Paul C; Resch-Genger, Ute
2010-03-01
Aimed at improving quality assurance and quantitation for modern fluorescence techniques, ASTM International (ASTM) is about to release a Standard Guide for Fluorescence, reviewed here. The guide's main focus is on steady state fluorometry, for which available standards and instrument characterization procedures are discussed along with their purpose, suitability, and general instructions for use. These include the most relevant instrument properties needing qualification, such as linearity and spectral responsivity of the detection system, spectral irradiance reaching the sample, wavelength accuracy, sensitivity or limit of detection for an analyte, and day-to-day performance verification. With proper consideration of method-inherent requirements and limitations, many of these procedures and standards can be adapted to other fluorescence techniques. In addition, procedures for the determination of other relevant fluorometric quantities including fluorescence quantum yields and fluorescence lifetimes are briefly introduced. The guide is a clear and concise reference geared for users of fluorescence instrumentation at all levels of experience and is intended to aid in the ongoing standardization of fluorescence measurements.
Tests of local Lorentz invariance violation of gravity in the standard model extension with pulsars.
Shao, Lijing
2014-03-21
The standard model extension is an effective field theory introducing all possible Lorentz-violating (LV) operators to the standard model and general relativity (GR). In the pure-gravity sector of the minimal standard model extension, nine coefficients describe dominant observable deviations from GR. We systematically implemented 27 tests from 13 pulsar systems to tightly constrain eight linear combinations of these coefficients with extensive Monte Carlo simulations. This constitutes the first detailed and systematic test of the pure-gravity sector of the minimal standard model extension with state-of-the-art pulsar observations. No deviation from GR was detected. The limits on the LV coefficients are expressed in the canonical Sun-centered celestial-equatorial frame for the convenience of further studies. They improve on existing limits by significant factors of tens to hundreds. As a consequence, Einstein's equivalence principle is verified substantially further by pulsar experiments in terms of local Lorentz invariance in gravity.
Anderson, Carl A; McRae, Allan F; Visscher, Peter M
2006-07-01
Standard quantitative trait loci (QTL) mapping techniques commonly assume that the trait is both fully observed and normally distributed. When considering survival or age-at-onset traits these assumptions are often incorrect. Methods have been developed to map QTL for survival traits; however, they are both computationally intensive and not available in standard genome analysis software packages. We propose a grouped linear regression method for the analysis of continuous survival data. Using simulation we compare this method to both the Cox and Weibull proportional hazards models and a standard linear regression method that ignores censoring. The grouped linear regression method is of equivalent power to both the Cox and Weibull proportional hazards methods and is significantly better than the standard linear regression method when censored observations are present. The method is also robust to the proportion of censored individuals and the underlying distribution of the trait. On the basis of linear regression methodology, the grouped linear regression model is computationally simple and fast and can be implemented readily in freely available statistical software.
Mode Identification of High-Amplitude Pressure Waves in Liquid Rocket Engines
NASA Astrophysics Data System (ADS)
EBRAHIMI, R.; MAZAHERI, K.; GHAFOURIAN, A.
2000-01-01
Identification of existing instability modes from experimental pressure measurements of rocket engines is difficult, especially when steep waves are present. Actual pressure waves are often non-linear and include steep shocks followed by gradual expansions. It is generally believed that the interaction of these non-linear waves is difficult to analyze. A method of mode identification is introduced. After presumption of the constituent modes, they are superposed using a standard finite difference scheme for the solution of the classical wave equation. Waves are numerically produced at each end of the combustion tube with different wavelengths, amplitudes, and phases with respect to each other. Pressure amplitude histories and phase diagrams along the tube are computed. To determine the validity of the presented method for steep non-linear waves, the Euler equations are numerically solved for non-linear waves, and negligible interactions between these waves are observed. To show the applicability of this method, others' experimental results in which modes were identified are used. Results indicate that this simple method can be used in analyzing complicated pressure signal measurements.
Saravanan, Chandra; Shao, Yihan; Baer, Roi; Ross, Philip N; Head-Gordon, Martin
2003-04-15
A sparse matrix multiplication scheme with multiatom blocks is reported, a tool that can be very useful for developing linear-scaling methods with atom-centered basis functions. Compared to conventional element-by-element sparse matrix multiplication schemes, efficiency is gained by the use of the highly optimized basic linear algebra subroutines (BLAS). However, some sparsity is lost in the multiatom blocking scheme because these matrix blocks will in general contain negligible elements. As a result, an optimal block size that minimizes the CPU time by balancing these two effects is recovered. In calculations on linear alkanes, polyglycines, estane polymers, and water clusters the optimal block size is found to be between 40 and 100 basis functions, where about 55-75% of the machine peak performance was achieved on an IBM RS6000 workstation. In these calculations, the blocked sparse matrix multiplications can be 10 times faster than a standard element-by-element sparse matrix package. Copyright 2003 Wiley Periodicals, Inc. J Comput Chem 24: 618-622, 2003
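A toy sketch of the trade-off the abstract describes: skipping all-zero blocks so that the remaining block products are handled by dense (BLAS-backed) multiplications. The block size and sparsity pattern are made up, and the code is not the authors' package.

```python
import numpy as np

def blocked_matmul(A, B, block):
    """Multiply A @ B by looping over square blocks and skipping block
    pairs that are entirely zero, so that nonzero blocks are handled by
    dense BLAS calls (the @ operator). Toy sketch only."""
    n = A.shape[0]
    C = np.zeros((n, n))
    for i in range(0, n, block):
        for k in range(0, n, block):
            Aik = A[i:i+block, k:k+block]
            if not Aik.any():          # skip negligible blocks
                continue
            for j in range(0, n, block):
                Bkj = B[k:k+block, j:j+block]
                if Bkj.any():
                    C[i:i+block, j:j+block] += Aik @ Bkj
    return C

rng = np.random.default_rng(0)
A = rng.normal(size=(120, 120)) * (rng.random((120, 120)) < 0.05)
B = rng.normal(size=(120, 120)) * (rng.random((120, 120)) < 0.05)
assert np.allclose(blocked_matmul(A, B, block=40), A @ B)
```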
A binary linear programming formulation of the graph edit distance.
Justice, Derek; Hero, Alfred
2006-08-01
A binary linear programming formulation of the graph edit distance for unweighted, undirected graphs with vertex attributes is derived and applied to a graph recognition problem. A general formulation for editing graphs is used to derive a graph edit distance that is proven to be a metric, provided the cost function for individual edit operations is a metric. Then, a binary linear program is developed for computing this graph edit distance, and polynomial time methods for determining upper and lower bounds on the solution of the binary program are derived by applying solution methods for standard linear programming and the assignment problem. A recognition problem of comparing a sample input graph to a database of known prototype graphs in the context of a chemical information system is presented as an application of the new method. The costs associated with various edit operations are chosen by using a minimum normalized variance criterion applied to pairwise distances between nearest neighbors in the database of prototypes. The new metric is shown to perform quite well in comparison to existing metrics when applied to a database of chemical graphs.
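A minimal sketch related to the bounding step mentioned in the abstract: an assignment-problem estimate built from vertex substitution/insertion/deletion costs only. Edge edits are ignored, the cost functions are made-up examples rather than the paper's minimum normalized variance choice, and this is not the paper's binary linear program.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def vertex_assignment_cost(attrs_g, attrs_h, sub_cost, indel_cost):
    """Optimal assignment of the vertices of G to the vertices of H,
    with dummy rows/columns allowing insertions and deletions.
    A simplified stand-in for the binary linear program described in
    the abstract; edge edit costs are not considered."""
    n, m = len(attrs_g), len(attrs_h)
    size = n + m
    C = np.zeros((size, size))
    C[:n, :m] = [[sub_cost(a, b) for b in attrs_h] for a in attrs_g]
    C[:n, m:] = indel_cost          # delete vertices of G
    C[n:, :m] = indel_cost          # insert vertices of H
    rows, cols = linear_sum_assignment(C)
    return C[rows, cols].sum()

cost = vertex_assignment_cost(["C", "O", "N"], ["C", "N"],
                              sub_cost=lambda a, b: 0.0 if a == b else 1.0,
                              indel_cost=1.0)
print(cost)  # 1.0: the unmatched "O" is deleted
```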
DOE Office of Scientific and Technical Information (OSTI.GOV)
Milani, G., E-mail: gabriele.milani@polimi.it; Hanel, T.; Donetti, R.
The paper is aimed at studying the possible interaction between two different accelerators (DPG and TBBS) in the chemical kinetics of Natural Rubber (NR) vulcanized with sulphur. The same blend with several DPG and TBBS concentrations is analyzed in depth from an experimental point of view, varying the curing temperature in the range 150-180°C and obtaining rheometer curves with a step of 10°C. In order to study any possible interaction between the two accelerators (and eventually to evaluate its engineering relevance), rheometer data are normalized by means of the well known Sun and Isayev normalization approach, and two output parameters are assumed to be meaningful for gaining insight into the possible interaction, namely the time at maximum torque and the reversion percentage. Two different numerical meta-models, which belong to the family of the so-called response surfaces (RS), are compared. The first is linear in TBBS and DPG and therefore reproduces no interaction between the accelerators, whereas the second is a non-linear RS with a bilinear term. Both RS are deduced from standard best fitting of the available experimental data. It is found that, generally, there is a sort of interaction between TBBS and DPG, but that the error introduced by making use of a linear model (no interaction) is generally lower than 10%, i.e. fully acceptable from an engineering standpoint.
Weichenthal, Scott; Ryswyk, Keith Van; Goldstein, Alon; Bagg, Scott; Shekkarizfard, Maryam; Hatzopoulou, Marianne
2016-04-01
Existing evidence suggests that ambient ultrafine particles (UFPs) (<0.1µm) may contribute to acute cardiorespiratory morbidity. However, few studies have examined the long-term health effects of these pollutants owing in part to a need for exposure surfaces that can be applied in large population-based studies. To address this need, we developed a land use regression model for UFPs in Montreal, Canada using mobile monitoring data collected from 414 road segments during the summer and winter months between 2011 and 2012. Two different approaches were examined for model development including standard multivariable linear regression and a machine learning approach (kernel-based regularized least squares (KRLS)) that learns the functional form of covariate impacts on ambient UFP concentrations from the data. The final models included parameters for population density, ambient temperature and wind speed, land use parameters (park space and open space), length of local roads and rail, and estimated annual average NOx emissions from traffic. The final multivariable linear regression model explained 62% of the spatial variation in ambient UFP concentrations whereas the KRLS model explained 79% of the variance. The KRLS model performed slightly better than the linear regression model when evaluated using an external dataset (R(2)=0.58 vs. 0.55) or a cross-validation procedure (R(2)=0.67 vs. 0.60). In general, our findings suggest that the KRLS approach may offer modest improvements in predictive performance compared to standard multivariable linear regression models used to estimate spatial variations in ambient UFPs. However, differences in predictive performance were not statistically significant when evaluated using the cross-validation procedure. Crown Copyright © 2015. Published by Elsevier Inc. All rights reserved.
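A hedged sketch of the model comparison described above, using scikit-learn's kernel ridge regression as a stand-in for KRLS; the covariates and UFP concentrations are wholly synthetic.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.kernel_ridge import KernelRidge
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 414                                    # one row per monitored road segment
X = rng.normal(size=(n, 5))                # stand-ins for population density, temperature, etc.
ufp = 20 + 3 * X[:, 0] + np.sin(2 * X[:, 1]) + rng.normal(0, 1, n)  # synthetic UFP levels

linear = LinearRegression()
krr = KernelRidge(kernel="rbf", alpha=1.0, gamma=0.2)   # KRLS-like kernel regression

print("linear CV R^2:", cross_val_score(linear, X, ufp, cv=5, scoring="r2").mean())
print("kernel CV R^2:", cross_val_score(krr, X, ufp, cv=5, scoring="r2").mean())
```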
Rasmussen, Patrick P.; Gray, John R.; Glysson, G. Douglas; Ziegler, Andrew C.
2009-01-01
In-stream continuous turbidity and streamflow data, calibrated with measured suspended-sediment concentration data, can be used to compute a time series of suspended-sediment concentration and load at a stream site. Development of a simple linear (ordinary least squares) regression model for computing suspended-sediment concentrations from instantaneous turbidity data is the first step in the computation process. If the model standard percentage error (MSPE) of the simple linear regression model meets a minimum criterion, this model should be used to compute a time series of suspended-sediment concentrations. Otherwise, a multiple linear regression model using paired instantaneous turbidity and streamflow data is developed and compared to the simple regression model. If the inclusion of the streamflow variable proves to be statistically significant and the uncertainty associated with the multiple regression model results in an improvement over that for the simple linear model, the turbidity-streamflow multiple linear regression model should be used to compute a suspended-sediment concentration time series. The computed concentration time series is subsequently used with its paired streamflow time series to compute suspended-sediment loads by standard U.S. Geological Survey techniques. Once an acceptable regression model is developed, it can be used to compute suspended-sediment concentration beyond the period of record used in model development with proper ongoing collection and analysis of calibration samples. Regression models to compute suspended-sediment concentrations are generally site specific and should never be considered static, but they represent a set period in a continually dynamic system in which additional data will help verify any change in sediment load, type, and source.
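The two-step model-building procedure can be sketched as follows; the data are synthetic and the MSPE-style statistic is an illustrative definition, not the USGS criterion itself.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
turb = rng.uniform(5, 500, 200)                          # instantaneous turbidity
flow = rng.uniform(1, 50, 200)                           # streamflow
ssc = 1.8 * turb + 2.0 * flow + rng.normal(0, 25, 200)   # synthetic suspended-sediment conc.

# Step 1: simple linear regression of concentration on turbidity alone.
simple = sm.OLS(ssc, sm.add_constant(turb)).fit()
# Step 2: multiple linear regression adding streamflow as a second predictor.
multiple = sm.OLS(ssc, sm.add_constant(np.column_stack([turb, flow]))).fit()

# Illustrative error statistic: residual standard error as a percentage
# of the mean concentration (loosely analogous to an MSPE check).
pct_err = lambda fit: 100 * np.sqrt(fit.mse_resid) / ssc.mean()
print("simple %:", pct_err(simple), " multiple %:", pct_err(multiple))
print("streamflow term significant:", multiple.pvalues[2] < 0.05)
```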
Tenan, Matthew S; Tweedell, Andrew J; Haynes, Courtney A
2017-01-01
The timing of muscle activity is a commonly applied analytic method to understand how the nervous system controls movement. This study systematically evaluates six classes of standard and statistical algorithms to determine muscle onset in both experimental surface electromyography (EMG) and simulated EMG with a known onset time. Eighteen participants had EMG collected from the biceps brachii and vastus lateralis while performing a biceps curl or knee extension, respectively. Three established methods and three statistical methods for EMG onset were evaluated. Linear envelope, Teager-Kaiser energy operator + linear envelope and sample entropy were the established methods evaluated while general time series mean/variance, sequential and batch processing of parametric and nonparametric tools, and Bayesian changepoint analysis were the statistical techniques used. Visual EMG onset (experimental data) and objective EMG onset (simulated data) were compared with algorithmic EMG onset via root mean square error and linear regression models for stepwise elimination of inferior algorithms. The top algorithms for both data types were analyzed for their mean agreement with the gold standard onset and evaluation of 95% confidence intervals. The top algorithms were all Bayesian changepoint analysis iterations where the parameter of the prior (p0) was zero. The best performing Bayesian algorithms were p0 = 0 and a posterior probability for onset determination at 60-90%. While existing algorithms performed reasonably, the Bayesian changepoint analysis methodology provides greater reliability and accuracy when determining the singular onset of EMG activity in a time series. Further research is needed to determine if this class of algorithms perform equally well when the time series has multiple bursts of muscle activity.
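A simplified single-changepoint sketch in the spirit of the statistical onset detectors evaluated above: a maximum-likelihood split of the signal into two Gaussian segments, not the full Bayesian changepoint analysis the study recommends.

```python
import numpy as np

def single_changepoint(x, min_seg=10):
    """Return the index that best splits x into two Gaussian segments,
    by minimizing the summed segment log-variances (a maximum-likelihood
    single-changepoint criterion). Simplified stand-in for a full
    Bayesian changepoint analysis."""
    n = len(x)
    best_t, best_cost = None, np.inf
    for t in range(min_seg, n - min_seg):
        cost = t * np.log(x[:t].var() + 1e-12) + (n - t) * np.log(x[t:].var() + 1e-12)
        if cost < best_cost:
            best_t, best_cost = t, cost
    return best_t

rng = np.random.default_rng(0)
emg = np.concatenate([rng.normal(0, 0.05, 400),      # baseline
                      rng.normal(0, 0.40, 600)])     # muscle activity (higher variance)
print("estimated onset sample:", single_changepoint(emg))  # close to 400
```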
Tutorial on Biostatistics: Linear Regression Analysis of Continuous Correlated Eye Data.
Ying, Gui-Shuang; Maguire, Maureen G; Glynn, Robert; Rosner, Bernard
2017-04-01
To describe and demonstrate appropriate linear regression methods for analyzing correlated continuous eye data. We describe several approaches to regression analysis involving both eyes, including mixed effects and marginal models under various covariance structures to account for inter-eye correlation. We demonstrate, with SAS statistical software, applications in a study comparing baseline refractive error between one eye with choroidal neovascularization (CNV) and the unaffected fellow eye, and in a study determining factors associated with visual field in the elderly. When refractive error from both eyes was analyzed with standard linear regression without accounting for inter-eye correlation (adjusting for demographic and ocular covariates), the difference between eyes with CNV and fellow eyes was 0.15 diopters (D; 95% confidence interval, CI -0.03 to 0.32D, p = 0.10). Using a mixed effects model or a marginal model, the estimated difference was the same but with a narrower 95% CI (0.01 to 0.28D, p = 0.03). Standard regression for visual field data from both eyes provided biased estimates of standard error (generally underestimated) and smaller p-values, while analysis of the worse eye provided larger p-values than mixed effects models and marginal models. In research involving both eyes, ignoring inter-eye correlation can lead to invalid inferences. Analysis using only right or left eyes is valid, but decreases power. Worse-eye analysis can provide less power and biased estimates of effect. Mixed effects or marginal models using the eye as the unit of analysis should be used to appropriately account for inter-eye correlation and maximize power and precision.
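A sketch of the mixed-effects and marginal (GEE) analyses described above, translated to Python/statsmodels rather than the SAS used in the study; the data and column names are synthetic.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Synthetic two-eyes-per-patient data; column names are illustrative only.
rng = np.random.default_rng(0)
n = 200
df = pd.DataFrame({
    "patient": np.repeat(np.arange(n), 2),
    "cnv": np.tile([1, 0], n),                            # affected eye vs fellow eye
    "age": np.repeat(rng.uniform(55, 85, n), 2),
})
patient_effect = np.repeat(rng.normal(0, 0.5, n), 2)      # induces inter-eye correlation
df["refraction"] = 0.15 * df["cnv"] - 0.01 * df["age"] + patient_effect + rng.normal(0, 0.3, 2 * n)

# Mixed-effects model with a random intercept per patient.
mixed = smf.mixedlm("refraction ~ cnv + age", df, groups=df["patient"]).fit()

# Marginal model via GEE with an exchangeable working correlation.
gee = smf.gee("refraction ~ cnv + age", groups="patient", data=df,
              cov_struct=sm.cov_struct.Exchangeable()).fit()

print(mixed.params["cnv"], gee.params["cnv"])
```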
Quantum noncovariance of the linear potential in 1 + 1 dimensions
NASA Astrophysics Data System (ADS)
Artru, Xavier
1984-03-01
The two-body bound states governed by the Hamiltonian (p_a² + m_a²)^(1/2) + (p_b² + m_b²)^(1/2) + κ|x_a - x_b| in 1 + 1 dimensions do not have Lorentz-invariant masses (E_{n,P}² - P²)^(1/2), even to first order in P², if one uses the standard commutation relations [x_i, p_i] = iℏ. This is shown explicitly for m_a = m_b = 0 and generalized by continuity to m_a + m_b ≠ 0. The same is true for any other potential V(|x_a - x_b|).
Naimi, Ashley I; Cole, Stephen R; Kennedy, Edward H
2017-04-01
Robins' generalized methods (g methods) provide consistent estimates of contrasts (e.g. differences, ratios) of potential outcomes under a less restrictive set of identification conditions than do standard regression methods (e.g. linear, logistic, Cox regression). Uptake of g methods by epidemiologists has been hampered by limitations in understanding both conceptual and technical details. We present a simple worked example that illustrates basic concepts, while minimizing technical complications. © The Author 2016; all rights reserved. Published by Oxford University Press on behalf of the International Epidemiological Association.
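A minimal g-computation (standardization) sketch on synthetic point-exposure data; this illustrates one of the g methods in generic form and is not the worked example from the paper.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic point-exposure data with a single confounder L.
rng = np.random.default_rng(0)
n = 5000
L = rng.binomial(1, 0.5, n)
A = rng.binomial(1, 0.2 + 0.4 * L)                 # exposure depends on L
Y = rng.normal(1.0 * A + 2.0 * L, 1.0)             # outcome depends on A and L
df = pd.DataFrame({"L": L, "A": A, "Y": Y})

# G-computation: fit an outcome model, then standardize over the observed
# distribution of L by predicting under A=1 and A=0 for everyone.
fit = smf.ols("Y ~ A + L", data=df).fit()
mean_if_treated = fit.predict(df.assign(A=1)).mean()
mean_if_untreated = fit.predict(df.assign(A=0)).mean()
print("g-computation mean difference:", mean_if_treated - mean_if_untreated)  # ~1.0
```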
Soret motion in non-ionic binary molecular mixtures
NASA Astrophysics Data System (ADS)
Leroyer, Yves; Würger, Alois
2011-08-01
We study the Soret coefficient of binary molecular mixtures with dispersion forces. Relying on standard transport theory for liquids, we derive explicit expressions for the thermophoretic mobility and the Soret coefficient. Their sign depends on composition, the size ratio of the two species, and the ratio of Hamaker constants. Our results account for several features observed in experiment, such as a linear variation with the composition; they confirm the general rule that small molecules migrate to the warm, and large ones to the cold.
Fate of inflation and the natural reduction of vacuum energy
NASA Astrophysics Data System (ADS)
Nakamichi, Akika; Morikawa, Masahiro
2014-04-01
In the standard cosmology, an artificial fine tuning of the potential is inevitable for a vanishing cosmological constant, even though a slow-rolling uniform scalar field easily causes cosmic inflation. We focus on the general fact that any potential with a negative region can temporarily halt the cosmic expansion at the end of inflation, where the field tends to diverge. This violent evolution naturally causes particle production and a strong instability of the uniform configuration of the fields. The decay of this uniform scalar field would leave a vanishing cosmological constant as well as locally collapsed objects. The universe then continues to evolve into the standard Friedmann model. We study the details of the instability, based on a linear analysis, and the subsequent fate of the scalar field, based on a non-linear numerical analysis. The collapsed scalar field would easily exceed the Kaup limiting mass and form primordial black holes, which may play an important role in galaxy formation in later stages of cosmic expansion. We systematically describe the above scenario by identifying the scalar field as a Bose-Einstein condensate (BEC) of the boson field and the inflation as its phase transition process.
General, database-driven fast-feedback system for the Stanford Linear Collider
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rouse, F.; Allison, S.; Castillo, S.
A new feedback system has been developed for stabilizing the SLC beams at many locations. The feedback loops are designed to sample and correct at the 60 Hz repetition rate of the accelerator. Each loop can be distributed across several of the standard 80386 microprocessors which control the SLC hardware. A new communications system, KISNet, has been implemented to pass signals between the microprocessors at this rate. The software is written in a general fashion using the state space formalism of digital control theory. This allows a new loop to be implemented by just setting up the online database and perhaps installing a communications link. 3 refs., 4 figs.
Mulder, Han A; Rönnegård, Lars; Fikse, W Freddy; Veerkamp, Roel F; Strandberg, Erling
2013-07-04
Genetic variation for environmental sensitivity indicates that animals are genetically different in their response to environmental factors. Environmental factors are either identifiable (e.g. temperature) and called macro-environmental or unknown and called micro-environmental. The objectives of this study were to develop a statistical method to estimate genetic parameters for macro- and micro-environmental sensitivities simultaneously, to investigate bias and precision of resulting estimates of genetic parameters and to develop and evaluate use of Akaike's information criterion using h-likelihood to select the best fitting model. We assumed that genetic variation in macro- and micro-environmental sensitivities is expressed as genetic variance in the slope of a linear reaction norm and environmental variance, respectively. A reaction norm model to estimate genetic variance for macro-environmental sensitivity was combined with a structural model for residual variance to estimate genetic variance for micro-environmental sensitivity using a double hierarchical generalized linear model in ASReml. Akaike's information criterion was constructed as model selection criterion using approximated h-likelihood. Populations of sires with large half-sib offspring groups were simulated to investigate bias and precision of estimated genetic parameters. Designs with 100 sires, each with at least 100 offspring, are required to have standard deviations of estimated variances lower than 50% of the true value. When the number of offspring increased, standard deviations of estimates across replicates decreased substantially, especially for genetic variances of macro- and micro-environmental sensitivities. Standard deviations of estimated genetic correlations across replicates were quite large (between 0.1 and 0.4), especially when sires had few offspring. Practically, no bias was observed for estimates of any of the parameters. Using Akaike's information criterion the true genetic model was selected as the best statistical model in at least 90% of 100 replicates when the number of offspring per sire was 100. Application of the model to lactation milk yield in dairy cattle showed that genetic variance for micro- and macro-environmental sensitivities existed. The algorithm and model selection criterion presented here can contribute to better understand genetic control of macro- and micro-environmental sensitivities. Designs or datasets should have at least 100 sires each with 100 offspring.
Acoustic response variability in automotive vehicles
NASA Astrophysics Data System (ADS)
Hills, E.; Mace, B. R.; Ferguson, N. S.
2009-03-01
A statistical analysis of a series of measurements of the audio-frequency response of a large set of automotive vehicles is presented: a small hatchback model with both a three-door (411 vehicles) and five-door (403 vehicles) derivative and a mid-sized family five-door car (316 vehicles). The sets included vehicles of various specifications, engines, gearboxes, interior trim, wheels and tyres. The tests were performed in a hemianechoic chamber with the temperature and humidity recorded. Two tests were performed on each vehicle and the interior cabin noise measured. In the first, the excitation was acoustically induced by sets of external loudspeakers. In the second test, predominantly structure-borne noise was induced by running the vehicle at a steady speed on a rough roller. For both types of excitation, it is seen that the effects of temperature are small, indicating that manufacturing variability is larger than that due to temperature for the tests conducted. It is also observed that there are no significant outlying vehicles, i.e. there are at most only a few vehicles that consistently have the lowest or highest noise levels over the whole spectrum. For the acoustically excited tests, measured 1/3-octave noise reduction levels typically have a spread of 5 dB or so and the normalised standard deviation of the linear data is typically 0.1 or higher. Regarding the statistical distribution of the linear data, a lognormal distribution is a somewhat better fit than a Gaussian distribution for lower 1/3-octave bands, while the reverse is true at higher frequencies. For the distribution of the overall linear levels, a Gaussian distribution is generally the most representative. As a simple description of the response variability, it is sufficient for this series of measurements to assume that the acoustically induced airborne cabin noise is best described by a Gaussian distribution with a normalised standard deviation between 0.09 and 0.145. There is generally considerable variability in the roller-induced noise, with individual 1/3-octave levels varying by typically 15 dB or so and with the normalised standard deviation being in the range 0.2-0.35 or more. These levels are strongly affected by wheel rim and tyre constructions. For vehicles with nominally identical wheel rims and tyres, the normalised standard deviation for 1/3-octave levels in the frequency range 40-600 Hz is 0.2 or so. The distribution of the linear roller-induced noise level in each 1/3-octave frequency band is well described by a lognormal distribution as is the overall level. As a simple description of the response variability, it is sufficient for this series of measurements to assume that the roller-induced road noise is best described by a lognormal distribution with a normalised standard deviation of 0.2 or so, but that this can be significantly affected by the tyre and rim type, especially at lower frequencies.
Esserman, Denise A.; Moore, Charity G.; Roth, Mary T.
2009-01-01
Older community dwelling adults often take multiple medications for numerous chronic diseases. Non-adherence to these medications can have a large public health impact. Therefore, the measurement and modeling of medication adherence in the setting of polypharmacy is an important area of research. We apply a variety of different modeling techniques (standard linear regression; weighted linear regression; adjusted linear regression; naïve logistic regression; beta-binomial (BB) regression; generalized estimating equations (GEE)) to binary medication adherence data from a study in a North Carolina based population of older adults, where each medication an individual was taking was classified as adherent or non-adherent. In addition, through simulation we compare these different methods based on Type I error rates, bias, power, empirical 95% coverage, and goodness of fit. We find that estimation and inference using GEE is robust to a wide variety of scenarios and we recommend using this in the setting of polypharmacy when adherence is dichotomously measured for multiple medications per person. PMID:20414358
Symmetry operators and decoupled equations for linear fields on black hole spacetimes
NASA Astrophysics Data System (ADS)
Araneda, Bernardo
2017-02-01
In the class of vacuum Petrov type D spacetimes with cosmological constant, which includes the Kerr-(A)dS black hole as a particular case, we find a set of four-dimensional operators that, when composed off shell with the Dirac, Maxwell and linearized gravity equations, give a system of equations for spin weighted scalars associated with the linear fields, that decouple on shell. Using these operator relations we give compact reconstruction formulae for solutions of the original spinor and tensor field equations in terms of solutions of the decoupled scalar equations. We also analyze the role of Killing spinors and Killing-Yano tensors in the spin weight zero equations and, in the case of spherical symmetry, we compare our four-dimensional formulation with the standard 2 + 2 decomposition and particularize to the Schwarzschild-(A)dS black hole. Our results uncover a pattern that generalizes a number of previous results on Teukolsky-like equations and Debye potentials for higher spin fields.
Multivariate meta-analysis for non-linear and other multi-parameter associations
Gasparrini, A; Armstrong, B; Kenward, M G
2012-01-01
In this paper, we formalize the application of multivariate meta-analysis and meta-regression to synthesize estimates of multi-parameter associations obtained from different studies. This modelling approach extends the standard two-stage analysis used to combine results across different sub-groups or populations. The most straightforward application is for the meta-analysis of non-linear relationships, described for example by regression coefficients of splines or other functions, but the methodology easily generalizes to any setting where complex associations are described by multiple correlated parameters. The modelling framework of multivariate meta-analysis is implemented in the package mvmeta within the statistical environment R. As an illustrative example, we propose a two-stage analysis for investigating the non-linear exposure–response relationship between temperature and non-accidental mortality using time-series data from multiple cities. Multivariate meta-analysis represents a useful analytical tool for studying complex associations through a two-stage procedure. Copyright © 2012 John Wiley & Sons, Ltd. PMID:22807043
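A minimal fixed-effects sketch of the second (pooling) stage on made-up coefficients; the paper's mvmeta package is written for R and additionally handles random effects and meta-regression, neither of which is shown here.

```python
import numpy as np

def fixed_effects_mvmeta(betas, covs):
    """Inverse-variance-weighted multivariate fixed-effects pooling:
    beta_pooled = (sum_i V_i^-1)^-1 * sum_i V_i^-1 beta_i.
    betas: list of k-vectors of study-specific (e.g. spline) coefficients.
    covs:  list of k x k covariance matrices of those coefficients."""
    W = [np.linalg.inv(V) for V in covs]
    pooled_cov = np.linalg.inv(sum(W))
    pooled_beta = pooled_cov @ sum(Wi @ bi for Wi, bi in zip(W, betas))
    return pooled_beta, pooled_cov

# Two hypothetical cities, each summarizing a temperature-mortality curve
# by two spline coefficients.
betas = [np.array([0.02, 0.05]), np.array([0.03, 0.04])]
covs = [np.diag([1e-4, 2e-4]), np.diag([2e-4, 1e-4])]
print(fixed_effects_mvmeta(betas, covs)[0])
```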
Hurst, Michelle; Monahan, K Leigh; Heller, Elizabeth; Cordes, Sara
2014-11-01
When placing numbers along a number line with endpoints 0 and 1000, children generally space numbers logarithmically until around the age of 7, when they shift to a predominantly linear pattern of responding. This developmental shift of responding on the number placement task has been argued to be indicative of a shift in the format of the underlying representation of number (Siegler & Opfer). In the current study, we provide evidence from both child and adult participants to suggest that performance on the number placement task may not reflect the structure of the mental number line, but instead is a function of the fluency (i.e. ease) with which the individual can work with the values in the sequence. In Experiment 1, adult participants respond logarithmically when placing numbers on a line with less familiar anchors (1639 to 2897), despite linear responding on control tasks with standard anchors involving a similar range (0 to 1287) and a similar numerical magnitude (2000 to 3000). In Experiment 2, we show a similar developmental shift in childhood from logarithmic to linear responding for a non-numerical sequence with no inherent magnitude (the alphabet). In conclusion, we argue that the developmental trend towards linear behavior on the number line task is a product of successful strategy use and mental fluency with the values of the sequence, resulting from familiarity with endpoints and increased knowledge about general ordering principles of the sequence. A video abstract of this article can be viewed at: http://www.youtube.com/watch?v=zg5Q2LIFk3M. © 2014 John Wiley & Sons Ltd.
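The logarithmic-versus-linear comparison underlying the number placement task can be illustrated by fitting both forms to a set of placements; the data below are invented for illustration and do not come from the study.

```python
import numpy as np

# Hypothetical placements of target numbers on a 0-1000 line by one child.
targets = np.array([2, 5, 18, 34, 56, 100, 240, 390, 510, 725, 810, 960], dtype=float)
placed = np.array([180, 290, 450, 520, 580, 650, 760, 815, 850, 890, 905, 925], dtype=float)

def r_squared(x, y):
    """R^2 of the least-squares line of y on x."""
    slope, intercept = np.polyfit(x, y, 1)
    resid = y - (slope * x + intercept)
    return 1 - resid.var() / y.var()

print("linear fit R^2:", r_squared(targets, placed))
print("log fit    R^2:", r_squared(np.log(targets), placed))  # higher -> log-like responding
```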
AN ADA LINEAR ALGEBRA PACKAGE MODELED AFTER HAL/S
NASA Technical Reports Server (NTRS)
Klumpp, A. R.
1994-01-01
This package extends the Ada programming language to include linear algebra capabilities similar to those of the HAL/S programming language. The package is designed for avionics applications such as Space Station flight software. In addition to the HAL/S built-in functions, the package incorporates the quaternion functions used in the Shuttle and Galileo projects, and routines from LINPAK that solve systems of equations involving general square matrices. Language conventions in this package follow those of HAL/S to the maximum extent practical and minimize the effort required for writing new avionics software and translating existing software into Ada. Valid numeric types in this package include scalar, vector, matrix, and quaternion declarations. (Quaternions are four-component vectors used in representing motion between two coordinate frames). Single precision and double precision floating point arithmetic is available in addition to the standard double precision integer manipulation. Infix operators are used instead of function calls to define dot products, cross products, quaternion products, and mixed scalar-vector, scalar-matrix, and vector-matrix products. The package contains two generic programs: one for floating point, and one for integer. The actual component type is passed as a formal parameter to the generic linear algebra package. The procedures for solving systems of linear equations defined by general matrices include GEFA, GECO, GESL, and GIDI. The HAL/S functions include ABVAL, UNIT, TRACE, DET, INVERSE, TRANSPOSE, GET, PUT, FETCH, PLACE, and IDENTITY. This package is written in Ada (Version 1.2) for batch execution and is machine independent. The linear algebra software depends on nothing outside the Ada language except for a call to a square root function for floating point scalars (such as SQRT in the DEC VAX MATHLIB library). This program was developed in 1989, and is a copyrighted work with all copyright vested in NASA.
Direction-aware Slope Limiter for 3D Cubic Grids with Adaptive Mesh Refinement
Velechovsky, Jan; Francois, Marianne M.; Masser, Thomas
2018-06-07
In the context of finite volume methods for hyperbolic systems of conservation laws, slope limiters are an effective way to suppress creation of unphysical local extrema and/or oscillations near discontinuities. We investigate properties of these limiters as applied to piecewise linear reconstructions of conservative fluid quantities in three-dimensional simulations. In particular, we are interested in linear reconstructions on Cartesian adaptively refined meshes, where a reconstructed fluid quantity at a face center depends on more than a single gradient component of the quantity. We design a new slope limiter, which combines the robustness of a minmod limiter with the accuracy of a van Leer limiter. The limiter is called Direction-Aware Limiter (DAL), because the combination is based on a principal flow direction. In particular, DAL is useful in situations where the Barth–Jespersen limiter for general meshes fails to maintain global linear functions, such as on cubic computational meshes with stencils including only face-neighboring cells. Here, we verify the new slope limiter on a suite of standard hydrodynamic test problems on Cartesian adaptively refined meshes. Lastly, we demonstrate reduced mesh imprinting; for radially symmetric problems such as the Sedov blast wave or the Noh implosion test cases, the results with DAL show better preservation of radial symmetry compared to the other standard methods on Cartesian meshes.
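For illustration, the two ingredient limiters can be written compactly as below; the direction-aware blend shown is schematic only and is not the authors' DAL formula.

```python
import numpy as np

def minmod(a, b):
    """Robust, more diffusive limiter: picks the smaller slope when the
    two one-sided slopes agree in sign, zero otherwise."""
    return np.where(a * b > 0, np.where(np.abs(a) < np.abs(b), a, b), 0.0)

def van_leer(a, b):
    """Less dissipative harmonic-mean limiter."""
    den = np.where(a + b == 0, 1.0, a + b)   # guard against division by zero
    return np.where(a * b > 0, 2 * a * b / den, 0.0)

def direction_aware(a, b, flow_alignment):
    """Schematic blend only (not the authors' DAL formula): lean on the
    accurate van Leer limiter along the principal flow direction and on
    the robust minmod limiter transverse to it. flow_alignment in [0, 1]."""
    return flow_alignment * van_leer(a, b) + (1 - flow_alignment) * minmod(a, b)

left, right = np.array([1.0, -0.5]), np.array([0.4, -2.0])
print(minmod(left, right), van_leer(left, right), direction_aware(left, right, 0.8))
```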
Sentürk, Damla; Dalrymple, Lorien S; Nguyen, Danh V
2014-11-30
We propose functional linear models for zero-inflated count data with a focus on the functional hurdle and functional zero-inflated Poisson (ZIP) models. Although the hurdle model assumes the counts come from a mixture of a degenerate distribution at zero and a zero-truncated Poisson distribution, the ZIP model considers a mixture of a degenerate distribution at zero and a standard Poisson distribution. We extend the generalized functional linear model framework with a functional predictor and multiple cross-sectional predictors to model counts generated by a mixture distribution. We propose an estimation procedure for functional hurdle and ZIP models, called penalized reconstruction, geared towards error-prone and sparsely observed longitudinal functional predictors. The approach relies on dimension reduction and pooling of information across subjects involving basis expansions and penalized maximum likelihood techniques. The developed functional hurdle model is applied to modeling hospitalizations within the first 2 years from initiation of dialysis, with a high percentage of zeros, in the Comprehensive Dialysis Study participants. Hospitalization counts are modeled as a function of sparse longitudinal measurements of serum albumin concentrations, patient demographics, and comorbidities. Simulation studies are used to study finite sample properties of the proposed method and include comparisons with an adaptation of standard principal components regression. Copyright © 2014 John Wiley & Sons, Ltd.
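To make the mixture distinction concrete, a short sketch of the two count distributions named above (plain probability mass functions only, not the functional regression machinery proposed by the authors; the parameter values are illustrative):

```python
import numpy as np
from scipy.stats import poisson

def zip_pmf(k, pi, lam):
    """Zero-inflated Poisson: degenerate mass at zero mixed with a standard Poisson."""
    return np.where(k == 0,
                    pi + (1.0 - pi) * poisson.pmf(0, lam),
                    (1.0 - pi) * poisson.pmf(k, lam))

def hurdle_pmf(k, pi, lam):
    """Hurdle: degenerate mass at zero mixed with a zero-truncated Poisson."""
    trunc = poisson.pmf(k, lam) / (1.0 - poisson.pmf(0, lam))
    return np.where(k == 0, pi, (1.0 - pi) * trunc)

k = np.arange(5)
print(zip_pmf(k, pi=0.4, lam=2.0))
print(hurdle_pmf(k, pi=0.4, lam=2.0))
```

In both cases the zero class receives extra mass pi; the difference is whether the count component itself can still produce zeros (ZIP) or is truncated at zero (hurdle).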
Direction-aware Slope Limiter for 3D Cubic Grids with Adaptive Mesh Refinement
DOE Office of Scientific and Technical Information (OSTI.GOV)
Velechovsky, Jan; Francois, Marianne M.; Masser, Thomas
In the context of finite volume methods for hyperbolic systems of conservation laws, slope limiters are an effective way to suppress creation of unphysical local extrema and/or oscillations near discontinuities. We investigate properties of these limiters as applied to piecewise linear reconstructions of conservative fluid quantities in three-dimensional simulations. In particular, we are interested in linear reconstructions on Cartesian adaptively refined meshes, where a reconstructed fluid quantity at a face center depends on more than a single gradient component of the quantity. We design a new slope limiter, which combines the robustness of a minmod limiter with the accuracy ofmore » a van Leer limiter. The limiter is called Direction-Aware Limiter (DAL), because the combination is based on a principal flow direction. In particular, DAL is useful in situations where the Barth–Jespersen limiter for general meshes fails to maintain global linear functions, such as on cubic computational meshes with stencils including only faceneighboring cells. Here, we verify the new slope limiter on a suite of standard hydrodynamic test problems on Cartesian adaptively refined meshes. Lastly, we demonstrate reduced mesh imprinting; for radially symmetric problems such as the Sedov blast wave or the Noh implosion test cases, the results with DAL show better preservation of radial symmetry compared to the other standard methods on Cartesian meshes.« less
Linear mixed model for heritability estimation that explicitly addresses environmental variation.
Heckerman, David; Gurdasani, Deepti; Kadie, Carl; Pomilla, Cristina; Carstensen, Tommy; Martin, Hilary; Ekoru, Kenneth; Nsubuga, Rebecca N; Ssenyomo, Gerald; Kamali, Anatoli; Kaleebu, Pontiano; Widmer, Christian; Sandhu, Manjinder S
2016-07-05
The linear mixed model (LMM) is now routinely used to estimate heritability. Unfortunately, as we demonstrate, LMM estimates of heritability can be inflated when using a standard model. To help reduce this inflation, we used a more general LMM with two random effects-one based on genomic variants and one based on easily measured spatial location as a proxy for environmental effects. We investigated this approach with simulated data and with data from a Uganda cohort of 4,778 individuals for 34 phenotypes including anthropometric indices, blood factors, glycemic control, blood pressure, lipid tests, and liver function tests. For the genomic random effect, we used identity-by-descent estimates from accurately phased genome-wide data. For the environmental random effect, we constructed a covariance matrix based on a Gaussian radial basis function. Across the simulated and Ugandan data, narrow-sense heritability estimates were lower using the more general model. Thus, our approach addresses, in part, the issue of "missing heritability" in the sense that much of the heritability previously thought to be missing was fictional. Software is available at https://github.com/MicrosoftGenomics/FaST-LMM.
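As a hedged illustration of the environmental random effect described above, the following Python sketch builds a Gaussian radial basis function covariance matrix from hypothetical spatial coordinates. The length scale and coordinate format are assumptions; the authors' FaST-LMM implementation is not reproduced here.

```python
import numpy as np

def rbf_covariance(coords, length_scale=1.0):
    """Gaussian RBF covariance: K_ij = exp(-||x_i - x_j||^2 / (2 * length_scale^2))."""
    diff = coords[:, None, :] - coords[None, :, :]
    sq_dist = np.sum(diff ** 2, axis=-1)
    return np.exp(-sq_dist / (2.0 * length_scale ** 2))

# Hypothetical sampling locations (e.g., latitude/longitude in degrees).
coords = np.array([[0.35, 32.6], [0.36, 32.6], [0.90, 33.1], [1.20, 34.0]])
K_env = rbf_covariance(coords, length_scale=0.5)
print(np.round(K_env, 3))
# K_env would enter the LMM as the covariance of the environmental random effect,
# alongside a genomic relationship matrix for the genetic random effect.
```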
Otero, Raquel; Carrera, Guillem; Dulsat, Joan Francesc; Fábregas, José Luís; Claramunt, Juan
2004-11-19
A static headspace (HS) gas chromatographic method for quantitative determination of residual solvents in a drug substance has been developed according to European Pharmacopoeia general procedure. A water-dimethylformamide mixture is proposed as sample solvent to obtain good sensitivity and recovery. The standard addition technique with internal standard quantitation was used for ethanol, tetrahydrofuran and toluene determination. Validation was performed within the requirements of ICH validation guidelines Q2A and Q2B. Selectivity was tested for 36 solvents, and system suitability requirements described in the European Pharmacopoeia were checked. Limits of detection and quantitation, precision, linearity, accuracy, intermediate precision and robustness were determined, and excellent results were obtained.
Wang, Anxin; Li, Zhifang; Yang, Yuling; Chen, Guojuan; Wang, Chunxue; Wu, Yuntao; Ruan, Chunyu; Liu, Yan; Wang, Yilong; Wu, Shouling
2016-01-01
To investigate the relationship between baseline systolic blood pressure (SBP) and visit-to-visit blood pressure variability in a general population. This is a prospective longitudinal cohort study on cardiovascular risk factors and cardiovascular or cerebrovascular events. Study participants attended a face-to-face interview every 2 years. Blood pressure variability was defined using the standard deviation and coefficient of variation of all SBP values at baseline and follow-up visits. The coefficient of variation is the ratio of the standard deviation to the mean SBP. We used multivariate linear regression models to test the relationships between SBP and standard deviation, and between SBP and coefficient of variation. Approximately 43,360 participants (mean age: 48.2±11.5 years) were selected. In multivariate analysis, after adjustment for potential confounders, baseline SBPs <120 mmHg were inversely related to standard deviation (P<0.001) and coefficient of variation (P<0.001). In contrast, baseline SBPs ≥140 mmHg were significantly positively associated with standard deviation (P<0.001) and coefficient of variation (P<0.001). Baseline SBPs of 120-140 mmHg were associated with the lowest standard deviation and coefficient of variation. The associations between baseline SBP and standard deviation, and between SBP and coefficient of variation during follow-ups showed a U curve. Both lower and higher baseline SBPs were associated with increased blood pressure variability. To control blood pressure variability, a good target SBP range for a general population might be 120-139 mmHg.
How does non-linear dynamics affect the baryon acoustic oscillation?
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sugiyama, Naonori S.; Spergel, David N., E-mail: nao.s.sugiyama@gmail.com, E-mail: dns@astro.princeton.edu
2014-02-01
We study the non-linear behavior of the baryon acoustic oscillation in the power spectrum and the correlation function by decomposing the dark matter perturbations into the short- and long-wavelength modes. The evolution of the dark matter fluctuations can be described as a global coordinate transformation caused by the long-wavelength displacement vector acting on short-wavelength matter perturbation undergoing non-linear growth. Using this feature, we investigate the well known cancellation of the high-k solutions in the standard perturbation theory. While the standard perturbation theory naturally satisfies the cancellation of the high-k solutions, some of the recently proposed improved perturbation theories do not guarantee the cancellation. We show that this cancellation clarifies the success of the standard perturbation theory at the 2-loop order in describing the amplitude of the non-linear power spectrum even at high-k regions. We propose an extension of the standard 2-loop level perturbation theory model of the non-linear power spectrum that more accurately models the non-linear evolution of the baryon acoustic oscillation than the standard perturbation theory. The model consists of simple and intuitive parts: the non-linear evolution of the smoothed power spectrum without the baryon acoustic oscillations and the non-linear evolution of the baryon acoustic oscillations due to the large-scale velocity of dark matter and due to the gravitational attraction between dark matter particles. Our extended model predicts the smoothing parameter of the baryon acoustic oscillation peak at z = 0.35 as ∼7.7 Mpc/h and describes the small non-linear shift in the peak position due to the galaxy random motions.
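For orientation, a commonly used parametrization of the effect described here (a smooth "no-wiggle" part plus BAO wiggles damped by large-scale displacements) can be written as follows. The notation and the Gaussian damping form are generic conventions, not necessarily the authors' exact extended model:

\[
P_{\rm NL}(k) \;\simeq\; P^{\rm nw}_{\rm NL}(k) \;+\; e^{-k^{2}\sigma_{\rm d}^{2}/2}\,\bigl[P_{\rm lin}(k) - P^{\rm nw}_{\rm lin}(k)\bigr],
\qquad
\sigma_{\rm d}^{2} \;=\; \frac{1}{6\pi^{2}}\int_{0}^{\infty} P_{\rm lin}(q)\,dq ,
\]

where the superscript "nw" denotes the smooth component without baryon acoustic oscillations and the damping scale is of the order of the ∼7.7 Mpc/h smoothing parameter quoted above.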
Covariant conserved currents for scalar-tensor Horndeski theory
NASA Astrophysics Data System (ADS)
Schmidt, J.; Bičák, J.
2018-04-01
The scalar-tensor theories have become popular recently, in particular in connection with attempts to explain the present accelerated expansion of the universe, but they were already considered a natural extension of general relativity long ago. The Horndeski scalar-tensor theory involving four invariantly defined Lagrangians is a natural choice since it implies field equations involving at most second derivatives. Following the formalisms of defining covariant global quantities and conservation laws for perturbations of spacetimes in standard general relativity, we extend these methods to the general Horndeski theory and find the covariant conserved currents for all four Lagrangians. The current is also constructed in the case of linear perturbations involving both metric and scalar fields. As a specific illustration, we derive a superpotential that leads to the covariantly conserved current in the Brans-Dicke theory.
Nguyen, N H; Whatmore, P; Miller, A; Knibb, W
2016-02-01
The main aim of this study was to estimate the heritability for four measures of deformity and their genetic associations with growth (body weight and length), carcass (fillet weight and yield) and flesh-quality (fillet fat content) traits in yellowtail kingfish Seriola lalandi. The observed major deformities, recorded on 480 individuals from 22 families at Clean Seas Tuna Ltd, included lower jaw deformity, nasal erosion, deformed operculum and skinny fish. They were typically recorded as binary traits (presence or absence) and were analysed separately by both threshold generalized models and standard animal mixed models. Consistency of the models was evaluated by calculating the simple Pearson correlation of breeding values of full-sib families for jaw deformity. Genetic and phenotypic correlations among traits were estimated using a multitrait linear mixed model in ASReml. Both threshold and linear mixed model analyses showed that there is additive genetic variation in the four measures of deformity, with the estimates of heritability obtained from the former (threshold) models on the liability scale ranging from 0.14 to 0.66 (SE 0.32-0.56) and from the latter (linear animal and sire) models on the original (observed) scale, 0.01-0.23 (SE 0.03-0.16). When the estimates on the underlying liability were transformed to the observed scale (0, 1), they were generally consistent between threshold and linear mixed models. Phenotypic correlations among deformity traits were weak (close to zero). The genetic correlations among deformity traits were not significantly different from zero. Body weight and carcass (fillet) weight showed significant positive genetic correlations with jaw deformity (0.75 and 0.95, respectively). The genetic correlation between body weight and operculum deformity was negative (-0.51, P < 0.05). Estimates of the genetic correlations between body and carcass traits and the other deformity traits were not significant, owing to their relatively high standard errors. Our results showed that there are prospects for genetic selection to reduce deformity in yellowtail kingfish and that measures of deformity should be included in the recording scheme, breeding objectives and selection index in practical selective breeding programmes due to the antagonistic genetic correlations of deformed jaws with body and carcass performance. © 2015 John Wiley & Sons Ltd.
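The conversion between liability-scale and observed (0, 1)-scale heritabilities mentioned above is presumably of the standard Dempster-Lerner type; a minimal sketch follows, with the direction of the transformation and the use of the trait incidence as assumptions rather than the authors' exact procedure.

```python
from scipy.stats import norm

def liability_to_observed(h2_liability, incidence):
    """Dempster-Lerner conversion of heritability from liability to observed (0/1) scale."""
    threshold = norm.ppf(1.0 - incidence)   # truncation point for the given incidence
    z = norm.pdf(threshold)                 # standard normal density at the threshold
    return h2_liability * z ** 2 / (incidence * (1.0 - incidence))

# Example: liability-scale h2 of 0.40 for a deformity seen in 15% of fish.
print(round(liability_to_observed(0.40, 0.15), 3))  # about 0.17 on the observed scale
```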
Combinatorics of transformations from standard to non-standard bases in Brauer algebras
NASA Astrophysics Data System (ADS)
Chilla, Vincenzo
2007-05-01
Transformation coefficients between standard bases for irreducible representations of the Brauer centralizer algebra $\mathfrak{B}_f(x)$ and split bases adapted to the $\mathfrak{B}_{f_1}(x) \times \mathfrak{B}_{f_2}(x) \subset \mathfrak{B}_f(x)$ subalgebra ($f_1 + f_2 = f$) are considered. After providing the suitable combinatorial background, based on the definition of the i-coupling relation on nodes of the subduction grid, we introduce a generalized version of the subduction graph which extends the one given in Chilla (2006 J. Phys. A: Math. Gen. 39 7657) for symmetric groups. Thus, we can describe the structure of the subduction system arising from the linear method and give an outline of the form of the solution space. An ordering relation on the grid is also given and then, as in the case of symmetric groups, the choices of the phases and of the free factors governing the multiplicity separations are discussed.
Cookbook Recipe to Simulate Seawater Intrusion with Standard MODFLOW
NASA Astrophysics Data System (ADS)
Schaars, F.; Bakker, M.
2012-12-01
We developed a cookbook recipe to simulate steady interface flow in multi-layer coastal aquifers with regular groundwater codes such as standard MODFLOW. The main step in the recipe is a simple transformation of the hydraulic conductivities and thicknesses of the aquifers. Standard groundwater codes may be applied to compute the head distribution in the aquifer using the transformed parameters. For example, for flow in a single unconfined aquifer, the hydraulic conductivity needs to be multiplied by 41 and the base of the aquifer needs to be set to mean sea level (for a relative seawater density of 1.025). Once the head distribution is obtained, the Ghijben-Herzberg relationship is applied to compute the depth of the interface. The recipe may be applied to quite general settings, including spatially variable aquifer properties. Any standard groundwater code may be used, as long as it can simulate unconfined flow where the transmissivity is a linear function of the head. The proposed recipe is benchmarked successfully against a number of analytic and numerical solutions.
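A minimal sketch of the parameter transformation and the Ghijben-Herzberg post-processing step described above (the MODFLOW run itself is not shown; the factor of 41 and the depth relation follow from the quoted relative seawater density of 1.025, and the variable names are illustrative):

```python
# Ghijben-Herzberg post-processing of heads computed with transformed parameters.
rho_f, rho_s = 1000.0, 1025.0           # freshwater and seawater densities (kg/m3)
alpha = rho_f / (rho_s - rho_f)         # = 40 for a relative density of 1.025

k_true = 10.0                           # true hydraulic conductivity (m/d)
k_transformed = k_true * (1.0 + alpha)  # conductivity passed to the flow model (x41)

head = 0.5                              # computed freshwater head above sea level (m)
interface_depth = alpha * head          # interface depth below sea level (m)
print(k_transformed, interface_depth)   # 410.0 m/d, 20.0 m
```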
Item Purification in Differential Item Functioning Using Generalized Linear Mixed Models
ERIC Educational Resources Information Center
Liu, Qian
2011-01-01
For this dissertation, four item purification procedures were implemented onto the generalized linear mixed model for differential item functioning (DIF) analysis, and the performance of these item purification procedures was investigated through a series of simulations. Among the four procedures, forward and generalized linear mixed model (GLMM)…
Christman, Stephen D; Weaver, Ryan
2008-05-01
The nature of temporal variability during speeded finger tapping was examined using linear (standard deviation) and non-linear (Lyapunov exponent) measures. Experiment 1 found that right hand tapping was characterised by lower amounts of both linear and non-linear measures of variability than left hand tapping, and that linear and non-linear measures of variability were often negatively correlated with one another. Experiment 2 found that increased non-linear variability was associated with relatively enhanced performance on a closed-loop motor task (mirror tracing) and relatively impaired performance on an open-loop motor task (pointing in a dark room), especially for left hand performance. The potential uses and significance of measures of non-linear variability are discussed.
ERIC Educational Resources Information Center
Rocconi, Louis M.
2011-01-01
Hierarchical linear models (HLM) solve the problems associated with the unit of analysis problem such as misestimated standard errors, heterogeneity of regression and aggregation bias by modeling all levels of interest simultaneously. Hierarchical linear modeling resolves the problem of misestimated standard errors by incorporating a unique random…
Modelling non-linear effects of dark energy
NASA Astrophysics Data System (ADS)
Bose, Benjamin; Baldi, Marco; Pourtsidou, Alkistis
2018-04-01
We investigate the capabilities of perturbation theory in capturing non-linear effects of dark energy. We test constant and evolving w models, as well as models involving momentum exchange between dark energy and dark matter. Specifically, we compare perturbative predictions at the 1-loop level against N-body results for four non-standard equations of state as well as varying degrees of momentum exchange between dark energy and dark matter. The interaction is modelled phenomenologically using a time dependent drag term in the Euler equation. We make comparisons at the level of the matter power spectrum and the redshift space monopole and quadrupole. The multipoles are modelled using the Taruya, Nishimichi and Saito (TNS) redshift space spectrum. We find perturbation theory does very well in capturing non-linear effects coming from the dark sector interaction. We isolate and quantify the 1-loop contribution coming from the interaction and from the non-standard equation of state. We find the interaction parameter ξ amplifies scale dependent signatures in the range of scales considered. Non-standard equations of state also give scale dependent signatures within this same regime. In redshift space the match with N-body is improved at smaller scales by the addition of the TNS free parameter σv. To quantify the importance of modelling the interaction, we create mock data sets for varying values of ξ using perturbation theory. These data are given errors typical of Stage IV surveys. We then perform a likelihood analysis using the first two multipoles on these sets and a ξ=0 modelling, ignoring the interaction. We find the fiducial growth parameter f is generally recovered even for very large values of ξ both at z=0.5 and z=1. The ξ=0 modelling is most biased in its estimation of f for the phantom w = -1.1 case.
Tweedell, Andrew J.; Haynes, Courtney A.
2017-01-01
The timing of muscle activity is a commonly applied analytic method to understand how the nervous system controls movement. This study systematically evaluates six classes of standard and statistical algorithms to determine muscle onset in both experimental surface electromyography (EMG) and simulated EMG with a known onset time. Eighteen participants had EMG collected from the biceps brachii and vastus lateralis while performing a biceps curl or knee extension, respectively. Three established methods and three statistical methods for EMG onset were evaluated. Linear envelope, Teager-Kaiser energy operator + linear envelope and sample entropy were the established methods evaluated while general time series mean/variance, sequential and batch processing of parametric and nonparametric tools, and Bayesian changepoint analysis were the statistical techniques used. Visual EMG onset (experimental data) and objective EMG onset (simulated data) were compared with algorithmic EMG onset via root mean square error and linear regression models for stepwise elimination of inferior algorithms. The top algorithms for both data types were analyzed for their mean agreement with the gold standard onset and evaluation of 95% confidence intervals. The top algorithms were all Bayesian changepoint analysis iterations where the parameter of the prior (p0) was zero. The best performing Bayesian algorithms were p0 = 0 and a posterior probability for onset determination at 60–90%. While existing algorithms performed reasonably, the Bayesian changepoint analysis methodology provides greater reliability and accuracy when determining the singular onset of EMG activity in a time series. Further research is needed to determine if this class of algorithms perform equally well when the time series has multiple bursts of muscle activity. PMID:28489897
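As a point of reference for the "general time series mean/variance" class of algorithms evaluated above, a minimal baseline-threshold onset detector is sketched below. The parameter choices and synthetic signal are illustrative assumptions; this is not the authors' implementation, nor the Bayesian changepoint method that performed best.

```python
import numpy as np

def emg_onset(signal, fs, baseline_s=0.5, k=3.0, min_samples=10):
    """First index where the rectified EMG exceeds baseline mean + k*SD
    for at least `min_samples` consecutive samples; None if never."""
    rect = np.abs(signal - np.mean(signal))
    n_base = int(baseline_s * fs)
    thresh = rect[:n_base].mean() + k * rect[:n_base].std()
    run = 0
    for i, above in enumerate(rect > thresh):
        run = run + 1 if above else 0
        if run >= min_samples:
            return i - min_samples + 1
    return None

# Synthetic example: quiet baseline followed by a burst of activity at 1 s.
rng = np.random.default_rng(42)
fs = 1000
emg = 0.01 * rng.standard_normal(2 * fs)
emg[fs:] += 0.3 * rng.standard_normal(fs)
print(emg_onset(emg, fs))  # close to sample 1000
```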
Higgs, SUSY and the standard model at γγ colliders
NASA Astrophysics Data System (ADS)
Hagiwara, Kaoru
2001-10-01
In this report, I surveyed the physics potential of the γγ option of a linear e+e- collider with the following questions in mind: What new discovery can be expected at a γγ collider in addition to what will be learned at its 'parent' e+e- linear collider? By taking account of the hard energy spectrum and polarization of colliding photons, produced by Compton back-scattering of laser light off incoming e- beams, we find that a γγ collider is most powerful when new physics appears in the neutral spin-zero channel at an invariant mass below about 80% of the c.m. energy of the colliding e-e- system. If a light Higgs boson exists, its properties can be studied in detail, and if its heavier partners or a heavy Higgs boson exists in the above mass range, they may be discovered at a γγ collider. The CP property of the scalar sector can be explored in detail by making use of linear polarization of the colliding photons, decay angular correlations of final state particles, and the pattern of interference with the Standard Model amplitudes. A few comments are given for SUSY particle studies at a γγ collider, where a pair of charged spinless particles is produced in the s-wave near the threshold. Squark-onium may be discovered. An e±γ collision mode may measure the Higgs-Z-γ coupling accurately and probe flavor oscillations in the slepton sector. As a general remark, all the Standard Model background simulation tools should be prepared at the helicity amplitude level, so that simulation can be performed for an arbitrary set of Stokes parameters of the incoming photon beams.
Tutorial on Biostatistics: Linear Regression Analysis of Continuous Correlated Eye Data
Ying, Gui-shuang; Maguire, Maureen G; Glynn, Robert; Rosner, Bernard
2017-01-01
Purpose To describe and demonstrate appropriate linear regression methods for analyzing correlated continuous eye data. Methods We describe several approaches to regression analysis involving both eyes, including mixed effects and marginal models under various covariance structures to account for inter-eye correlation. We demonstrate, with SAS statistical software, applications in a study comparing baseline refractive error between one eye with choroidal neovascularization (CNV) and the unaffected fellow eye, and in a study determining factors associated with visual field data in the elderly. Results When refractive error from both eyes was analyzed with standard linear regression without accounting for inter-eye correlation (adjusting for demographic and ocular covariates), the difference between eyes with CNV and fellow eyes was 0.15 diopters (D; 95% confidence interval, CI −0.03 to 0.32D, P=0.10). Using a mixed effects model or a marginal model, the estimated difference was the same but with a narrower 95% CI (0.01 to 0.28D, P=0.03). Standard regression for visual field data from both eyes provided biased estimates of standard error (generally underestimated) and smaller P-values, while analysis of the worse eye provided larger P-values than mixed effects models and marginal models. Conclusion In research involving both eyes, ignoring inter-eye correlation can lead to invalid inferences. Analysis using only right or left eyes is valid, but decreases power. Worse-eye analysis can provide less power and biased estimates of effect. Mixed effects or marginal models using the eye as the unit of analysis should be used to appropriately account for inter-eye correlation and maximize power and precision. PMID:28102741
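The worked examples in the paper use SAS; as a rough Python analogue of the "eye as unit of analysis with a subject-level random effect" idea, here is a hedged sketch using statsmodels. The column names and simulated data are hypothetical.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 200                                            # subjects, two eyes each
subj = np.repeat(np.arange(n), 2)
cnv = np.tile([1, 0], n)                           # affected eye vs fellow eye
age = np.repeat(rng.normal(75, 6, n), 2)
subj_effect = np.repeat(rng.normal(0, 1.0, n), 2)  # shared between fellow eyes
refraction = 0.15 * cnv + 0.01 * age + subj_effect + rng.normal(0, 0.8, 2 * n)

df = pd.DataFrame({"subject": subj, "cnv": cnv, "age": age, "refraction": refraction})

# Mixed-effects model: a random intercept per subject absorbs inter-eye correlation.
fit = smf.mixedlm("refraction ~ cnv + age", data=df, groups=df["subject"]).fit()
print(fit.summary())
```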
NASA Astrophysics Data System (ADS)
Bünemann, Jörg; Seibold, Götz
2017-12-01
Pump-probe experiments have turned out as a powerful tool in order to study the dynamics of competing orders in a large variety of materials. The corresponding analysis of the data often relies on standard linear-response theory generalized to nonequilibrium situations. Here we examine the validity of such an approach for the charge and pairing response of systems with charge-density wave and (or) superconducting (SC) order. Our investigations are based on the attractive Hubbard model which we study within the time-dependent Hartree-Fock approximation. In particular, we calculate the quench and pump-probe dynamics for SC and charge order parameters in order to analyze the frequency spectra and the coupling of the probe field to the specific excitations. Our calculations reveal that the "linear-response assumption" is justified for small to moderate nonequilibrium situations (i.e., pump pulses) in the case of a purely charge-ordered ground state. However, the pump-probe dynamics on top of a superconducting ground state is determined by phase and amplitude modes which get coupled far from the equilibrium state indicating the failure of the linear-response assumption.
Smoothed Residual Plots for Generalized Linear Models. Technical Report #450.
ERIC Educational Resources Information Center
Brant, Rollin
Methods for examining the viability of assumptions underlying generalized linear models are considered. By appealing to the likelihood, a natural generalization of the raw residual plot for normal theory models is derived and is applied to investigating potential misspecification of the linear predictor. A smooth version of the plot is also…
Nonlinear subdiffusive fractional equations and the aggregation phenomenon.
Fedotov, Sergei
2013-09-01
In this article we address the problem of the nonlinear interaction of subdiffusive particles. We introduce the random walk model in which statistical characteristics of a random walker such as escape rate and jump distribution depend on the mean density of particles. We derive a set of nonlinear subdiffusive fractional master equations and consider their diffusion approximations. We show that these equations describe the transition from an intermediate subdiffusive regime to asymptotically normal advection-diffusion transport regime. This transition is governed by nonlinear tempering parameter that generalizes the standard linear tempering. We illustrate the general results through the use of the examples from cell and population biology. We find that a nonuniform anomalous exponent has a strong influence on the aggregation phenomenon.
Event detection and localization for small mobile robots using reservoir computing.
Antonelo, E A; Schrauwen, B; Stroobandt, D
2008-08-01
Reservoir Computing (RC) techniques use a fixed (usually randomly created) recurrent neural network, or more generally any dynamic system, which operates at the edge of stability, where only a linear static readout output layer is trained by standard linear regression methods. In this work, RC is used for detecting complex events in autonomous robot navigation. This can be extended to robot localization tasks which are solely based on a few low-range, high-noise sensory data. The robot thus builds an implicit map of the environment (after learning) that is used for efficient localization by simply processing the input stream of distance sensors. These techniques are demonstrated in both a simple simulation environment and in the physically realistic Webots simulation of the commercially available e-puck robot, using several complex and even dynamic environments.
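A minimal echo state network sketch capturing the general RC recipe summarized above (a fixed random reservoir with a linear readout fitted by ridge regression). The hyperparameters, spectral-radius scaling, and toy delayed-copy task are illustrative assumptions, not the robot-localization setup of the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
n_in, n_res = 1, 200

# Fixed random input and recurrent weights, scaled to operate near the edge of stability.
W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
W = rng.normal(0, 1, (n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))   # spectral radius ~0.9

def run_reservoir(u):
    x = np.zeros(n_res)
    states = []
    for ut in u:
        x = np.tanh(W_in @ np.atleast_1d(ut) + W @ x)
        states.append(x.copy())
    return np.array(states)

# Toy task: reproduce a delayed copy of the input from the reservoir state.
u = rng.uniform(-1, 1, 2000)
y = np.roll(u, 5)
X = run_reservoir(u)[100:]                        # discard washout
y = y[100:]

ridge = 1e-6
W_out = np.linalg.solve(X.T @ X + ridge * np.eye(n_res), X.T @ y)  # linear readout
print("train MSE:", np.mean((X @ W_out - y) ** 2))
```

Only W_out is trained; the reservoir weights stay fixed, which is the defining feature of the RC approach described above.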
Intrinsic problems of the gravitational baryogenesis
NASA Astrophysics Data System (ADS)
Arbuzova, E. V.; Dolgov, A. D.
2017-06-01
Modification of gravity due to the curvature dependent term in the gravitational baryogenesis scenario is considered. It is shown that this term leads to the fourth order differential equation of motion for the curvature scalar instead of the algebraic one of General Relativity (GR). The fourth order gravitational equations are generically unstable with respect to small perturbations. Non-linear in curvature terms may stabilize the solution but the magnitude of the stabilized curvature scalar would be much larger than that dictated by GR, so the standard cosmology would be strongly distorted.
TDAAPS 2: Acoustic Wave Propagation in Attenuative Moving Media
DOE Office of Scientific and Technical Information (OSTI.GOV)
Preston, Leiph A.
This report outlines recent enhancements to the TDAAPS algorithm first described by Symons et al., 2005. One of the primary additions to the code is the ability to specify an attenuative media using standard linear fluid mechanisms to match reasonably general frequency versus loss curves, including common frequency versus loss curves for the atmosphere and seawater. Other improvements that will be described are the addition of improved numerical boundary conditions via various forms of Perfectly Matched Layers, enhanced accuracy near high contrast media interfaces, and improved physics options.
ERIC Educational Resources Information Center
Wang, Tianyou
2009-01-01
Holland and colleagues derived a formula for analytical standard error of equating using the delta-method for the kernel equating method. Extending their derivation, this article derives an analytical standard error of equating procedure for the conventional percentile rank-based equipercentile equating with log-linear smoothing. This procedure is…
Verification of spectrophotometric method for nitrate analysis in water samples
NASA Astrophysics Data System (ADS)
Kurniawati, Puji; Gusrianti, Reny; Dwisiwi, Bledug Bernanti; Purbaningtias, Tri Esti; Wiyantoko, Bayu
2017-12-01
The aim of this research was to verify a spectrophotometric method for nitrate analysis in water samples using the APHA 2012 Section 4500 NO3-B method. The verification parameters were linearity, method detection limit, limit of quantitation, level of linearity, accuracy and precision. Linearity was assessed using 0 to 50 mg/L nitrate standard solutions, and the correlation coefficient of the linear calibration regression was 0.9981. The method detection limit (MDL) was 0.1294 mg/L and the limit of quantitation (LOQ) was 0.4117 mg/L. The level of linearity (LOL) was 50 mg/L, and the response was linear for nitrate concentrations from 10 to 50 mg/L at the 99% confidence level. Accuracy, determined from the recovery value, was 109.1907%. Precision was expressed as the percent relative standard deviation (%RSD) of repeated measurements and was 1.0886%. The tested performance criteria showed that the method was verified under the laboratory conditions.
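One common way to obtain detection and quantitation limits of the order reported above is directly from the calibration regression (ICH-style 3.3σ/S and 10σ/S). This is a generic sketch with hypothetical calibration data, not necessarily the exact APHA procedure followed in the study.

```python
import numpy as np

# Hypothetical calibration data: nitrate standards (mg/L) vs absorbance.
conc = np.array([0.0, 5.0, 10.0, 20.0, 30.0, 40.0, 50.0])
absorbance = np.array([0.002, 0.110, 0.221, 0.438, 0.662, 0.878, 1.101])

slope, intercept = np.polyfit(conc, absorbance, 1)
residuals = absorbance - (slope * conc + intercept)
sigma = residuals.std(ddof=2)             # SD of the regression residuals

lod = 3.3 * sigma / slope                 # limit of detection (mg/L)
loq = 10.0 * sigma / slope                # limit of quantitation (mg/L)
r = np.corrcoef(conc, absorbance)[0, 1]   # correlation coefficient of the calibration
print(round(lod, 4), round(loq, 4), round(r, 4))
```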
Dynamic Response and Residual Helmet Liner Crush Using Cadaver Heads and Standard Headforms.
Bonin, S J; Luck, J F; Bass, C R; Gardiner, J C; Onar-Thomas, A; Asfour, S S; Siegmund, G P
2017-03-01
Biomechanical headforms are used for helmet certification testing and reconstructing helmeted head impacts; however, their biofidelity and direct applicability to human head and helmet responses remain unclear. Dynamic responses of cadaver heads and three headforms and residual foam liner deformations were compared during motorcycle helmet impacts. Instrumented, helmeted heads/headforms were dropped onto the forehead region against an instrumented flat anvil at 75, 150, and 195 J. Helmets were CT scanned to quantify maximum liner crush depth and crush volume. General linear models were used to quantify the effect of head type and impact energy on linear acceleration, head injury criterion (HIC), force, maximum liner crush depth, and liner crush volume and regression models were used to quantify the relationship between acceleration and both maximum crush depth and crush volume. The cadaver heads generated larger peak accelerations than all three headforms, larger HICs than the International Organization for Standardization (ISO), larger forces than the Hybrid III and ISO, larger maximum crush depth than the ISO, and larger crush volumes than the DOT. These significant differences between the cadaver heads and headforms need to be accounted for when attempting to estimate an impact exposure using a helmet's residual crush depth or volume.
NASA Technical Reports Server (NTRS)
Taylor, Arthur C., III; Hou, Gene W.
1993-01-01
In this study involving advanced fluid flow codes, an incremental iterative formulation (also known as the delta or correction form) together with the well-known spatially-split approximate factorization algorithm, is presented for solving the very large sparse systems of linear equations which are associated with aerodynamic sensitivity analysis. For smaller 2D problems, a direct method can be applied to solve these linear equations in either the standard or the incremental form, in which case the two are equivalent. Iterative methods are needed for larger 2D and future 3D applications, however, because direct methods require much more computer memory than is currently available. Iterative methods for solving these equations in the standard form are generally unsatisfactory due to an ill-conditioning of the coefficient matrix; this problem can be overcome when these equations are cast in the incremental form. These and other benefits are discussed. The methodology is successfully implemented and tested in 2D using an upwind, cell-centered, finite volume formulation applied to the thin-layer Navier-Stokes equations. Results are presented for two sample airfoil problems: (1) subsonic low Reynolds number laminar flow; and (2) transonic high Reynolds number turbulent flow.
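For a single sensitivity linear system the contrast described above can be written compactly; the notation here is generic, not the paper's:

\[
\text{standard form:}\quad A\,x = b,
\qquad
\text{incremental (delta) form:}\quad \tilde{A}\,\Delta x^{\,n} = b - A\,x^{\,n},\quad x^{\,n+1} = x^{\,n} + \Delta x^{\,n},
\]

where only an approximate operator $\tilde{A}$ (for example, the spatially split, approximately factored operator) must be inverted at each iteration, while the residual on the right-hand side is formed with the exact operator, so the converged solution is not contaminated by the approximation on the left.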
flexsurv: A Platform for Parametric Survival Modeling in R
Jackson, Christopher H.
2018-01-01
flexsurv is an R package for fully-parametric modeling of survival data. Any parametric time-to-event distribution may be fitted if the user supplies a probability density or hazard function, and ideally also their cumulative versions. Standard survival distributions are built in, including the three and four-parameter generalized gamma and F distributions. Any parameter of any distribution can be modeled as a linear or log-linear function of covariates. The package also includes the spline model of Royston and Parmar (2002), in which both baseline survival and covariate effects can be arbitrarily flexible parametric functions of time. The main model-fitting function, flexsurvreg, uses the familiar syntax of survreg from the standard survival package (Therneau 2016). Censoring or left-truncation are specified in ‘Surv’ objects. The models are fitted by maximizing the full log-likelihood, and estimates and confidence intervals for any function of the model parameters can be printed or plotted. flexsurv also provides functions for fitting and predicting from fully-parametric multi-state models, and connects with the mstate package (de Wreede, Fiocco, and Putter 2011). This article explains the methods and design principles of the package, giving several worked examples of its use. PMID:29593450
Improved Linear-Ion-Trap Frequency Standard
NASA Technical Reports Server (NTRS)
Prestage, John D.
1995-01-01
Improved design concept for linear-ion-trap (LIT) frequency-standard apparatus proposed. Apparatus contains lengthened linear ion trap, and ions processed alternately in two regions: ions prepared in upper region of trap, then transported to lower region for exposure to microwave radiation, then returned to upper region for optical interrogation. Improved design intended to increase long-term frequency stability of apparatus while reducing size, mass, and cost.
Falk, Carl F; Cai, Li
2016-06-01
We present a semi-parametric approach to estimating item response functions (IRF) useful when the true IRF does not strictly follow commonly used functions. Our approach replaces the linear predictor of the generalized partial credit model with a monotonic polynomial. The model includes the regular generalized partial credit model at the lowest order polynomial. Our approach extends Liang's (A semi-parametric approach to estimate IRFs, Unpublished doctoral dissertation, 2007) method for dichotomous item responses to the case of polytomous data. Furthermore, item parameter estimation is implemented with maximum marginal likelihood using the Bock-Aitkin EM algorithm, thereby facilitating multiple group analyses useful in operational settings. Our approach is demonstrated on both educational and psychological data. We present simulation results comparing our approach to more standard IRF estimation approaches and other non-parametric and semi-parametric alternatives.
A biconjugate gradient type algorithm on massively parallel architectures
NASA Technical Reports Server (NTRS)
Freund, Roland W.; Hochbruck, Marlis
1991-01-01
The biconjugate gradient (BCG) method is the natural generalization of the classical conjugate gradient algorithm for Hermitian positive definite matrices to general non-Hermitian linear systems. Unfortunately, the original BCG algorithm is susceptible to possible breakdowns and numerical instabilities. Recently, Freund and Nachtigal have proposed a novel BCG type approach, the quasi-minimal residual method (QMR), which overcomes the problems of BCG. Here, an implementation is presented of QMR based on an s-step version of the nonsymmetric look-ahead Lanczos algorithm. The main feature of the s-step Lanczos algorithm is that, in general, all inner products, except for one, can be computed in parallel at the end of each block; this is unlike the other standard Lanczos process where inner products are generated sequentially. The resulting implementation of QMR is particularly attractive on massively parallel SIMD architectures, such as the Connection Machine.
Generalized two-dimensional chiral QED: Anomaly and exotic statistics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Saradzhev, F.M.
1997-07-01
We study the influence of the anomaly on the physical quantum picture of the generalized chiral Schwinger model defined on S^1. We show that the anomaly (i) results in the background linearly rising electric field and (ii) makes the spectrum of the physical Hamiltonian nonrelativistic without a massive boson. The physical matter fields acquire exotic statistics. We construct explicitly the algebra of the Poincaré generators and show that it differs from the Poincaré one. We exhibit the role of the vacuum Berry phase in the failure of the Poincaré algebra to close. We prove that, in spite of the background electric field, such phenomenon as the total screening of external charges characteristic for the standard Schwinger model takes place in the generalized chiral Schwinger model, too. © 1997 The American Physical Society
Thomas, Michael L.; Kaufmann, Christopher N.; Palmer, Barton W.; Depp, Colin A.; Martin, Averria Sirkin; Glorioso, Danielle K.; Thompson, Wesley K.; Jeste, Dilip V.
2017-01-01
Objective Studies of aging usually focus on trajectories of physical and cognitive function, with far less emphasis on overall mental health, despite its impact on general health and mortality. This study examined linear and non-linear trends of physical, cognitive, and mental health over the entire adult lifespan. Method Cross-sectional data were obtained from 1,546 individuals aged 21 to 100 years, selected using random digit dialing for the Successful AGing Evaluation (SAGE) study, a structured multi-cohort investigation, that included telephone interviews and in-home surveys of community-based adults without dementia. Data were collected from 1/26/2010 to 10/07/2011 targeting participants aged 50 to 100 years, and 6/25/2012 to 7/15/2013 targeting participants aged 21 to 50 years. Data included self-report measures of physical health, measures of both positive and negative attributes of mental health, and a phone interview-based measure of cognition. Results Comparison of age cohorts using polynomial regression suggested a possible accelerated deterioration in physical and cognitive functioning, averaging one-and-a-half to two standard deviations over the adult lifespan. In contrast, there appeared to be a linear improvement of about one standard deviation in various attributes of mental health over the same life period. Conclusion These cross-sectional findings suggest the possibility of a linear improvement in mental health beginning in young adulthood rather than a U-shaped curve reported in some prior studies. Lifespan research combining psychosocial and biological markers may improve our understanding of resilience to mental disability in older age, and lead to broad-based interventions promoting mental health in all age groups. PMID:27561149
Fortini, Martina; Migliorini, Marzia; Cherubini, Chiara; Cecchi, Lorenzo; Calamai, Luca
2017-04-01
The commercial value of virgin olive oils (VOOs) strongly depends on their classification, also based on the aroma of the oils, usually evaluated by a panel test. Nowadays, a reliable analytical method is still needed to evaluate the volatile organic compounds (VOCs) and support the standard panel test method. To date, the use of HS-SPME sampling coupled to GC-MS is generally accepted for the analysis of VOCs in VOOs. However, VOO is a challenging matrix due to the simultaneous presence of: i) compounds at ppm and ppb concentrations; ii) molecules belonging to different chemical classes and iii) analytes with a wide range of molecular mass. Therefore, HS-SPME-GC-MS quantitation based upon the use of external standard method or of only a single internal standard (ISTD) for data normalization in an internal standard method, may be troublesome. In this work a multiple internal standard normalization is proposed to overcome these problems and improving quantitation of VOO-VOCs. As many as 11 ISTDs were used for quantitation of 71 VOCs. For each of them the most suitable ISTD was selected and a good linearity in a wide range of calibration was obtained. Except for E-2-hexenal, without ISTD or with an unsuitable ISTD, the linear range of calibration was narrower with respect to that obtained by a suitable ISTD, confirming the usefulness of multiple internal standard normalization for the correct quantitation of VOCs profile in VOOs. The method was validated for 71 VOCs, and then applied to a series of lampante virgin olive oils and extra virgin olive oils. In light of our results, we propose the application of this analytical approach for routine quantitative analyses and to support sensorial analysis for the evaluation of positive and negative VOOs attributes. Copyright © 2017 Elsevier B.V. All rights reserved.
ERIC Educational Resources Information Center
British Standards Institution, London (England).
To promote interchangeability of teaching machines and programs, so that the user is not so limited in his choice of programs, the British Standards Institute has offered a standard. Part I of the standard deals with linear teaching machines and programs that make use of the roll or sheet methods of presentation. Requirements cover: spools,…
Non-Gaussian bias: insights from discrete density peaks
DOE Office of Scientific and Technical Information (OSTI.GOV)
Desjacques, Vincent; Riotto, Antonio; Gong, Jinn-Ouk, E-mail: Vincent.Desjacques@unige.ch, E-mail: jinn-ouk.gong@apctp.org, E-mail: Antonio.Riotto@unige.ch
2013-09-01
Corrections induced by primordial non-Gaussianity to the linear halo bias can be computed from a peak-background split or the widespread local bias model. However, numerical simulations clearly support the prediction of the former, in which the non-Gaussian amplitude is proportional to the linear halo bias. To understand better the reasons behind the failure of standard Lagrangian local bias, in which the halo overdensity is a function of the local mass overdensity only, we explore the effect of a primordial bispectrum on the 2-point correlation of discrete density peaks. We show that the effective local bias expansion to peak clustering vastly simplifies the calculation. We generalize this approach to excursion set peaks and demonstrate that the resulting non-Gaussian amplitude, which is a weighted sum of quadratic bias factors, precisely agrees with the peak-background split expectation, which is a logarithmic derivative of the halo mass function with respect to the normalisation amplitude. We point out that statistics of thresholded regions can be computed using the same formalism. Our results suggest that halo clustering statistics can be modelled consistently (in the sense that the Gaussian and non-Gaussian bias factors agree with peak-background split expectations) from a Lagrangian bias relation only if the latter is specified as a set of constraints imposed on the linear density field. This is clearly not the case of standard Lagrangian local bias. Therefore, one is led to consider additional variables beyond the local mass overdensity.
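The statement that the non-Gaussian amplitude is proportional to the linear halo bias refers to the widely used scale-dependent correction of the local $f_{\rm NL}$ type, commonly quoted as

\[
\Delta b(k) \;\simeq\; 2\,f_{\rm NL}\,\delta_c\,(b_1 - 1)\,\frac{3\,\Omega_m H_0^{2}}{2\,c^{2}\,k^{2}\,T(k)\,D(z)} ,
\]

where $b_1$ is the Gaussian linear bias, $\delta_c \simeq 1.686$ the critical collapse threshold, $T(k)$ the transfer function and $D(z)$ the linear growth factor. This is the generic convention; the peak-based derivation in the paper refines the quadratic-bias weighting entering this amplitude.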
Jenke, Dennis; Sadain, Salma; Nunez, Karen; Byrne, Frances
2007-01-01
The performance of an ion chromatographic method for measuring citrate and phosphate in pharmaceutical solutions is evaluated. Performance characteristics examined include accuracy, precision, specificity, response linearity, robustness, and the ability to meet system suitability criteria. In general, the method is found to be robust within reasonable deviations from its specified operating conditions. Analytical accuracy is typically 100 +/- 3%, and short-term precision is not more than 1.5% relative standard deviation. The instrument response is linear over a range of 50% to 150% of the standard preparation target concentrations (12 mg/L for phosphate and 20 mg/L for citrate), and the results obtained using a single-point standard versus a calibration curve are essentially equivalent. A small analytical bias is observed and ascribed to the relative purity of the differing salts, used as raw materials in tested finished products and as reference standards in the analytical method. The assay is specific in that no phosphate or citrate peaks are observed in a variety of method-related solutions and matrix blanks (with and without autoclaving). The assay with manual preparation of the eluents is sensitive to the composition of the eluent in the sense that the eluent must be effectively degassed and protected from CO(2) ingress during use. In order for the assay to perform effectively, extensive system equilibration and conditioning is required. However, a properly conditioned and equilibrated system can be used to test a number of samples via chromatographic runs that include many (> 50) injections.
Sánchez-de-Madariaga, Ricardo; Muñoz, Adolfo; Lozano-Rubí, Raimundo; Serrano-Balazote, Pablo; Castro, Antonio L; Moreno, Oscar; Pascual, Mario
2017-08-18
The objective of this research is to compare the relational and non-relational (NoSQL) database approaches for storing, recovering, querying and persisting standardized medical information in the form of ISO/EN 13606 normalized Electronic Health Record XML extracts, both in isolation and concurrently. NoSQL database systems have recently attracted much attention, but few studies in the literature address their direct comparison with relational databases when applied to build the persistence layer of a standardized medical information system. One relational and two NoSQL databases (one document-based and one native XML database) of three different sizes were created in order to evaluate and compare the response times (algorithmic complexity) of six queries of growing complexity, which were performed on them. Similar appropriate results available in the literature were also considered. Both relational and non-relational (NoSQL) database systems show query execution times that grow almost linearly, but with very different slopes, the relational slope being much steeper than the two NoSQL ones. Document-based NoSQL databases perform better in concurrency than in isolation, and also better than relational databases in concurrency. Non-relational NoSQL databases seem to be more appropriate than standard relational SQL databases when database size is extremely high (secondary use, research applications). Document-based NoSQL databases perform in general better than native XML NoSQL databases. Visualization and editing of EHR extracts are also document-based tasks better suited to NoSQL database systems. However, the appropriate database solution depends greatly on each particular situation and specific problem.
Environmental standards for ionizing radiation: theoretical basis for dose-response curves.
Upton, A C
1983-01-01
The types of injury attributable to ionizing radiation are subdivided, for purposes of risk assessment and radiological protection, into two broad categories: stochastic effects and nonstochastic effects. Stochastic effects are viewed as probabilistic phenomena, varying in frequency but not severity as a function of the dose, without any threshold; nonstochastic effects are viewed as deterministic phenomena, varying in both frequency and severity as a function of the dose, with clinical thresholds. Included among stochastic effects are heritable effects (mutations and chromosome aberrations) and carcinogenic effects. Both types of effects are envisioned as unicellular phenomena which can result from nonlethal injury of individual cells, without the necessity of damage to other cells. For the induction of mutations and chromosome aberrations in the low-to-intermediate dose range, the dose-response curve with high-linear energy transfer (LET) radiation generally conforms to a linear nonthreshold relationship and varies relatively little with the dose rate. In contrast, the curve with low-LET radiation generally conforms to a linear-quadratic relationship, rising less steeply than the curve with high-LET radiation and increasing in slope with increasing dose and dose rate. The dose-response curve for carcinogenic effects varies widely from one type of neoplasm to another in the intermediate-to-high dose range, in part because of differences in the way large doses of radiation can affect the promotion and progression of different neoplasms. Information about dose-response relations for low-level irradiation is fragmentary but consistent, in general, with the hypothesis that the neoplastic transformation may result from mutation, chromosome aberration or genetic recombination in a single susceptible cell. PMID:6653536
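The two dose-response shapes referred to above are conventionally written as follows (standard radiobiology notation, stated generically rather than with values from the paper):

\[
\text{high-LET:}\quad E(D) = \alpha_{H}\,D,
\qquad
\text{low-LET:}\quad E(D) = \alpha_{L}\,D + \beta\,D^{2},
\]

where $E(D)$ is the frequency of the stochastic effect at dose $D$ and the linear coefficient for high-LET radiation exceeds that for low-LET radiation ($\alpha_{H} > \alpha_{L}$), consistent with the steeper initial slope described in the text.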
NASA Astrophysics Data System (ADS)
Karakatsanis, Nicolas A.; Rahmim, Arman
2014-03-01
Graphical analysis is employed in the research setting to provide quantitative estimation of PET tracer kinetics from dynamic images at a single bed. Recently, we proposed a multi-bed dynamic acquisition framework enabling clinically feasible whole-body parametric PET imaging by employing post-reconstruction parameter estimation. In addition, by incorporating linear Patlak modeling within the system matrix, we enabled direct 4D reconstruction in order to effectively circumvent noise amplification in dynamic whole-body imaging. However, direct 4D Patlak reconstruction exhibits a relatively slow convergence due to the presence of non-sparse spatial correlations in temporal kinetic analysis. In addition, the standard Patlak model does not account for reversible uptake, thus underestimating the influx rate Ki. We have developed a novel whole-body PET parametric reconstruction framework in the STIR platform, a widely employed open-source reconstruction toolkit, a) enabling accelerated convergence of direct 4D multi-bed reconstruction, by employing a nested algorithm to decouple the temporal parameter estimation from the spatial image update process, and b) enhancing the quantitative performance particularly in regions with reversible uptake, by pursuing a non-linear generalized Patlak 4D nested reconstruction algorithm. A set of published kinetic parameters and the XCAT phantom were employed for the simulation of dynamic multi-bed acquisitions. Quantitative analysis on the Ki images demonstrated considerable acceleration in the convergence of the nested 4D whole-body Patlak algorithm. In addition, our simulated and patient whole-body data in the postreconstruction domain indicated the quantitative benefits of our extended generalized Patlak 4D nested reconstruction for tumor diagnosis and treatment response monitoring.
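For reference, the standard Patlak model and its generalization with reversible uptake are usually written as follows (generic notation for the tissue and plasma time-activity curves; the reconstruction-domain nesting proposed by the authors is not shown):

\[
\text{standard Patlak:}\quad
C_T(t) \;\simeq\; K_i \int_0^{t} C_p(\tau)\,d\tau \;+\; V\,C_p(t),
\qquad
\text{generalized Patlak:}\quad
C_T(t) \;\simeq\; K_i \int_0^{t} C_p(\tau)\,e^{-k_{\rm loss}(t-\tau)}\,d\tau \;+\; V\,C_p(t),
\]

where $C_T$ and $C_p$ are the tissue and plasma activity concentrations, $K_i$ the influx rate, $V$ an effective distribution volume, and $k_{\rm loss}$ the efflux constant accounting for reversible uptake ($k_{\rm loss}=0$ recovers the linear model).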
Coupled variational formulations of linear elasticity and the DPG methodology
NASA Astrophysics Data System (ADS)
Fuentes, Federico; Keith, Brendan; Demkowicz, Leszek; Le Tallec, Patrick
2017-11-01
This article presents a general approach akin to domain-decomposition methods to solve a single linear PDE, but where each subdomain of a partitioned domain is associated to a distinct variational formulation coming from a mutually well-posed family of broken variational formulations of the original PDE. It can be exploited to solve challenging problems in a variety of physical scenarios where stability or a particular mode of convergence is desired in a part of the domain. The linear elasticity equations are solved in this work, but the approach can be applied to other equations as well. The broken variational formulations, which are essentially extensions of more standard formulations, are characterized by the presence of mesh-dependent broken test spaces and interface trial variables at the boundaries of the elements of the mesh. This allows necessary information to be naturally transmitted between adjacent subdomains, resulting in coupled variational formulations which are then proved to be globally well-posed. They are solved numerically using the DPG methodology, which is especially crafted to produce stable discretizations of broken formulations. Finally, expected convergence rates are verified in two different and illustrative examples.
24 +24 real scalar multiplet in four dimensional N =2 conformal supergravity
NASA Astrophysics Data System (ADS)
Hegde, Subramanya; Lodato, Ivano; Sahoo, Bindusar
2018-03-01
Starting from the 48 +48 component multiplet of supercurrents for a rigid N =2 tensor multiplet in four spacetime dimensions, we obtain the transformation of the linearized supergravity multiplet which couples to this supercurrent multiplet. At the linearized level, this 48 +48 component supergravity multiplet decouples into the 24 +24 component linearized standard Weyl multiplet and a 24 +24 component irreducible matter multiplet containing a real scalar field. By a consistent application of the supersymmetry algebra with field-dependent structure constants appropriate to N =2 conformal supergravity, we find the full transformation law for this multiplet in a conformal supergravity background. By performing a suitable field redefinition, we find that the multiplet is a generalization of the flat space multiplet, obtained by Howe et al. in Nucl. Phys. B214, 519 (1983), 10.1016/0550-3213(83)90249-3, to a conformal supergravity background. We also present a set of constraints which can be consistently imposed on this multiplet to obtain a restricted minimal 8 +8 off-shell matter multiplet. We also show, as an example, the precise embedding of the tensor multiplet inside this multiplet.
Radiative transfer calculated from a Markov chain formalism
NASA Technical Reports Server (NTRS)
Esposito, L. W.; House, L. L.
1978-01-01
The theory of Markov chains is used to formulate the radiative transport problem in a general way by modeling the successive interactions of a photon as a stochastic process. Under the minimal requirement that the stochastic process is a Markov chain, the determination of the diffuse reflection or transmission from a scattering atmosphere is equivalent to the solution of a system of linear equations. This treatment is mathematically equivalent to, and thus has many of the advantages of, Monte Carlo methods, but can be considerably more rapid than Monte Carlo algorithms for numerical calculations in particular applications. We have verified the speed and accuracy of this formalism for the standard problem of finding the intensity of scattered light from a homogeneous plane-parallel atmosphere with an arbitrary phase function for scattering. Accurate results over a wide range of parameters were obtained with computation times comparable to those of a standard 'doubling' routine. The generality of this formalism thus allows fast, direct solutions to problems that were previously soluble only by Monte Carlo methods. Some comparisons are made with respect to integral equation methods.
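As a toy illustration of the "Markov chain equals linear system" viewpoint (not the authors' scattering formulation): for an absorbing chain with transient-to-transient transition matrix Q, the expected numbers of visits before absorption solve a linear system, just as the multiply scattered intensity does in the formalism described above. The two-state example and its probabilities are hypothetical.

```python
import numpy as np

# Toy photon states: 0 and 1 are transient scattering states;
# absorption corresponds to escaping the atmosphere (not listed).
Q = np.array([[0.4, 0.3],    # transition probabilities among transient states
              [0.2, 0.5]])

# Fundamental matrix N = (I - Q)^(-1): expected visits to each state before escape,
# i.e. the solution of the linear system (I - Q) N = I.
N = np.linalg.solve(np.eye(2) - Q, np.eye(2))
print(N)

# Expected number of scatterings before escape, starting from each state.
print(N.sum(axis=1))
```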
2013-01-01
Background Genetic variation for environmental sensitivity indicates that animals are genetically different in their response to environmental factors. Environmental factors are either identifiable (e.g. temperature), and then called macro-environmental, or unknown, and then called micro-environmental. The objectives of this study were to develop a statistical method to estimate genetic parameters for macro- and micro-environmental sensitivities simultaneously, to investigate bias and precision of the resulting estimates of genetic parameters, and to develop and evaluate the use of Akaike’s information criterion, based on h-likelihood, to select the best-fitting model. Methods We assumed that genetic variation in macro- and micro-environmental sensitivities is expressed as genetic variance in the slope of a linear reaction norm and in environmental variance, respectively. A reaction norm model to estimate genetic variance for macro-environmental sensitivity was combined with a structural model for residual variance to estimate genetic variance for micro-environmental sensitivity, using a double hierarchical generalized linear model in ASReml. Akaike’s information criterion was constructed as a model selection criterion using the approximated h-likelihood. Populations of sires with large half-sib offspring groups were simulated to investigate bias and precision of the estimated genetic parameters. Results Designs with 100 sires, each with at least 100 offspring, are required to keep the standard deviations of the estimated variances below 50% of the true value. When the number of offspring increased, standard deviations of estimates across replicates decreased substantially, especially for the genetic variances of macro- and micro-environmental sensitivities. Standard deviations of estimated genetic correlations across replicates were quite large (between 0.1 and 0.4), especially when sires had few offspring. Practically no bias was observed for estimates of any of the parameters. Using Akaike’s information criterion, the true genetic model was selected as the best statistical model in at least 90% of 100 replicates when the number of offspring per sire was 100. Application of the model to lactation milk yield in dairy cattle showed that genetic variance for micro- and macro-environmental sensitivities existed. Conclusion The algorithm and model selection criterion presented here can contribute to a better understanding of the genetic control of macro- and micro-environmental sensitivities. Designs or datasets should have at least 100 sires, each with 100 offspring. PMID:23827014
Palmer, Tom M; Holmes, Michael V; Keating, Brendan J; Sheehan, Nuala A
2017-01-01
Abstract Mendelian randomization studies use genotypes as instrumental variables to test for and estimate the causal effects of modifiable risk factors on outcomes. Two-stage residual inclusion (TSRI) estimators have been used when researchers are willing to make parametric assumptions. However, researchers are currently reporting uncorrected or heteroscedasticity-robust standard errors for these estimates. We compared several different forms of the standard error for linear and logistic TSRI estimates in simulations and in real-data examples. Among others, we consider standard errors modified from the approach of Newey (1987), Terza (2016), and bootstrapping. In our simulations Newey, Terza, bootstrap, and corrected 2-stage least squares (in the linear case) standard errors gave the best results in terms of coverage and type I error. In the real-data examples, the Newey standard errors were 0.5% and 2% larger than the unadjusted standard errors for the linear and logistic TSRI estimators, respectively. We show that TSRI estimators with modified standard errors have correct type I error under the null. Researchers should report TSRI estimates with modified standard errors instead of reporting unadjusted or heteroscedasticity-robust standard errors. PMID:29106476
From master slave interferometry to complex master slave interferometry: theoretical work
NASA Astrophysics Data System (ADS)
Rivet, Sylvain; Bradu, Adrian; Maria, Michael; Feuchter, Thomas; Leick, Lasse; Podoleanu, Adrian
2018-03-01
A general theoretical framework is described to establish the advantages and drawbacks of two novel Fourier domain Optical Coherence Tomography (OCT) methods: Master/Slave Interferometry (MSI) and its extension, Complex Master/Slave Interferometry (CMSI). Instead of linearizing the digital data representing the channeled spectrum before a Fourier transform is applied to it (as in standard OCT methods), the channeled spectrum is decomposed onto a basis of local oscillations. This removes the need for linearization, which is generally time consuming, before any calculation of the depth profile in the range of interest. In this model two functions, g and h, are introduced. The function g describes the modulation chirp of the channeled spectrum signal due to nonlinearities in the decoding process from wavenumber to time. The function h describes the dispersion in the interferometer. The use of these two functions brings two major improvements over previous implementations of the MSI method. The paper details the steps required to obtain the functions g and h, and presents the CMSI in a matrix formulation that makes the method easy to implement in LabVIEW using parallel programming on multiple cores.
Spence, Jeffrey S; Brier, Matthew R; Hart, John; Ferree, Thomas C
2013-03-01
Linear statistical models are used very effectively to assess task-related differences in EEG power spectral analyses. Mixed models, in particular, accommodate more than one variance component in a multisubject study, where many trials of each condition of interest are measured on each subject. Generally, intra- and intersubject variances are both important to determine correct standard errors for inference on functions of model parameters, but it is often assumed that intersubject variance is the most important consideration in a group study. In this article, we show that, under common assumptions, estimates of some functions of model parameters, including estimates of task-related differences, are properly tested relative to the intrasubject variance component only. A substantial gain in statistical power can arise from the proper separation of variance components when there is more than one source of variability. We first develop this result analytically, then show how it benefits a multiway factoring of spectral, spatial, and temporal components from EEG data acquired in a group of healthy subjects performing a well-studied response inhibition task. Copyright © 2011 Wiley Periodicals, Inc.
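The following is a minimal sketch of the variance-component idea with a per-subject random intercept, using statsmodels; the simulated "EEG power" values, column names, and effect sizes are assumptions for illustration only.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Minimal sketch of a mixed model with a per-subject random intercept, so that
# intra- and intersubject variance components are estimated separately.  The
# simulated "EEG power" data and column names are illustrative assumptions.
rng = np.random.default_rng(0)
n_subj, n_trials = 12, 40
rows = []
for s in range(n_subj):
    subj_offset = rng.normal(0, 1.0)                 # intersubject variability
    for cond in (0, 1):
        power = 5 + 0.5 * cond + subj_offset + rng.normal(0, 0.8, n_trials)
        rows += [{"subject": s, "condition": cond, "power": p} for p in power]
df = pd.DataFrame(rows)

# Random intercept for subject; the fixed "condition" effect (a within-subject
# contrast) is then tested against the intrasubject (residual) variance.
model = smf.mixedlm("power ~ condition", df, groups=df["subject"])
fit = model.fit()
print(fit.summary())
```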
Permutation inference for the general linear model
Winkler, Anderson M.; Ridgway, Gerard R.; Webster, Matthew A.; Smith, Stephen M.; Nichols, Thomas E.
2014-01-01
Permutation methods can provide exact control of false positives and allow the use of non-standard statistics, making only weak assumptions about the data. With the availability of fast and inexpensive computing, their main limitation would be some lack of flexibility to work with arbitrary experimental designs. In this paper we report on results on approximate permutation methods that are more flexible with respect to the experimental design and nuisance variables, and conduct detailed simulations to identify the best method for settings that are typical for imaging research scenarios. We present a generic framework for permutation inference for complex general linear models (glms) when the errors are exchangeable and/or have a symmetric distribution, and show that, even in the presence of nuisance effects, these permutation inferences are powerful while providing excellent control of false positives in a wide range of common and relevant imaging research scenarios. We also demonstrate how the inference on glm parameters, originally intended for independent data, can be used in certain special but useful cases in which independence is violated. Detailed examples of common neuroimaging applications are provided, as well as a complete algorithm – the “randomise” algorithm – for permutation inference with the glm. PMID:24530839
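A minimal sketch of permutation inference for a simple contrast (a two-group mean difference) under exchangeable errors is given below; it is not the full permutation framework or "randomise" algorithm described in the paper, and the simulated data and permutation count are illustrative.

```python
import numpy as np

# Minimal sketch of permutation inference for a simple GLM contrast (here a
# two-group mean difference), assuming exchangeable errors.  This is not the
# full framework (e.g., Freedman-Lane) or "randomise" algorithm of the paper.
rng = np.random.default_rng(1)
group_a = rng.normal(0.0, 1.0, 30)
group_b = rng.normal(0.6, 1.0, 30)       # illustrative effect size

data = np.concatenate([group_a, group_b])
labels = np.array([0] * 30 + [1] * 30)
observed = data[labels == 1].mean() - data[labels == 0].mean()

n_perm = 10000
null = np.empty(n_perm)
for i in range(n_perm):
    perm = rng.permutation(labels)       # relabel under the null of no group effect
    null[i] = data[perm == 1].mean() - data[perm == 0].mean()

# Two-sided p-value, counting the observed statistic itself
p_value = (np.sum(np.abs(null) >= abs(observed)) + 1) / (n_perm + 1)
print(f"observed difference = {observed:.3f}, permutation p = {p_value:.4f}")
```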
A flexible count data regression model for risk analysis.
Guikema, Seth D; Coffelt, Jeremy P; Goffelt, Jeremy P
2008-02-01
In many cases, risk and reliability analyses involve estimating the probabilities of discrete events such as hardware failures and occurrences of disease or death. There is often additional information in the form of explanatory variables that can be used to help estimate the likelihood of different numbers of events in the future through the use of an appropriate regression model, such as a generalized linear model. However, existing generalized linear models (GLM) are limited in their ability to handle the types of variance structures often encountered in using count data in risk and reliability analysis. In particular, standard models cannot handle both underdispersed data (variance less than the mean) and overdispersed data (variance greater than the mean) in a single coherent modeling framework. This article presents a new GLM based on a reformulation of the Conway-Maxwell Poisson (COM) distribution that is useful for both underdispersed and overdispersed count data and demonstrates this model by applying it to the assessment of electric power system reliability. The results show that the proposed COM GLM can fit data as well as the commonly used existing models for overdispersed data sets while outperforming these commonly used models for underdispersed data sets.
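As a hedged illustration of the distribution underlying the proposed GLM, the sketch below fits an intercept-only Conway-Maxwell Poisson model by maximum likelihood; the truncation limit, parameterization, and simulated data are assumptions, not the reformulated COM GLM of the article.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import gammaln

# Minimal sketch of fitting a Conway-Maxwell-Poisson (COM) model by maximum
# likelihood; the truncation limit, parameterization, and intercept-only model
# are illustrative assumptions, not the GLM formulation of the article.
def com_logpmf(y, lam, nu, y_max=200):
    j = np.arange(y_max + 1)
    logZ = np.logaddexp.reduce(j * np.log(lam) - nu * gammaln(j + 1))
    return y * np.log(lam) - nu * gammaln(y + 1) - logZ

def neg_loglik(params, y):
    log_lam, log_nu = params            # work on the log scale to keep lam, nu > 0
    return -np.sum(com_logpmf(y, np.exp(log_lam), np.exp(log_nu)))

rng = np.random.default_rng(2)
y = rng.poisson(3.0, size=500)          # nu = 1 recovers the ordinary Poisson

res = minimize(neg_loglik, x0=[np.log(3.0), 0.0], args=(y,), method="Nelder-Mead")
lam_hat, nu_hat = np.exp(res.x)
print(f"lambda = {lam_hat:.3f}, nu = {nu_hat:.3f}  (nu < 1: overdispersed, nu > 1: underdispersed)")
```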
The Bayesian group lasso for confounded spatial data
Hefley, Trevor J.; Hooten, Mevin B.; Hanks, Ephraim M.; Russell, Robin E.; Walsh, Daniel P.
2017-01-01
Generalized linear mixed models for spatial processes are widely used in applied statistics. In many applications of the spatial generalized linear mixed model (SGLMM), the goal is to obtain inference about regression coefficients while achieving optimal predictive ability. When implementing the SGLMM, multicollinearity among covariates and the spatial random effects can make computation challenging and influence inference. We present a Bayesian group lasso prior with a single tuning parameter that can be chosen to optimize predictive ability of the SGLMM and jointly regularize the regression coefficients and spatial random effect. We implement the group lasso SGLMM using efficient Markov chain Monte Carlo (MCMC) algorithms and demonstrate how multicollinearity among covariates and the spatial random effect can be monitored as a derived quantity. To test our method, we compared several parameterizations of the SGLMM using simulated data and two examples from plant ecology and disease ecology. In all examples, problematic levels of multicollinearity occurred and influenced sampling efficiency and inference. We found that the group lasso prior resulted in roughly twice the effective sample size for MCMC samples of regression coefficients and can yield higher and less variable predictive accuracy on out-of-sample data compared to the standard SGLMM.
Onset of Plasticity via Relaxation Analysis (OPRA)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pandey, Amit; Wheeler, Robert; Shyam, Amit
In crystalline metals and alloys, plasticity occurs due to the movement of mobile dislocations, and the yield stress for engineering applications is traditionally quantified based on strain. The onset of irreversible plasticity or “yielding” is generally identified by a deviation from linearity in the stress-strain plot or by some standard convention such as 0.2% offset strain relative to the “linear elastic response”. In the present work, we introduce a new methodology for the determination of the true yield point based on stress relaxation. We show experimentally that this determination is self-consistent in nature and, as such, provides an objective observation of the very onset of plastic flow. Lastly, our designation for yielding is no longer related to the shape of the stress-strain curve but instead reflects the earliest signature of the activation of concerted irreversible dislocation motion in a test specimen under increasing load.
Entanglement-assisted quantum feedback control
NASA Astrophysics Data System (ADS)
Yamamoto, Naoki; Mikami, Tomoaki
2017-07-01
The main advantage of quantum metrology relies on the effective use of entanglement, which indeed allows us to achieve strictly better estimation performance over the standard quantum limit. In this paper, we propose an analogous method utilizing entanglement for the purpose of feedback control. The system considered is a general linear dynamical quantum system, where the control goal can be systematically formulated as a linear quadratic Gaussian control problem based on the quantum Kalman filtering method; in this setting, an entangled input probe field is effectively used to reduce the estimation error and accordingly the control cost function. In particular, we show that, in the problem of cooling an opto-mechanical oscillator, the entanglement-assisted feedback control can lower the stationary occupation number of the oscillator below the limit attainable by the controller with a coherent probe field and furthermore beats the controller with an optimized squeezed probe field.
Fusion yield: Guderley model and Tsallis statistics
NASA Astrophysics Data System (ADS)
Haubold, H. J.; Kumar, D.
2011-02-01
The reaction rate probability integral is extended from the Maxwell-Boltzmann approach to a more general approach by using the pathway model introduced by Mathai in 2005 (A pathway to matrix-variate gamma and normal densities. Linear Algebr. Appl. 396, 317-328). The extended thermonuclear reaction rate is obtained in closed form via a Meijer G-function, and this G-function is represented as a solution of a homogeneous linear differential equation. A physical model for the hydrodynamical process in a fusion plasma compressed by a laser-driven spherical shock wave is used for evaluating the fusion energy integral by integrating the extended thermonuclear reaction rate integral over the temperature. The result obtained is compared with the standard fusion yield obtained by Haubold and John in 1981 (Analytical representation of the thermonuclear reaction rate and fusion energy production in a spherical plasma shock wave. Plasma Phys. 23, 399-411). An interpretation for the pathway parameter is also given.
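To make the role of the pathway extension concrete, the sketch below numerically compares the standard non-resonant Maxwell-Boltzmann reaction-rate integrand with a pathway-modified version in dimensionless form; the Gamow parameter and the pathway parameter values are illustrative assumptions, not values from the paper.

```python
import numpy as np
from scipy.integrate import quad

# Minimal sketch comparing the standard (Maxwell-Boltzmann) non-resonant
# reaction-rate integrand with a pathway-extended version, in dimensionless
# form x = E/kT with Gamow parameter b.  The value of b and the pathway
# parameters alpha below are illustrative assumptions.
b = 5.0

def mb_integrand(x):
    return np.exp(-x - b / np.sqrt(x))

def pathway_integrand(x, alpha):
    # Mathai's pathway factor [1 + (alpha-1) x]^(-1/(alpha-1)) -> exp(-x) as alpha -> 1
    return (1.0 + (alpha - 1.0) * x) ** (-1.0 / (alpha - 1.0)) * np.exp(-b / np.sqrt(x))

I_mb, _ = quad(mb_integrand, 0.0, np.inf)
for alpha in (1.05, 1.2, 1.5):
    I_pw, _ = quad(pathway_integrand, 0.0, np.inf, args=(alpha,))
    print(f"alpha = {alpha:.2f}: pathway integral / MB integral = {I_pw / I_mb:.3f}")
```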
On the tidal effects in the motion of artificial satellites.
NASA Technical Reports Server (NTRS)
Musen, P.; Estes, R.
1972-01-01
The general perturbations in the elliptic and vectorial elements of a satellite as caused by the tidal deformations of the non-spherical Earth are developed into trigonometric series in the standard ecliptical arguments of Hill-Brown lunar theory and in the equatorial elements of the satellite. The integration of the differential equations for variation of elements of the satellite in this theory is easy because all arguments are linear or nearly linear in time. The trigonometrical expansion permits a judgment about the relative significance of the amplitudes and periods of different tidal 'waves' over a long period of time. Graphs are presented of the tidal perturbations in the elliptic elements of the BE-C satellite which illustrate long term periodic behavior. The tidal effects are clearly noticeable in the observations and their comparison with the theory permits improvement of the 'global' Love numbers for the Earth.
Onset of Plasticity via Relaxation Analysis (OPRA)
Pandey, Amit; Wheeler, Robert; Shyam, Amit; ...
2016-03-17
In crystalline metals and alloys, plasticity occurs due to the movement of mobile dislocations, and the yield stress for engineering applications is traditionally quantified based on strain. The onset of irreversible plasticity or “yielding” is generally identified by a deviation from linearity in the stress-strain plot or by some standard convention such as 0.2% offset strain relative to the “linear elastic response”. In the present work, we introduce a new methodology for the determination of the true yield point based on stress relaxation. We show experimentally that this determination is self-consistent in nature and, as such, provides an objective observation of the very onset of plastic flow. Lastly, our designation for yielding is no longer related to the shape of the stress-strain curve but instead reflects the earliest signature of the activation of concerted irreversible dislocation motion in a test specimen under increasing load.
Some problems in applications of the linear variational method
NASA Astrophysics Data System (ADS)
Pupyshev, Vladimir I.; Montgomery, H. E.
2015-09-01
The linear variational method is a standard computational method in quantum mechanics and quantum chemistry. As taught in most classes, the general guidance is to include as many basis functions as practical in the variational wave function. However, if one wishes to study the patterns of energy change that accompany changes in system parameters, such as the shape and strength of the potential energy, the problem becomes more complicated. We use one-dimensional systems with a particle in a rectangular or harmonic potential confined to an infinite rectangular box to illustrate situations where a variational calculation can give incorrect results. These situations arise when the lowest eigenvalue depends strongly on the parameters that describe the shape and strength of the potential. The numerical examples described in this work are provided as cautionary notes for practitioners of numerical variational calculations.
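A minimal sketch of such a calculation is given below: a harmonic potential confined to an infinite box, expanded in the box eigenfunctions, with the ground-state energy tracked as the basis size and force constant change; the units, box width, and force constants are illustrative assumptions.

```python
import numpy as np

# Minimal sketch of a linear variational calculation: a harmonic potential
# confined to an infinite box, expanded in the box eigenfunctions.  Units
# (hbar = m = 1), the box width L, and the force constants k are illustrative.
L = 1.0
x = np.linspace(0.0, L, 2001)

def ground_state_energy(n_basis, k):
    phi = np.sqrt(2.0 / L) * np.sin(np.outer(np.arange(1, n_basis + 1), np.pi * x / L))
    V = 0.5 * k * (x - L / 2) ** 2
    H = np.zeros((n_basis, n_basis))
    for m in range(n_basis):
        # kinetic energy of box state m+1 (diagonal in this basis)
        H[m, m] = ((m + 1) * np.pi / L) ** 2 / 2.0
        for n in range(n_basis):
            H[m, n] += np.trapz(phi[m] * V * phi[n], x)   # potential matrix element
    return np.linalg.eigvalsh(H)[0]

for k in (10.0, 1000.0):
    energies = [ground_state_energy(nb, k) for nb in (1, 2, 5, 10, 20)]
    print(f"k = {k:g}: E0 vs basis size ->", np.round(energies, 4))
```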
Robust Combining of Disparate Classifiers Through Order Statistics
NASA Technical Reports Server (NTRS)
Tumer, Kagan; Ghosh, Joydeep
2001-01-01
Integrating the outputs of multiple classifiers via combiners or meta-learners has led to substantial improvements in several difficult pattern recognition problems. In this article we investigate a family of combiners based on order statistics, for robust handling of situations where there are large discrepancies in the performance of individual classifiers. Based on a mathematical model of how the decision boundaries are affected by order statistic combiners, we derive expressions for the reductions in error expected when simple output combination methods based on the median, the maximum, and, in general, the ith order statistic are used. Furthermore, we analyze the trim and spread combiners, both based on linear combinations of the ordered classifier outputs, and show that in the presence of uneven classifier performance, they often provide substantial gains over both linear and simple order statistics combiners. Experimental results on both real-world data and standard public-domain data sets corroborate these findings.
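The sketch below illustrates simple order-statistic combiners (median, maximum, trimmed mean) on synthetic classifier outputs that include one deliberately poor classifier; the simulated ensemble and noise levels are assumptions for illustration, not the experimental setup of the article.

```python
import numpy as np

# Minimal sketch of order-statistic combiners for classifier outputs (posterior
# estimates for the positive class).  The simulated ensemble below, including
# one deliberately poor classifier, is an illustrative assumption.
rng = np.random.default_rng(3)
n_samples, n_classifiers = 2000, 5
y_true = rng.integers(0, 2, n_samples)

# Each classifier outputs a noisy posterior; classifier 4 is badly miscalibrated.
noise = rng.normal(0, 0.25, (n_samples, n_classifiers))
noise[:, 4] = rng.normal(0, 1.0, n_samples)
posteriors = np.clip(y_true[:, None] * 0.7 + 0.15 + noise, 0.0, 1.0)

combiners = {
    "mean": posteriors.mean(axis=1),
    "median": np.median(posteriors, axis=1),
    "max": posteriors.max(axis=1),
    "trimmed mean": np.sort(posteriors, axis=1)[:, 1:-1].mean(axis=1),  # drop min and max
}
for name, score in combiners.items():
    acc = np.mean((score > 0.5) == y_true)
    print(f"{name:>12s}: accuracy = {acc:.3f}")
```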
NASA Astrophysics Data System (ADS)
Kuruliuk, K. A.; Kulesh, V. P.
2016-10-01
An optical videogrammetry method using a single digital camera was developed for non-contact measurement of geometric shape parameters and of the position and motion of models and structural elements of aircraft in experimental aerodynamics. Tests using this method were conducted to measure the six components (three linear and three angular) of the actual position of a helicopter device in a wind-tunnel flow. The distance between the camera and the test object was 15 meters. It was shown in practice that, under the conditions of an aerodynamic experiment, the instrumental measurement error (standard deviation) for the angular and linear displacements of the helicopter device does not exceed 0.02° and 0.3 mm, respectively. Analysis of the results shows that at the minimum rotor thrust the deviations are systematic and generally lie within ±0.2 degrees. Deviations of the angle values grow as the rotor thrust increases.
Towards the Fundamental Quantum Limit of Linear Measurements of Classical Signals
NASA Astrophysics Data System (ADS)
Miao, Haixing; Adhikari, Rana X.; Ma, Yiqiu; Pang, Belinda; Chen, Yanbei
2017-08-01
The quantum Cramér-Rao bound (QCRB) sets a fundamental limit for the measurement of classical signals with detectors operating in the quantum regime. Using linear-response theory and the Heisenberg uncertainty relation, we derive a general condition for achieving such a fundamental limit. When applied to classical displacement measurements with a test mass, this condition leads to an explicit connection between the QCRB and the standard quantum limit that arises from a tradeoff between the measurement imprecision and quantum backaction; the QCRB can be viewed as an outcome of a quantum nondemolition measurement with the backaction evaded. Additionally, we show that the test mass is more a resource for improving measurement sensitivity than a victim of the quantum backaction, which suggests a new approach to enhancing the sensitivity of a broad class of sensors. We illustrate these points with laser interferometric gravitational-wave detectors.
Analysis of Cross-Sectional Univariate Measurements for Family Dyads Using Linear Mixed Modeling
Knafl, George J.; Dixon, Jane K.; O'Malley, Jean P.; Grey, Margaret; Deatrick, Janet A.; Gallo, Agatha M.; Knafl, Kathleen A.
2010-01-01
Outcome measurements from members of the same family are likely correlated. Such intrafamilial correlation (IFC) is an important dimension of the family as a unit but is not always accounted for in analyses of family data. This article demonstrates the use of linear mixed modeling to account for IFC in the important special case of univariate measurements for family dyads collected at a single point in time. Example analyses of data from partnered parents having a child with a chronic condition on their child's adaptation to the condition and on the family's general functioning and management of the condition are provided. Analyses of this kind are reasonably straightforward to generate with popular statistical tools. Thus, it is recommended that IFC be reported as standard practice reflecting the fact that a family dyad is more than just the aggregate of two individuals. Moreover, not accounting for IFC can affect the conclusions. PMID:19307316
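A minimal sketch of this kind of analysis, assuming a family-level random intercept and simulated dyad data, is shown below; the variable names and effect sizes are illustrative, and the intrafamilial correlation is recovered from the estimated variance components.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Minimal sketch: a random intercept for each family dyad captures the
# intrafamilial correlation (IFC) of a univariate outcome measured once per
# member.  The simulated data and variable names are illustrative assumptions.
rng = np.random.default_rng(4)
n_dyads = 150
records = []
for fam in range(n_dyads):
    fam_effect = rng.normal(0, 0.7)                  # shared family component
    for role in ("mother", "father"):
        records.append({
            "family": fam,
            "role": role,
            "outcome": 10 + (0.4 if role == "mother" else 0.0)
                       + fam_effect + rng.normal(0, 1.0),
        })
df = pd.DataFrame(records)

fit = smf.mixedlm("outcome ~ role", df, groups=df["family"]).fit()
var_family = fit.cov_re.iloc[0, 0]                   # between-family variance
var_resid = fit.scale                                # within-family (residual) variance
icc = var_family / (var_family + var_resid)
print(fit.summary())
print(f"intrafamilial correlation (ICC) = {icc:.3f}")
```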
Supervised Learning for Dynamical System Learning.
Hefny, Ahmed; Downey, Carlton; Gordon, Geoffrey J
2015-01-01
Recently there has been substantial interest in spectral methods for learning dynamical systems. These methods are popular since they often offer a good tradeoff between computational and statistical efficiency. Unfortunately, they can be difficult to use and extend in practice: e.g., they can make it difficult to incorporate prior information such as sparsity or structure. To address this problem, we present a new view of dynamical system learning: we show how to learn dynamical systems by solving a sequence of ordinary supervised learning problems, thereby allowing users to incorporate prior knowledge via standard techniques such as L 1 regularization. Many existing spectral methods are special cases of this new framework, using linear regression as the supervised learner. We demonstrate the effectiveness of our framework by showing examples where nonlinear regression or lasso let us learn better state representations than plain linear regression does; the correctness of these instances follows directly from our general analysis.
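The sketch below illustrates the core idea, assuming a simple sparse linear system: the next state is regressed on the current state with ordinary least squares and, to encode a sparsity prior, with an L1 (lasso) penalty; it is not the spectral or predictive-state formulation developed in the paper.

```python
import numpy as np
from sklearn.linear_model import LinearRegression, Lasso

# Minimal sketch of "dynamical system learning as supervised learning":
# regress the next state on the current state, optionally with an L1 penalty
# to encode a sparsity prior.  The simulated sparse linear system below is an
# illustrative assumption.
rng = np.random.default_rng(5)
d, T = 10, 2000
A_true = np.zeros((d, d))
A_true[np.arange(d), np.arange(d)] = 0.8             # sparse, stable dynamics
A_true[0, 3] = 0.15

X = np.zeros((T, d))
X[0] = rng.normal(size=d)
for t in range(T - 1):
    X[t + 1] = X[t] @ A_true.T + 0.1 * rng.normal(size=d)

inputs, targets = X[:-1], X[1:]
A_ols = LinearRegression(fit_intercept=False).fit(inputs, targets).coef_
A_lasso = Lasso(alpha=0.01, fit_intercept=False).fit(inputs, targets).coef_

for name, A_hat in (("OLS", A_ols), ("Lasso", A_lasso)):
    err = np.linalg.norm(A_hat - A_true) / np.linalg.norm(A_true)
    print(f"{name:>5s}: relative error = {err:.3f}, nonzeros = {np.sum(np.abs(A_hat) > 1e-3)}")
```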
Design of an optimal preview controller for linear discrete-time descriptor systems with state delay
NASA Astrophysics Data System (ADS)
Cao, Mengjuan; Liao, Fucheng
2015-04-01
In this paper, the linear discrete-time descriptor system with state delay is studied, and a design method for an optimal preview controller is proposed. First, by using the discrete lifting technique, the original system is transformed into a general descriptor system without state delay in form. Then, taking advantage of the first-order forward difference operator, we construct a descriptor augmented error system, including the state vectors of the lifted system, error vectors, and desired target signals. Rigorous mathematical proofs are given for the regularity, stabilisability, causal controllability, and causal observability of the descriptor augmented error system. Based on these, the optimal preview controller with preview feedforward compensation for the original system is obtained by using the standard optimal regulator theory of the descriptor system. The effectiveness of the proposed method is shown by numerical simulation.
Cosmological Ohm's law and dynamics of non-minimal electromagnetism
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hollenstein, Lukas; Jain, Rajeev Kumar; Urban, Federico R., E-mail: lukas.hollenstein@cea.fr, E-mail: jain@cp3.dias.sdu.dk, E-mail: furban@ulb.ac.be
2013-01-01
The origin of large-scale magnetic fields in cosmic structures and the intergalactic medium is still poorly understood. We explore the effects of non-minimal couplings of electromagnetism on the cosmological evolution of currents and magnetic fields. In this context, we revisit the mildly non-linear plasma dynamics around recombination that are known to generate weak magnetic fields. We use the covariant approach to obtain a fully general and non-linear evolution equation for the plasma currents and derive a generalised Ohm law valid on large scales as well as in the presence of non-minimal couplings to cosmological (pseudo-)scalar fields. Due to the sizeable conductivity of the plasma and the stringent observational bounds on such couplings, we conclude that modifications of the standard (adiabatic) evolution of magnetic fields are severely limited in these scenarios. Even at scales well beyond a Mpc, any departure from flux-freezing behaviour is inhibited.
Pinho, Teresa; Bellot-Arcís, Carlos; Montiel-Company, José María; Neves, Manuel
2015-07-01
The aim of this study was to determine the dental esthetic perception of the smile of patients with maxillary lateral incisor agenesis (MLIA); the perceptions were examined pre- and post-treatment. Esthetic determinations were made with regard to the gingival exposure in the patients' smile by orthodontists, general dentists, and laypersons. Three hundred eighty one people (80 orthodontists, 181 general dentists, 120 laypersons) rated the attractiveness of the smile in four cases before and after treatment, comprising two cases with unilateral MLIA and contralateral microdontia and two with bilateral MLIA. For each case, the buccal photograph was adjusted using a computer to apply standard lips to create high, medium, and low smiles. A numeric scale was used to measure the esthetic rating perceived by the judges. The resulting arithmetic means were compared using an ANOVA test, a linear trend, and a Student's t-test, applying a significance level of p < 0.05. The predictive capability of the variables, unilateral, or bilateral MLIA, symmetry of the treatment, gingival exposure of the smile, group, and gender were assessed using a multivariable linear regression model. In the pre- and post-treatment cases, medium smile photographs received higher scores than the same cases with high or low smiles, with significant differences between them. In all cases, orthodontists were the least-tolerant evaluation group (assigning lowest scores), followed by general dentists. In a predictive linear regression model, bilateral MLIA was the more predictive variable in pretreatment cases. The gingival exposure of the smile was a predictive variable in post-treatment cases only. The medium-height smile was considered to be more attractive. In all cases, orthodontists gave the lowest scores, followed by general dentists. Laypersons and male evaluators gave the highest scores. Symmetrical treatments scored higher than asymmetrical treatments. The gingival exposure had a significant influence on the esthetic perception of smiles in post-treatment cases. © 2014 by the American College of Prosthodontists.
NASA Astrophysics Data System (ADS)
He, Yang; Sun, Yajuan; Zhang, Ruili; Wang, Yulei; Liu, Jian; Qin, Hong
2016-09-01
We construct high-order symmetric volume-preserving methods for the relativistic dynamics of a charged particle by the splitting technique with processing. By expanding the phase space to include the time t, we give a more general construction of volume-preserving methods that can be applied to systems with time-dependent electromagnetic fields. The newly derived methods provide numerical solutions with good accuracy and conservation properties over long simulation times. Furthermore, because of the use of an accuracy-enhancing processing technique, the explicit methods attain high-order accuracy and are more efficient than methods derived from standard compositions. The results are verified by numerical experiments. Linear stability analysis of the methods shows that the high-order processed method allows larger time steps in numerical integration.
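As a hedged illustration of a volume-preserving splitting update for relativistic charged-particle dynamics, the sketch below implements the standard Boris-type rotation in dimensionless units; it is not the high-order processed methods constructed in the paper, and the fields and step size are illustrative assumptions.

```python
import numpy as np

# Minimal sketch of a volume-preserving splitting update for a relativistic
# charged particle (the standard Boris-type rotation), not the high-order
# processed methods of the paper.  Charge, mass, c, fields, and the step size
# are illustrative assumptions (dimensionless units).
q, m, c = 1.0, 1.0, 1.0
E = np.array([0.0, 0.0, 0.0])
B = np.array([0.0, 0.0, 1.0])
dt = 0.1

def boris_push(x, u, n_steps):
    """Advance position x and momentum-per-mass u = gamma*v by n_steps."""
    for _ in range(n_steps):
        u_minus = u + (q * dt / (2 * m)) * E                 # half electric kick
        gamma = np.sqrt(1.0 + np.dot(u_minus, u_minus) / c**2)
        t_vec = (q * dt / (2 * m * gamma)) * B               # magnetic rotation
        s_vec = 2 * t_vec / (1.0 + np.dot(t_vec, t_vec))
        u_plus = u_minus + np.cross(u_minus + np.cross(u_minus, t_vec), s_vec)
        u = u_plus + (q * dt / (2 * m)) * E                  # second half kick
        gamma = np.sqrt(1.0 + np.dot(u, u) / c**2)
        x = x + dt * u / gamma
    return x, u

x, u = boris_push(np.zeros(3), np.array([0.5, 0.0, 0.0]), 10000)
# |u| is conserved for E = 0, a signature of the structure-preserving update
print("final |u| =", np.linalg.norm(u))
```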
Mössler, Karin; Gold, Christian; Aßmus, Jörg; Schumacher, Karin; Calvet, Claudine; Reimer, Silke; Iversen, Gun; Schmid, Wolfgang
2017-09-21
This study examined whether the therapeutic relationship in music therapy with children with Autism Spectrum Disorder predicts generalized changes in social skills. Participants (4-7 years, N = 48) were assessed at baseline, 5 and 12 months. The therapeutic relationship, as observed from session videos, and the generalized change in social skills, as judged by independent blinded assessors and parents, were evaluated using standardized tools (Assessment of the Quality of Relationship; ADOS; SRS). Linear mixed effect models showed significant interaction effects between the therapeutic relationship and several outcomes at 5 and 12 months. We found the music therapeutic relationship to be an important predictor of the development of social skills, as well as communication and language specifically.
Generalized Multilevel Structural Equation Modeling
ERIC Educational Resources Information Center
Rabe-Hesketh, Sophia; Skrondal, Anders; Pickles, Andrew
2004-01-01
A unifying framework for generalized multilevel structural equation modeling is introduced. The models in the framework, called generalized linear latent and mixed models (GLLAMM), combine features of generalized linear mixed models (GLMM) and structural equation models (SEM) and consist of a response model and a structural model for the latent…
Trends in asthma mortality in the 0- to 4-year and 5- to 34-year age groups in Brazil
Graudenz, Gustavo Silveira; Carneiro, Dominique Piacenti; Vieira, Rodolfo de Paula
2017-01-01
Objective: To provide an update on trends in asthma mortality in Brazil for two age groups: 0-4 years and 5-34 years. Methods: Data on mortality from asthma, as defined in the International Classification of Diseases, were obtained for the 1980-2014 period from the Mortality Database maintained by the Information Technology Department of the Brazilian Unified Health Care System. To analyze time trends in standardized asthma mortality rates, we conducted an ecological time-series study, using regression models for the 0- to 4-year and 5- to 34-year age groups. Results: There was a linear trend toward a decrease in asthma mortality in both age groups, whereas there was a third-order polynomial fit in the general population. Conclusions: Although asthma mortality showed a consistent, linear decrease in individuals ≤ 34 years of age, the rate of decline was greater in the 0- to 4-year age group. The 5- to 34-year group also showed a linear decline in mortality, and the rate of that decline increased after the year 2004, when treatment with inhaled corticosteroids became more widely available. The linear decrease in asthma mortality found in both age groups contrasts with the nonlinear trend observed in the general population of Brazil. The introduction of inhaled corticosteroid use through public policies to control asthma coincided with a significant decrease in asthma mortality rates in both subsets of individuals over 5 years of age. The causes of this decline in asthma-related mortality in younger age groups continue to constitute a matter of debate. PMID:28380185
Aortic dimensions in Turner syndrome.
Quezada, Emilio; Lapidus, Jodi; Shaughnessy, Robin; Chen, Zunqiu; Silberbach, Michael
2015-11-01
In Turner syndrome, linear growth is less than in the general population. Consequently, to assess stature in Turner syndrome, condition-specific comparators have been employed. Similar reference curves for cardiac structures in Turner syndrome are currently unavailable. Accurate assessment of the aorta is particularly critical in Turner syndrome because aortic dissection and rupture occur more frequently than in the general population. Furthermore, comparing the shorter Turner syndrome population against references calculated from the taller general population can lead to over-estimation of aortic size, causing stigmatization, medicalization, and potentially over-treatment. We used echocardiography to measure aortic diameters at eight levels of the thoracic aorta in 481 healthy girls and women with Turner syndrome who ranged in age from two to seventy years. Univariate and multivariate linear regression analyses were performed to assess the influence of karyotype, age, body mass index, bicuspid aortic valve, blood pressure, and history of renal disease, thyroid disease, or growth hormone therapy. Because only bicuspid aortic valve was found to independently affect aortic size, subjects with bicuspid aortic valve were excluded from the analysis. Regression equations for aortic diameters were calculated and Z-scores corresponding to 1, 2, and 3 standard deviations from the mean were plotted against body surface area. The information presented here will allow clinicians and other caregivers to calculate aortic Z-scores using a Turner-based reference population. © 2015 Wiley Periodicals, Inc.
Generating Linear Equations Based on Quantitative Reasoning
ERIC Educational Resources Information Center
Lee, Mi Yeon
2017-01-01
The Common Core's Standards for Mathematical Practice encourage teachers to develop their students' ability to reason abstractly and quantitatively by helping students make sense of quantities and their relationships within problem situations. The seventh-grade content standards include objectives pertaining to developing linear equations in…
Doubly robust estimation of generalized partial linear models for longitudinal data with dropouts.
Lin, Huiming; Fu, Bo; Qin, Guoyou; Zhu, Zhongyi
2017-12-01
We develop a doubly robust estimation of generalized partial linear models for longitudinal data with dropouts. Our method extends the highly efficient aggregate unbiased estimating function approach proposed in Qu et al. (2010) to a doubly robust one in the sense that under missing at random (MAR), our estimator is consistent when either the linear conditional mean condition is satisfied or a model for the dropout process is correctly specified. We begin with a generalized linear model for the marginal mean, and then move forward to a generalized partial linear model, allowing for nonparametric covariate effect by using the regression spline smoothing approximation. We establish the asymptotic theory for the proposed method and use simulation studies to compare its finite sample performance with that of Qu's method, the complete-case generalized estimating equation (GEE) and the inverse-probability weighted GEE. The proposed method is finally illustrated using data from a longitudinal cohort study. © 2017, The International Biometric Society.
Chiu, Huai-Hsuan; Liao, Hsiao-Wei; Shao, Yu-Yun; Lu, Yen-Shen; Lin, Ching-Hung; Tsai, I-Lin; Kuo, Ching-Hua
2018-08-17
Monoclonal antibody (mAb) drugs have generated much interest in recent years for treating various diseases. Immunoglobulin G (IgG) represents a high percentage of the mAb drugs that have been approved by the Food and Drug Administration (FDA). To facilitate therapeutic drug monitoring and pharmacokinetic/pharmacodynamic studies, we developed a general liquid chromatography-tandem mass spectrometry (LC-MS/MS) method to quantify the concentration of IgG-based mAbs in human plasma. Three IgG-based drugs (bevacizumab, nivolumab and pembrolizumab) were selected to demonstrate our method. Protein G beads were used for sample pretreatment due to their universal ability to trap IgG-based drugs. Surrogate peptides obtained after trypsin digestion were quantified by LC-MS/MS. To correct for sample preparation errors and matrix effects that occur during LC-MS/MS analysis, we used a two-internal-standard (IS) approach comprising the IgG-based drug IS tocilizumab and a post-column infused IS. Using two internal standards was found to effectively improve quantification accuracy, which was within 15% for all mAb drugs tested at three different concentrations. This general method was validated in terms of its precision, accuracy, linearity and sensitivity for the three demonstration mAb drugs. The successful application of the method to clinical samples demonstrated its applicability in clinical analysis. It is anticipated that this general method could be applied to other mAb-based drugs for use in precision medicine and clinical studies. Copyright © 2018 Elsevier B.V. All rights reserved.
Comín-Colet, Josep; Anguita, Manuel; Formiga, Francesc; Almenar, Luis; Crespo-Leiro, María G; Manzano, Luis; Muñiz, Javier; Chaves, José; de Frutos, Trinidad; Enjuanes, Cristina
2016-03-01
Although heart failure negatively affects the health-related quality of life of Spanish patients there is little information on the clinical factors associated with this issue. Cross-sectional multicenter study of health-related quality of life. A specific questionnaire (Kansas City Cardiomyopathy Questionnaire) and a generic questionnaire (EuroQoL-5D) were administered to 1037 consecutive outpatients with systolic heart failure. Most patients with poor quality of life had a worse prognosis and increased severity of heart failure. Mobility was more limited and rates of pain/discomfort and anxiety/depression were higher in the study patients than in the general population and patients with other chronic conditions. The scores on both questionnaires were very highly correlated (Pearson r =0.815; P < .001). Multivariable linear regression showed that being older (standardized β=-0.2; P=.03), female (standardized β=-10.3; P < .001), having worse functional class (standardized β=-20.4; P < .001), a higher Charlson comorbidity index (standardized β=-1.2; P=.005), and recent hospitalization for heart failure (standardized β=6.28; P=.006) were independent predictors of worse health-related quality of life. Patients with heart failure have worse quality of life than the general Spanish population and patients with other chronic diseases. Female sex, being older, comorbidity, advanced symptoms, and recent hospitalization are determinant factors in health-related quality of life in these patients. Copyright © 2015 Sociedad Española de Cardiología. Published by Elsevier España, S.L.U. All rights reserved.
The well-posedness of the Kuramoto-Sivashinsky equation
NASA Technical Reports Server (NTRS)
Tadmor, E.
1984-01-01
The Kuramoto-Sivashinsky equation arises in a variety of applications, among which are modeling reaction-diffusion systems, flame propagation, and viscous flow problems. It is considered here as a prototype of the larger class of generalized Burgers equations, which consist of a quadratic nonlinearity and an arbitrary linear parabolic part. It is shown that such equations are well posed, thus admitting a unique smooth solution, continuously dependent on its initial data. As an attractive alternative to standard energy methods, existence and stability are derived in this case by patching in the large short-time solutions without loss of derivatives.
The well-posedness of the Kuramoto-Sivashinsky equation
NASA Technical Reports Server (NTRS)
Tadmor, E.
1986-01-01
The Kuramoto-Sivashinsky equation arises in a variety of applications, among which are modeling reaction-diffusion systems, flame propagation, and viscous flow problems. It is considered here as a prototype of the larger class of generalized Burgers equations, which consist of a quadratic nonlinearity and an arbitrary linear parabolic part. It is shown that such equations are well posed, thus admitting a unique smooth solution, continuously dependent on its initial data. As an attractive alternative to standard energy methods, existence and stability are derived in this case by patching in the large short-time solutions without 'loss of derivatives'.
Viscoelastic analysis of adhesively bonded joints
NASA Technical Reports Server (NTRS)
Delale, F.; Erdogan, F.
1980-01-01
An adhesively bonded lap joint is analyzed by assuming that the adherends are elastic and the adhesive is linearly viscoelastic. After formulating the general problem, a specific example of two identical adherends bonded through a three-parameter viscoelastic solid adhesive is considered. The standard Laplace transform technique is used to solve the problem. The stress distribution in the adhesive layer is calculated for three different external loads, namely membrane loading, bending, and transverse shear loading. The results indicate that the peak value of the normal stress in the adhesive is not only consistently higher than the corresponding shear stress but also decays more slowly.
Projection of two biphoton qutrits onto a maximally entangled state.
Halevy, A; Megidish, E; Shacham, T; Dovrat, L; Eisenberg, H S
2011-04-01
Bell state measurements, in which two quantum bits are projected onto a maximally entangled state, are an essential component of quantum information science. We propose and experimentally demonstrate the projection of two quantum systems with three states (qutrits) onto a generalized maximally entangled state. Each qutrit is represented by the polarization of a pair of indistinguishable photons, a biphoton. The projection is a joint measurement on both biphotons using standard linear optics elements. This demonstration enables the realization of quantum information protocols with qutrits, such as teleportation and entanglement swapping. © 2011 American Physical Society
Estimation of group means when adjusting for covariates in generalized linear models.
Qu, Yongming; Luo, Junxiang
2015-01-01
Generalized linear models are commonly used to analyze categorical data such as binary, count, and ordinal outcomes. Adjusting for important prognostic factors or baseline covariates in generalized linear models may improve estimation efficiency. The model-based mean for a treatment group produced by most software packages estimates the response at the mean covariate, not the mean response for this treatment group in the studied population. Although this is not an issue for linear models, the model-based group mean estimates in generalized linear models could be seriously biased for the true group means. We propose a new method to estimate the group mean consistently, together with the corresponding variance estimation. Simulations showed that the proposed method produces an unbiased estimator for the group means and provides the correct coverage probability. The proposed method was applied to analyze hypoglycemia data from clinical trials in diabetes. Copyright © 2014 John Wiley & Sons, Ltd.
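The sketch below illustrates the distinction for a logistic model, assuming simulated trial data: the response evaluated at the mean covariate differs from the marginal (population-averaged) group mean obtained by averaging fitted probabilities over subjects; the proposed consistent estimator itself is not reproduced here.

```python
import numpy as np
import statsmodels.api as sm
from scipy.special import expit

# Minimal sketch of the issue for a logistic model: the response evaluated at
# the mean covariate differs from the mean of the responses over the covariate
# distribution.  The simulated trial data are illustrative assumptions.
rng = np.random.default_rng(6)
n = 4000
treat = rng.integers(0, 2, n)
baseline = rng.normal(0, 1.5, n)                       # prognostic covariate
p_true = expit(-0.5 + 1.0 * treat + 1.2 * baseline)
y = rng.binomial(1, p_true)

X = sm.add_constant(np.column_stack([treat, baseline]))
fit = sm.GLM(y, X, family=sm.families.Binomial()).fit()

for grp in (0, 1):
    mask = treat == grp
    # "model-based" mean: plug in the mean covariate for this group
    x_bar = np.array([1.0, grp, baseline[mask].mean()])
    at_mean_cov = expit(x_bar @ fit.params)
    # marginal group mean: average the fitted probabilities over subjects
    marginal = expit(X[mask] @ fit.params).mean()
    print(f"group {grp}: response at mean covariate = {at_mean_cov:.3f}, "
          f"mean response = {marginal:.3f}, observed = {y[mask].mean():.3f}")
```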
Computer Program For Linear Algebra
NASA Technical Reports Server (NTRS)
Krogh, F. T.; Hanson, R. J.
1987-01-01
Collection of routines provided for basic vector operations. The Basic Linear Algebra Subprograms (BLAS) library is a collection of FORTRAN-callable routines that employ standard techniques to perform the basic operations of numerical linear algebra.
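A minimal sketch of calling BLAS routines through SciPy's wrappers (rather than directly from FORTRAN) is shown below; the vectors and matrices are illustrative.

```python
import numpy as np
from scipy.linalg import blas

# Minimal sketch of calling standard BLAS routines through SciPy's wrappers;
# the vectors and matrices below are illustrative.
x = np.array([1.0, 2.0, 3.0])
y = np.array([10.0, 20.0, 30.0])

# Level-1: y <- a*x + y (daxpy)
y_new = blas.daxpy(x, y, a=2.0)

# Level-3: C <- alpha*A*B (dgemm)
A = np.arange(6.0).reshape(2, 3)
B = np.arange(12.0).reshape(3, 4)
C = blas.dgemm(alpha=1.0, a=A, b=B)

print(y_new)                    # [12. 24. 36.]
print(np.allclose(C, A @ B))    # True
```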
NASA standard: Trend analysis techniques
NASA Technical Reports Server (NTRS)
1990-01-01
Descriptive and analytical techniques for NASA trend analysis applications are presented in this standard. Trend analysis is applicable in all organizational elements of NASA connected with, or supporting, developmental/operational programs. This document should be consulted for any data analysis activity requiring the identification or interpretation of trends. Trend analysis is neither a precise term nor a circumscribed methodology: it generally connotes quantitative analysis of time-series data. For NASA activities, the appropriate and applicable techniques include descriptive and graphical statistics, and the fitting or modeling of data by linear, quadratic, and exponential models. Usually, but not always, the data is time-series in nature. Concepts such as autocorrelation and techniques such as Box-Jenkins time-series analysis would only rarely apply and are not included in this document. The basic ideas needed for qualitative and quantitative assessment of trends along with relevant examples are presented.
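A minimal sketch of the named fitting techniques, assuming a simulated time series, is given below: linear and quadratic trends via polynomial least squares and an exponential trend via nonlinear least squares.

```python
import numpy as np
from scipy.optimize import curve_fit

# Minimal sketch of the fitting techniques named in the standard: linear,
# quadratic, and exponential trend models for time-series data.  The simulated
# anomaly-count series is an illustrative assumption.
t = np.arange(24, dtype=float)                     # e.g., months
rng = np.random.default_rng(7)
y = 5.0 * np.exp(0.08 * t) + rng.normal(0, 1.5, t.size)

lin_coef = np.polyfit(t, y, 1)                     # linear trend
quad_coef = np.polyfit(t, y, 2)                    # quadratic trend
exp_par, _ = curve_fit(lambda tt, a, b: a * np.exp(b * tt), t, y, p0=(1.0, 0.05))

for name, pred in (("linear", np.polyval(lin_coef, t)),
                   ("quadratic", np.polyval(quad_coef, t)),
                   ("exponential", exp_par[0] * np.exp(exp_par[1] * t))):
    rmse = np.sqrt(np.mean((y - pred) ** 2))
    print(f"{name:>12s} model: RMSE = {rmse:.2f}")
```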
Firth, Joseph; Stubbs, Brendon; Vancampfort, Davy; Firth, Josh A; Large, Matthew; Rosenbaum, Simon; Hallgren, Mats; Ward, Philip B; Sarris, Jerome; Yung, Alison R
2018-06-06
Handgrip strength may provide an easily-administered marker of cognitive functional status. However, further population-scale research examining relationships between grip strength and cognitive performance across multiple domains is needed. Additionally, relationships between grip strength and cognitive functioning in people with schizophrenia, who frequently experience cognitive deficits, has yet to be explored. Baseline data from the UK Biobank (2007-2010) was analyzed; including 475397 individuals from the general population, and 1162 individuals with schizophrenia. Linear mixed models and generalized linear mixed models were used to assess the relationship between grip strength and 5 cognitive domains (visual memory, reaction time, reasoning, prospective memory, and number memory), controlling for age, gender, bodyweight, education, and geographical region. In the general population, maximal grip strength was positively and significantly related to visual memory (coefficient [coeff] = -0.1601, standard error [SE] = 0.003), reaction time (coeff = -0.0346, SE = 0.0004), reasoning (coeff = 0.2304, SE = 0.0079), number memory (coeff = 0.1616, SE = 0.0092), and prospective memory (coeff = 0.3486, SE = 0.0092: all P < .001). In the schizophrenia sample, grip strength was strongly related to visual memory (coeff = -0.155, SE = 0.042, P < .001) and reaction time (coeff = -0.049, SE = 0.009, P < .001), while prospective memory approached statistical significance (coeff = 0.233, SE = 0.132, P = .078), and no statistically significant association was found with number memory and reasoning (P > .1). Grip strength is significantly associated with cognitive functioning in the general population and individuals with schizophrenia, particularly for working memory and processing speed. Future research should establish directionality, examine if grip strength also predicts functional and physical health outcomes in schizophrenia, and determine whether interventions which improve muscular strength impact on cognitive and real-world functioning.
Serum osteoprotegerin and renal function in the general population: the Tromsø Study.
Vik, Anders; Brodin, Ellen E; Mathiesen, Ellisiv B; Brox, Jan; Jørgensen, Lone; Njølstad, Inger; Brækkan, Sigrid K; Hansen, John-Bjarne
2017-02-01
Serum osteoprotegerin (OPG) is elevated in patients with chronic kidney disease (CKD) and increases with decreasing renal function. However, there are limited data regarding the association between OPG and renal function in the general population. The aim of the present study was to explore the relation between serum OPG and renal function in subjects recruited from the general population. We conducted a cross-sectional study with 6689 participants recruited from the general population in Tromsø, Norway. Estimated glomerular filtration rate (eGFR) was calculated using the Chronic Kidney Disease Epidemiology Collaboration equations. OPG was modelled both as a continuous and categorical variable. General linear models and linear regression with adjustment for possible confounders were used to study the association between OPG and eGFR. Analyses were stratified by the median age, as serum OPG and age displayed a significant interaction on eGFR. In participants ≤62.2 years with normal renal function (eGFR ≥90 mL/min/1.73 m 2 ) eGFR increased by 0.35 mL/min/1.73 m 2 (95% CI 0.13-0.56) per 1 standard deviation (SD) increase in serum OPG after multiple adjustment. In participants older than the median age with impaired renal function (eGFR <90 mL/min/1.73 m 2 ), eGFR decreased by 1.54 (95% CI -2.06 to -1.01) per 1 SD increase in serum OPG. OPG was associated with an increased eGFR in younger subjects with normal renal function and with a decreased eGFR in older subjects with reduced renal function. Our findings imply that the association between OPG and eGFR varies with age and renal function.
Palmer, Tom M; Holmes, Michael V; Keating, Brendan J; Sheehan, Nuala A
2017-11-01
Mendelian randomization studies use genotypes as instrumental variables to test for and estimate the causal effects of modifiable risk factors on outcomes. Two-stage residual inclusion (TSRI) estimators have been used when researchers are willing to make parametric assumptions. However, researchers are currently reporting uncorrected or heteroscedasticity-robust standard errors for these estimates. We compared several different forms of the standard error for linear and logistic TSRI estimates in simulations and in real-data examples. Among others, we consider standard errors modified from the approach of Newey (1987), Terza (2016), and bootstrapping. In our simulations Newey, Terza, bootstrap, and corrected 2-stage least squares (in the linear case) standard errors gave the best results in terms of coverage and type I error. In the real-data examples, the Newey standard errors were 0.5% and 2% larger than the unadjusted standard errors for the linear and logistic TSRI estimators, respectively. We show that TSRI estimators with modified standard errors have correct type I error under the null. Researchers should report TSRI estimates with modified standard errors instead of reporting unadjusted or heteroscedasticity-robust standard errors. © The Author(s) 2017. Published by Oxford University Press on behalf of the Johns Hopkins Bloomberg School of Public Health.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yavari, M., E-mail: yavari@iaukashan.ac.ir
2016-06-15
We generalize the results of Nesterenko [13, 14] and Gogilidze and Surovtsev [15] for DNA structures. Using the generalized Hamiltonian formalism, we investigate solutions of the equilibrium shape equations for the linear free energy model.
Koda, Shin-ichi
2015-05-28
It has been shown by some existing studies that some linear dynamical systems defined on a dendritic network are equivalent to those defined on a set of one-dimensional networks in special cases and this transformation to the simple picture, which we call linear chain (LC) decomposition, has a significant advantage in understanding properties of dendrimers. In this paper, we expand the class of LC decomposable system with some generalizations. In addition, we propose two general sufficient conditions for LC decomposability with a procedure to systematically realize the LC decomposition. Some examples of LC decomposable linear dynamical systems are also presented with their graphs. The generalization of the LC decomposition is implemented in the following three aspects: (i) the type of linear operators; (ii) the shape of dendritic networks on which linear operators are defined; and (iii) the type of symmetry operations representing the symmetry of the systems. In the generalization (iii), symmetry groups that represent the symmetry of dendritic systems are defined. The LC decomposition is realized by changing the basis of a linear operator defined on a dendritic network into bases of irreducible representations of the symmetry group. The achievement of this paper makes it easier to utilize the LC decomposition in various cases. This may lead to a further understanding of the relation between structure and functions of dendrimers in future studies.
Molavi Tabrizi, Amirhossein; Goossens, Spencer; Mehdizadeh Rahimi, Ali; Cooper, Christopher D; Knepley, Matthew G; Bardhan, Jaydeep P
2017-06-13
We extend the linearized Poisson-Boltzmann (LPB) continuum electrostatic model for molecular solvation to address charge-hydration asymmetry. Our new solvation-layer interface condition (SLIC)/LPB corrects for first-shell response by perturbing the traditional continuum-theory interface conditions at the protein-solvent and the Stern-layer interfaces. We also present a GPU-accelerated treecode implementation capable of simulating large proteins, and our results demonstrate that the new model exhibits significant accuracy improvements over traditional LPB models, while reducing the number of fitting parameters from dozens (atomic radii) to just five parameters, which have physical meanings related to first-shell water behavior at an uncharged interface. In particular, atom radii in the SLIC model are not optimized but uniformly scaled from their Lennard-Jones radii. Compared to explicit-solvent free-energy calculations of individual atoms in small molecules, SLIC/LPB is significantly more accurate than standard parametrizations (RMS error 0.55 kcal/mol for SLIC, compared to RMS error of 3.05 kcal/mol for standard LPB). On parametrizing the electrostatic model with a simple nonpolar component for total molecular solvation free energies, our model predicts octanol/water transfer free energies with an RMS error 1.07 kcal/mol. A more detailed assessment illustrates that standard continuum electrostatic models reproduce total charging free energies via a compensation of significant errors in atomic self-energies; this finding offers a window into improving the accuracy of Generalized-Born theories and other coarse-grained models. Most remarkably, the SLIC model also reproduces positive charging free energies for atoms in hydrophobic groups, whereas standard PB models are unable to generate positive charging free energies regardless of the parametrized radii. The GPU-accelerated solver is freely available online, as is a MATLAB implementation.
Fukuda, Hiromu; Maunder, Mark N.
2017-01-01
Catch-per-unit-effort (CPUE) is often the main piece of information used in fisheries stock assessment; however, the catch and effort data that are traditionally compiled from commercial logbooks can be incomplete or unreliable due to many reasons. Pacific bluefin tuna (PBF) is a seasonal target species in the Taiwanese longline fishery. Since 2010, detailed catch information for each PBF has been made available through a catch documentation scheme. However, previously, only market landing data with a low coverage of logbooks were available. Therefore, several nontraditional procedures were performed to reconstruct catch and effort data from many alternative data sources not directly obtained from fishers for 2001–2015: (1) Estimating the catch number from the landing weight for 2001–2003, for which the catch number information was incomplete, based on Monte Carlo simulation; (2) deriving fishing days for 2007–2009 from voyage data recorder data, based on a newly developed algorithm; and (3) deriving fishing days for 2001–2006 from vessel trip information, based on linear relationships between fishing and at-sea days. Subsequently, generalized linear mixed models were developed with the delta-lognormal assumption for standardizing the CPUE calculated from the reconstructed data, and three-stage model evaluation was performed using (1) Akaike and Bayesian information criteria to determine the most favorable variable composition of standardization models, (2) overall R2 via cross-validation to compare fitting performance between area-separated and area-combined standardizations, and (3) system-based testing to explore the consistency of the standardized CPUEs with auxiliary data in the PBF stock assessment model. The last stage of evaluation revealed high consistency among the data, thus demonstrating improvements in data reconstruction for estimating the abundance index, and consequently the stock assessment. PMID:28968434
Feng, L; Hua, C; Sun, H; Qin, L-Y; Niu, P-P; Guo, Z-N; Yang, Y
2018-01-01
To investigate the association between serum uric acid level and the presence and progression of carotid atherosclerosis in Chinese individuals aged 75 years or older. Case-control study. In a teaching hospital. Five hundred and sixty-four elderly individuals (75 years or above) who underwent general health screening in our hospital were enrolled. The detailed carotid ultrasound results, physical examination information, medical history, and laboratory test results including serum uric acid level were recorded; these data were used to analyze the relationship between serum uric acid level and carotid atherosclerosis. Subjects who underwent a second carotid ultrasound 1.5-2 years later were then identified to analyze the relationship between serum uric acid and the progression of carotid atherosclerosis. A total of 564 subjects were included; carotid plaque was found in 482 (85.5%) individuals. Logistic regression showed that subjects with elevated serum uric acid (expressed per 1 standard deviation change) had a significantly higher incidence of carotid plaque (odds ratio, 1.37; 95% confidence interval, 1.07-1.75; P = 0.012) after controlling for other factors. A total of 236 subjects underwent the follow-up carotid ultrasound. Linear regression showed that serum uric acid level (expressed per 1 standard deviation change; 1 standard deviation = 95.5 μmol/L) was significantly associated with the percentage change of plaque score (P = 0.008). Multivariable linear regression showed that a 1 standard deviation increase in serum uric acid level corresponded to an expected 0.448% increase in plaque score (P = 0.023). An elevated serum uric acid level may be independently and significantly associated with the presence and progression of carotid atherosclerosis in Chinese individuals aged 75 years or older.
Bennett, Herbert S.; Andres, Howard; Pellegrino, Joan; Kwok, Winnie; Fabricius, Norbert; Chapin, J. Thomas
2009-01-01
In 2008, the National Institute of Standards and Technology and Energetics Incorporated collaborated with the International Electrotechnical Commission Technical Committee 113 (IEC TC 113) on nano-electrotechnologies to survey members of the international nanotechnologies community about priorities for standards and measurements to accelerate innovations in nano-electrotechnologies. In this paper, we analyze the 459 survey responses from 45 countries as one means to begin building a consensus on a framework leading to nano-electrotechnologies standards development by standards organizations and national measurement institutes. The distributions of priority rankings from all 459 respondents are such that there are perceived distinctions with statistical confidence between the relative international priorities for the several items ranked in each of the following five Survey category types: 1) Nano-electrotechnology Properties, 2) Nano-electrotechnology Taxonomy: Products, 3) Nano-electrotechnology Taxonomy: Cross-Cutting Technologies, 4) IEC General Discipline Areas, and 5) Stages of the Linear Economic Model. The global consensus prioritizations for ranked items in the above five category types suggest that the IEC TC 113 should focus initially on standards and measurements for electronic and electrical properties of sensors and fabrication tools that support performance assessments of nano-technology enabled sub-assemblies used in energy, medical, and computer products. PMID:27504216
The virialization density of peaks with general density profiles under spherical collapse
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rubin, Douglas; Loeb, Abraham, E-mail: dsrubin@physics.harvard.edu, E-mail: aloeb@cfa.harvard.edu
2013-12-01
We calculate the non-linear virialization density, Δ_c, of halos under spherical collapse from peaks with an arbitrary initial and final density profile. This is in contrast to the standard calculation of Δ_c, which assumes top-hat profiles. Given our formalism, the non-linear halo density can be calculated once the shape of the initial peak's density profile and the shape of the virialized halo's profile are provided. We solve for Δ_c for halos in an Einstein-de Sitter and a ΛCDM universe. As examples, we consider power-law initial profiles as well as spherically averaged peak profiles calculated from the statistics of a Gaussian random field. We find that, depending on the profiles used, Δ_c is smaller by a factor of a few to as much as a factor of 10 as compared to the density given by the standard calculation (≈ 200). Using our results, we show that, for halo finding algorithms that identify halos through an over-density threshold, the halo mass function measured from cosmological simulations can be enhanced at all halo masses by a factor of a few. This difference could be important when using numerical simulations to assess the validity of analytic models of the halo mass function.
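For reference, the "standard calculation" that yields Δ_c ≈ 200 is the top-hat spherical-collapse result; the Einstein-de Sitter value and a widely used fitting formula for flat ΛCDM (Bryan & Norman 1998) are, in my notation,

```latex
% Top-hat spherical-collapse virialization density: Einstein-de Sitter case and a
% common fitting formula for flat \Lambda CDM, with x = \Omega_m(z) - 1.
\Delta_c^{\rm EdS} = 18\pi^2 \simeq 178, \qquad
\Delta_c^{\Lambda{\rm CDM}} \simeq 18\pi^2 + 82x - 39x^2 .
```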
Pütter, Carolin; Pechlivanis, Sonali; Nöthen, Markus M; Jöckel, Karl-Heinz; Wichmann, Heinz-Erich; Scherag, André
2011-01-01
Genome-wide association studies have identified robust associations between single nucleotide polymorphisms and complex traits. As the proportion of phenotypic variance explained is still limited for most of the traits, larger and larger meta-analyses are being conducted to detect additional associations. Here we investigate the impact of the study design and the underlying assumption about the true genetic effect in a bimodal mixture situation on the power to detect associations. We performed simulations of quantitative phenotypes analysed by standard linear regression and dichotomized case-control data sets from the extremes of the quantitative trait analysed by standard logistic regression. Using linear regression, markers with an effect in the extremes of the traits were almost undetectable, whereas analysing extremes by case-control design had superior power even for much smaller sample sizes. Two real data examples are provided to support our theoretical findings and to explore our mixture and parameter assumption. Our findings support the idea to re-analyse the available meta-analysis data sets to detect new loci in the extremes. Moreover, our investigation offers an explanation for discrepant findings when analysing quantitative traits in the general population and in the extremes. Copyright © 2011 S. Karger AG, Basel.
Skriver, Mette Vinther; Væth, Michael; Støvring, Henrik
2018-01-01
The standardized mortality ratio (SMR) is a widely used measure. A recent methodological study provided an accurate approximate relationship between an SMR and difference in lifetime expectancies. This study examines the usefulness of the theoretical relationship, when comparing historic mortality data in four Scandinavian populations. For Denmark, Finland, Norway and Sweden, data on mortality every fifth year in the period 1950 to 2010 were obtained. Using 1980 as the reference year, SMRs and difference in life expectancy were calculated. The assumptions behind the theoretical relationship were examined graphically. The theoretical relationship predicts a linear association with a slope, [Formula: see text], between log(SMR) and difference in life expectancies, and the theoretical prediction and calculated differences in lifetime expectancies were compared. We examined the linear association both for life expectancy at birth and at age 30. All analyses were done for females, males and the total population. The approximate relationship provided accurate predictions of actual differences in lifetime expectancies. The accuracy of the predictions was better when age was restricted to above 30, and improved if the changes in mortality rate were close to a proportional change. Slopes of the linear relationship were generally around 9 for females and 10 for males. The theoretically derived relationship between SMR and difference in life expectancies provides an accurate prediction for comparing populations with approximately proportional differences in mortality, and was relatively robust. The relationship may provide a useful prediction of differences in lifetime expectancies, which can be more readily communicated and understood.
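A back-of-the-envelope illustration of the reported relation, using the empirical slopes quoted above (about 9 for females and 10 for males); the slope symbol in the abstract did not render, so the value and the sign convention used here are assumptions rather than the authors' exact formula.

```python
import math

def predicted_le_difference(smr: float, slope: float = 10.0) -> float:
    """Approximate difference in life expectancy (years) implied by an SMR,
    using the linear relation Delta_LE ~ slope * log(SMR) described above.
    Sign convention assumed: SMR > 1 implies a shorter life expectancy."""
    return -slope * math.log(smr)

# Example: a population with 20% higher mortality than the reference (SMR = 1.2)
# is predicted to lose roughly 1.8 years of life expectancy.
print(round(predicted_le_difference(1.2, slope=10.0), 2))  # about -1.82
```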
NASA Astrophysics Data System (ADS)
Majidinejad, A.; Zafarani, H.; Vahdani, S.
2018-05-01
The North Tehran fault (NTF) is known to be one of the most drastic sources of seismic hazard for the city of Tehran. In this study, we provide broad-band (0-10 Hz) ground motions for the city resulting from a probable M7.2 earthquake on the NTF. Low-frequency motions (0-2 Hz) are obtained from spectral element dynamic simulation of 17 scenario models. High-frequency (2-10 Hz) motions are calculated with a physics-based method based on S-to-S backscattering theory. Broad-band ground motions at the bedrock level show amplifications, both at low and high frequencies, due to the existence of the deep Tehran basin in the vicinity of the NTF. By employing soil profiles obtained from regional studies, the effect of shallow soil layers on broad-band ground motions is investigated by both linear and non-linear analyses. While the linear soil response overestimates ground-motion prediction equations, the non-linear response yields plausible results within one standard deviation of the empirical relationships. Average peak ground accelerations (PGAs) at the northern, central and southern parts of the city are estimated at about 0.93, 0.59 and 0.4 g, respectively. Increased damping caused by non-linear soil behaviour reduces the linear soil responses considerably, in particular at frequencies above 3 Hz. Non-linear deamplification reduces linear spectral accelerations by up to 63 per cent at stations above soft thick sediments. By performing more general analyses, which exclude source-to-site effects on stations, a correction function is proposed for typical site classes of Tehran. Parameters for this function, which reduces the linear soil response to account for non-linear soil deamplification, are provided for various frequencies in the range of engineering interest. In addition to the fully non-linear analyses, equivalent-linear calculations were also conducted; their comparison revealed that the equivalent-linear method is adequate for large peaks and low frequencies, but falls short for small to medium peaks and for motions above 3 Hz.
Acceleration of Linear Finite-Difference Poisson-Boltzmann Methods on Graphics Processing Units.
Qi, Ruxi; Botello-Smith, Wesley M; Luo, Ray
2017-07-11
Electrostatic interactions play crucial roles in biophysical processes such as protein folding and molecular recognition. Poisson-Boltzmann equation (PBE)-based models have emerged as widely used tools for modeling these important processes. Though great efforts have been put into developing efficient PBE numerical models, challenges still remain due to the high dimensionality of typical biomolecular systems. In this study, we implemented and analyzed commonly used linear PBE solvers for the ever-improving graphics processing units (GPUs) for biomolecular simulations, including both standard and preconditioned conjugate gradient (CG) solvers with several alternative preconditioners. Our implementation utilizes the standard Nvidia CUDA libraries cuSPARSE, cuBLAS, and CUSP. Extensive tests show that good numerical accuracy can be achieved given that single precision is often used for numerical applications on GPU platforms. The optimal GPU performance was observed with the Jacobi-preconditioned CG solver, with a significant speedup over the standard CG solver on CPU in our diversified test cases. Our analysis further shows that different matrix storage formats also considerably affect the efficiency of different linear PBE solvers on GPU, with the diagonal format best suited for our standard finite-difference linear systems. Further efficiency may be possible with matrix-free operations and an integrated grid stencil setup specifically tailored for the banded matrices in PBE-specific linear systems.
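The Jacobi-preconditioned CG iteration that performed best in this study is sketched below in plain NumPy/SciPy; the actual implementation runs on GPU via cuSPARSE/cuBLAS/CUSP with a DIA-format matrix, so this CPU version only illustrates the algorithm, and the 1D Laplacian merely stands in for the banded finite-difference PBE system.

```python
import numpy as np
import scipy.sparse as sp

def jacobi_pcg(A, b, tol=1e-6, max_iter=1000):
    """Jacobi (diagonal) preconditioned conjugate gradient for a SPD sparse matrix A."""
    x = np.zeros_like(b)
    r = b - A @ x
    Minv = 1.0 / A.diagonal()          # Jacobi preconditioner: M = diag(A)
    z = Minv * r
    p = z.copy()
    rz = r @ z
    for k in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) <= tol * np.linalg.norm(b):
            break
        z = Minv * r
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x, k

# Example: a 1D finite-difference Laplacian as a stand-in for the PBE linear system.
n = 200
A = sp.diags([-1, 2, -1], [-1, 0, 1], shape=(n, n), format="csr")
b = np.ones(n)
x, iters = jacobi_pcg(A, b)
```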
NASA Technical Reports Server (NTRS)
Stolzer, Alan J.; Halford, Carl
2007-01-01
In a previous study, multiple regression techniques were applied to Flight Operations Quality Assurance-derived data to develop parsimonious model(s) for fuel consumption on the Boeing 757 airplane. The present study examined several data mining algorithms, including neural networks, on the fuel consumption problem and compared them to the multiple regression results obtained earlier. Using regression methods, parsimonious models were obtained that explained approximately 85% of the variation in fuel flow. In general, data mining methods were more effective in predicting fuel consumption. Classification and Regression Tree methods reported correlation coefficients of .91 to .92, and General Linear Models and Multilayer Perceptron neural networks reported correlation coefficients of about .99. These data mining models show great promise for use in further examining large FOQA databases for operational and safety improvements.
Signal location using generalized linear constraints
NASA Astrophysics Data System (ADS)
Griffiths, Lloyd J.; Feldman, D. D.
1992-01-01
This report has presented a two-part method for estimating the directions of arrival of uncorrelated narrowband sources when there are arbitrary phase errors and angle independent gain errors. The signal steering vectors are estimated in the first part of the method; in the second part, the arrival directions are estimated. It should be noted that the second part of the method can be tailored to incorporate additional information about the nature of the phase errors. For example, if the phase errors are known to be caused solely by element misplacement, the element locations can be estimated concurrently with the DOA's by trying to match the theoretical steering vectors to the estimated ones. Simulation results suggest that, for general perturbation, the method can resolve closely spaced sources under conditions for which a standard high-resolution DOA method such as MUSIC fails.
Generalized Case-Van Kampen theory for electromagnetic oscillations in a magnetized plasma
NASA Astrophysics Data System (ADS)
Bairaktaris, F.; Hizanidis, K.; Ram, A. K.
2017-10-01
The Case-Van Kampen theory is set up to describe electrostatic oscillations in an unmagnetized plasma. Our generalization to electromagnetic oscillations in magnetized plasma is formulated in the relativistic position-momentum phase space of the particles. The relativistic Vlasov equation includes the ambient, homogeneous magnetic field, and space-time dependent electromagnetic fields that satisfy Maxwell's equations. The standard linearization technique leads to an equation for the perturbed distribution function in terms of the electromagnetic fields. The eigenvalues and eigenfunctions are obtained from three integrals, each integral being over two different components of the momentum vector. Results connecting phase velocity, frequency, and wave vector will be presented. Supported in part by the Hellenic National Programme on Controlled Thermonuclear Fusion associated with the EUROfusion Consortium, and by DoE Grant DE-FG02-91ER-54109.
On statistical inference in time series analysis of the evolution of road safety.
Commandeur, Jacques J F; Bijleveld, Frits D; Bergel-Hayat, Ruth; Antoniou, Constantinos; Yannis, George; Papadimitriou, Eleonora
2013-11-01
Data collected for building a road safety observatory usually include observations made sequentially through time. Examples of such data, called time series data, include annual (or monthly) number of road traffic accidents, traffic fatalities or vehicle kilometers driven in a country, as well as the corresponding values of safety performance indicators (e.g., data on speeding, seat belt use, alcohol use, etc.). Some commonly used statistical techniques imply assumptions that are often violated by the special properties of time series data, namely serial dependency among disturbances associated with the observations. The first objective of this paper is to demonstrate the impact of such violations to the applicability of standard methods of statistical inference, which leads to an under or overestimation of the standard error and consequently may produce erroneous inferences. Moreover, having established the adverse consequences of ignoring serial dependency issues, the paper aims to describe rigorous statistical techniques used to overcome them. In particular, appropriate time series analysis techniques of varying complexity are employed to describe the development over time, relating the accident-occurrences to explanatory factors such as exposure measures or safety performance indicators, and forecasting the development into the near future. Traditional regression models (whether they are linear, generalized linear or nonlinear) are shown not to naturally capture the inherent dependencies in time series data. Dedicated time series analysis techniques, such as the ARMA-type and DRAG approaches are discussed next, followed by structural time series models, which are a subclass of state space methods. The paper concludes with general recommendations and practice guidelines for the use of time series models in road safety research. Copyright © 2012 Elsevier Ltd. All rights reserved.
Vaeth, Michael; Skovlund, Eva
2004-06-15
For a given regression problem it is possible to identify a suitably defined equivalent two-sample problem such that the power or sample size obtained for the two-sample problem also applies to the regression problem. For a standard linear regression model the equivalent two-sample problem is easily identified, but for generalized linear models and for Cox regression models the situation is more complicated. An approximately equivalent two-sample problem may, however, also be identified here. In particular, we show that for logistic regression and Cox regression models the equivalent two-sample problem is obtained by selecting two equally sized samples for which the parameters differ by a value equal to the slope times twice the standard deviation of the independent variable and further requiring that the overall expected number of events is unchanged. In a simulation study we examine the validity of this approach to power calculations in logistic regression and Cox regression models. Several different covariate distributions are considered for selected values of the overall response probability and a range of alternatives. For the Cox regression model we consider both constant and non-constant hazard rates. The results show that in general the approach is remarkably accurate even in relatively small samples. Some discrepancies are, however, found in small samples with few events and a highly skewed covariate distribution. Comparison with results based on alternative methods for logistic regression models with a single continuous covariate indicates that the proposed method is at least as good as its competitors. The method is easy to implement and therefore provides a simple way to extend the range of problems that can be covered by the usual formulas for power and sample size determination. Copyright 2004 John Wiley & Sons, Ltd.
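A minimal sketch of the idea for logistic regression is given below, assuming a standard two-proportion power approximation for the equivalent two-sample problem; the paper keeps the overall expected number of events unchanged, whereas here the average event probability is held fixed, which is a simplification.

```python
import numpy as np
from scipy.stats import norm
from scipy.special import expit
from scipy.optimize import brentq

def logistic_power(n_total, beta, sd_x, p_overall, alpha=0.05):
    """Approximate power for a logistic regression slope via the equivalent
    two-sample problem: two equal groups differing by beta * 2 * sd(x) on the
    log-odds scale, with the average event probability held fixed."""
    delta = 2.0 * beta * sd_x
    # Baseline log-odds chosen so the mean of the two group risks equals p_overall.
    eta0 = brentq(lambda e: 0.5 * (expit(e) + expit(e + delta)) - p_overall, -20, 20)
    p1, p2 = expit(eta0), expit(eta0 + delta)
    n = n_total / 2.0
    se = np.sqrt(p1 * (1 - p1) / n + p2 * (1 - p2) / n)   # two-proportion test
    z = abs(p2 - p1) / se
    return norm.cdf(z - norm.ppf(1 - alpha / 2))

print(round(logistic_power(n_total=400, beta=0.4, sd_x=1.0, p_overall=0.3), 3))
```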
Molenaar, Dylan; Tuerlinckx, Francis; van der Maas, Han L J
2015-01-01
A generalized linear modeling framework for the analysis of responses and response times is outlined. In this framework, referred to as bivariate generalized linear item response theory (B-GLIRT), separate generalized linear measurement models are specified for the responses and the response times, which are subsequently linked by cross-relations. The cross-relations can take various forms. Here, we focus on cross-relations with a linear or interaction term for ability tests, and cross-relations with a curvilinear term for personality tests. In addition, we discuss how popular existing models from the psychometric literature are special cases in the B-GLIRT framework, depending on restrictions in the cross-relation. This allows us to compare existing models conceptually and empirically. We discuss various extensions of the traditional models motivated by practical problems. We also illustrate the applicability of our approach using various real data examples, including data on personality and cognitive ability.
Evaluation of a Nonlinear Finite Element Program - ABAQUS.
1983-03-15
… anisotropic properties. * MATEXP - Linearly elastic thermal expansions with isotropic, orthotropic and anisotropic properties. * MATELG - Linearly elastic materials for general sections (options available for beam and shell elements). * MATEXG - Linearly elastic thermal expansions for general sections. … Utility subroutines include decomposition of a matrix, the Q-R algorithm, vector normalization, etc. Obviously, by consolidating all the utility subroutines in a library, ABAQUS has …
On the classical and quantum integrability of systems of resonant oscillators
NASA Astrophysics Data System (ADS)
Marino, Massimo
2017-01-01
We study in this paper systems of harmonic oscillators with resonant frequencies. For these systems we present general procedures for the construction of sets of functionally independent constants of motion, which can be used for the definition of generalized action-angle variables, in accordance with the general description of degenerate integrable systems presented by Nekhoroshev in a seminal paper in 1972. We then apply to these classical integrable systems the quantization procedure proposed to the author by Nekhoroshev during his last years of activity at Milan University. This procedure is based on the construction of linear operators by means of the symmetrization of the classical constants of motion mentioned above. For three oscillators with resonance 1:1:2, using a computer program we have discovered an exceptional integrable system, which cannot be obtained with the standard methods based on the obvious symmetries of the Hamiltonian function. In this exceptional case, quantum integrability can be realized only by means of a modification of the symmetrization procedure.
Linear discrete systems with memory: a generalization of the Langmuir model
NASA Astrophysics Data System (ADS)
Băleanu, Dumitru; Nigmatullin, Raoul R.
2013-10-01
In this manuscript we analyze a general solution of the linear nonlocal Langmuir model within time scale calculus. Several generalizations of the Langmuir model are presented together with their corresponding exact solutions. The physical meanings of the proposed models are investigated and their corresponding geometries are reported.
A highly parallel multigrid-like method for the solution of the Euler equations
NASA Technical Reports Server (NTRS)
Tuminaro, Ray S.
1989-01-01
We consider a highly parallel multigrid-like method for the solution of the two dimensional steady Euler equations. The new method, introduced as filtering multigrid, is similar to a standard multigrid scheme in that convergence on the finest grid is accelerated by iterations on coarser grids. In the filtering method, however, additional fine grid subproblems are processed concurrently with coarse grid computations to further accelerate convergence. These additional problems are obtained by splitting the residual into a smooth and an oscillatory component. The smooth component is then used to form a coarse grid problem (similar to standard multigrid) while the oscillatory component is used for a fine grid subproblem. The primary advantage in the filtering approach is that fewer iterations are required and that most of the additional work per iteration can be performed in parallel with the standard coarse grid computations. We generalize the filtering algorithm to a version suitable for nonlinear problems. We emphasize that this generalization is conceptually straight-forward and relatively easy to implement. In particular, no explicit linearization (e.g., formation of Jacobians) needs to be performed (similar to the FAS multigrid approach). We illustrate the nonlinear version by applying it to the Euler equations, and presenting numerical results. Finally, a performance evaluation is made based on execution time models and convergence information obtained from numerical experiments.
Small area estimation for semicontinuous data.
Chandra, Hukum; Chambers, Ray
2016-03-01
Survey data often contain measurements for variables that are semicontinuous in nature, i.e. they either take a single fixed value (we assume this is zero) or they have a continuous, often skewed, distribution on the positive real line. Standard methods for small area estimation (SAE) based on the use of linear mixed models can be inefficient for such variables. We discuss SAE techniques for semicontinuous variables under a two part random effects model that allows for the presence of excess zeros as well as the skewed nature of the nonzero values of the response variable. In particular, we first model the excess zeros via a generalized linear mixed model fitted to the probability of a nonzero, i.e. strictly positive, value being observed, and then model the response, given that it is strictly positive, using a linear mixed model fitted on the logarithmic scale. Empirical results suggest that the proposed method leads to efficient small area estimates for semicontinuous data of this type. We also propose a parametric bootstrap method to estimate the MSE of the proposed small area estimator. These bootstrap estimates of the MSE are compared to the true MSE in a simulation study. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
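A minimal sketch of the two-part structure is given below, assuming hypothetical column names (y, x1, x2, area); the first part is fitted here as a plain GLM rather than the generalized linear mixed model used in the paper, and the parametric bootstrap for the MSE is omitted.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

def fit_two_part(df: pd.DataFrame):
    """Two-part model for a semicontinuous response y (many exact zeros,
    skewed positive values): logistic part for P(y > 0), then a linear mixed
    model for log(y) among the strictly positive values, with an area random
    intercept standing in for the small-area effect."""
    df = df.copy()
    df["nonzero"] = (df["y"] > 0).astype(int)

    # Part 1: probability of a nonzero value (plain GLM here; the paper uses a GLMM).
    part1 = smf.glm("nonzero ~ x1 + x2", data=df,
                    family=sm.families.Binomial()).fit()

    # Part 2: linear mixed model on the log scale for the positive values.
    pos = df[df["nonzero"] == 1].copy()
    pos["log_y"] = np.log(pos["y"])
    part2 = smf.mixedlm("log_y ~ x1 + x2", data=pos, groups="area").fit()
    return part1, part2
```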
A new polytopic approach for the unknown input functional observer design
NASA Astrophysics Data System (ADS)
Bezzaoucha, Souad; Voos, Holger; Darouach, Mohamed
2018-03-01
In this paper, a constructive procedure to design functional unknown input observers for nonlinear continuous-time systems is proposed under the polytopic Takagi-Sugeno framework. An equivalent representation of the nonlinear model is achieved using the sector nonlinearity transformation. Applying Lyapunov theory and the ? attenuation, linear matrix inequality (LMI) conditions are deduced and solved for feasibility to obtain the observer design matrices. To cope with the effect of unknown inputs, the classical approach of decoupling the unknown input, as in the linear case, is used. Both algebraic and solver-based solutions are proposed (relaxed conditions). Necessary and sufficient conditions for the existence of the functional polytopic observer are given. For both approaches, the general and particular cases (measurable premise variables, full state estimation with full- and reduced-order cases) are considered, and it is shown that the proposed conditions correspond to those presented for the standard linear case. To illustrate the proposed theoretical results, detailed numerical simulations are presented for a quadrotor aerial robot landing and a wastewater treatment plant. Both systems are highly nonlinear and are represented in a T-S polytopic form with unmeasurable premise variables and unknown inputs.
NASA Astrophysics Data System (ADS)
Mädler, Thomas
2013-05-01
Perturbations of the linearized vacuum Einstein equations in the Bondi-Sachs formulation of general relativity can be derived from a single master function with spin weight two, which is related to the Weyl scalar Ψ0, and which is determined by a simple wave equation. By utilizing a standard spin representation of tensors on a sphere and two different approaches to solve the master equation, we are able to determine two simple and explicitly time-dependent solutions. Both solutions, of which one is asymptotically flat, comply with the regularity conditions at the vertex of the null cone. For the asymptotically flat solution we calculate the corresponding linearized perturbations, describing all multipoles of spin-2 waves that propagate on a Minkowskian background spacetime. We also analyze the asymptotic behavior of this solution at null infinity using a Penrose compactification and calculate the Weyl scalar Ψ4. Because of its simplicity, the asymptotically flat solution presented here is ideally suited for test bed calculations in the Bondi-Sachs formulation of numerical relativity. It may be considered as a sibling of the Bergmann-Sachs or Teukolsky-Rinne solutions, on spacelike hypersurfaces, for a metric adapted to null hypersurfaces.
Kilian, Reinhold; Matschinger, Herbert; Löeffler, Walter; Roick, Christiane; Angermeyer, Matthias C
2002-03-01
Transformation of the dependent cost variable is often used to solve the problems of heteroscedasticity and skewness in linear ordinary least squares (OLS) regression of health service cost data. However, transformation may cause difficulties in the interpretation of regression coefficients and the retransformation of predicted values. This study compares the advantages and disadvantages of different methods of estimating regression-based cost functions using data on the annual costs of schizophrenia treatment. Annual costs of psychiatric service use and clinical and socio-demographic characteristics of the patients were assessed for a sample of 254 patients with a diagnosis of schizophrenia (ICD-10 F20.0) living in Leipzig. The clinical characteristics of the participants were assessed by means of the BPRS 4.0, the GAF, and the CAN for service needs. Quality of life was measured by the WHOQOL-BREF. A linear OLS regression model with non-parametric standard errors, a log-transformed OLS model, and a generalized linear model (GLM) with a log link and a gamma distribution were used to estimate service costs. For the estimation of robust non-parametric standard errors, the variance estimator by White and a bootstrap estimator based on 2000 replications were employed. Models were evaluated by comparison of the R2 and the root mean squared error (RMSE). The RMSE of the log-transformed OLS model was computed with three different methods of bias correction. The 95% confidence intervals for the differences between the RMSEs were computed by means of bootstrapping. A split-sample cross-validation procedure was used to forecast the costs for one half of the sample on the basis of a regression equation computed for the other half of the sample. All three methods showed significant positive influences of psychiatric symptoms and met psychiatric service needs on service costs. Only the log-transformed OLS model showed a significant negative impact of age, and only the GLM showed significant negative influences of employment status and partnership on costs. All three models provided an R2 of about .31. The residuals of the linear OLS model revealed significant deviations from normality and homoscedasticity. The residuals of the log-transformed model were normally distributed but still heteroscedastic. The linear OLS model provided the lowest prediction error and the best forecast of the dependent cost variable. The log-transformed model provided the lowest RMSE when the heteroscedastic bias correction was used. The RMSE of the GLM with a log link and a gamma distribution was higher than those of the linear OLS model and the log-transformed OLS model. The difference between the RMSE of the linear OLS model and that of the log-transformed OLS model without bias correction was significant at the 95% level. In the cross-validation procedure, the linear OLS model provided the lowest RMSE, followed by the log-transformed OLS model with a heteroscedastic bias correction. The GLM showed the weakest model fit again. None of the differences between the RMSEs resulting from the cross-validation procedure were found to be significant. The comparison of the fit indices of the different regression models revealed that the linear OLS model provided a better fit than the log-transformed model and the GLM, but the differences between the models' RMSEs were not significant.
Due to the small number of cases in the study, the lack of significance does not sufficiently prove that the differences between the RMSEs of the different models are zero, and the superiority of the linear OLS model cannot be generalized. The lack of significant differences among the alternative estimators may reflect a sample size inadequate to detect important differences among the estimators employed. Further studies with larger case numbers are necessary to confirm the results. Specification of an adequate regression model requires a careful examination of the characteristics of the data. Estimation of standard errors and confidence intervals by non-parametric methods, which are robust against deviations from normality and homoscedasticity of the residuals, is a suitable alternative to transformation of the skewed dependent cost variable. Further studies with more adequate case numbers are needed to confirm the results.
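The three estimators compared above can be outlined as follows; the covariate names are hypothetical, and Duan's smearing factor is shown as one common retransformation (the study also examines a heteroscedasticity-adjusted correction).

```python
# Three estimators for skewed cost data: (1) linear OLS on costs with robust SEs,
# (2) OLS on log(costs) with a smearing retransformation, (3) gamma GLM with log link.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

def fit_cost_models(df: pd.DataFrame):
    ols_linear = smf.ols("cost ~ bprs + gaf + needs_met + age",
                         data=df).fit(cov_type="HC0")

    df = df.assign(log_cost=np.log(df["cost"]))
    ols_log = smf.ols("log_cost ~ bprs + gaf + needs_met + age", data=df).fit()
    smearing = np.mean(np.exp(ols_log.resid))        # Duan's nonparametric factor
    pred_log = smearing * np.exp(ols_log.fittedvalues)

    glm_gamma = smf.glm("cost ~ bprs + gaf + needs_met + age", data=df,
                        family=sm.families.Gamma(link=sm.families.links.Log())).fit()

    rmse = lambda y, yhat: np.sqrt(np.mean((y - yhat) ** 2))
    return {"OLS": rmse(df["cost"], ols_linear.fittedvalues),
            "log-OLS (smearing)": rmse(df["cost"], pred_log),
            "Gamma GLM": rmse(df["cost"], glm_gamma.fittedvalues)}
```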
An approximate generalized linear model with random effects for informative missing data.
Follmann, D; Wu, M
1995-03-01
This paper develops a class of models to deal with missing data from longitudinal studies. We assume that separate models for the primary response and missingness (e.g., number of missed visits) are linked by a common random parameter. Such models have been developed in the econometrics (Heckman, 1979, Econometrica 47, 153-161) and biostatistics (Wu and Carroll, 1988, Biometrics 44, 175-188) literature for a Gaussian primary response. We allow the primary response, conditional on the random parameter, to follow a generalized linear model and approximate the generalized linear model by conditioning on the data that describes missingness. The resultant approximation is a mixed generalized linear model with possibly heterogeneous random effects. An example is given to illustrate the approximate approach, and simulations are performed to critique the adequacy of the approximation for repeated binary data.
Optimization of light quality from color mixing light-emitting diode systems for general lighting
NASA Astrophysics Data System (ADS)
Thorseth, Anders
2012-03-01
Given the problem of metamerism inherent in color mixing in light-emitting diode (LED) systems with more than three distinct colors, a method for optimizing the spectral output of a multicolor LED system with respect to standardized light quality parameters has been developed. The composite spectral power distribution from the LEDs is simulated using spectral radiometric measurements of single commercially available LEDs at varying input power, to account for the efficiency droop and other non-linear effects in electrical power vs. light output. The method uses the electrical input powers as parameters in a randomized steepest-descent optimization. The resulting spectral power distributions are evaluated with regard to light quality using the standard characteristics: CIE color rendering index, correlated color temperature, and chromaticity distance. The results indicate Pareto-optimal boundaries for each system, mapping the capabilities of the simulated lighting systems with regard to the light quality characteristics.
EEG and MEG Data Analysis in SPM8
Litvak, Vladimir; Mattout, Jérémie; Kiebel, Stefan; Phillips, Christophe; Henson, Richard; Kilner, James; Barnes, Gareth; Oostenveld, Robert; Daunizeau, Jean; Flandin, Guillaume; Penny, Will; Friston, Karl
2011-01-01
SPM is a free and open source software written in MATLAB (The MathWorks, Inc.). In addition to standard M/EEG preprocessing, we presently offer three main analysis tools: (i) statistical analysis of scalp-maps, time-frequency images, and volumetric 3D source reconstruction images based on the general linear model, with correction for multiple comparisons using random field theory; (ii) Bayesian M/EEG source reconstruction, including support for group studies, simultaneous EEG and MEG, and fMRI priors; (iii) dynamic causal modelling (DCM), an approach combining neural modelling with data analysis for which there are several variants dealing with evoked responses, steady state responses (power spectra and cross-spectra), induced responses, and phase coupling. SPM8 is integrated with the FieldTrip toolbox , making it possible for users to combine a variety of standard analysis methods with new schemes implemented in SPM and build custom analysis tools using powerful graphical user interface (GUI) and batching tools. PMID:21437221
Explicitly-correlated Gaussian geminals in electronic structure calculations
NASA Astrophysics Data System (ADS)
Szalewicz, Krzysztof; Jeziorski, Bogumił
2010-11-01
Explicitly correlated functions have been used since 1929, but initially only for two-electron systems. In 1960, Boys and Singer showed that if the correlating factor is of Gaussian form, many-electron integrals can be computed for general molecules. The capability of explicitly correlated Gaussian (ECG) functions to accurately describe many-electron atoms and molecules was demonstrated only in the early 1980s, when Monkhorst, Zabolitzky and the present authors cast the many-body perturbation theory (MBPT) and coupled cluster (CC) equations as a system of integro-differential equations and developed techniques of solving these equations with two-electron ECG functions (Gaussian-type geminals, GTG). This work brought a new accuracy standard to MBPT/CC calculations. In 1985, Kutzelnigg suggested that the linear r12 correlating factor can also be employed if n-electron integrals, n > 2, are factorised with the resolution of identity. Later, this factor was replaced by more general functions f(r12), most often by ?, usually represented as linear combinations of Gaussian functions, which makes the resulting approach (called F12) a special case of the original GTG expansion. The current state of the art is that, for few-electron molecules, ECGs provide more accurate results than any other basis available, but for larger systems the F12 approach is the method of choice, giving significant improvements over orbital calculations.
Resultant as the determinant of a Koszul complex
NASA Astrophysics Data System (ADS)
Anokhina, A. S.; Morozov, A. Yu.; Shakirov, Sh. R.
2009-09-01
The determinant is a very important characteristic of a linear map between vector spaces. Two generalizations of linear maps are intensively used in modern theory: linear complexes (nilpotent chains of linear maps) and nonlinear maps. The determinant of a complex and the resultant are then the corresponding generalizations of the determinant of a linear map. It turns out that these two quantities are related: the resultant of a nonlinear map is the determinant of the corresponding Koszul complex. We give an elementary introduction into these notions and relations, which will definitely play a role in the future development of theoretical physics.
Vanilla technicolor at linear colliders
NASA Astrophysics Data System (ADS)
Frandsen, Mads T.; Järvinen, Matti; Sannino, Francesco
2011-08-01
We analyze the reach of linear colliders for models of dynamical electroweak symmetry breaking. We show that linear colliders can efficiently test the compositeness scale, identified with the mass of the new spin-one resonances, up to the maximum center-of-mass energy of the colliding leptons. In particular, we analyze Drell-Yan processes involving spin-one intermediate heavy bosons decaying either leptonically or into two standard model gauge bosons. We also analyze light Higgs production in association with a standard model gauge boson, again stemming from an intermediate spin-one heavy vector.
Bisimulation equivalence of differential-algebraic systems
NASA Astrophysics Data System (ADS)
Megawati, Noorma Yulia; Schaft, Arjan van der
2018-01-01
In this paper, the notion of bisimulation relation for linear input-state-output systems is extended to general linear differential-algebraic (DAE) systems. Geometric control theory is used to derive a linear-algebraic characterisation of bisimulation relations, and an algorithm for computing the maximal bisimulation relation between two linear DAE systems. The general definition is specialised to the case where the matrix pencil sE - A is regular. Furthermore, by developing a one-sided version of bisimulation, characterisations of simulation and abstraction are obtained.
NASA Technical Reports Server (NTRS)
Press, Harry; Mazelsky, Bernard
1954-01-01
The applicability of some results from the theory of generalized harmonic analysis (or power-spectral analysis) to the analysis of gust loads on airplanes in continuous rough air is examined. The general relations for linear systems between the power spectrums of a random input disturbance and an output response are used to relate the spectrum of airplane load in rough air to the spectrum of atmospheric gust velocity. The power spectrum of loads is shown to provide a measure of the load intensity in terms of the standard deviation (root mean square) of the load distribution for an airplane in flight through continuous rough air. For the case of a load output having a normal distribution, which appears from experimental evidence to apply to homogeneous rough air, the standard deviation is shown to describe the probability distribution of loads, or the proportion of total time that the load has given values. Thus, for an airplane in flight through homogeneous rough air, the probability distribution of loads may be determined from a power-spectral analysis. In order to illustrate the application of power-spectral analysis to gust-load analysis and to obtain an insight into the relations between loads and airplane gust-response characteristics, two selected series of calculations are presented. The results indicate that both methods of analysis yield results that are consistent to a first approximation.
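The underlying input-output relation for a linear system, in present-day notation, is

```latex
% Load spectrum from the gust-velocity spectrum through the airplane's frequency
% response H(\omega); the load standard deviation follows by integration.
\Phi_y(\omega) = |H(\omega)|^2 \, \Phi_g(\omega), \qquad
\sigma_y^2 = \int_0^\infty \Phi_y(\omega)\, d\omega .
```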
Using Linear and Quadratic Functions to Teach Number Patterns in Secondary School
ERIC Educational Resources Information Center
Kenan, Kok Xiao-Feng
2017-01-01
This paper outlines an approach to definitively find the general term in a number pattern, of either a linear or quadratic form, by using the general equation of a linear or quadratic function. This approach is governed by four principles: (1) identifying the position of the term (input) and the term itself (output); (2) recognising that each…
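A small worked example of the approach (my own illustration, not taken from the paper): for the pattern 3, 8, 15, 24, ..., the second differences are constant, so a quadratic general term is assumed and its coefficients found from the first three terms.

```latex
T(n) = an^2 + bn + c,\quad
\begin{cases} a + b + c = 3\\ 4a + 2b + c = 8\\ 9a + 3b + c = 15 \end{cases}
\;\Rightarrow\; a = 1,\ b = 2,\ c = 0,\quad T(n) = n^2 + 2n = n(n+2).
```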
Heating and Acceleration of Charged Particles by Weakly Compressible Magnetohydrodynamic Turbulence
NASA Astrophysics Data System (ADS)
Lynn, Jacob William
We investigate the interaction between low-frequency magnetohydrodynamic (MHD) turbulence and a distribution of charged particles. Understanding this physics is central to understanding the heating of the solar wind, as well as the heating and acceleration of other collisionless plasmas. Our central method is to simulate weakly compressible MHD turbulence using the Athena code, along with a distribution of test particles which feel the electromagnetic fields of the turbulence. We also construct analytic models of transit-time damping (TTD), which results from the mirror force caused by compressible (fast or slow) MHD waves. Standard linear-theory models in the literature require an exact resonance between particle and wave velocities to accelerate particles. The models developed in this thesis go beyond standard linear theory to account for the fact that wave-particle interactions decorrelate over a short time, which allows particles with velocities off resonance to undergo acceleration and velocity diffusion. We use the test particle simulation results to calibrate and distinguish between different models for this velocity diffusion. Test particle heating is larger than the linear theory prediction, due to continued acceleration of particles with velocities off-resonance. We also include an artificial pitch-angle scattering to the test particle motion, representing the effect of high-frequency waves or velocity-space instabilities. For low scattering rates, we find that the scattering enforces isotropy and enhances heating by a modest factor. For much higher scattering rates, the acceleration is instead due to a non-resonant effect, as particles "frozen" into the fluid adiabatically gain and lose energy as eddies expand and contract. Lastly, we generalize our calculations to allow for relativistic test particles. Linear theory predicts that relativistic particles with velocities much higher than the speed of waves comprising the turbulence would undergo no acceleration; resonance-broadening modifies this conclusion and allows for a continued Fermi-like acceleration process. This may affect the observed spectra of black hole accretion disks by accelerating relativistic particles into a quasi-powerlaw tail.
Viscoelastic analysis of adhesively bonded joints
NASA Technical Reports Server (NTRS)
Delale, F.; Erdogan, F.
1981-01-01
In this paper an adhesively bonded lap joint is analyzed by assuming that the adherends are elastic and the adhesive is linearly viscoelastic. After formulating the general problem a specific example for two identical adherends bonded through a three parameter viscoelastic solid adhesive is considered. The standard Laplace transform technique is used to solve the problem. The stress distribution in the adhesive layer is calculated for three different external loads namely, membrane loading, bending, and transverse shear loading. The results indicate that the peak value of the normal stress in the adhesive is not only consistently higher than the corresponding shear stress but also decays slower.
Model-based multi-fringe interferometry using Zernike polynomials
NASA Astrophysics Data System (ADS)
Gu, Wei; Song, Weihong; Wu, Gaofeng; Quan, Haiyang; Wu, Yongqian; Zhao, Wenchuan
2018-06-01
In this paper, a general phase retrieval method is proposed, which is based on one single interferogram with a small amount of fringes (either tilt or power). Zernike polynomials are used to characterize the phase to be measured; the phase distribution is reconstructed by a non-linear least squares method. Experiments show that the proposed method can obtain satisfactory results compared to the standard phase-shifting interferometry technique. Additionally, the retrace errors of proposed method can be neglected because of the few fringes; it does not need any auxiliary phase shifting facilities (low cost) and it is easy to implement without the process of phase unwrapping.
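A schematic of such a model-based fit is given below: the intensity is modeled as I = a + b·cos(φ) with the phase φ expanded in a few Zernike terms and the coefficients recovered by non-linear least squares. The intensity model, the small polynomial set and the omitted normalization are simplifying assumptions, not the authors' exact formulation.

```python
import numpy as np
from scipy.optimize import least_squares

def zernike_basis(x, y):
    """First few Zernike polynomials on the unit disk (piston, tilts, defocus,
    astigmatism), with normalization constants omitted. x, y are flat 1-D arrays."""
    r2 = x**2 + y**2
    return np.stack([np.ones_like(x), x, y, 2*r2 - 1, x**2 - y**2, 2*x*y], axis=-1)

def model(params, basis):
    a, b = params[0], params[1]
    phi = basis @ params[2:]           # phase expanded in Zernike coefficients
    return a + b * np.cos(phi)

def fit_interferogram(intensity, x, y):
    """Fit a single few-fringe interferogram; intensity is a flat 1-D array."""
    basis = zernike_basis(x, y)
    p0 = np.concatenate([[intensity.mean(), intensity.std()],
                         np.zeros(basis.shape[-1])])
    res = least_squares(lambda p: model(p, basis) - intensity, p0)
    return res.x[2:]                   # recovered Zernike coefficients of the phase
```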
Ferguson, Ian D; Weiser, Peter; Torok, Kathryn S
2015-01-01
Herein we report successful treatment of an adolescent Caucasian female with severe progressive localized scleroderma (mixed subtype, including generalized morphea and linear scleroderma of the trunk/limb) using infliximab and leflunomide. The patient demonstrated improvement after the first 9 months of therapy based on her clinical examination, objective measures, and patient and parent global assessments. Infliximab is a potential treatment option for pediatric localized scleroderma patients who have progression of disease or who are unable to tolerate the side effect profile of more standard systemic therapy. Larger longitudinal studies or case series are needed to confirm and further investigate infliximab's role in localized scleroderma.
Quantum corrections to the generalized Proca theory via a matter field
NASA Astrophysics Data System (ADS)
Amado, André; Haghani, Zahra; Mohammadi, Azadeh; Shahidi, Shahab
2017-09-01
We study the quantum corrections to the generalized Proca theory via matter loops. We consider two types of interactions, linear and nonlinear in the vector field. Calculating the one-loop correction to the vector field propagator, three- and four-point functions, we show that the non-linear interactions are harmless, although they renormalize the theory. The linear matter-vector field interactions introduce ghost degrees of freedom to the generalized Proca theory. Treating the theory as an effective theory, we calculate the energy scale up to which the theory remains healthy.
A General Accelerated Degradation Model Based on the Wiener Process
Liu, Le; Li, Xiaoyang; Sun, Fuqiang; Wang, Ning
2016-01-01
Accelerated degradation testing (ADT) is an efficient tool to conduct material service reliability and safety evaluations by analyzing performance degradation data. Traditional stochastic process models are mainly for linear or linearization degradation paths. However, those methods are not applicable for the situations where the degradation processes cannot be linearized. Hence, in this paper, a general ADT model based on the Wiener process is proposed to solve the problem for accelerated degradation data analysis. The general model can consider the unit-to-unit variation and temporal variation of the degradation process, and is suitable for both linear and nonlinear ADT analyses with single or multiple acceleration variables. The statistical inference is given to estimate the unknown parameters in both constant stress and step stress ADT. The simulation example and two real applications demonstrate that the proposed method can yield reliable lifetime evaluation results compared with the existing linear and time-scale transformation Wiener processes in both linear and nonlinear ADT analyses. PMID:28774107
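A minimal sketch of a Wiener-process degradation model with a nonlinear time scale, fitted by maximum likelihood from a single unit's path; the general model in the paper additionally handles unit-to-unit variation and stress (acceleration) dependence, which are omitted here.

```python
# X(t) = mu * L(t) + sigma * B(L(t)) with transformed time scale L(t) = t**b.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def neg_log_lik(params, t, x):
    mu, log_sigma, log_b = params
    sigma, b = np.exp(log_sigma), np.exp(log_b)
    dL = np.diff(t**b)                  # increments of the transformed time scale
    dx = np.diff(x)                     # independent Gaussian increments
    return -np.sum(norm.logpdf(dx, loc=mu * dL, scale=sigma * np.sqrt(dL)))

def fit_wiener(t, x):
    res = minimize(neg_log_lik, x0=np.array([0.1, 0.0, 0.0]), args=(t, x),
                   method="Nelder-Mead")
    return res.x[0], np.exp(res.x[1]), np.exp(res.x[2])   # mu, sigma, b

# Simulated example: nonlinear degradation path with mu = 0.5, sigma = 0.2, b = 0.7.
rng = np.random.default_rng(0)
t = np.linspace(1, 100, 50)
L = t**0.7
dL = np.diff(L, prepend=0.0)
x = 0.5 * L + 0.2 * np.cumsum(np.sqrt(dL) * rng.standard_normal(t.size))
print(fit_wiener(t, x))
```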
Zuo, Shan; Song, Yongduan; Lewis, Frank L; Davoudi, Ali
2017-01-04
This paper studies the output containment control of linear heterogeneous multi-agent systems, where the system dynamics and even the state dimensions can generally be different. Since the states can have different dimensions, standard results from state containment control do not apply. Therefore, the control objective is to guarantee the convergence of the output of each follower to the dynamic convex hull spanned by the outputs of leaders. This can be achieved by making certain output containment errors go to zero asymptotically. Based on this formulation, two different control protocols, namely, full-state feedback and static output-feedback, are designed based on internal model principles. Sufficient local conditions for the existence of the proposed control protocols are developed in terms of stabilizing the local followers' dynamics and satisfying a certain H∞ criterion. Unified design procedures to solve the proposed two control protocols are presented by formulation and solution of certain local state-feedback and static output-feedback problems, respectively. Numerical simulations are given to validate the proposed control protocols.
Jones, Andrew M; Lomas, James; Moore, Peter T; Rice, Nigel
2016-10-01
We conduct a quasi-Monte-Carlo comparison of the recent developments in parametric and semiparametric regression methods for healthcare costs, both against each other and against standard practice. The population of English National Health Service hospital in-patient episodes for the financial year 2007-2008 (summed for each patient) is randomly divided into two equally sized subpopulations to form an estimation set and a validation set. Evaluating out-of-sample using the validation set, a conditional density approximation estimator shows considerable promise in forecasting conditional means, performing best for accuracy of forecasting and among the best four for bias and goodness of fit. The best performing model for bias is linear regression with square-root-transformed dependent variables, whereas a generalized linear model with square-root link function and Poisson distribution performs best in terms of goodness of fit. Commonly used models utilizing a log-link are shown to perform badly relative to other models considered in our comparison.
Tools for Basic Statistical Analysis
NASA Technical Reports Server (NTRS)
Luz, Paul L.
2005-01-01
Statistical Analysis Toolset is a collection of eight Microsoft Excel spreadsheet programs, each of which performs calculations pertaining to an aspect of statistical analysis. These programs present input and output data in user-friendly, menu-driven formats, with automatic execution. The following types of calculations are performed: Descriptive statistics are computed for a set of data x(i) (i = 1, 2, 3 . . . ) entered by the user. Normal Distribution Estimates will calculate the statistical value that corresponds to cumulative probability values, given a sample mean and standard deviation of the normal distribution. Normal Distribution from two Data Points will extend and generate a cumulative normal distribution for the user, given two data points and their associated probability values. Two programs perform two-way analysis of variance (ANOVA) with no replication or generalized ANOVA for two factors with four levels and three repetitions. Linear Regression-ANOVA will curvefit data to the linear equation y=f(x) and will do an ANOVA to check its significance.
Qin, Guoyou; Zhang, Jiajia; Zhu, Zhongyi; Fung, Wing
2016-12-20
Outliers, measurement error, and missing data are commonly seen in longitudinal data because of its data collection process. However, no method can address all three of these issues simultaneously. This paper focuses on the robust estimation of partially linear models for longitudinal data with dropouts and measurement error. A new robust estimating equation, simultaneously tackling outliers, measurement error, and missingness, is proposed. The asymptotic properties of the proposed estimator are established under some regularity conditions. The proposed method is easy to implement in practice by utilizing the existing standard generalized estimating equations algorithms. The comprehensive simulation studies show the strength of the proposed method in dealing with longitudinal data with all three features. Finally, the proposed method is applied to data from the Lifestyle Education for Activity and Nutrition study and confirms the effectiveness of the intervention in producing weight loss at month 9. Copyright © 2016 John Wiley & Sons, Ltd.
NASA Astrophysics Data System (ADS)
Gorelick, Steven M.; Voss, Clifford I.; Gill, Philip E.; Murray, Walter; Saunders, Michael A.; Wright, Margaret H.
1984-04-01
A simulation-management methodology is demonstrated for the rehabilitation of aquifers that have been subjected to chemical contamination. Finite element groundwater flow and contaminant transport simulation are combined with nonlinear optimization. The model is capable of determining well locations plus pumping and injection rates for groundwater quality control. Examples demonstrate linear or nonlinear objective functions subject to linear and nonlinear simulation and water management constraints. Restrictions can be placed on hydraulic heads, stresses, and gradients, in addition to contaminant concentrations and fluxes. These restrictions can be distributed over space and time. Three design strategies are demonstrated for an aquifer that is polluted by a constant contaminant source: they are pumping for contaminant removal, water injection for in-ground dilution, and a pumping, treatment, and injection cycle. A transient model designs either contaminant plume interception or in-ground dilution so that water quality standards are met. The method is not limited to these cases. It is generally applicable to the optimization of many types of distributed parameter systems.
Estimating Causal Effects with Ancestral Graph Markov Models
Malinsky, Daniel; Spirtes, Peter
2017-01-01
We present an algorithm for estimating bounds on causal effects from observational data which combines graphical model search with simple linear regression. We assume that the underlying system can be represented by a linear structural equation model with no feedback, and we allow for the possibility of latent variables. Under assumptions standard in the causal search literature, we use conditional independence constraints to search for an equivalence class of ancestral graphs. Then, for each model in the equivalence class, we perform the appropriate regression (using causal structure information to determine which covariates to include in the regression) to estimate a set of possible causal effects. Our approach is based on the “IDA” procedure of Maathuis et al. (2009), which assumes that all relevant variables have been measured (i.e., no unmeasured confounders). We generalize their work by relaxing this assumption, which is often violated in applied contexts. We validate the performance of our algorithm on simulated data and demonstrate improved precision over IDA when latent variables are present. PMID:28217244
An extension of the Laplace transform to Schwartz distributions
NASA Technical Reports Server (NTRS)
Price, D. R.
1974-01-01
A characterization of the Laplace transform is developed which extends the transform to the Schwartz distributions. The class of distributions includes the impulse functions and other singular functions which occur as solutions to ordinary and partial differential equations. The standard theorems on analyticity, uniqueness, and invertibility of the transform are proved by using the characterization as the definition of the Laplace transform. The definition uses sequences of linear transformations on the space of distributions which extends the Laplace transform to another class of generalized functions, the Mikusinski operators. It is shown that the sequential definition of the transform is equivalent to Schwartz' extension of the ordinary Laplace transform to distributions but, in contrast to Schwartz' definition, does not use the distributional Fourier transform. Several theorems concerning the particular linear transformations used to define the Laplace transforms are proved. All the results proved in one dimension are extended to the n-dimensional case, but proofs are presented only for those situations that require methods different from their one-dimensional analogs.
Explicit methods in extended phase space for inseparable Hamiltonian problems
NASA Astrophysics Data System (ADS)
Pihajoki, Pauli
2015-03-01
We present a method for explicit leapfrog integration of inseparable Hamiltonian systems by means of an extended phase space. A suitably defined new Hamiltonian on the extended phase space leads to equations of motion that can be numerically integrated by standard symplectic leapfrog (splitting) methods. When the leapfrog is combined with coordinate mixing transformations, the resulting algorithm shows good long term stability and error behaviour. We extend the method to non-Hamiltonian problems as well, and investigate optimal methods of projecting the extended phase space back to original dimension. Finally, we apply the methods to a Hamiltonian problem of geodesics in a curved space, and a non-Hamiltonian problem of a forced non-linear oscillator. We compare the performance of the methods to a general purpose differential equation solver LSODE, and the implicit midpoint method, a symplectic one-step method. We find the extended phase space methods to compare favorably to both for the Hamiltonian problem, and to the implicit midpoint method in the case of the non-linear oscillator.
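A minimal sketch of the extended phase space idea, under the following assumptions: the phase space is doubled to (q, p, x, y), the two partial Hamiltonians are H_A = H(q, y) and H_B = H(x, p), and the mixing step is reduced to simple averaging of the two copies (the paper investigates better-behaved mixing transformations and projection choices). The test Hamiltonian at the end is illustrative.

```python
# Sketch of an extended phase space leapfrog for an inseparable H(q, p).
# Each sub-flow of H_A(q, y) and H_B(x, p) is exactly solvable, so a
# standard ABA splitting applies.
import numpy as np

def extended_leapfrog(dHdq, dHdp, q0, p0, dt, n_steps):
    q = np.asarray(q0, dtype=float)
    p = np.asarray(p0, dtype=float)
    x, y = q.copy(), p.copy()                 # second copy of phase space
    for _ in range(n_steps):
        # half step of H_A(q, y): advances p and x, keeps q and y fixed
        p = p - 0.5 * dt * dHdq(q, y)
        x = x + 0.5 * dt * dHdp(q, y)
        # full step of H_B(x, p): advances y and q, keeps x and p fixed
        y = y - dt * dHdq(x, p)
        q = q + dt * dHdp(x, p)
        # half step of H_A(q, y)
        p = p - 0.5 * dt * dHdq(q, y)
        x = x + 0.5 * dt * dHdp(q, y)
        # crude mixing: average the two copies (illustrative choice only)
        q = x = 0.5 * (q + x)
        p = y = 0.5 * (p + y)
    return q, p

# Example: the inseparable Hamiltonian H = p**2/2 + q**2/2 + (q*p)**2/2
dHdq = lambda q, p: q + q * p**2
dHdp = lambda q, p: p + p * q**2
print(extended_leapfrog(dHdq, dHdp, 1.0, 0.0, 1e-3, 1000))
```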
Fresh broad (Vicia faba) tissue homogenate-based biosensor for determination of phenolic compounds.
Ozcan, Hakki Mevlut; Sagiroglu, Ayten
2014-08-01
In this study, a novel fresh broad (Vicia faba) tissue homogenate-based biosensor for the determination of phenolic compounds was developed. The biosensor was constructed by immobilizing tissue homogenate of fresh broad (Vicia faba) onto a glassy carbon electrode. For the stability of the biosensor, general immobilization techniques were used to secure the fresh broad tissue homogenate in a gelatin-glutaraldehyde cross-linking matrix. In the optimization and characterization studies, the amount of fresh broad tissue homogenate and gelatin, the glutaraldehyde percentage, optimum pH, optimum temperature, optimum buffer concentration, thermal stability, interference effects, linear range, storage stability, repeatability and sample applications (wine, beer, fruit juices) were investigated. In addition, the detection ranges of thirteen phenolic compounds were obtained from the calibration graphs. A typical calibration curve for the sensor revealed a linear range of 5-60 μM catechol. In reproducibility studies, the coefficient of variation (CV) and standard deviation (SD) were calculated as 1.59% and 0.64 × 10⁻³ μM, respectively.
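A calibration of this kind amounts to an ordinary least-squares line through the standard additions plus replicate statistics; the catechol responses and replicate readings below are made-up values for illustration, not the published data.

```python
# Sketch of a linear calibration over the reported 5-60 uM catechol range,
# plus the repeatability statistics (SD, CV) quoted in such studies.
import numpy as np

conc = np.array([5, 10, 20, 30, 40, 50, 60], dtype=float)   # uM catechol
signal = np.array([0.9, 1.8, 3.7, 5.5, 7.3, 9.2, 11.0])     # sensor response (uA), illustrative

slope, intercept = np.polyfit(conc, signal, 1)
r2 = np.corrcoef(conc, signal)[0, 1] ** 2
print(f"sensitivity = {slope:.3f} uA/uM, R^2 = {r2:.4f}")

# repeatability: repeated measurements of one standard
replicates = np.array([20.1, 19.9, 20.4, 20.0, 19.8, 20.2])  # estimated uM
sd = replicates.std(ddof=1)
cv = 100 * sd / replicates.mean()
print(f"SD = {sd:.3f} uM, CV = {cv:.2f}%")
```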
Time-dependent behavior of rough discontinuities under shearing conditions
NASA Astrophysics Data System (ADS)
Wang, Zhen; Shen, Mingrong; Ding, Wenqi; Jang, Boan; Zhang, Qingzhao
2018-02-01
The mechanical properties of rocks are generally controlled by their discontinuities. In this study, the time-dependent behavior of rough artificial joints under shearing conditions was investigated. Based on Barton’s standard profile lines, samples with artificial joint surfaces were prepared and used to conduct the shear and creep tests. The test results showed that the shear strength of discontinuity was linearly related to roughness, and subsequently an empirical equation was established. The long-term strength of discontinuity can be identified using the inflection point of the isocreep-rate curve, and it was linearly related to roughness. Furthermore, the ratio of long-term and instantaneous strength decreased with the increase of roughness. The shear-stiffness coefficient increased with the increase of shear rate, and the influence of shear rate on the shear stiffness coefficient decreased with the decrease of roughness. Further study of the mechanism revealed that these results could be attributed to the different time-dependent behavior of intact and joint rocks.
Fixed Point Problems for Linear Transformations on Pythagorean Triples
ERIC Educational Resources Information Center
Zhan, M.-Q.; Tong, J.-C.; Braza, P.
2006-01-01
In this article, an attempt is made to find all linear transformations that map a standard Pythagorean triple (a Pythagorean triple [x y z]^T with y being even) into a standard Pythagorean triple, which have [3 4 5]^T as their fixed point. All such transformations form a monoid S* under matrix product. It is found that S*…
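One concrete way to explore the problem numerically is a finite search: enumerate small integer matrices that fix [3 4 5]^T and check that they send a sample of standard triples to standard triples. This is only a necessary-condition filter over a bounded entry range, not the algebraic characterization of the monoid S* developed in the article.

```python
# Brute-force sketch: integer 3x3 matrices with entries in [-3, 3] that fix
# (3, 4, 5) and map a few standard Pythagorean triples (y even) to standard
# triples. A finite filter for illustration, not a proof of membership in S*.
from itertools import product

def is_standard_triple(v):
    x, y, z = v
    return x > 0 and y > 0 and z > 0 and y % 2 == 0 and x * x + y * y == z * z

samples = [(3, 4, 5), (5, 12, 13), (15, 8, 17), (7, 24, 25), (21, 20, 29)]

def rows_fixing(component):
    # all rows (a, b, c) with entries in [-3, 3] such that 3a + 4b + 5c = component
    return [r for r in product(range(-3, 4), repeat=3)
            if 3 * r[0] + 4 * r[1] + 5 * r[2] == component]

candidates = []
for r1 in rows_fixing(3):
    for r2 in rows_fixing(4):
        for r3 in rows_fixing(5):
            M = (r1, r2, r3)
            images = [tuple(sum(M[i][j] * t[j] for j in range(3)) for i in range(3))
                      for t in samples]
            if all(is_standard_triple(v) for v in images):
                candidates.append(M)
print(f"{len(candidates)} candidate matrices pass the finite check")
```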
Quantifying relative importance: Computing standardized effects in models with binary outcomes
Grace, James B.; Johnson, Darren; Lefcheck, Jonathan S.; Byrnes, Jarrett E.K.
2018-01-01
Results from simulation studies show that both the LT and OE methods of standardization support a similarly-broad range of coefficient comparisons. The LT method estimates effects that reflect underlying latent-linear propensities, while the OE method computes a linear approximation for the effects of predictors on binary responses. The contrast between assumptions for the two methods is reflected in persistently weaker standardized effects associated with OE standardization. Reliance on standard deviations for standardization (the traditional approach) is critically examined and shown to introduce substantial biases when predictors are non-Gaussian. The use of relevant ranges in place of standard deviations has the capacity to place LT and OE standardized coefficients on a more comparable scale. As ecologists address increasingly complex hypotheses, especially those that involve comparing the influences of different controlling factors (e.g., top-down versus bottom-up or biotic versus abiotic controls), comparable coefficients become a necessary component for evaluations.
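The two standardizations can be sketched for a logistic regression along the following lines: the latent-theoretic (LT) version rescales each coefficient by SD(x) and by the standard deviation of the latent propensity (linear-predictor variance plus the logistic error variance π²/3), while the observed-empirical (OE) version is approximated here by standardizing the slopes of a linear fit to the observed 0/1 response. The OE computation in the paper may differ in detail, and the simulated data are placeholders.

```python
# Sketch of LT vs OE-style standardized coefficients for a logistic model.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 2000
x1 = rng.normal(size=n)
x2 = rng.exponential(size=n)                      # deliberately non-Gaussian
eta = -0.5 + 1.0 * x1 + 0.6 * x2
y = rng.binomial(1, 1 / (1 + np.exp(-eta)))

X = sm.add_constant(np.column_stack([x1, x2]))
logit = sm.Logit(y, X).fit(disp=0)
b = logit.params[1:]

# LT standardization: latent SD = sqrt(var(linear predictor) + pi^2/3)
lp = X @ logit.params
sd_latent = np.sqrt(lp.var(ddof=1) + np.pi ** 2 / 3)
lt_std = b * np.array([x1.std(ddof=1), x2.std(ddof=1)]) / sd_latent

# OE-style approximation: standardized slopes of a linear fit to the 0/1 response
ols = sm.OLS(y, X).fit()
oe_std = ols.params[1:] * np.array([x1.std(ddof=1), x2.std(ddof=1)]) / y.std(ddof=1)

print("LT standardized:", np.round(lt_std, 3))
print("OE-style standardized:", np.round(oe_std, 3))
```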
ERIC Educational Resources Information Center
Xu, Xueli; von Davier, Matthias
2008-01-01
The general diagnostic model (GDM) utilizes located latent classes for modeling a multidimensional proficiency variable. In this paper, the GDM is extended by employing a log-linear model for multiple populations that assumes constraints on parameters across multiple groups. This constrained model is compared to log-linear models that assume…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yang, Qichun; Zhang, Xuesong; Xu, Xingya
Riverine carbon cycling is an important but insufficiently investigated component of the global carbon cycle. Analyses of environmental controls on riverine carbon cycling are critical for improved understanding of mechanisms regulating carbon processing and storage along the terrestrial-aquatic continuum. Here, we compile and analyze riverine dissolved organic carbon (DOC) concentration data from 1402 United States Geological Survey (USGS) gauge stations to examine the spatial variability and environmental controls of DOC concentrations in United States (U.S.) surface waters. DOC concentrations exhibit high spatial variability, with an average of 6.42 ± 6.47 mg C/L (mean ± standard deviation). In general, high DOC concentrations occur in the Upper Mississippi River basin and the Southeastern U.S., while low concentrations are mainly distributed in the Western U.S. Single-factor analysis indicates that slope of drainage areas, wetlands, forests, percentage of first-order streams, and instream nutrients (such as nitrogen and phosphorus) pronouncedly influence DOC concentrations, but the explanatory power of each bivariate model is lower than 35%. Analyses based on the general multi-linear regression models suggest DOC concentrations are jointly impacted by multiple factors. Soil properties mainly show positive correlations with DOC concentrations; forest and shrub lands have positive correlations with DOC concentrations, but urban area and croplands demonstrate negative impacts; total instream phosphorus and dam density correlate positively with DOC concentrations. Notably, the relative importance of these environmental controls varies substantially across major U.S. water resource regions. In addition, DOC concentrations and environmental controls also show significant variability from small streams to large rivers, which may be caused by changing carbon sources and removal rates by river order. In sum, our results reveal that general multi-linear regression analysis of twenty-one terrestrial and aquatic environmental factors can partially explain (56%) the DOC concentration variation. This study highlights the complexity of the interactions among these environmental factors in determining DOC concentrations, and thus calls for process-based, non-linear methodologies to constrain uncertainties in riverine DOC cycling.
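The general multi-linear regression step can be sketched with any handful of terrestrial and aquatic predictors; the variables, coefficients, and sample size below are placeholders rather than the USGS data, which involve twenty-one factors across 1402 stations.

```python
# Sketch of a general multi-linear regression of DOC concentration on a few
# environmental predictors, reporting the fraction of variance explained.
import numpy as np

rng = np.random.default_rng(2)
n = 300
wetland_pct = rng.uniform(0, 40, n)
forest_pct = rng.uniform(0, 80, n)
urban_pct = rng.uniform(0, 50, n)
total_p = rng.lognormal(-2.0, 0.5, n)
doc = (2.0 + 0.15 * wetland_pct + 0.03 * forest_pct
       - 0.04 * urban_pct + 8.0 * total_p + rng.normal(0, 2, n))

X = np.column_stack([np.ones(n), wetland_pct, forest_pct, urban_pct, total_p])
beta, *_ = np.linalg.lstsq(X, doc, rcond=None)
resid = doc - X @ beta
r2 = 1 - resid.var() / doc.var()
print("coefficients:", np.round(beta, 3))
print(f"variance explained: {100 * r2:.1f}%")
```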
Variational coarse-graining procedure for dynamic homogenization
NASA Astrophysics Data System (ADS)
Liu, Chenchen; Reina, Celia
2017-07-01
We present a variational coarse-graining framework for heterogeneous media in the spirit of FE2 methods, that allows for a seamless transition from the traditional static scenario to dynamic loading conditions, while being applicable to general material behavior as well as to discrete or continuous representations of the material and its deformation, e.g., finite element discretizations or atomistic systems. The method automatically delivers the macroscopic equations of motion together with the generalization of Hill's averaging relations to the dynamic setting. These include the expression of the macroscopic stresses and linear momentum as a function of the microscopic fields. We further demonstrate with a proof of concept example, that the proposed theoretical framework can be used to perform multiscale numerical simulations. The results are compared with standard single-scale finite element simulations, showcasing the capability of the method to capture the dispersive nature of the medium in the range of frequencies permitted by the multiscale strategy.
Effective quadrature formula in solving linear integro-differential equations of order two
NASA Astrophysics Data System (ADS)
Eshkuvatov, Z. K.; Kammuji, M.; Long, N. M. A. Nik; Yunus, Arif A. M.
2017-08-01
In this note, we solve the general form of Fredholm-Volterra integro-differential equations (IDEs) of order two with boundary conditions approximately and show that the proposed method is effective and reliable. Initially, the IDE is reduced to an integral equation of the third kind by using standard integration techniques and an identity between multiple and single integrals; truncated Legendre series are then used to approximate the unknown function. For the kernel integrals, we apply the Gauss-Legendre quadrature formula, and the collocation points are chosen as the roots of the Legendre polynomials. Finally, the integral equation of the third kind is reduced to a system of algebraic equations, and Gaussian elimination is applied to obtain the approximate solution. Numerical examples and comparisons with other methods reveal that the proposed method is very effective and dominates the others in many cases. A general theory of existence of the solution is also discussed.
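The quadrature-plus-collocation machinery can be illustrated on a simpler relative of the problem, a linear Fredholm integral equation of the second kind: Gauss-Legendre nodes serve both as collocation points and quadrature abscissae, and the discretization yields a dense linear system. The kernel, right-hand side, and λ below are illustrative choices; the paper treats second-order Fredholm-Volterra IDEs with boundary conditions and a truncated Legendre series for the unknown.

```python
# Sketch: solve u(x) = f(x) + lam * \int_{-1}^{1} K(x, t) u(t) dt by collocating
# at Gauss-Legendre nodes and using the same nodes/weights for the integral.
import numpy as np

N = 16
nodes, weights = np.polynomial.legendre.leggauss(N)

lam = 0.5
K = lambda x, t: np.exp(-np.abs(x - t))
f = lambda x: np.cos(np.pi * x)

# (I - lam * K_ij * w_j) u_j = f_i at the collocation points
A = np.eye(N) - lam * K(nodes[:, None], nodes[None, :]) * weights[None, :]
u = np.linalg.solve(A, f(nodes))
print("u at the collocation nodes:", np.round(u, 4))
```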
Obaidi, Leath Al; Mahlich, Jörg
2015-01-01
There are several methodologies that can be used for evaluating patients' perception of their quality of life. Most commonly, utilities are directly elicited by means of either the time-trade-off or the standard-gamble method. In both methods, risk attitudes determine the quality of life values. Quality of life values among 31 Austrian undergraduate students were elicited by means of the standard gamble approach. The impact of several variables such as gender, side job, length of study, and living arrangements on quality of life was estimated using different types of regression techniques (ordinary least squares, generalized linear model, Betafit). Significant evidence was found that female gender is associated with higher quality of life values in all specifications of our estimations. The observed gender differences in quality of life can be attributed to a higher degree of risk aversion among women. A higher risk aversion leads to a higher valuation of given health states and a potential gender bias in health economic evaluations. This result could have implications for health policy planners when it comes to budget allocation decisions.
Precision, accuracy and linearity of radiometer EML 105 whole blood metabolite biosensors.
Cobbaert, C; Morales, C; van Fessem, M; Kemperman, H
1999-11-01
The analytical performance of a new whole blood glucose and lactate electrode system (EML 105 analyser, Radiometer Medical A/S, Copenhagen, Denmark) was evaluated. Between-day coefficients of variation were ≤ 1.9% and ≤ 3.1% for glucose and lactate, respectively. Recoveries of glucose were 100 ± 10% using either aqueous or protein-based standards. Recoveries of lactate depended on the matrix, being underestimated in aqueous standards (approximately -10%) and 95-100% in standards containing 40 g/L albumin at lactate concentrations of 15 and 30 mmol/L. However, recoveries were high (up to 180%) at low lactate concentrations in protein-based standards. Carry-over, investigated according to National Committee for Clinical Laboratory Standards (NCCLS) guideline EP10-T2, was negligible (α = 0.01). Glucose and lactate biosensors equipped with new membranes were linear up to 60 and 30 mmol/L, respectively. However, linearity declined with daily use as membrane lifetime increased. We conclude that the Radiometer metabolite biosensor results are reproducible and do not suffer from specimen-related carry-over. However, lactate recovery depends on the protein content and the lactate concentration.
The linear sizes tolerances and fits system modernization
NASA Astrophysics Data System (ADS)
Glukhov, V. I.; Grinevich, V. A.; Shalay, V. V.
2018-04-01
The study addresses the urgent topic of ensuring the quality of technical products through the tolerancing of component parts. The aim of the paper is to develop alternatives for improving the system of linear size tolerances and dimensional fits in the international standard ISO 286-1. The tasks of the work are, firstly, to classify as linear sizes the additional linear coordinating sizes that determine the location of detail elements and, secondly, to justify the basic deviation of the tolerance interval for an element's linear size. Geometrical modeling of real detail elements, together with analytical and experimental methods, is used in the research. It is shown that linear coordinates are the dimensional basis of the elements' linear sizes. To standardize the accuracy of linear coordinating sizes in all accuracy classes, it is sufficient to select in the standardized tolerance system only one tolerance interval with symmetrical deviations: Js for internal dimensional elements (holes) and js for external elements (shafts). The main deviation of this coordinating tolerance is the average zero deviation, which coincides with the nominal value of the coordinating size. The other intervals of the tolerance system are retained for normalizing the accuracy of the elements' linear sizes, with a fundamental change: the basic deviation of all tolerance intervals becomes the maximum deviation corresponding to the material limit of the element, EI being the lower deviation for the sizes of internal elements (holes) and es the upper deviation for the sizes of external elements (shafts). It is the maximum-material sizes that are involved in the mating of dimensional elements (shafts and holes) and determine the type of fit.
Webb Hooper, Monica; Antoni, Michael H; Okuyemi, Kolawole; Dietz, Noella A; Resnicow, Ken
2017-03-01
This study tested the efficacy of group-based culturally specific cognitive behavioral therapy (CBT) for smoking cessation among low-income African Americans. Participants (N = 342; 63.8% male; M = 49.5 years old; M cigarettes per day = 18) were randomly assigned to eight sessions of group-based culturally specific or standard CBT, plus 8 weeks of transdermal nicotine patches. Biochemically verified 7-day point prevalence abstinence (ppa) was assessed at the end-of-therapy (ie, CBT) (EOT), and 3-, 6-, and 12-month follow-ups. Primary outcomes were the longitudinal intervention effect over the 12-month follow-up period, and 7-day ppa at the 6-month follow-up. Secondary outcomes included 7-day ppa at the EOT and 12-month follow-up, and intervention ratings. Generalized linear mixed modeling tested the longitudinal effect and logistic regression tested effects at specific timepoints. Generalized linear mixed modeling demonstrated a longitudinal effect of intervention condition. Specifically, 7-day ppa was two times (P = .02) greater following culturally specific CBT versus standard CBT when tested across all timepoints. Analyses by timepoint found no significant difference at 6 or 12 months, yet culturally specific CBT was efficacious at the EOT (62.5% vs. 51.5% abstinence, P = .05) and the 3-month follow-up (36.4% vs. 22.9% abstinence, P = .007). Finally, intervention ratings in both conditions were high, with no significant differences. Culturally specific CBT had a positive longitudinal effect on smoking cessation compared to a standard approach; however, the effects were driven by short-term successes. We recommend the use of group-based culturally specific CBT in this population when possible, and future research on methods to prevent long-term relapse. Culturally specific interventions are one approach to address smoking-related health disparities; however, evidence for their efficacy in African Americans is equivocal. Moreover, the methodological limitations of the existing literature preclude an answer to this fundamental question. We found a positive longitudinal effect of culturally specific CBT versus standard CBT for smoking cessation across the follow-up period. Analyses by assessment point revealed that the overall effect was driven by early successes. Best practices for treating tobacco use in this population should attend to ethnocultural factors, but when this is not possible, standard CBT is an alternative approach for facilitating long-term abstinence. © The Author 2016. Published by Oxford University Press on behalf of the Society for Research on Nicotine and Tobacco. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
Effect of step width manipulation on tibial stress during running.
Meardon, Stacey A; Derrick, Timothy R
2014-08-22
Narrow step width has been linked to variables associated with tibial stress fracture. The purpose of this study was to evaluate the effect of step width on bone stresses using a standardized model of the tibia. Fifteen runners ran at their preferred 5-km running velocity in three running conditions: preferred step width (PSW) and PSW ± 5% of leg length. Ten successful trials of force and 3-D motion data were collected. A combination of inverse dynamics, musculoskeletal modeling and beam theory was used to estimate stresses applied to the tibia using subject-specific anthropometrics and motion data. The tibia was modeled as a hollow ellipse. Multivariate analysis revealed that tibial stresses at the distal 1/3 of the tibia differed with step width manipulation (p=0.002). Compression on the posterior and medial aspect of the tibia was inversely related to step width such that as step width increased, compression on the surface of the tibia decreased (linear trend p=0.036 and 0.003). Similarly, tension on the anterior surface of the tibia decreased as step width increased (linear trend p=0.029). Widening step width linearly reduced shear stress at all 4 sites (p<0.001 for all). The data from this study suggest that stresses experienced by the tibia during running were influenced by step width when using a standardized model of the tibia. Wider step widths were generally associated with reduced loading of the tibia and may benefit runners at risk of or experiencing stress injury at the tibia, especially if they present with a crossover running style. Copyright © 2014 Elsevier Ltd. All rights reserved.
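The hollow-ellipse beam model combines axial and bending stresses through the cross-section's area and second moments; the section dimensions and loads below are illustrative values, not subject-specific data from the study.

```python
# Sketch of normal stress at a point on a hollow elliptical cross-section under
# combined axial force and bending moments (simple beam theory).
import numpy as np

a_o, b_o = 0.012, 0.010     # outer semi-axes (m), illustrative
a_i, b_i = 0.007, 0.005     # inner semi-axes (m), illustrative

A = np.pi * (a_o * b_o - a_i * b_i)                  # cross-sectional area
I_x = np.pi / 4 * (a_o * b_o**3 - a_i * b_i**3)      # second moment about x-axis
I_y = np.pi / 4 * (a_o**3 * b_o - a_i**3 * b_i)      # second moment about y-axis

F_axial = -2500.0           # N (compression negative), illustrative
M_x, M_y = 40.0, 25.0       # N*m bending moments, illustrative

def normal_stress(x, y):
    """Stress (Pa) at surface point (x, y); tension positive."""
    return F_axial / A + M_x * y / I_x + M_y * x / I_y

# anterior (0, +b_o) and posterior (0, -b_o) surface points
print("anterior:", normal_stress(0.0, b_o) / 1e6, "MPa")
print("posterior:", normal_stress(0.0, -b_o) / 1e6, "MPa")
```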
Thomas, Michael L; Kaufmann, Christopher N; Palmer, Barton W; Depp, Colin A; Martin, Averria Sirkin; Glorioso, Danielle K; Thompson, Wesley K; Jeste, Dilip V
2016-08-01
Studies of aging usually focus on trajectories of physical and cognitive function, with far less emphasis on overall mental health, despite its impact on general health and mortality. This study examined linear and nonlinear trends of physical, cognitive, and mental health over the entire adult lifespan. Cross-sectional data were obtained from 1,546 individuals aged 21-100 years, selected using random digit dialing for the Successful AGing Evaluation (SAGE) study, a structured multicohort investigation that included telephone interviews and in-home surveys of community-based adults without dementia. Data were collected from 1/26/2010 to 10/07/2011 targeting participants aged 50-100 years and from 6/25/2012 to 7/15/2013 targeting participants aged 21-100 years with an emphasis on adding younger individuals. Data included self-report measures of physical health, measures of both positive and negative attributes of mental health, and a phone interview-based measure of cognition. Comparison of age cohorts using polynomial regression suggested a possible accelerated deterioration in physical and cognitive functioning, averaging 1.5 to 2 standard deviations over the adult lifespan. In contrast, there appeared to be a linear improvement of about 1 standard deviation in various attributes of mental health over the same life period. These cross-sectional findings suggest the possibility of a linear improvement in mental health beginning in young adulthood rather than a U-shaped curve reported in some prior studies. Lifespan research combining psychosocial and biological markers may improve our understanding of resilience to mental disability in older age and lead to broad-based interventions promoting mental health in all age groups. © Copyright 2016 Physicians Postgraduate Press, Inc.
Incorporating inductances in tissue-scale models of cardiac electrophysiology
NASA Astrophysics Data System (ADS)
Rossi, Simone; Griffith, Boyce E.
2017-09-01
In standard models of cardiac electrophysiology, including the bidomain and monodomain models, local perturbations can propagate at infinite speed. We address this unrealistic property by developing a hyperbolic bidomain model that is based on a generalization of Ohm's law with a Cattaneo-type model for the fluxes. Further, we obtain a hyperbolic monodomain model in the case that the intracellular and extracellular conductivity tensors have the same anisotropy ratio. In one spatial dimension, the hyperbolic monodomain model is equivalent to a cable model that includes axial inductances, and the relaxation times of the Cattaneo fluxes are strictly related to these inductances. A purely linear analysis shows that the inductances are negligible, but models of cardiac electrophysiology are highly nonlinear, and linear predictions may not capture the fully nonlinear dynamics. In fact, contrary to the linear analysis, we show that for simple nonlinear ionic models, an increase in conduction velocity is obtained for small and moderate values of the relaxation time. A similar behavior is also demonstrated with biophysically detailed ionic models. Using the Fenton-Karma model along with a low-order finite element spatial discretization, we numerically analyze differences between the standard monodomain model and the hyperbolic monodomain model. In a simple benchmark test, we show that the propagation of the action potential is strongly influenced by the alignment of the fibers with respect to the mesh in both the parabolic and hyperbolic models when using relatively coarse spatial discretizations. Accurate predictions of the conduction velocity require computational mesh spacings on the order of a single cardiac cell. We also compare the two formulations in the case of spiral break up and atrial fibrillation in an anatomically detailed model of the left atrium, and we examine the effect of intracellular and extracellular inductances on the virtual electrode phenomenon.
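The structural change relative to the standard parabolic models can be written schematically: the conductive flux obeys a Cattaneo-type relaxation law rather than Ohm's law, and substituting it into the usual monodomain current balance yields a hyperbolic (telegraph-type) equation. The form below is a sketch in standard monodomain notation with a single relaxation time τ; signs and coefficients are generic and may differ in detail from the paper's derivation.

```latex
% Cattaneo-type flux law replacing Ohm's law (schematic form):
\tau \, \partial_t \mathbf{q} + \mathbf{q} = -\sigma_m \nabla V
% Monodomain current balance:
\chi \left( C_m \, \partial_t V + I_{\mathrm{ion}}(V, \mathbf{w}) \right)
  = -\nabla \cdot \mathbf{q} + I_{\mathrm{stim}}
% Eliminating q yields a hyperbolic (telegraph-type) equation for V that
% reduces to the standard parabolic monodomain model as \tau \to 0.
```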
Zhao, Yingfeng; Liu, Sanyang
2016-01-01
We present a practical branch and bound algorithm for globally solving the generalized linear multiplicative programming problem with multiplicative constraints. To solve the problem, a relaxation programming problem that is equivalent to a linear program is constructed by utilizing a new two-phase relaxation technique. In the algorithm, lower and upper bounds are simultaneously obtained by solving linear relaxation programming problems. Global convergence is proved, and results on some sample examples and a small random experiment show that the proposed algorithm is feasible and efficient.
Gasson, Natalie; Johnson, Andrew R.; Booth, Leon; Loftus, Andrea M.
2018-01-01
This study examined whether standard cognitive training, tailored cognitive training, transcranial direct current stimulation (tDCS), standard cognitive training + tDCS, or tailored cognitive training + tDCS improved cognitive function and functional outcomes in participants with PD and mild cognitive impairment (PD-MCI). Forty-two participants with PD-MCI were randomized to one of six groups: (1) standard cognitive training, (2) tailored cognitive training, (3) tDCS, (4) standard cognitive training + tDCS, (5) tailored cognitive training + tDCS, or (6) a control group. Interventions lasted 4 weeks, with cognitive and functional outcomes measured at baseline, post-intervention, and follow-up. The trial was registered with the Australian New Zealand Clinical Trials Registry (ANZCTR: 12614001039673). While controlling for moderator variables, Generalized Linear Mixed Models (GLMMs) showed that when compared to the control group, the intervention groups demonstrated variable statistically significant improvements across executive function, attention/working memory, memory, language, activities of daily living (ADL), and quality of life (QOL; Hedge's g range = 0.01 to 1.75). More outcomes improved for the groups that received standard or tailored cognitive training combined with tDCS. Participants with PD-MCI receiving cognitive training (standard or tailored) or tDCS demonstrated significant improvements on cognitive and functional outcomes, and combining these interventions provided greater therapeutic effects. PMID:29780572
[Static Retinal Vessel Analysis in Population-based Study SHIP-Trend].
Theophil, Constanze; Jürgens, Clemens; Großjohann, Rico; Kempin, Robert; Ittermann, Till; Nauck, Matthias; Völzke, Henry; Tost, Frank H W
2017-08-24
Background: Interdisciplinary investigations of possible connections between general diseases and ophthalmological changes are difficult to perform in the clinical environment, but they are gaining importance as a result of the age-related increase in chronic diseases. The collection of health-related parameters in the Study of Health in Pomerania (SHIP) project allows conclusions to be drawn for the general population. Methods: The population-based SHIP-Trend study was conducted between 2008 and 2012 in Greifswald. The baseline cohort included 4420 subjects (response 50.1%) aged 20 to 84 years. Pre-existing arterial hypertension, diabetes mellitus and smoking status were assessed with a standardized questionnaire; blood pressure was measured and HbA1c was determined in the laboratory. The vascular diameters of retinal arterioles and venules were determined from non-mydriatic fundus images, and the retinal arterial (CRAE) and venous (CRVE) equivalents were calculated from them. The association of diabetes mellitus, HbA1c, smoking status and blood pressure with the retinal vascular parameters was tested with linear regression models adjusted for age and sex. Results: In 3218 subjects with evaluable standardized fundus photographs, significant associations of elevated HbA1c (> 6.5%), smoking status, and systolic and diastolic blood pressure with the retinal vessel widths CRAE and CRVE were found. Anamnestic diabetes mellitus, on the other hand, was not associated with any of the vascular parameters. Conclusion: This study reveals a relevant correlation between general diseases and retinal blood flow in the eye. General diseases can therefore induce ophthalmological changes, and eye examination can provide information for the assessment of general diseases. Georg Thieme Verlag KG Stuttgart · New York.
Frequency-domain full-waveform inversion with non-linear descent directions
NASA Astrophysics Data System (ADS)
Geng, Yu; Pan, Wenyong; Innanen, Kristopher A.
2018-05-01
Full-waveform inversion (FWI) is a highly non-linear inverse problem, normally solved iteratively, with each iteration involving an update constructed through linear operations on the residuals. Incorporating a flexible degree of non-linearity within each update may have important consequences for convergence rates, determination of low model wavenumbers and discrimination of parameters. We examine one approach for doing so, wherein higher order scattering terms are included within the sensitivity kernel during the construction of the descent direction, adjusting it away from that of the standard Gauss-Newton approach. These scattering terms are naturally admitted when we construct the sensitivity kernel by varying not the current but the to-be-updated model at each iteration. Linear and/or non-linear inverse scattering methodologies allow these additional sensitivity contributions to be computed from the current data residuals within any given update. We show that in the presence of pre-critical reflection data, the error in a second-order non-linear update to a background of s_0 is, in our scheme, proportional to at most (Δs/s_0)^3 in the actual parameter jump Δs causing the reflection. In contrast, the error in a standard Gauss-Newton FWI update is proportional to (Δs/s_0)^2. For numerical implementation of more complex cases, we introduce a non-linear frequency-domain scheme, with an inner and an outer loop. A perturbation is determined from the data residuals within the inner loop, and a descent direction based on the resulting non-linear sensitivity kernel is computed in the outer loop. We examine the response of this non-linear FWI using acoustic single-parameter synthetics derived from the Marmousi model. The inverted results vary depending on data frequency ranges and initial models, but we conclude that the non-linear FWI has the capability to generate high-resolution model estimates in both shallow and deep regions, and to converge rapidly, relative to a benchmark FWI approach involving the standard gradient.
Sub-optimal control of fuzzy linear dynamical systems under granular differentiability concept.
Mazandarani, Mehran; Pariz, Naser
2018-05-01
This paper deals with sub-optimal control of a fuzzy linear dynamical system. The aim is to keep the state variables of the fuzzy linear dynamical system close to zero in an optimal manner. In the fuzzy dynamical system, the fuzzy derivative is considered as the granular derivative; and all the coefficients and initial conditions can be uncertain. The criterion for assessing the optimality is regarded as a granular integral whose integrand is a quadratic function of the state variables and control inputs. Using the relative-distance-measure (RDM) fuzzy interval arithmetic and calculus of variations, the optimal control law is presented as the fuzzy state variables feedback. Since the optimal feedback gains are obtained as fuzzy functions, they need to be defuzzified. This will result in the sub-optimal control law. This paper also sheds light on the restrictions imposed by the approaches which are based on fuzzy standard interval arithmetic (FSIA), and use strongly generalized Hukuhara and generalized Hukuhara differentiability concepts for obtaining the optimal control law. The granular eigenvalues notion is also defined. Using an RLC circuit mathematical model, it is shown that, due to their unnatural behavior in the modeling phenomenon, the FSIA-based approaches may obtain some eigenvalues sets that might be different from the inherent eigenvalues set of the fuzzy dynamical system. This is, however, not the case with the approach proposed in this study. The notions of granular controllability and granular stabilizability of the fuzzy linear dynamical system are also presented in this paper. Moreover, a sub-optimal control for regulating a Boeing 747 in longitudinal direction with uncertain initial conditions and parameters is gained. In addition, an uncertain suspension system of one of the four wheels of a bus is regulated using the sub-optimal control introduced in this paper. Copyright © 2018 ISA. Published by Elsevier Ltd. All rights reserved.
A comparison of linear and nonlinear statistical techniques in performance attribution.
Chan, N H; Genovese, C R
2001-01-01
Performance attribution is usually conducted under the linear framework of multifactor models. Although commonly used by practitioners in finance, linear multifactor models are known to be less than satisfactory in many situations. After a brief survey of nonlinear methods, nonlinear statistical techniques are applied to performance attribution of a portfolio constructed from a fixed universe of stocks using factors derived from some commonly used cross sectional linear multifactor models. By rebalancing this portfolio monthly, the cumulative returns for procedures based on standard linear multifactor model and three nonlinear techniques-model selection, additive models, and neural networks-are calculated and compared. It is found that the first two nonlinear techniques, especially in combination, outperform the standard linear model. The results in the neural-network case are inconclusive because of the great variety of possible models. Although these methods are more complicated and may require some tuning, toolboxes are developed and suggestions on calibration are proposed. This paper demonstrates the usefulness of modern nonlinear statistical techniques in performance attribution.
Standards for Standardized Logistic Regression Coefficients
ERIC Educational Resources Information Center
Menard, Scott
2011-01-01
Standardized coefficients in logistic regression analysis have the same utility as standardized coefficients in linear regression analysis. Although there has been no consensus on the best way to construct standardized logistic regression coefficients, there is now sufficient evidence to suggest a single best approach to the construction of a…
NASA Astrophysics Data System (ADS)
Ünver, H.
2017-02-01
A main focus of this research paper is to investigate how the ‘digital inequality’ or ‘digital divide’ is explained by the economic level and education standard of about 150 countries worldwide. Inequality in GDP per capita, literacy and the so-called UN Education Index seem to be important factors affecting ICT usage, in particular Internet penetration, mobile phone usage and also mobile Internet services. Empirical methods and (multivariate) regression analysis with linear and non-linear functions are useful for measuring crucial factors in a country's or culture's progress towards becoming an information and knowledge-based society. Overall, the study concludes that convergence in ICT usage proceeds worldwide faster than convergence in economic wealth and education in general. The results, based on a large data analysis, show that the digital divide declined over more than a decade between 2000 and 2013, since more people worldwide use mobile phones and the Internet. But a high digital inequality, explained to a significant extent by the functional relation between technology penetration rates, education level and average income, still exists. Furthermore, the study supports actions by countries at the UN/G20/OECD level to provide ICT access to all people for a more balanced world in the context of sustainable development, postulating that policymakers need to promote comprehensive education worldwide by means of ICT.
Peng, Lingling; Li, Yi; Feng, Hao
2017-07-14
Reference crop evapotranspiration (ETo) is a critically important parameter for climatological, hydrological and agricultural management. The FAO56 Penman-Monteith (PM) equation has been recommended as the standardized ETo (ETo,s) equation, but it has high requirements for climatic data. There is a practical need to find the best alternative method for estimating ETo in regions where full climatic data are lacking. A comprehensive comparison was conducted of the spatiotemporal variations, relative errors, standard deviations and Nash-Sutcliffe efficiency coefficients of monthly or annual ETo,s and ETo,i (i = 1, 2, …, 10) values estimated by 10 selected methods (i.e., Irmak et al., Makkink, Priestley-Taylor, Hargreaves-Samani, Droogers-Allen, Berti et al., Doorenbos-Pruitt, Wright and Valiantzas, respectively), using data at 552 sites over 1961-2013 in mainland China. The method proposed by Berti et al. (2014) was selected as the best alternative to FAO56-PM because it is simple to compute, uses only temperature data, has generally good accuracy in describing the spatiotemporal characteristics of ETo,s in the different sub-regions and in mainland China as a whole, and correlates linearly with the FAO56-PM method very well. The parameters of the linear correlations between the ETo values of the two methods were calibrated for each site, with the smallest coefficient of determination being 0.87.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Barbareschi, Daniele; et al.
We describe a general purpose detector ("Fourth Concept") at the International Linear Collider (ILC) that can measure with high precision all the fundamental fermions and bosons of the standard model, and thereby access all known physics processes. The 4th concept consists of four basic subsystems: a pixel vertex detector for high precision vertex definitions, impact parameter tagging and near-beam occupancy reduction; a Time Projection Chamber for robust pattern recognition augmented with three high-precision pad rows for precision momentum measurement; a high precision multiple-readout fiber calorimeter, complemented with an EM dual-readout crystal calorimeter, for the energy measurement of hadrons, jets, electrons, photons, missing momentum, and the tagging of muons; and an iron-free dual-solenoid muon system for the inverse direction bending of muons in a gas volume to achieve high acceptance and good muon momentum resolution. The pixel vertex chamber, TPC and calorimeter are inside the solenoidal magnetic field. All four subsystems separately achieve the important scientific goal of being 2-to-10 times better than the already excellent LEP detectors, ALEPH, DELPHI, L3 and OPAL. All four basic subsystems contribute to the identification of standard model partons, some in unique ways, such that consequent physics studies are cogent. As an integrated detector concept, we achieve comprehensive physics capabilities that put all conceivable physics at the ILC within reach.
NASA Technical Reports Server (NTRS)
Taylor, Arthur C., III; Hou, Gene W.
1996-01-01
An incremental iterative formulation, together with the well-known spatially split approximate-factorization algorithm, is presented for solving the large, sparse systems of linear equations that are associated with aerodynamic sensitivity analysis. This formulation is also known as the 'delta' or 'correction' form. For the smaller two-dimensional problems, a direct method can be applied to solve these linear equations in either the standard or the incremental form, in which case the two are equivalent. However, iterative methods are needed for larger two-dimensional and three-dimensional applications because direct methods require more computer memory than is currently available. Iterative methods for solving these equations in the standard form are generally unsatisfactory due to an ill-conditioned coefficient matrix; this problem is overcome when these equations are cast in the incremental form. The methodology is successfully implemented and tested using an upwind cell-centered finite-volume formulation applied in two dimensions to the thin-layer Navier-Stokes equations for external flow over an airfoil. In three dimensions this methodology is demonstrated with a marching-solution algorithm for the Euler equations to calculate supersonic flow over the High-Speed Civil Transport configuration (HSCT 24E). The sensitivity derivatives obtained with the incremental iterative method from a marching Euler code are used in a design-improvement study of the HSCT configuration that involves thickness, camber, and planform design variables.
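The 'delta' (correction) form can be illustrated on a generic linear system: each sweep solves an approximate operator for the correction driven by the current residual of the standard form, then updates the unknowns. Here the spatially split approximate factorization is replaced by a simple lower-triangular stand-in purely for illustration.

```python
# Sketch of the incremental ("delta") iterative form for A x = b:
# repeatedly solve  M * dx = r  with an approximate operator M, then x += dx.
# M below is just the lower triangle of A, a stand-in for the spatially split
# approximate factorization used in the paper.
import numpy as np
from scipy.linalg import solve_triangular

rng = np.random.default_rng(3)
n = 50
A = rng.normal(size=(n, n)) + n * np.eye(n)   # diagonally dominant test matrix
b = rng.normal(size=n)

M = np.tril(A)                                # approximate operator
x = np.zeros(n)
for k in range(200):
    r = b - A @ x                             # residual of the standard form
    dx = solve_triangular(M, r, lower=True)   # incremental (correction) solve
    x += dx
    if np.linalg.norm(r) < 1e-10:
        break
print("iterations:", k + 1, "residual:", np.linalg.norm(b - A @ x))
```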
Liu, Lan; Jiang, Tao
2007-01-01
With the launch of the international HapMap project, the haplotype inference problem has attracted a great deal of attention in the computational biology community recently. In this paper, we study the question of how to efficiently infer haplotypes from genotypes of individuals related by a pedigree without mating loops, assuming that the hereditary process was free of mutations (i.e. the Mendelian law of inheritance) and recombinants. We model the haplotype inference problem as a system of linear equations as in [10] and present an (optimal) linear-time (i.e. O(mn) time) algorithm to generate a particular solution (a particular solution of any linear system is an assignment of numerical values to the variables in the system which satisfies the equations in the system) to the haplotype inference problem, where m is the number of loci (or markers) in a genotype and n is the number of individuals in the pedigree. Moreover, the algorithm also provides a general solution (a general solution of any linear system is denoted by the span of a basis in the solution space to its associated homogeneous system, offset from the origin by a vector, namely by any particular solution; a general solution for ZRHC is very useful in practice because it allows the end user to efficiently enumerate all solutions for ZRHC and perform tasks such as random sampling) in O(mn^2) time, which is optimal because the size of a general solution could be as large as Θ(mn^2). The key ingredients of our construction are (i) a fast consistency checking procedure for the system of linear equations introduced in [10], based on a careful investigation of the relationship between the equations, and (ii) a novel linear-time method for solving linear equations without invoking the Gaussian elimination method. Although such a fast method for solving equations is not known for general systems of linear equations, we take advantage of the underlying loop-free pedigree graph and some special properties of the linear equations.
NASA Astrophysics Data System (ADS)
Wu, Bofeng; Huang, Chao-Guang
2018-04-01
The 1/r expansion in the distance to the source is applied to linearized f(R) gravity, and its multipole expansion in the radiation field with irreducible Cartesian tensors is presented. Then, the energy, momentum, and angular momentum in the gravitational waves are provided for linearized f(R) gravity. All of these results have two parts, which are associated with the tensor part and the scalar part in the multipole expansion of linearized f(R) gravity, respectively. The former is the same as that in General Relativity, and the latter, as the correction to the result in General Relativity, is caused by the massive scalar degree of freedom and plays an important role in distinguishing General Relativity and f(R) gravity.
Alcohol use in the military: associations with health and wellbeing.
Waller, Michael; McGuire, Annabel C L; Dobson, Annette J
2015-07-28
This study assessed the extent to which alcohol consumption in a military group differed from the general population, and how alcohol affected the military group's health and social functioning. A cross sectional survey of military personnel (n = 5311) collected self-reported data on alcohol use (AUDIT scale) and general health, role limitations because of physical health problems (role physical), and social functioning scores (SF36 subscales). Logistic regression was used to compare drinking behaviours between the military sample and a general population sample, using the categories risky drinkers (>2 units per day), low risk drinkers (≤2 standard drinks per day) and abstainers. Groups in the military sample with the highest levels of alcohol misuse (harmful drinking AUDIT ≥ 16, alcohol dependence AUDIT ≥ 20, and binge drinking) were also identified. Linear regression models were then used to assess the association between alcohol misuse and SF36 scores. There were fewer risky drinkers in the military sample than in the general population sample. There were also fewer abstainers, but more people who drank at a lower risk level (≤2 standard drinks per day), than in a sample of the general population. Harmful drinking and alcohol dependence were most commonly observed in men, younger age groups, non-commissioned officers and lower ranks as well as reserve and ex-serving groups. Alcohol misuse was clearly associated with poorer general health scores, more role limitations because of physical health problems, and lower social functioning. Although risky drinking was lower in the military group than in the general population, drinking was associated with poorer health, more limitations because of physical health problems, and poorer social functioning in Defence members. These results highlight the potential benefits for Defence forces in reducing alcohol use among members, in both those groups identified at highest risk, and across the military workforce as a whole.
Generalized Bezout's Theorem and its applications in coding theory
NASA Technical Reports Server (NTRS)
Berg, Gene A.; Feng, Gui-Liang; Rao, T. R. N.
1996-01-01
This paper presents a generalized Bezout theorem which can be used to determine a tighter lower bound of the number of distinct points of intersection of two or more curves for a large class of plane curves. A new approach to determine a lower bound on the minimum distance (and also the generalized Hamming weights) for algebraic-geometric codes defined from a class of plane curves is introduced, based on the generalized Bezout theorem. Examples of more efficient linear codes are constructed using the generalized Bezout theorem and the new approach. For d = 4, the linear codes constructed by the new construction are better than or equal to the known linear codes. For d greater than 5, these new codes are better than the known codes. The Klein code over GF(2(sup 3)) is also constructed.
Williams, Rachel E; Arabi, Mazdak; Loftis, Jim; Elmund, G Keith
2014-09-01
Implementation of numeric nutrient standards in Colorado has prompted a need for greater understanding of human impacts on ambient nutrient levels. This study explored the variability of annual nutrient concentrations due to upstream anthropogenic influences and developed a mathematical expression for the number of samples required to estimate median concentrations for standard compliance. A procedure grounded in statistical hypothesis testing was developed to estimate the number of annual samples required at monitoring locations while taking into account the difference between the median concentrations and the water quality standard for a lognormal population. For the Cache La Poudre River in northern Colorado, the relationship between the median and standard deviation of total N (TN) and total P (TP) concentrations and the upstream point and nonpoint concentrations and general hydrologic descriptors was explored using multiple linear regression models. Very strong relationships were evident between the upstream anthropogenic influences and annual medians for TN and TP (R² > 0.85, p < 0.001) and corresponding standard deviations (R² > 0.7, p < 0.001). Sample sizes required to demonstrate (non)compliance with the standard depend on the measured water quality conditions. When the median concentration differs from the standard by >20%, few samples are needed to reach a 95% confidence level. When the median is within 20% of the corresponding water quality standard, however, the required sample size increases rapidly, and hundreds of samples may be required. Copyright © by the American Society of Agronomy, Crop Science Society of America, and Soil Science Society of America, Inc.
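The flavor of such a sample-size expression can be sketched under textbook assumptions: log-transform the concentrations so that testing the median against the standard becomes a test on the mean of the logs, then apply the usual normal-theory formula. The exact expression derived in the paper may differ; the log-scale standard deviation, error rates, and medians below are placeholders.

```python
# Sketch: samples needed to show the median of a lognormal population differs
# from a standard, via a one-sided test on the mean of log-concentrations.
# n ~ ((z_{1-alpha} + z_{1-beta}) * sigma_log / delta)^2, delta = |ln(median/standard)|
# This is the textbook normal-theory form, not necessarily the paper's expression.
import numpy as np
from scipy.stats import norm

def samples_needed(median, standard, sigma_log, alpha=0.05, power=0.8):
    delta = abs(np.log(median / standard))
    if delta == 0:
        return np.inf
    z = norm.ppf(1 - alpha) + norm.ppf(power)
    return int(np.ceil((z * sigma_log / delta) ** 2))

# medians within 20% of the standard require many more samples
for median in [0.6, 0.75, 0.95, 1.05]:
    print(median, samples_needed(median, standard=1.0, sigma_log=0.6))
```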
Burdick, Summer M.; Hightower, Joseph E.; Bacheler, Nathan M.; Paramore, Lee M.; Buckel, Jeffrey A.; Pollock, Kenneth H.
2010-01-01
Estimating the selectivity patterns of various fishing gears is a critical component of fisheries stock assessment due to the difficulty in obtaining representative samples from most gears. We used short-term recoveries (n = 3587) of tagged red drum Sciaenops ocellatus to directly estimate age- and length-based selectivity patterns using generalized linear models. The most parsimonious models were selected using AIC, and standard deviations were estimated using simulations. Selectivity of red drum was dependent upon the regulation period in which the fish was caught, the gear used to catch the fish (i.e., hook-and-line, gill nets, pound nets), and the fate of the fish upon recovery (i.e., harvested or released); models including all first-order interactions between main effects outperformed models without interactions. Selectivity of harvested fish was generally dome-shaped and shifted toward larger, older fish in response to regulation changes. Selectivity of caught-and-released red drum was highest on the youngest and smallest fish in the early and middle regulation periods, but increased on larger, legal-sized fish in the late regulation period. These results suggest that catch-and-release mortality has consistently been high for small, young red drum, but has recently become more common in larger, older fish. This method of estimating selectivity from short-term tag recoveries is valuable because it is simpler than full tag-return models, and may be more robust because yearly fishing and natural mortality rates do not need to be modeled and estimated.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gómez-Valent, Adrià; Karimkhani, Elahe; Solà, Joan, E-mail: adriagova@ecm.ub.edu, E-mail: e.karimkhani91@basu.ac.ir, E-mail: sola@ecm.ub.edu
We determine the Hubble expansion and the general cosmic perturbation equations for a general system consisting of self-conserved matter, ρ_m, and self-conserved dark energy (DE), ρ_D. While at the background level the two components are non-interacting, they do interact at the perturbations level. We show that the coupled system of matter and DE perturbations can be transformed into a single, third order, matter perturbation equation, which reduces to the (derivative of the) standard one in the case that the DE is just a cosmological constant. As a nontrivial application we analyze a class of dynamical models whose DE density ρ_D(H) consists of a constant term, C_0, and a series of powers of the Hubble rate. These models were previously analyzed from the point of view of dynamical vacuum models, but here we treat them as self-conserved DE models with a dynamical equation of state. We fit them to the wealth of expansion history and linear structure formation data and compare their fit quality with that of the concordance ΛCDM model. Those with C_0 = 0 include the so-called 'entropic-force' and 'QCD-ghost' DE models, as well as the pure linear model ρ_D ∼ H, all of which appear strongly disfavored. The models with C_0 ≠ 0, in contrast, emerge as promising dynamical DE candidates whose phenomenological performance is highly competitive with the rigid Λ-term inherent to the ΛCDM.
An Inquiry-Based Linear Algebra Class
ERIC Educational Resources Information Center
Wang, Haohao; Posey, Lisa
2011-01-01
Linear algebra is a standard undergraduate mathematics course. This paper presents an overview of the design and implementation of an inquiry-based teaching material for the linear algebra course which emphasizes discovery learning, analytical thinking and individual creativity. The inquiry-based teaching material is designed to fit the needs of a…
Angle-domain inverse scattering migration/inversion in isotropic media
NASA Astrophysics Data System (ADS)
Li, Wuqun; Mao, Weijian; Li, Xuelei; Ouyang, Wei; Liang, Quan
2018-07-01
The classical seismic asymptotic inversion can be transformed into a problem of inversion of the generalized Radon transform (GRT). In such methods, the combined parameters are linearly attached to the scattered wavefield by the Born approximation and recovered by applying an inverse GRT operator to the scattered wavefield data. The typical GRT-style true-amplitude inversion procedure contains an amplitude compensation process after the weighted migration, dividing by an illumination-associated matrix whose elements are integrals over scattering angles. It is to some extent intuitive to perform the generalized linear inversion and the inversion of the GRT together in this process for direct inversion. However, it is imprecise to carry out such an operation when the illumination at the image point is limited, which easily leads to inaccuracy and instability of the matrix. This paper formulates the GRT true-amplitude inversion framework in an angle-domain version, which naturally removes the external integral term related to the illumination that appears in the conventional case. We solve the linearized integral equation for combined parameters at different fixed scattering angle values. With this step, we obtain high-quality angle-domain common-image gathers (CIGs) in the migration loop which provide correct amplitude-versus-angle (AVA) behavior and a reasonable illumination range for subsurface image points. Then we deal with the over-determined problem of solving for each parameter in the combination by a standard optimization operation. The angle-domain GRT inversion method avoids calculating the inaccurate and unstable illumination matrix. Compared with the conventional method, the angle-domain method can obtain more accurate amplitude information and a wider amplitude-preserved range. Several model tests demonstrate its effectiveness and practicability.
Capisizu, Ana; Aurelian, Sorina; Zamfirescu, Andreea; Omer, Ioana; Haras, Monica; Ciobotaru, Camelia; Onose, Liliana; Spircu, Tiberiu; Onose, Gelu
2015-01-01
To assess the impact of socio-demographic and comorbidity factors, and of quantified depressive symptoms, on disability in inpatients. Observational cross-sectional study including 80 elderly patients (16 men, 64 women; mean age 72.48 years; standard deviation 9.95 years) admitted to the Geriatrics Clinic of "St. Luca" Hospital, Bucharest, between May and July 2012. We used the Functional Independence Measure (FIM), the Geriatric Depression Scale (GDS) and an array of socio-demographic and poly-pathology parameters. Statistical analysis included Wilcoxon and Kruskal-Wallis tests for ordinal variables, linear bivariate correlations, general linear model analysis, and ANOVA. FIM scores were negatively correlated with age (R=-0.301; 95% CI -0.439 to -0.163; p=0.007); GDS scores had a statistically significant negative correlation with FIM scores (R=-0.322; 95% CI -0.324 to -0.052; p=0.004). A general linear model including other variables (gender, age, provenance, matrimonial state, living conditions, education, and number of chronic illnesses) as factors found living conditions (p=0.027) and the combination of matrimonial state and gender (p=0.004) to significantly influence FIM scores. ANOVA showed significant differences in FIM scores stratified by the number of chronic diseases (p=0.035). Our study objectively demonstrated the negative impact of depression on functional status; interestingly, education had no influence on FIM scores; living conditions and the combination of matrimonial state and gender had an important impact: patients with living spouses showed better functional scores than divorced or widowed patients; the number of chronic diseases also affected FIM scores, which were lower in patients with significant polypathology. These findings should be considered when designing geriatric rehabilitation programs, especially for home care, including skilled care.
A Few New 2+1-Dimensional Nonlinear Dynamics and the Representation of Riemann Curvature Tensors
NASA Astrophysics Data System (ADS)
Wang, Yan; Zhang, Yufeng; Zhang, Xiangzhi
2016-09-01
We first introduce a linear stationary equation with a quadratic operator in ∂x and ∂y, and then a linear evolution equation given by N-order polynomials of eigenfunctions. As applications, by taking N=2 we derive a (2+1)-dimensional generalized linear heat equation with two constant parameters associated with a symmetric space. When taking N=3, a pair of generalized Kadomtsev-Petviashvili equations with the same eigenvalues as in the case of N=2 is generated. Similarly, a second-order flow associated with a homogeneous space is derived from the integrability condition of the two linear equations, which is a (2+1)-dimensional hyperbolic equation. When N=3, the third-order flow associated with the homogeneous space is generated, which is a pair of new generalized Kadomtsev-Petviashvili equations. Finally, as an application of a Hermitian symmetric space, we establish a pair of spectral problems to obtain a new (2+1)-dimensional generalized Schrödinger equation, which is expressed in terms of the Riemann curvature tensors.
Next Linear Collider Home Page
Welcome to the Next Linear Collider (NLC) home page, which describes linear colliders in general and this next-generation linear collider project's mission and design ideas.
Control of Distributed Parameter Systems
1990-08-01
variant of the general Lotka-Volterra model for interspecific competition. The variant described the emergence of one subpopulation from another as a... A unified approximation framework for parameter estimation in general linear PDE models has been completed. This framework has provided the theoretical basis for a number of...
Preliminary SPE Phase II Far Field Ground Motion Estimates
DOE Office of Scientific and Technical Information (OSTI.GOV)
Steedman, David W.
2014-03-06
Phase II of the Source Physics Experiment (SPE) program will be conducted in alluvium. Several candidate sites were identified. These include existing large diameter borehole U1e. One criterion for acceptance is expected far field ground motion. In June 2013 we were requested to estimate peak response 2 km from the borehole due to the largest planned SPE Phase II experiment: a contained 50-Ton event. The cube-root scaled range for this event is 5423 m/kt^(1/3). The generally accepted first order estimate of ground motions from an explosive event is to refer to the standard data base for explosive events (Perrett and Bass, 1975). This reference is a compilation and analysis of ground motion data from numerous nuclear and chemical explosive events from Nevada National Security Site (formerly the Nevada Test Site, or NTS) and other locations. The data were compiled and analyzed for various geologic settings including dry alluvium, which we believe is an accurate descriptor for the SPE Phase II setting. The Perrett and Bass plots of peak velocity and peak yield-scaled displacement, both vs. yield-scaled range, are provided here. Their analysis of both variables resulted in bi-linear fits: a close-in non-linear regime and a more distant linear regime.
NASA Technical Reports Server (NTRS)
Erickson, J. M.; Street, J. S. (Principal Investigator); Munsell, C. J.; Obrien, D. E.
1975-01-01
The author has identified the following significant results. ERTS-1 imagery in a variety of formats was used to locate linear, tonal, and hazy features and to relate them to areas of hydrocarbon production in the Williston Basin of North Dakota, eastern Montana, and northern South Dakota. Derivative maps of rectilinear, curvilinear, tonal, and hazy features were made using standard laboratory techniques. Mapping of rectilinears on both bands 5 and 7 over the entire region indicated the presence of a northeast-southwest and a northwest-southeast regional trend which is indicative of the bedrock fracture pattern in the basin. Curved lines generally bound areas of unique tone, and maps of tonal patterns repeat many of the boundaries seen on curvilinear maps. Tones were best analyzed on spring and fall imagery in the Williston Basin. It is postulated that hazy areas are caused by atmospheric phenomena. The ability to use ERTS imagery as an exploration tool was examined where petroleum and gas are presently produced (Bottineau Field, Nesson and Antelope anticlines, Redwing Creek, and Cedar Creek anticline). It is determined that some tonal and linear features coincide with the locations of present production at Redwing Creek and Cedar Creek. In the remaining cases, targets could not be sufficiently well defined to justify this method.
Unscented Kalman Filter for Brain-Machine Interfaces
Li, Zheng; O'Doherty, Joseph E.; Hanson, Timothy L.; Lebedev, Mikhail A.; Henriquez, Craig S.; Nicolelis, Miguel A. L.
2009-01-01
Brain machine interfaces (BMIs) are devices that convert neural signals into commands to directly control artificial actuators, such as limb prostheses. Previous real-time methods applied to decoding behavioral commands from the activity of populations of neurons have generally relied upon linear models of neural tuning and were limited in the way they used the abundant statistical information contained in the movement profiles of motor tasks. Here, we propose an n-th order unscented Kalman filter which implements two key features: (1) use of a non-linear (quadratic) model of neural tuning which describes neural activity significantly better than commonly-used linear tuning models, and (2) augmentation of the movement state variables with a history of n-1 recent states, which improves prediction of the desired command even before incorporating neural activity information and allows the tuning model to capture relationships between neural activity and movement at multiple time offsets simultaneously. This new filter was tested in BMI experiments in which rhesus monkeys used their cortical activity, recorded through chronically implanted multielectrode arrays, to directly control computer cursors. The 10th order unscented Kalman filter outperformed the standard Kalman filter and the Wiener filter in both off-line reconstruction of movement trajectories and real-time, closed-loop BMI operation. PMID:19603074
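As a rough illustration of the two ingredients highlighted above (the quadratic tuning model and the augmentation of the movement state with a history of past states), the sketch below builds both from synthetic data with NumPy. The data, dimensions, and variable names are invented for the example, and the unscented filtering step itself is not shown.

    import numpy as np

    rng = np.random.default_rng(0)
    T, d, m, n = 500, 4, 20, 10          # n = filter order (state history length)
    X = rng.standard_normal((T, d))      # movement states (synthetic)
    Y = rng.poisson(np.exp(1.0 + 0.3 * X @ rng.standard_normal((d, m)))).astype(float)

    # (1) Quadratic tuning model: spike counts regressed on states, squares, and cross terms.
    def quadratic_features(X):
        cross = np.column_stack([X[:, i] * X[:, j]
                                 for i in range(X.shape[1]) for j in range(i, X.shape[1])])
        return np.column_stack([np.ones(len(X)), X, cross])

    H = quadratic_features(X)
    W, *_ = np.linalg.lstsq(H, Y, rcond=None)      # tuning weights, shape (features, m)

    def tuning_model(x_aug):
        """Observation function h(x): predicted firing from the newest state block."""
        x_now = x_aug[:d]
        return quadratic_features(x_now[None, :]) @ W

    # (2) State augmentation: stack the current state with the n-1 previous states.
    A = np.linalg.lstsq(X[:-1], X[1:], rcond=None)[0].T   # one-step linear movement model
    F = np.zeros((n * d, n * d))
    F[:d, :d] = A                                  # newest block evolves by the fitted model
    F[d:, :-d] = np.eye((n - 1) * d)               # older blocks are shifted copies

    x_aug0 = np.tile(X[0], n)                      # example augmented initial state
    print(F.shape, tuning_model(x_aug0).shape)

A generic unscented Kalman filter would then use F as the (linear) transition model for the augmented state and tuning_model as its observation function.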
Shell-free biomass and population dynamics of dreissenids in offshore Lake Michigan, 2001-2003
French, J. R. P.; Adams, J.V.; Craig, J.; Stickel, R.G.; Nichols, S.J.; Fleischer, G.W.
2007-01-01
The USGS-Great Lakes Science Center has collected dreissenid mussels annually from Lake Michigan since zebra mussels (Dreissena polymorpha) became a significant portion of the bottom-trawl catch in 1999. For this study, we investigated dreissenid distribution, body mass, and recruitment at different depths in Lake Michigan during 2001-2003. The highest densities of dreissenid biomass were observed from depths of 27 to 46 m. The biomass of quagga mussels (Dreissena bugensis) increased exponentially during 2001-2003, while that of zebra mussels did not change significantly. Body mass (standardized for a given shell length) of both species was lowest from depths of 27 to 37m, highest from 55 to 64 m, and declined linearly at deeper depths during 2001-2003. Recruitment in 2003, as characterized by the proportion of mussels < 11 mm in the catch, varied with depth and lake region. For quagga mussels, recruitment declined linearly with depth, and was highest in northern Lake Michigan. For zebra mussels, recruitment generally declined non-linearly with depth, although the pattern was different for north, mid, and southern Lake Michigan. Our analyses suggest that quagga mussels could overtake zebra mussels and become the most abundant mollusk in terms of biomass in Lake Michigan.
Preterm birth and dyscalculia.
Jaekel, Julia; Wolke, Dieter
2014-06-01
To evaluate whether the risk for dyscalculia in preterm children increases the lower the gestational age (GA) and whether small-for-gestational age birth is associated with dyscalculia. A total of 922 children ranging from 23 to 41 weeks' GA were studied as part of a prospective geographically defined longitudinal investigation of neonatal at-risk children in South Germany. At 8 years of age, children's cognitive and mathematic abilities were measured with the Kaufman Assessment Battery for Children and with a standardized mathematics test. Dyscalculia diagnoses were evaluated with discrepancy-based residuals of a linear regression predicting children's math scores by IQ and with fixed cut-off scores. We investigated each GA group's ORs for general cognitive impairment, general mathematic impairment, and dyscalculia by using binary logistic regressions. The risk for general cognitive and mathematic impairment increased with lower GA. In contrast, preterm children were not at increased risk of dyscalculia after statistically adjusting for child sex, family socioeconomic status, and small-for-gestational age birth. The risk of general cognitive and mathematic impairments increases with lower GA but preterm children are not at increased risk of dyscalculia. Copyright © 2014 Elsevier Inc. All rights reserved.
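The discrepancy-based definition and the group-wise odds ratios described above can be illustrated with a small hedged sketch; the column names, the -1.5 SD residual cutoff, and the synthetic data are assumptions for the example, not the study's actual specification.

    import numpy as np
    import pandas as pd
    import statsmodels.api as sm

    rng = np.random.default_rng(1)
    n = 900
    df = pd.DataFrame({
        "iq": rng.normal(100, 15, n),
        "ga_group": rng.choice(["<32wk", "32-36wk", "term"], n),
        "ses": rng.normal(0, 1, n),
        "sex": rng.integers(0, 2, n),
    })
    df["math"] = 0.6 * df["iq"] + rng.normal(0, 10, n)

    # Discrepancy-based definition: residual of math regressed on IQ, flagged if far below expectation.
    ols = sm.OLS(df["math"], sm.add_constant(df["iq"])).fit()
    resid_z = (ols.resid - ols.resid.mean()) / ols.resid.std()
    df["dyscalculia"] = (resid_z < -1.5).astype(int)        # cutoff chosen for illustration only

    # Odds ratios for dyscalculia by GA group, adjusted for sex and SES.
    X = pd.get_dummies(df[["ga_group", "sex", "ses"]], columns=["ga_group"], drop_first=True)
    X = sm.add_constant(X.astype(float))
    logit = sm.Logit(df["dyscalculia"], X).fit(disp=0)
    print(np.exp(logit.params))                             # adjusted odds ratios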
NASA Technical Reports Server (NTRS)
Rankin, C. C.
1988-01-01
A consistent linearization is provided for the element-dependent corotational formulation, providing the proper first and second variation of the strain energy. As a result, the warping problem that has plagued flat elements has been overcome, with beneficial effects carried over to linear solutions. True Newton quadratic convergence has been restored to the Structural Analysis of General Shells (STAGS) code for conservative loading using the full corotational implementation. Some implications for general finite element analysis are discussed, including what effect the automatic frame invariance provided by this work might have on the development of new, improved elements.
Pei, Soo-Chang; Ding, Jian-Jiun
2005-03-01
Prolate spheroidal wave functions (PSWFs) are known to be useful for analyzing the properties of the finite-extension Fourier transform (fi-FT). We extend the theory of PSWFs for the finite-extension fractional Fourier transform, the finite-extension linear canonical transform, and the finite-extension offset linear canonical transform. These finite transforms are more flexible than the fi-FT and can model much more generalized optical systems. We also illustrate how to use the generalized prolate spheroidal functions we derive to analyze the energy-preservation ratio, the self-imaging phenomenon, and the resonance phenomenon of the finite-sized one-stage or multiple-stage optical systems.
Gries, Katharine S; Regier, Dean A; Ramsey, Scott D; Patrick, Donald L
2017-06-01
To develop a statistical model generating utility estimates for prostate cancer specific health states, using preference weights derived from the perspectives of prostate cancer patients, men at risk for prostate cancer, and society. Utility estimate values were calculated using standard gamble (SG) methodology. Study participants valued 18 prostate-specific health states defined by five attributes: sexual function, urinary function, bowel function, pain, and emotional well-being. The appropriateness of each model (linear regression, mixed effects, or generalized estimating equation) for generating prostate cancer utility estimates was determined by paired t-tests comparing observed and predicted values. Mixed-corrected standard SG utility estimates to account for loss aversion were calculated based on prospect theory. 132 study participants assigned values to the health states (n = 40 men at risk for prostate cancer; n = 43 men with prostate cancer; n = 49 general population). In total, 792 valuations were elicited (six health states for each of the 132 participants). The most appropriate model for the classification system was a mixed effects model; correlations between the mean observed and predicted utility estimates were greater than 0.80 for each perspective. Developing a health-state classification system with preference weights for three different perspectives demonstrates the relative importance of main effects between populations. The predicted values for men with prostate cancer support the hypothesis that patients experiencing the disease state assign higher utility estimates to health states and that there is a difference in valuations made by patients and the general population.
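A minimal sketch of the kind of mixed-effects model described above, with a random intercept per participant and the five attributes as fixed effects, might look as follows in statsmodels; the attribute coding, column names, and data are illustrative assumptions, not the authors' specification.

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(2)
    n_part, n_states = 132, 6
    df = pd.DataFrame({
        "pid": np.repeat(np.arange(n_part), n_states),
        "sexual": rng.integers(0, 3, n_part * n_states),
        "urinary": rng.integers(0, 3, n_part * n_states),
        "bowel": rng.integers(0, 3, n_part * n_states),
        "pain": rng.integers(0, 3, n_part * n_states),
        "emotional": rng.integers(0, 3, n_part * n_states),
    })
    df["utility"] = (0.95 - 0.05 * df[["sexual", "urinary", "bowel", "pain", "emotional"]].sum(axis=1)
                     + rng.normal(0, 0.05, len(df))).clip(0, 1)

    # Random-intercept mixed model: attribute levels as fixed effects, participant as grouping factor.
    model = smf.mixedlm("utility ~ sexual + urinary + bowel + pain + emotional",
                        data=df, groups=df["pid"]).fit()
    print(model.summary())

    # Observed vs. predicted agreement, in the spirit of the comparison described above.
    print(np.corrcoef(df["utility"], model.fittedvalues)[0, 1])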
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hutchinson, S.A.; Shadid, J.N.; Tuminaro, R.S.
1995-10-01
Aztec is an iterative library that greatly simplifies the parallelization process when solving the linear systems of equations Ax = b where A is a user supplied n x n sparse matrix, b is a user supplied vector of length n and x is a vector of length n to be computed. Aztec is intended as a software tool for users who want to avoid cumbersome parallel programming details but who have large sparse linear systems which require an efficiently utilized parallel processing system. A collection of data transformation tools are provided that allow for easy creation of distributed sparse unstructured matrices for parallel solution. Once the distributed matrix is created, computation can be performed on any of the parallel machines running Aztec: nCUBE 2, IBM SP2 and Intel Paragon, MPI platforms as well as standard serial and vector platforms. Aztec includes a number of Krylov iterative methods such as conjugate gradient (CG), generalized minimum residual (GMRES) and stabilized biconjugate gradient (BICGSTAB) to solve systems of equations. These Krylov methods are used in conjunction with various preconditioners such as polynomial or domain decomposition methods using LU or incomplete LU factorizations within subdomains. Although the matrix A can be general, the package has been designed for matrices arising from the approximation of partial differential equations (PDEs). In particular, the Aztec package is oriented toward systems arising from PDE applications.
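Aztec itself is a parallel C library, so the following is only a serial SciPy sketch of the same workflow: a sparse PDE-style system solved by a Krylov method (GMRES) with an incomplete-LU preconditioner. It does not use the Aztec API, and the problem is invented for illustration.

    import numpy as np
    import scipy.sparse as sp
    import scipy.sparse.linalg as spla

    # 2-D Poisson-type sparse system, standing in for a PDE discretization.
    n = 50
    T = sp.diags([-1, 2, -1], [-1, 0, 1], shape=(n, n))
    A = (sp.kron(sp.eye(n), T) + sp.kron(T, sp.eye(n))).tocsc()
    b = np.ones(A.shape[0])

    # Incomplete LU preconditioner wrapped as a LinearOperator, then GMRES.
    ilu = spla.spilu(A, drop_tol=1e-4)
    M = spla.LinearOperator(A.shape, ilu.solve)
    x, info = spla.gmres(A, b, M=M)

    print(info, np.linalg.norm(A @ x - b))   # info == 0 indicates convergence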
[Approach to the Development of Mind and Persona].
Sawaguchi, Toshiko
2018-01-01
To help health specialists working in the regional health field access medical specialists, the possibility of using a voice approach for dissociative identity disorder (DID) patients as a health assessment for medical access (HAMA) was investigated. The first step is to investigate whether the plural personae in a single DID patient can be discriminated by voice analysis. Voices of DID patients, including those with different personae, were extracted from YouTube and analysed using the software PRAAT with basic frequency, oral factors, chin factors and tongue factors. In addition, RAKUGO storyteller voices, produced artificially and dramatically, were analysed in the same manner. Quantitative and qualitative analyses were carried out, and a nested logistic regression and a nested generalized linear model were developed. The voices of different personae in one DID patient could be visually and easily distinguished using the basic frequency curve, cluster analysis and factor analysis. In the canonical analysis, only Roy's maximum root was <0.01. In the nested generalized linear model, the model using a standard deviation (SD) indicator fit best, and some other possibilities are shown here. In DID patients, a short transition time among plural personae could lead to risky situations such as suicide. So if the voice approach can show the time threshold of changes between the different personae, it would be useful as an Access Assessment in the form of a simple HAMA.
Method and system for training dynamic nonlinear adaptive filters which have embedded memory
NASA Technical Reports Server (NTRS)
Rabinowitz, Matthew (Inventor)
2002-01-01
Described herein is a method and system for training nonlinear adaptive filters (or neural networks) which have embedded memory. Such memory can arise in a multi-layer finite impulse response (FIR) architecture, or an infinite impulse response (IIR) architecture. We focus on filter architectures with separate linear dynamic components and static nonlinear components. Such filters can be structured so as to restrict their degrees of computational freedom based on a priori knowledge about the dynamic operation to be emulated. The method is detailed for an FIR architecture which consists of linear FIR filters together with nonlinear generalized single layer subnets. For the IIR case, we extend the methodology to a general nonlinear architecture which uses feedback. For these dynamic architectures, we describe how one can apply optimization techniques which make updates closer to the Newton direction than those of a steepest descent method, such as backpropagation. We detail a novel adaptive modified Gauss-Newton optimization technique, which uses an adaptive learning rate to determine both the magnitude and direction of update steps. For a wide range of adaptive filtering applications, the new training algorithm converges faster and to a smaller value of cost than both steepest-descent methods such as backpropagation-through-time, and standard quasi-Newton methods. We apply the algorithm to modeling the inverse of a nonlinear dynamic tracking system, as well as a nonlinear amplifier.
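A hedged sketch of the general idea, Gauss-Newton updates with an adaptive step size for a model with a linear FIR part followed by a static polynomial nonlinearity, is given below. It is a generic illustration rather than the patented algorithm, and the model structure, data, and update schedule are assumptions.

    import numpy as np

    rng = np.random.default_rng(3)

    # Wiener-type model: y_hat = c0 + c1*z + c2*z**2, where z is the output of an FIR filter h.
    def predict(theta, x, n_taps):
        h, c = theta[:n_taps], theta[n_taps:]
        z = np.convolve(x, h, mode="full")[:len(x)]
        return c[0] + c[1] * z + c[2] * z ** 2

    def jacobian(theta, x, n_taps, eps=1e-6):
        """Finite-difference Jacobian of the model output w.r.t. the parameters."""
        f0 = predict(theta, x, n_taps)
        J = np.empty((len(f0), len(theta)))
        for k in range(len(theta)):
            t = theta.copy()
            t[k] += eps
            J[:, k] = (predict(t, x, n_taps) - f0) / eps
        return J, f0

    # Synthetic identification data from a "true" system.
    n_taps, N = 4, 400
    x = rng.standard_normal(N)
    theta_true = np.array([0.8, -0.3, 0.2, 0.1, 0.1, 1.0, 0.4])
    y = predict(theta_true, x, n_taps) + 0.01 * rng.standard_normal(N)

    # Gauss-Newton iterations with an adaptive step size mu.
    theta = np.concatenate([np.zeros(n_taps), [0.0, 0.5, 0.0]])
    mu = 1.0
    cost = np.sum((y - predict(theta, x, n_taps)) ** 2)
    for _ in range(50):
        J, f = jacobian(theta, x, n_taps)
        step, *_ = np.linalg.lstsq(J, y - f, rcond=None)   # Gauss-Newton direction
        candidate = theta + mu * step
        new_cost = np.sum((y - predict(candidate, x, n_taps)) ** 2)
        if new_cost < cost:
            theta, cost, mu = candidate, new_cost, min(2 * mu, 1.0)   # accept, grow step
        else:
            mu *= 0.5                                                  # reject, shrink step
    print("final cost:", cost)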
Cutoff for extensions of massive gravity and bi-gravity
NASA Astrophysics Data System (ADS)
Matas, Andrew
2016-04-01
Recently there has been interest in extending ghost-free massive gravity, bi-gravity, and multi-gravity by including non-standard kinetic terms and matter couplings. We first review recent proposals for this class of extensions, emphasizing how modifications of the kinetic and potential structure of the graviton and modifications of the coupling to matter are related. We then generalize existing no-go arguments in the metric language to the vielbein language in second-order form. We give an ADM argument to show that the most promising extensions to the kinetic term and matter coupling contain a Boulware-Deser ghost. However, as recently emphasized, we may still be able to view these extensions as effective field theories below some cutoff scale. To address this possibility, we show that there is a decoupling limit where a ghost appears for a wide class of matter couplings and kinetic terms. In particular, we show that there is a decoupling limit where the linear effective vielbein matter coupling contains a ghost. Using the insight we gain from this decoupling limit analysis, we place an upper bound on the cutoff for the linear effective vielbein coupling. This result can be generalized to new kinetic interactions in the vielbein language in second-order form. Combined with recent results, this provides a strong uniqueness argument on the form of ghost-free massive gravity, bi-gravity, and multi-gravity.
Analysis of Operating Principles with S-system Models
Lee, Yun; Chen, Po-Wei; Voit, Eberhard O.
2011-01-01
Operating principles address general questions regarding the response dynamics of biological systems as we observe or hypothesize them, in comparison to a priori equally valid alternatives. In analogy to design principles, the question arises: Why are some operating strategies encountered more frequently than others and in what sense might they be superior? It is at this point impossible to study operation principles in complete generality, but the work here discusses the important situation where a biological system must shift operation from its normal steady state to a new steady state. This situation is quite common and includes many stress responses. We present two distinct methods for determining different solutions to this task of achieving a new target steady state. Both methods utilize the property of S-system models within Biochemical Systems Theory (BST) that steady-states can be explicitly represented as systems of linear algebraic equations. The first method uses matrix inversion, a pseudo-inverse, or regression to characterize the entire admissible solution space. Operations on the basis of the solution space permit modest alterations of the transients toward the target steady state. The second method uses standard or mixed integer linear programming to determine admissible solutions that satisfy criteria of functional effectiveness, which are specified beforehand. As an illustration, we use both methods to characterize alternative response patterns of yeast subjected to heat stress, and compare them with observations from the literature. PMID:21377479
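The first method can be illustrated with a small sketch. In an S-system, each equation dX_i/dt = alpha_i * prod_j X_j^g_ij - beta_i * prod_j X_j^h_ij becomes, at steady state and in logarithmic coordinates y = ln X, the linear system (G - H) y = ln(beta/alpha), which a pseudo-inverse (or regression) can solve; the numbers below are made up for illustration.

    import numpy as np

    # Toy S-system: dX_i/dt = alpha_i * prod_j X_j**g_ij - beta_i * prod_j X_j**h_ij
    alpha = np.array([2.0, 1.5, 3.0])
    beta  = np.array([1.0, 2.0, 1.0])
    G = np.array([[0.0, 0.5, 0.0],
                  [1.0, 0.0, 0.0],
                  [0.0, 0.8, 0.0]])
    H = np.array([[0.5, 0.0, 0.0],
                  [0.0, 0.7, 0.2],
                  [0.0, 0.0, 1.0]])

    # Steady state in log coordinates y = ln X:  (G - H) y = ln(beta / alpha)
    A = G - H
    b = np.log(beta / alpha)

    y = np.linalg.pinv(A) @ b          # pseudo-inverse (here A happens to be square and invertible)
    X_ss = np.exp(y)

    # Verify: production equals degradation for every equation at the steady state.
    prod = alpha * np.prod(X_ss ** G, axis=1)
    degr = beta  * np.prod(X_ss ** H, axis=1)
    print(X_ss, np.allclose(prod, degr))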
Ordinal probability effect measures for group comparisons in multinomial cumulative link models.
Agresti, Alan; Kateri, Maria
2017-03-01
We consider simple ordinal model-based probability effect measures for comparing distributions of two groups, adjusted for explanatory variables. An "ordinal superiority" measure summarizes the probability that an observation from one distribution falls above an independent observation from the other distribution, adjusted for explanatory variables in a model. The measure applies directly to normal linear models and to a normal latent variable model for ordinal response variables. It equals Φ(β/2) for the corresponding ordinal model that applies a probit link function to cumulative multinomial probabilities, for standard normal cdf Φ and effect β that is the coefficient of the group indicator variable. For the more general latent variable model for ordinal responses that corresponds to a linear model with other possible error distributions and corresponding link functions for cumulative multinomial probabilities, the ordinal superiority measure equals exp(β)/[1+exp(β)] with the log-log link and equals approximately exp(β/2)/[1+exp(β/2)] with the logit link, where β is the group effect. Another ordinal superiority measure generalizes the difference of proportions from binary to ordinal responses. We also present related measures directly for ordinal models for the observed response that need not assume corresponding latent response models. We present confidence intervals for the measures and illustrate with an example. © 2016, The International Biometric Society.
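The closed-form measures quoted above are straightforward to evaluate once the group coefficient beta has been estimated from the cumulative link model; a small helper, with beta = 0.8 used purely for illustration:

    from math import exp
    from scipy.stats import norm

    def ordinal_superiority(beta, link):
        """P(Y1 > Y2) analogue for the group effect beta under the stated links."""
        if link == "probit":
            return norm.cdf(beta / 2)
        if link == "loglog":
            return exp(beta) / (1 + exp(beta))
        if link == "logit":                       # approximate expression
            return exp(beta / 2) / (1 + exp(beta / 2))
        raise ValueError(link)

    for link in ("probit", "loglog", "logit"):
        print(link, round(ordinal_superiority(0.8, link), 3))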
Uga, Minako; Dan, Ippeita; Sano, Toshifumi; Dan, Haruka; Watanabe, Eiju
2014-01-01
An increasing number of functional near-infrared spectroscopy (fNIRS) studies utilize a general linear model (GLM) approach, which serves as a standard statistical method for functional magnetic resonance imaging (fMRI) data analysis. While fMRI solely measures the blood oxygen level dependent (BOLD) signal, fNIRS measures the changes of oxy-hemoglobin (oxy-Hb) and deoxy-hemoglobin (deoxy-Hb) signals at a temporal resolution severalfold higher. This suggests the necessity of adjusting the temporal parameters of a GLM for fNIRS signals. Thus, we devised a GLM-based method utilizing an adaptive hemodynamic response function (HRF). We sought the optimum temporal parameters to best explain the observed time series data during verbal fluency and naming tasks. The peak delay of the HRF was systematically changed to achieve the best-fit model for the observed oxy- and deoxy-Hb time series data. The optimized peak delay showed different values for each Hb signal and task. When the optimized peak delays were adopted, the deoxy-Hb data yielded comparable activations with similar statistical power and spatial patterns to oxy-Hb data. The adaptive HRF method could suitably explain the behaviors of both Hb parameters during tasks with the different cognitive loads during a time course, and thus would serve as an objective method to fully utilize the temporal structures of all fNIRS data. PMID:26157973
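A hedged sketch of the peak-delay search is given below: a gamma-shaped HRF with an adjustable peak delay is convolved with the task boxcar, and the delay giving the best GLM fit to a measured Hb series is kept. The HRF parameterization, sampling rate, and data are illustrative assumptions, not the authors' exact model.

    import numpy as np
    from scipy.stats import gamma

    dt = 0.1                                   # fNIRS sampling interval (s), illustrative
    t = np.arange(0, 30, dt)

    def hrf(peak_delay, undershoot_delay=16.0, ratio=1/6):
        """Schematic double-gamma HRF with an adjustable peak delay."""
        h = gamma.pdf(t, peak_delay) - ratio * gamma.pdf(t, undershoot_delay)
        return h / h.max()

    # Task boxcar: alternating 20 s task / 20 s rest blocks, illustrative.
    n = 3000
    box = (np.arange(n) % 400 < 200).astype(float)

    rng = np.random.default_rng(4)
    true_reg = np.convolve(box, hrf(5.0))[:n]
    signal = 0.8 * true_reg + rng.normal(0, 0.3, n)      # synthetic oxy-Hb series

    # Grid-search the peak delay; keep the model with the largest R^2.
    best = None
    for pd_ in np.arange(3.0, 9.1, 0.5):
        reg = np.convolve(box, hrf(pd_))[:n]
        X = np.column_stack([np.ones(n), reg])
        beta, res, *_ = np.linalg.lstsq(X, signal, rcond=None)
        r2 = 1 - res[0] / np.sum((signal - signal.mean()) ** 2)
        if best is None or r2 > best[1]:
            best = (pd_, r2)
    print("optimized peak delay:", best[0], "R^2:", round(best[1], 3))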
Inflation in a closed universe
NASA Astrophysics Data System (ADS)
Ratra, Bharat
2017-11-01
To derive a power spectrum for energy density inhomogeneities in a closed universe, we study a spatially-closed inflation-modified hot big bang model whose evolutionary history is divided into three epochs: an early slowly-rolling scalar field inflation epoch and the usual radiation and nonrelativistic matter epochs. (For our purposes it is not necessary to consider a final dark energy dominated epoch.) We derive general solutions of the relativistic linear perturbation equations in each epoch. The constants of integration in the inflation epoch solutions are determined from de Sitter invariant quantum-mechanical initial conditions in the Lorentzian section of the inflating closed de Sitter space derived from Hawking's prescription that the quantum state of the universe only include field configurations that are regular on the Euclidean (de Sitter) sphere section. The constants of integration in the radiation and matter epoch solutions are determined from joining conditions derived by requiring that the linear perturbation equations remain nonsingular at the transitions between epochs. The matter epoch power spectrum of gauge-invariant energy density inhomogeneities is not a power law, and depends on spatial wave number in the way expected for a generalization to the closed model of the standard flat-space scale-invariant power spectrum. The power spectrum we derive appears to differ from a number of other closed inflation model power spectra derived assuming different (presumably non de Sitter invariant) initial conditions.
Anomalous dielectric relaxation with linear reaction dynamics in space-dependent force fields.
Hong, Tao; Tang, Zhengming; Zhu, Huacheng
2016-12-28
The anomalous dielectric relaxation of disordered reaction with linear reaction dynamics is studied via the continuous time random walk model in the presence of space-dependent electric field. Two kinds of modified reaction-subdiffusion equations are derived for different linear reaction processes by the master equation, including the instantaneous annihilation reaction and the noninstantaneous annihilation reaction. If a constant proportion of walkers is added or removed instantaneously at the end of each step, there will be a modified reaction-subdiffusion equation with a fractional order temporal derivative operating on both the standard diffusion term and a linear reaction kinetics term. If the walkers are added or removed at a constant per capita rate during the waiting time between steps, there will be a standard linear reaction kinetics term but a fractional order temporal derivative operating on an anomalous diffusion term. The dielectric polarization is analyzed based on the Legendre polynomials and the dielectric properties of both reactions can be expressed by the effective rotational diffusion function and component concentration function, which is similar to the standard reaction-diffusion process. The results show that the effective permittivity can be used to describe the dielectric properties in these reactions if the chemical reaction time is much longer than the relaxation time.
Krylov Subspace Methods for Complex Non-Hermitian Linear Systems. Thesis
NASA Technical Reports Server (NTRS)
Freund, Roland W.
1991-01-01
We consider Krylov subspace methods for the solution of large sparse linear systems Ax = b with complex non-Hermitian coefficient matrices. Such linear systems arise in important applications, such as inverse scattering, numerical solution of time-dependent Schrodinger equations, underwater acoustics, eddy current computations, numerical computations in quantum chromodynamics, and numerical conformal mapping. Typically, the resulting coefficient matrices A exhibit special structures, such as complex symmetry, or they are shifted Hermitian matrices. In this paper, we first describe a Krylov subspace approach with iterates defined by a quasi-minimal residual property, the QMR method, for solving general complex non-Hermitian linear systems. Then, we study special Krylov subspace methods designed for the two families of complex symmetric respectively shifted Hermitian linear systems. We also include some results concerning the obvious approach to general complex linear systems by solving equivalent real linear systems for the real and imaginary parts of x. Finally, numerical experiments for linear systems arising from the complex Helmholtz equation are reported.
Wu, Jibo
2016-01-01
In this article, a generalized difference-based ridge estimator is proposed for the vector parameter in a partial linear model when the errors are dependent. It is supposed that some additional linear constraints may hold on the whole parameter space. Its mean-squared error matrix is compared with that of the generalized restricted difference-based estimator. Finally, the performance of the new estimator is illustrated by a simulation study and a numerical example.
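A rough sketch of the underlying idea (not the authors' exact generalized estimator): in a partial linear model y = X*beta + f(t) + error, first differencing after sorting by t removes the smooth component, and ridge regression is then applied to the differenced data; the data and penalty below are invented.

    import numpy as np

    rng = np.random.default_rng(5)
    n, p, k = 300, 3, 1.0                      # k: ridge penalty, illustrative

    t = np.sort(rng.uniform(0, 1, n))          # nonparametric covariate (already sorted)
    X = rng.standard_normal((n, p))
    beta_true = np.array([1.0, -2.0, 0.5])
    y = X @ beta_true + np.sin(2 * np.pi * t) + 0.2 * rng.standard_normal(n)

    # First differencing removes the smooth component f(t) up to a small bias.
    Dy = np.diff(y)
    DX = np.diff(X, axis=0)

    # Difference-based ridge estimate: (DX'DX + kI)^{-1} DX'Dy
    beta_ridge = np.linalg.solve(DX.T @ DX + k * np.eye(p), DX.T @ Dy)
    print(beta_ridge)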
On Generalizations of Cochran’s Theorem and Projection Matrices.
1980-08-01
Computing Linear Mathematical Models Of Aircraft
NASA Technical Reports Server (NTRS)
Duke, Eugene L.; Antoniewicz, Robert F.; Krambeer, Keith D.
1991-01-01
Derivation and Definition of Linear Aircraft Model (LINEAR) computer program provides user with powerful, flexible, standard, documented, and verified software tool for linearization of mathematical models of aerodynamics of aircraft. Intended for use in software tool to drive linear analysis of stability and design of control laws for aircraft. Capable of both extracting such linearized engine effects as net thrust, torque, and gyroscopic effects, and including these effects in linear model of system. Designed to provide easy selection of state, control, and observation variables used in particular model. Also provides flexibility of allowing alternate formulations of both state and observation equations. Written in FORTRAN.
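LINEAR itself is a FORTRAN tool; as a generic illustration of the underlying operation, the sketch below extracts state and control matrices A = df/dx and B = df/du of a nonlinear model by central differences about a chosen operating point. The toy dynamics are placeholders, not real aircraft equations.

    import numpy as np

    def f(x, u):
        """Toy nonlinear state equations (placeholder, not real aircraft dynamics)."""
        v, gam = x                     # speed and a flight-path-like angle
        thrust, elev = u
        return np.array([
            -0.1 * v ** 2 + thrust - 9.81 * np.sin(gam),
            0.02 * v - 0.5 * gam + 0.3 * elev,
        ])

    def linearize(f, x0, u0, eps=1e-6):
        """Central-difference Jacobians A = df/dx, B = df/du at the operating point."""
        nx, nu = len(x0), len(u0)
        A = np.zeros((nx, nx))
        B = np.zeros((nx, nu))
        for i in range(nx):
            dx = np.zeros(nx); dx[i] = eps
            A[:, i] = (f(x0 + dx, u0) - f(x0 - dx, u0)) / (2 * eps)
        for j in range(nu):
            du = np.zeros(nu); du[j] = eps
            B[:, j] = (f(x0, u0 + du) - f(x0, u0 - du)) / (2 * eps)
        return A, B

    x0 = np.array([50.0, 0.0])        # chosen operating point (trim-like condition)
    u0 = np.array([250.0, 0.0])
    A, B = linearize(f, x0, u0)
    print(A)
    print(B)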
Buchdahl, R.; Parker, A.; Stebbings, T.; Babiker, A.
1996-01-01
OBJECTIVE--To examine the association between the air pollutants ozone, sulphur dioxide, and nitrogen dioxide and the incidence of acute childhood wheezy episodes. DESIGN--Prospective observational study over one year. SETTING--District general hospital. SUBJECTS--1025 children attending the accident and emergency department with acute wheezy episodes; 4285 children with other conditions as the control group. MAIN OUTCOME MEASURES--Daily incidence of acute wheezy episodes. RESULTS--After seasonal adjustment, day to day variations in daily average concentrations of ozone and sulphur dioxide were found to have significant associations with the incidence of acute wheezy episodes. The strongest association was with ozone, for which a non-linear U shaped relation was seen. In terms of the incidence rate ratio (1 at a mean 24 hour ozone concentration of 40 microg/m3 (SD=19.1)), children were more likely to attend when the concentration was two standard deviations below the mean (incidence rate ratio=3.01; 95% confidence interval 2.17 to 4.18) or two standard deviations above the mean (1.34; 1.09 to 1.66). Sulphur dioxide had a weaker log-linear relation with incidence (1.12; 1.05 to 1.19 for each standard deviation (14.1) increase in sulphur dioxide concentration). Further adjustment for temperature and wind speed did not significantly alter these associations. CONCLUSIONS--Independent of season, temperature, and wind speed, fluctuations in concentrations of atmospheric ozone and sulphur dioxide are strongly associated with patterns of attendance at accident and emergency departments for acute childhood wheezy episodes. A critical ozone concentration seems to exist in the atmosphere above or below which children are more likely to develop symptoms. PMID:8597731
NASA Astrophysics Data System (ADS)
Zhang, Yu-ying; Wang, Meng-jie; Chang, Chun-ran; Xu, Kang-zhen; Ma, Hai-xia; Zhao, Feng-qi
2018-05-01
The standard thermite reaction enthalpies (ΔrHmθ) for seven metal oxides were theoretically analyzed using density functional theory (DFT) under five different functional levels, and the results were compared with experimental values. Through the comparison of the linear fitting constants, mean error and root mean square error, the Perdew-Wang functional within the framework of local density approximation (LDA-PWC) and the Perdew-Burke-Ernzerhof exchange-correlation functional within the framework of generalized gradient approximation (GGA-PBE) were selected to further calculate the thermite reaction enthalpies for metal composite oxides (MCOs). According to the Kirchhoff formula, the standard molar reaction enthalpies for these MCOs were obtained and their standard molar enthalpies of formation (ΔfHmθ) were finally calculated. The results indicated that GGA-PBE is the most suitable of the five methods for calculating these oxides. Tungstate crystals show the largest deviation between the thermite reaction enthalpies of the MCOs and those of their physical metal oxide mixtures, while ferrite crystals show the smallest. The correlation coefficients are all above 0.95, meaning that the linear fitting results are very precise. The molar enthalpies of formation for NiMoO4, CuMoO4, PbZrO3 (Pm/3m), PbZrO3 (PBA2), PbZrO3 (PBam), MgZrO3, CdZrO3, MnZrO3, CuWO4 and Fe2WO6 were obtained for the first time as -1078.75, -1058.45, -1343.87, -1266.54, -1342.29, -1333.03, -1210.43, -1388.05, -1131.07 and -1860.11 kJ·mol-1, respectively.
Rossi, Omar; Maggiore, Luana; Necchi, Francesca; Koeberling, Oliver; MacLennan, Calman A; Saul, Allan; Gerke, Christiane
2015-01-01
Genetically induced outer membrane particles from Gram-negative bacteria, called Generalized Modules for Membrane Antigens (GMMA), are being investigated as vaccines. Rapid methods are required for estimating the protein content for in-process assays during production. Since GMMA are complex biological structures containing lipid and polysaccharide as well as protein, protein determinations are not necessarily straightforward. We compared protein quantification by Bradford, Lowry, and Non-Interfering assays using bovine serum albumin (BSA) as standard with quantitative amino acid (AA) analysis, the most accurate currently available method for protein quantification. The Lowry assay has the lowest inter- and intra-assay variation and gives the best linearity between protein amount and absorbance. In all three assays, the color yield (optical density per mass of protein) of GMMA was markedly different from that of BSA with a ratio of approximately 4 for the Bradford assay, and highly variable between different GMMA; and approximately 0.7 for the Lowry and Non-Interfering assays, highlighting the need for calibrating the standard used in the colorimetric assay against GMMA quantified by AA analysis. In terms of a combination of ease, reproducibility, and proportionality of protein measurement, and comparability between samples, the Lowry assay was superior to Bradford and Non-Interfering assays for GMMA quantification.
Generalized massive optimal data compression
NASA Astrophysics Data System (ADS)
Alsing, Justin; Wandelt, Benjamin
2018-05-01
In this paper, we provide a general procedure for optimally compressing N data down to n summary statistics, where n is equal to the number of parameters of interest. We show that compression to the score function - the gradient of the log-likelihood with respect to the parameters - yields n compressed statistics that are optimal in the sense that they preserve the Fisher information content of the data. Our method generalizes earlier work on linear Karhunen-Loève compression for Gaussian data whilst recovering both lossless linear compression and quadratic estimation as special cases when they are optimal. We give a unified treatment that also includes the general non-Gaussian case as long as mild regularity conditions are satisfied, producing optimal non-linear summary statistics when appropriate. As a worked example, we derive explicitly the n optimal compressed statistics for Gaussian data in the general case where both the mean and covariance depend on the parameters.
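In the special case of Gaussian data whose mean depends on the parameters while the covariance is fixed, the score compression reduces to t = (dmu/dtheta)^T C^{-1} (d - mu), evaluated at fiducial parameters; the sketch below illustrates this special case with invented data and a toy two-parameter model.

    import numpy as np

    rng = np.random.default_rng(6)

    # Toy model: data mean depends on two parameters (amplitude, slope).
    N = 200
    x = np.linspace(0, 1, N)
    def mean(theta):
        amp, slope = theta
        return amp * np.sin(4 * x) + slope * x

    theta_fid = np.array([1.0, 0.5])          # fiducial parameters
    C = 0.1 * np.eye(N)                       # known data covariance
    Cinv = np.linalg.inv(C)

    # Gradient of the mean w.r.t. the parameters, by central differences.
    eps = 1e-6
    dmu = np.array([(mean(theta_fid + e) - mean(theta_fid - e)) / (2 * eps)
                    for e in eps * np.eye(2)])          # shape (n_params, N)

    def compress(d):
        """Score compression: N data values reduced to n_params summary statistics."""
        return dmu @ Cinv @ (d - mean(theta_fid))

    d = mean([1.1, 0.45]) + rng.multivariate_normal(np.zeros(N), C)
    print(compress(d))                         # two summaries for two parameters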
Generalized Buneman Pruning for Inferring the Most Parsimonious Multi-state Phylogeny
NASA Astrophysics Data System (ADS)
Misra, Navodit; Blelloch, Guy; Ravi, R.; Schwartz, Russell
Accurate reconstruction of phylogenies remains a key challenge in evolutionary biology. Most biologically plausible formulations of the problem are formally NP-hard, with no known efficient solution. The standard in practice are fast heuristic methods that are empirically known to work very well in general, but can yield results arbitrarily far from optimal. Practical exact methods, which yield exponential worst-case running times but generally much better times in practice, provide an important alternative. We report progress in this direction by introducing a provably optimal method for the weighted multi-state maximum parsimony phylogeny problem. The method is based on generalizing the notion of the Buneman graph, a construction key to efficient exact methods for binary sequences, so as to apply to sequences with arbitrary finite numbers of states with arbitrary state transition weights. We implement an integer linear programming (ILP) method for the multi-state problem using this generalized Buneman graph and demonstrate that the resulting method is able to solve data sets that are intractable by prior exact methods in run times comparable with popular heuristics. Our work provides the first method for provably optimal maximum parsimony phylogeny inference that is practical for multi-state data sets of more than a few characters.
Non-Linear Approach in Kinesiology Should Be Preferred to the Linear--A Case of Basketball.
Trninić, Marko; Jeličić, Mario; Papić, Vladan
2015-07-01
In kinesiology, medicine, biology and psychology, in which research focuses on dynamical self-organized systems, complex connections exist between variables. The non-linear nature of complex systems is discussed and explained using the example of non-linear anthropometric predictors of performance in basketball. Previous studies interpreted relations between anthropometric features and measures of effectiveness in basketball by (a) using linear correlation models, and by (b) including all basketball athletes in the same sample of participants regardless of their playing position. In this paper the significance and character of linear and non-linear relations between simple anthropometric predictors (AP) and performance criteria consisting of situation-related measures of effectiveness (SE) in basketball were determined and evaluated. The sample of participants consisted of top-level junior basketball players divided into three groups according to their playing time (8 minutes or more per game) and playing position: guards (N = 42), forwards (N = 26) and centers (N = 40). Linear (general model) and non-linear (general model) regression models were calculated simultaneously and separately for each group. The conclusion is that non-linear regressions are frequently superior to linear correlations when interpreting the actual logic of association among research variables.
NASA Astrophysics Data System (ADS)
Ikelle, Luc T.; Osen, Are; Amundsen, Lasse; Shen, Yunqing
2004-12-01
The classical linear solutions to the problem of multiple attenuation, like predictive deconvolution, τ-p filtering, or F-K filtering, are generally fast, stable, and robust compared to non-linear solutions, which are generally either iterative or in the form of a series with an infinite number of terms. These qualities have made the linear solutions more attractive to seismic data-processing practitioners. However, most linear solutions, including predictive deconvolution or F-K filtering, contain severe assumptions about the model of the subsurface and the class of free-surface multiples they can attenuate. These assumptions limit their usefulness. In a recent paper, we described an exception to this assertion for OBS data. We showed in that paper that a linear and non-iterative solution to the problem of attenuating free-surface multiples which is as accurate as iterative non-linear solutions can be constructed for OBS data. We here present a similar linear and non-iterative solution for attenuating free-surface multiples in towed-streamer data. For most practical purposes, this linear solution is as accurate as the non-linear ones.
NASA Astrophysics Data System (ADS)
Kaplan, Melike; Hosseini, Kamyar; Samadani, Farzan; Raza, Nauman
2018-07-01
A wide range of problems in different fields of the applied sciences especially non-linear optics is described by non-linear Schrödinger's equations (NLSEs). In the present paper, a specific type of NLSEs known as the cubic-quintic non-linear Schrödinger's equation including an anti-cubic term has been studied. The generalized Kudryashov method along with symbolic computation package has been exerted to carry out this objective. As a consequence, a series of optical soliton solutions have formally been retrieved. It is corroborated that the generalized form of Kudryashov method is a direct, effectual, and reliable technique to deal with various types of non-linear Schrödinger's equations.
Libraries for Software Use on Peregrine | High-Performance Computing | NREL
-specific libraries. Libraries list (Name: Description): BLAS: Basic Linear Algebra Subroutines; [library name lost in extraction]: libraries for managing hierarchically structured data; LAPACK: Standard Netlib offering for computational linear algebra.
Linearization: Students Forget the Operating Point
ERIC Educational Resources Information Center
Roubal, J.; Husek, P.; Stecha, J.
2010-01-01
Linearization is a standard part of modeling and control design theory for a class of nonlinear dynamical systems taught in basic undergraduate courses. Although linearization is a straight-line methodology, it is not applied correctly by many students since they often forget to keep the operating point in mind. This paper explains the topic and…
The microcomputer scientific software series 2: general linear model--regression.
Harold M. Rauscher
1983-01-01
The general linear model regression (GLMR) program provides the microcomputer user with a sophisticated regression analysis capability. The output provides a regression ANOVA table, estimators of the regression model coefficients, their confidence intervals, confidence intervals around the predicted Y-values, residuals for plotting, a check for multicollinearity, a...
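Those outputs map directly onto a modern regression stack; a hedged sketch with statsmodels (not the GLMR program) reproducing the ANOVA-style summary, coefficient confidence intervals, prediction intervals, residuals, and a VIF-based multicollinearity check on invented data:

    import numpy as np
    import pandas as pd
    import statsmodels.api as sm
    from statsmodels.stats.outliers_influence import variance_inflation_factor

    rng = np.random.default_rng(7)
    n = 120
    df = pd.DataFrame({"x1": rng.normal(size=n), "x2": rng.normal(size=n)})
    df["y"] = 2.0 + 1.5 * df["x1"] - 0.8 * df["x2"] + rng.normal(scale=0.5, size=n)

    X = sm.add_constant(df[["x1", "x2"]])
    fit = sm.OLS(df["y"], X).fit()

    print(fit.summary())                         # regression ANOVA-style table and coefficients
    print(fit.conf_int())                        # confidence intervals for the coefficients
    pred = fit.get_prediction(X).summary_frame() # includes intervals around predicted Y-values
    resid = fit.resid                            # residuals for plotting

    # Multicollinearity check via variance inflation factors.
    vif = [variance_inflation_factor(X.values, i) for i in range(X.shape[1])]
    print(dict(zip(X.columns, vif)))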
Schwarz maps of algebraic linear ordinary differential equations
NASA Astrophysics Data System (ADS)
Sanabria Malagón, Camilo
2017-12-01
A linear ordinary differential equation is called algebraic if all its solutions are algebraic over its field of definition. In this paper we solve the problem of finding closed-form solutions to algebraic linear ordinary differential equations in terms of standard equations. Furthermore, we obtain a method to compute all algebraic linear ordinary differential equations with rational coefficients by studying their associated Schwarz map through the Picard-Vessiot theory.
NASA Astrophysics Data System (ADS)
McDonald, Michael C.; Kim, H. K.; Henry, J. R.; Cunningham, I. A.
2012-03-01
The detective quantum efficiency (DQE) is widely accepted as a primary measure of x-ray detector performance in the scientific community. A standard method for measuring the DQE, based on IEC 62220-1, requires the system to have a linear response meaning that the detector output signals are proportional to the incident x-ray exposure. However, many systems have a non-linear response due to characteristics of the detector, or post processing of the detector signals, that cannot be disabled and may involve unknown algorithms considered proprietary by the manufacturer. For these reasons, the DQE has not been considered as a practical candidate for routine quality assurance testing in a clinical setting. In this article we described a method that can be used to measure the DQE of both linear and non-linear systems that employ only linear image processing algorithms. The method was validated on a Cesium Iodide based flat panel system that simultaneously stores a raw (linear) and processed (non-linear) image for each exposure. It was found that the resulting DQE was equivalent to a conventional standards-compliant DQE with measurement precision, and the gray-scale inversion and linear edge enhancement did not affect the DQE result. While not IEC 62220-1 compliant, it may be adequate for QA programs.
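For reference, the frequency-dependent DQE evaluated in such measurements is commonly written in the textbook form below (a standard relation, not quoted from this article), where MTF is the presampled modulation transfer function, NNPS the normalized noise power spectrum, and q-bar the incident photon fluence:

    \mathrm{DQE}(f) = \frac{\mathrm{MTF}^2(f)}{\bar{q}\,\mathrm{NNPS}(f)}

A measurement method aimed at non-linear systems therefore has to deliver MTF and NNPS estimates that are not distorted by the detector's built-in signal processing.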
The Use of Shrinkage Techniques in the Estimation of Attrition Rates for Large Scale Manpower Models
1988-07-27
autoregressive model combined with a linear program that solves for the coefficients using MAD. But this success has diminished with time (Rowe...
Poisson Mixture Regression Models for Heart Disease Prediction.
Mufudza, Chipo; Erol, Hamza
2016-01-01
Early heart disease control can be achieved by high disease prediction and diagnosis efficiency. This paper focuses on the use of model based clustering techniques to predict and diagnose heart disease via Poisson mixture regression models. Analysis and application of Poisson mixture regression models is here addressed under two different classes: standard and concomitant variable mixture regression models. Results show that a two-component concomitant variable Poisson mixture regression model predicts heart disease better than both the standard Poisson mixture regression model and the ordinary general linear Poisson regression model due to its low Bayesian Information Criteria value. Furthermore, a Zero Inflated Poisson Mixture Regression model turned out to be the best model for heart prediction over all models as it both clusters individuals into high or low risk category and predicts rate to heart disease componentwise given clusters available. It is deduced that heart disease prediction can be effectively done by identifying the major risks componentwise using Poisson mixture regression model.
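A hedged sketch of EM for a plain two-component Poisson mixture regression (without the concomitant-variable extension) is given below; the synthetic data, starting values, and iteration count are assumptions for illustration, not the paper's model or data.

    import numpy as np
    from scipy.optimize import minimize
    from scipy.stats import poisson

    rng = np.random.default_rng(8)

    # Synthetic data from two latent Poisson-regression components.
    n = 600
    X = np.column_stack([np.ones(n), rng.normal(size=n)])
    z = rng.random(n) < 0.4
    beta_true = np.array([[0.2, 0.5], [1.5, -0.3]])
    mu_true = np.exp(np.where(z, X @ beta_true[1], X @ beta_true[0]))
    y = rng.poisson(mu_true)

    def wnll(beta, w):
        """Weighted negative Poisson log-likelihood (constants dropped)."""
        eta = X @ beta
        return -np.sum(w * (y * eta - np.exp(eta)))

    # EM for a two-component mixture of Poisson regressions.
    betas = [np.zeros(2), np.array([1.0, 0.0])]
    pi = np.array([0.5, 0.5])
    for _ in range(50):
        # E-step: responsibilities for each component.
        dens = np.column_stack([pi[k] * poisson.pmf(y, np.exp(X @ betas[k])) for k in range(2)])
        resp = dens / dens.sum(axis=1, keepdims=True)
        # M-step: mixing proportions and weighted Poisson-regression fits.
        pi = resp.mean(axis=0)
        betas = [minimize(wnll, betas[k], args=(resp[:, k],), method="BFGS").x for k in range(2)]

    print(pi, betas)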
Constructing exact perturbations of the standard cosmological models
NASA Astrophysics Data System (ADS)
Sopuerta, Carlos F.
1999-11-01
In this paper we show a procedure to construct cosmological models which, according to a covariant criterion, can be seen as exact (nonlinear) perturbations of the standard Friedmann-Lemaître-Robertson-Walker (FLRW) cosmological models. The special properties of this procedure will allow us to select some of the characteristics of the models and also to study in depth their main geometrical and physical features. In particular, the models are conformally stationary, which means that they are compatible with the existence of isotropic radiation, and the observers that would measure this isotropy are rotating. Moreover, these models have two arbitrary functions (one of them is a complex function) which control their main properties, and in general they do not have any isometry. We study two examples, focusing on the case when the underlying FLRW models are flat dust models. In these examples we compare our results with those of the linearized theory of perturbations about a FLRW background.
A new exact method for line radiative transfer
NASA Astrophysics Data System (ADS)
Elitzur, Moshe; Asensio Ramos, Andrés
2006-01-01
We present a new method, the coupled escape probability (CEP), for exact calculation of line emission from multi-level systems, solving only algebraic equations for the level populations. The CEP formulation of the classical two-level problem is a set of linear equations, and we uncover an exact analytic expression for the emission from two-level optically thick sources that holds as long as they are in the `effectively thin' regime. In a comparative study of a number of standard problems, the CEP method outperformed the leading line transfer methods by substantial margins. The algebraic equations employed by our new method are already incorporated in numerous codes based on the escape probability approximation. All that is required for an exact solution with these existing codes is to augment the expression for the escape probability with simple zone-coupling terms. As an application, we find that standard escape probability calculations generally produce the correct cooling emission by the CII 158-μm line but not by the 3P lines of OI.
Space Flyable Hg⁺ Frequency Standards
NASA Technical Reports Server (NTRS)
Prestage, John D.; Maleki, Lute
1994-01-01
We discuss a design for a space based atomic frequency standard (AFS) based on Hg⁺ ions confined in a linear ion trap. This newly developed AFS should be well suited for space borne applications because it can supply the ultra-high stability of a H-maser but its total mass is comparable to that of a NAVSTAR/GPS cesium clock, i.e., about 11 kg. This paper will compare the proposed Hg⁺ AFS to the present day GPS cesium standards to arrive at the 11 kg mass estimate. The proposed space borne Hg⁺ standard is based upon the recently developed extended linear ion trap architecture which has reduced the size of existing trapped Hg⁺ standards to a physics package which is comparable in size to a cesium beam tube. The demonstrated frequency stability to below 10⁻¹⁵ of existing Hg⁺ standards should be maintained or even improved upon in this new architecture. This clock would deliver far more frequency stability per kilogram than any current day space qualified standard.
Riviere, Marie-Karelle; Ueckert, Sebastian; Mentré, France
2016-10-01
Non-linear mixed effect models (NLMEMs) are widely used for the analysis of longitudinal data. To design these studies, optimal design based on the expected Fisher information matrix (FIM) can be used instead of performing time-consuming clinical trial simulations. In recent years, estimation algorithms for NLMEMs have transitioned from linearization toward more exact higher-order methods. Optimal design, on the other hand, has mainly relied on first-order (FO) linearization to calculate the FIM. Although efficient in general, FO cannot be applied to complex non-linear models and with difficulty in studies with discrete data. We propose an approach to evaluate the expected FIM in NLMEMs for both discrete and continuous outcomes. We used Markov Chain Monte Carlo (MCMC) to integrate the derivatives of the log-likelihood over the random effects, and Monte Carlo to evaluate its expectation w.r.t. the observations. Our method was implemented in R using Stan, which efficiently draws MCMC samples and calculates partial derivatives of the log-likelihood. Evaluated on several examples, our approach showed good performance with relative standard errors (RSEs) close to those obtained by simulations. We studied the influence of the number of MC and MCMC samples and computed the uncertainty of the FIM evaluation. We also compared our approach to Adaptive Gaussian Quadrature, Laplace approximation, and FO. Our method is available in R-package MIXFIM and can be used to evaluate the FIM, its determinant with confidence intervals (CIs), and RSEs with CIs. © The Author 2016. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
Analysis of the PLL phase error in presence of simulated ionospheric scintillation events
NASA Astrophysics Data System (ADS)
Forte, B.
2012-01-01
The functioning of standard phase locked loops (PLL), including those used to track radio signals from Global Navigation Satellite Systems (GNSS), is based on a linear approximation which holds in the presence of small phase errors. Such an approximation represents a reasonable assumption in most propagation channels. However, in the presence of a fading channel the phase error may become large, making the linear approximation no longer valid. The PLL is then expected to operate in a non-linear regime. As PLLs are generally designed and expected to operate in their linear regime, whenever the non-linear regime comes into play, they will experience a serious limitation in their capability to track the corresponding signals. The phase error and the performance of a typical PLL embedded into a commercial multiconstellation GNSS receiver were analyzed in the presence of simulated ionospheric scintillation. Large phase errors occurred during scintillation-induced signal fluctuations, although cycle slips only occurred during the signal re-acquisition after a loss of lock. Losses of lock occurred whenever the signal faded below the minimum C/N0 threshold allowed for tracking. The simulations were performed for different signals (GPS L1C/A, GPS L2C, GPS L5 and Galileo L1). L5 and L2C proved to be weaker than L1. It appeared evident that the conditions driving the PLL phase error in the specific case of GPS receivers in the presence of scintillation-induced signal perturbations need to be evaluated in terms of the combination of the minimum C/N0 tracking threshold, lock detector thresholds, possible cycle slips in the tracking PLL and accuracy of the observables (i.e. the error propagation onto the observables stage).
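The point about the linear regime can be made with a toy first-order loop, dphi/dt = delta_omega - K sin(phi), compared with its small-error linearization dphi/dt = delta_omega - K phi; the parameters below are illustrative and unrelated to any specific GNSS receiver.

    import numpy as np

    K, dt, T = 20.0, 1e-3, 2.0                   # loop gain (1/s), time step (s), duration (s)
    steps = int(T / dt)

    def run(delta_omega, nonlinear=True):
        """Integrate the first-order PLL phase-error equation with Euler steps."""
        phi = np.zeros(steps)
        for i in range(1, steps):
            detector = np.sin(phi[i - 1]) if nonlinear else phi[i - 1]
            phi[i] = phi[i - 1] + dt * (delta_omega - K * detector)
        return phi

    for dw in (5.0, 25.0):                        # frequency offsets (rad/s)
        nl, lin = run(dw, True), run(dw, False)
        print(f"offset {dw:5.1f} rad/s: nonlinear final error {nl[-1]:8.3f} rad, "
              f"linear prediction {lin[-1]:8.3f} rad")

For the small offset both models settle near the same small phase error, while for the offset larger than the loop gain the nonlinear loop never locks and its phase error grows without bound, which the linear model cannot predict.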
Gupta, Sandesh K; Jain, Amit; Bednarek, Daniel R; Rudin, Stephen
2011-01-01
In this study, we evaluated the imaging characteristics of the high-resolution, high-sensitivity micro-angiographic fluoroscope (MAF) with 35-micron pixel pitch when used with different commercially available 300-micron-thick phosphors: the high-resolution (HR) and high-light (HL) phosphors from Hamamatsu. The purpose of this evaluation was to see whether the HL phosphor, with its higher screen efficiency, could be replaced with the HR phosphor to achieve improved resolution without an increase in noise resulting from the HR's decreased light-photon yield. We designated the detectors MAF-HR and MAF-HL and compared them with a standard flat panel detector (FPD) (194-micron pixel pitch and 600-micron-thick CsI(Tl)). For this comparison, we used the generalized linear-system metrics of GMTF, GNNPS and GDQE, which are more realistic measures of total system performance since they include the effects of scattered radiation, focal spot distribution, and geometric unsharpness. Magnifications (1.05-1.15) and scatter fractions (0.28 and 0.33) characteristic of a standard head phantom were used. The MAF-HR performed significantly better than the MAF-HL at high spatial frequencies. The ratios of the GMTF and GDQE of the MAF-HR to those of the MAF-HL at 3 (6) cycles/mm were 1.45 (2.42) and 1.23 (2.89), respectively. Despite significant degradation from the inclusion of scatter and object magnification, both the MAF-HR and MAF-HL provide superior performance over the FPD at higher spatial frequencies, with similar performance up to the FPD's Nyquist frequency of 2.5 cycles/mm. Both substantially higher resolution and improved GDQE can be achieved with the MAF by using the HR phosphor instead of the HL phosphor.
NASA Technical Reports Server (NTRS)
Lisano, Michael E.
2007-01-01
Recent literature in applied estimation theory reflects growing interest in the sigma-point (also called unscented) formulation for optimal sequential state estimation, often describing performance comparisons with extended Kalman filters as applied to specific dynamical problems [cf. 1, 2, 3]. Favorable attributes of sigma-point filters include a lower expected error for nonlinear, even non-differentiable, dynamical systems, and a straightforward formulation that does not require derivation or implementation of any partial-derivative Jacobian matrices. These attributes are particularly attractive, e.g. in terms of enabling simplified code architecture and streamlined testing, in the formulation of estimators for nonlinear spaceflight mechanics systems, such as filter software onboard deep-space robotic spacecraft. As presented in [4], the Sigma-Point Consider Filter (SPCF) algorithm extends the sigma-point filter algorithm to the problem of consider covariance analysis. Considering parameters in a dynamical system, while estimating its state, provides an upper bound on the estimated state covariance, which is viewed as a conservative approach to designing estimators for problems of general guidance, navigation and control. This is because, whether a parameter in the system model is observable or not, error in the knowledge of the value of a non-estimated parameter will increase the actual uncertainty of the estimated state of the system beyond the level formally indicated by the covariance of an estimator that neglects errors or uncertainty in that parameter. The equations for SPCF covariance evolution are obtained in a fashion similar to the derivation approach taken with standard (i.e. linearized or extended) consider-parameterized Kalman filters (cf. [5]). While in [4] the SPCF and linear-theory consider filter (LTCF) were applied to an illustrative linear-dynamics/linear-measurement problem, the present work examines the SPCF as applied to nonlinear sequential consider covariance analysis, i.e. in the presence of nonlinear dynamics and nonlinear measurements. A simple SPCF for orbit determination, exemplifying an algorithm hosted in the guidance, navigation and control (GN&C) computer processor of a hypothetical robotic spacecraft, was implemented and compared with an identically parameterized (standard) extended, consider-parameterized Kalman filter. The onboard filtering scenario examined is a hypothetical spacecraft orbit about a small natural body with imperfectly known mass. The formulations, relative complexities, and performances of the filters are compared and discussed.
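As a concrete illustration of the "no Jacobians" attribute, here is a minimal sketch of the standard unscented (sigma-point) transform used to propagate a mean and covariance through a nonlinear function; this is the generic building block of sigma-point filtering, not the SPCF algorithm of [4], and the tuning parameters are conventional defaults:

    import numpy as np

    def unscented_transform(f, x, P, alpha=1e-3, beta=2.0, kappa=0.0):
        """Propagate mean x and covariance P through f using 2n+1 sigma points."""
        n = x.size
        lam = alpha**2 * (n + kappa) - n
        S = np.linalg.cholesky((n + lam) * P)         # matrix square root of scaled P
        sigmas = np.vstack([x, x + S.T, x - S.T])     # shape (2n+1, n)
        wm = np.full(2 * n + 1, 1.0 / (2 * (n + lam)))
        wc = wm.copy()
        wm[0] = lam / (n + lam)
        wc[0] = lam / (n + lam) + (1 - alpha**2 + beta)
        Y = np.array([f(s) for s in sigmas])          # no Jacobian of f is needed
        y_mean = wm @ Y
        dY = Y - y_mean
        y_cov = (wc[:, None] * dY).T @ dY
        return y_mean, y_cov

    # Example: nonlinear polar-to-Cartesian conversion, propagated without derivatives.
    f = lambda s: np.array([s[0] * np.cos(s[1]), s[0] * np.sin(s[1])])
    m, C = unscented_transform(f, np.array([1.0, 0.5]), np.diag([0.01, 0.04]))
    print(m, C)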
Conditional and unconditional Gaussian quantum dynamics
NASA Astrophysics Data System (ADS)
Genoni, Marco G.; Lami, Ludovico; Serafini, Alessio
2016-07-01
This article focuses on the general theory of open quantum systems in the Gaussian regime and explores a number of diverse ramifications and consequences of the theory. We shall first introduce the Gaussian framework in its full generality, including a classification of Gaussian (also known as 'general-dyne') quantum measurements. In doing so, we will give a compact proof for the parametrisation of the most general Gaussian completely positive map, which we believe to be missing in the existing literature. We will then move on to consider the linear coupling with a white noise bath, and derive the diffusion equations that describe the evolution of Gaussian states under such circumstances. Starting from these equations, we outline a constructive method to derive general master equations that apply outside the Gaussian regime. Next, we include the general-dyne monitoring of the environmental degrees of freedom and recover the Riccati equation for the conditional evolution of Gaussian states. Our derivation relies exclusively on the standard quantum mechanical update of the system state, through the evaluation of Gaussian overlaps. The parametrisation of the conditional dynamics we obtain is novel and, at variance with existing alternatives, directly ties in to physical detection schemes. We conclude our study with two examples of conditional dynamics that can be dealt with conveniently through our formalism, demonstrating how monitoring can suppress the noise in optical parametric processes as well as stabilise systems subject to diffusive scattering.
Liu, Yan; Cai, Wensheng; Shao, Xueguang
2016-12-05
Calibration transfer is essential for practical applications of near-infrared (NIR) spectroscopy because the spectra may be measured on different instruments and the difference between the instruments must be corrected. For most calibration transfer methods, standard samples are necessary to construct the transfer model from the spectra of the samples measured on the two instruments, referred to as the master and slave instruments, respectively. In this work, a method named linear model correction (LMC) is proposed for calibration transfer without standard samples. The method is based on the fact that, for samples with similar physical and chemical properties, the spectra measured on different instruments are linearly correlated. As a result, the coefficients of the linear models constructed from the spectra measured on different instruments are similar in profile. Therefore, by using a constrained optimization method, the coefficients of the master model can be transferred into those of the slave model with only a few spectra measured on the slave instrument. Two NIR datasets of corn and plant leaf samples measured with different instruments are used to test the performance of the method. The results show that, for both datasets, the spectra can be correctly predicted using the transferred partial least squares (PLS) models. Because standard samples are not necessary in the method, it may be more useful in practical applications.
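The abstract does not spell out the constrained optimization used; purely as an illustration of the underlying idea (slave regression coefficients similar in profile to the master's, estimated from only a few slave-measured spectra), one could shrink the slave coefficients toward the master coefficients with a ridge-type penalty. This is my own formulation for illustration, not the published LMC method, and it assumes reference values for the slave spectra are available:

    import numpy as np

    def transfer_coefficients(b_master, X_slave, y_slave, lam=1.0):
        """Illustrative ridge-toward-master transfer (not the published LMC method).

        Minimizes ||y_slave - X_slave b||^2 + lam * ||b - b_master||^2, so the
        slave model stays close in profile to the master regression vector.
        """
        p = b_master.size
        A = X_slave.T @ X_slave + lam * np.eye(p)
        rhs = X_slave.T @ y_slave + lam * b_master
        return np.linalg.solve(A, rhs)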
Application of snapshot imaging spectrometer in environmental detection
NASA Astrophysics Data System (ADS)
Sun, Kai; Qin, Xiaolei; Zhang, Yu; Wang, Jinqiang
2017-10-01
This study addressed the application of a snapshot imaging spectrometer in environmental detection. Simulated sewage and dyeing wastewater were prepared and the optimal experimental conditions were determined. A white LED array was used as the detection light source, and images of the samples were collected by the imaging spectrometer developed in the laboratory to obtain the spectral information of the samples in the range of 400-800 nm. Standard curves between absorbance and sample concentration were established. The linear range for the single component Rhodamine B was 1-50 mg/L, the linear correlation coefficient was greater than 0.99, the recovery was 93%-113% and the relative standard deviation (RSD) was 7.5%. The linear range for the chemical oxygen demand (COD) standard solution was 50-900 mg/L, the linear correlation coefficient was 0.981, the recovery was 91%-106% and the relative standard deviation (RSD) was 6.7%. This rapid, accurate and precise method for detecting dyes shows excellent promise for on-site and emergency detection in the environment.
An implicit boundary integral method for computing electric potential of macromolecules in solvent
NASA Astrophysics Data System (ADS)
Zhong, Yimin; Ren, Kui; Tsai, Richard
2018-04-01
A numerical method using implicit surface representations is proposed to solve the linearized Poisson-Boltzmann equation that arises in mathematical models for the electrostatics of molecules in solvent. The proposed method uses an implicit boundary integral formulation to derive a linear system defined on Cartesian nodes in a narrow band surrounding the closed surface that separates the molecule and the solvent. The needed implicit surface is constructed from the given atomic description of the molecules by a sequence of standard level set algorithms. A fast multipole method is applied to accelerate the solution of the linear system. A few numerical studies involving standard test cases are presented and compared to existing results.
Huppert, Theodore J
2016-01-01
Functional near-infrared spectroscopy (fNIRS) is a noninvasive neuroimaging technique that uses low levels of light to measure changes in cerebral blood oxygenation levels. In the majority of NIRS functional brain studies, analysis of this data is based on a statistical comparison of hemodynamic levels between a baseline and task or between multiple task conditions by means of a linear regression model: the so-called general linear model. Although these methods are similar to their implementation in other fields, particularly for functional magnetic resonance imaging, the specific application of these methods in fNIRS research differs in several key ways related to the sources of noise and artifacts unique to fNIRS. In this brief communication, we discuss the application of linear regression models in fNIRS and the modifications needed to generalize these models in order to deal with structured (colored) noise due to systemic physiology and noise heteroscedasticity due to motion artifacts. The objective of this work is to present an overview of these noise properties in the context of the linear model as it applies to fNIRS data. This work is aimed at explaining these mathematical issues to the general fNIRS experimental researcher but is not intended to be a complete mathematical treatment of these concepts.
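As a minimal sketch of the two modifications discussed (serially correlated noise and heteroscedastic motion artifacts), one can combine AR(1) prewhitening with iteratively reweighted (robust) least squares. The AR order, the Huber weight function, and the design handling below are illustrative assumptions, not the specifics of any particular fNIRS toolbox:

    import numpy as np

    def ar1_prewhiten(X, y):
        """Estimate an AR(1) coefficient from OLS residuals and filter X and y."""
        beta_ols, *_ = np.linalg.lstsq(X, y, rcond=None)
        r = y - X @ beta_ols
        rho = (r[:-1] @ r[1:]) / (r[:-1] @ r[:-1])
        return X[1:] - rho * X[:-1], y[1:] - rho * y[:-1]

    def robust_glm(X, y, n_iter=10, c=1.345):
        """Huber-weighted least squares to down-weight motion-artifact samples."""
        w = np.ones(len(y))
        for _ in range(n_iter):
            sw = np.sqrt(w)
            beta, *_ = np.linalg.lstsq(X * sw[:, None], y * sw, rcond=None)
            r = y - X @ beta
            s = 1.4826 * np.median(np.abs(r - np.median(r))) + 1e-12
            u = np.abs(r) / (s * c)
            w = np.where(u <= 1.0, 1.0, 1.0 / u)      # Huber weights
        return beta

    # Usage sketch: Xw, yw = ar1_prewhiten(X, y); beta = robust_glm(Xw, yw)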
An inversion-based self-calibration for SIMS measurements: Application to H, F, and Cl in apatite
NASA Astrophysics Data System (ADS)
Boyce, J. W.; Eiler, J. M.
2011-12-01
Measurements of volatile abundances in igneous apatites can provide information regarding the abundances and evolution of volatiles in magmas, with applications to terrestrial volcanism and planetary evolution. Secondary ion mass spectrometry (SIMS) can produce accurate and precise measurements of H and other volatiles in many materials, including apatite. SIMS standardization generally makes use of empirical linear transfer functions that relate measured ion ratios to independently known concentrations. However, this approach is often limited by the lack of compositionally diverse, well-characterized, homogeneous standards. In general, SIMS calibrations are developed for minor and trace elements, and any two are treated as independent of one another. However, in crystalline materials, additional stoichiometric constraints may apply. In the case of apatite, the sum of the concentrations of the abundant volatile elements (H, Cl, and F) should closely approach 100% occupancy of their collective structural site. Here we propose and document the efficacy of a method for standardizing SIMS analyses of abundant volatiles in apatites that takes advantage of this stoichiometric constraint. The principal advantage of this method is that it is effectively self-standardizing; i.e., it requires no independently known homogeneous reference standards. We define a system of independent linear equations relating measured ion ratios (H/P, Cl/P, F/P) and unknown calibration slopes. Given sufficient range in the concentrations of the different elements among apatites measured in a single analytical session, solving this system of equations allows the calibration slope for each element to be determined without standards, using only blank-corrected ion ratios. In the case that a data set of this kind lacks sufficient range in measured compositions of one or more of the relevant ion ratios, one can employ measurements of additional apatites of a variety of compositions to increase the statistical range and make the inversion more accurate and precise. These additional non-standard apatites need only be wide-ranging in composition: they need not be homogeneous, nor have known H, F, or Cl concentrations. Tests utilizing synthetic data and data generated in the laboratory indicate that this method should yield satisfactory results provided the apatites meet the criteria of the model. The inversion method is able to reproduce conventional calibrations to within <2.5%, a level of accuracy comparable to, or even better than, the uncertainty of the conventional calibration, and one that includes both error in the inversion method and any true error in the independently determined values of the standards. Uncertainties in the inversion calibrations range from 0.1-1.7% (2σ), typically an order of magnitude smaller than the uncertainties in conventional calibrations (~4-5% for H2O, 1-19% for F and Cl). However, potential systematic errors stem from the model assumption of 100% occupancy of this site by the measured elements. Use of this method simplifies the analysis of H, F, and Cl in apatites by SIMS, and it may also be amenable to other stoichiometrically limited substitution groups, including P+As+S+Si+C in apatite, and Zr+Hf+U+Th in non-metamict zircon.
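A minimal numerical sketch of the inversion described (the ratio values and the use of ordinary least squares are my own illustrative choices): with blank-corrected ion ratios H/P, F/P, and Cl/P for each apatite, the closure of the volatile site gives one linear equation per grain, and the calibration slopes follow from a standard-free least-squares solve:

    import numpy as np

    # Rows: blank-corrected ion ratios (H/P, F/P, Cl/P) for several apatites
    # spanning a range of compositions (synthetic illustrative values).
    R = np.array([
        [0.80, 0.10, 0.05],
        [0.20, 0.60, 0.10],
        [0.05, 0.20, 0.70],
        [0.40, 0.35, 0.20],
    ])

    # Closure constraint: the calibrated occupancies sum to 1 (100% of the site),
    # i.e. R @ slopes = 1 for every grain; solve for the calibration slopes.
    ones = np.ones(R.shape[0])
    slopes, *_ = np.linalg.lstsq(R, ones, rcond=None)

    occupancy = R @ slopes      # should be close to 1 for each apatite
    print(slopes, occupancy)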
Implementing general quantum measurements on linear optical and solid-state qubits
NASA Astrophysics Data System (ADS)
Ota, Yukihiro; Ashhab, Sahel; Nori, Franco
2013-03-01
We show a systematic construction for implementing general measurements on a single qubit, including both strong (or projection) and weak measurements. We mainly focus on linear optical qubits. The present approach is composed of simple and feasible elements, i.e., beam splitters, wave plates, and polarizing beam splitters. We show how the parameters characterizing the measurement operators are controlled by the linear optical elements. We also propose a method for the implementation of general measurements in solid-state qubits. Furthermore, we show an interesting application of the general measurements, i.e., entanglement amplification.
Linear quadratic stochastic control of atomic hydrogen masers.
Koppang, P; Leland, R
1999-01-01
Data are given showing the results of using the linear quadratic Gaussian (LQG) technique to steer remote hydrogen masers to Coordinated Universal Time (UTC) as given by the United States Naval Observatory (USNO) via two-way satellite time transfer and the Global Positioning System (GPS). Data also are shown from the results of steering a hydrogen maser to the real-time USNO mean. A general overview of the theory behind the LQG technique also is given. The LQG control is a technique that uses Kalman filtering to estimate time and frequency errors used as input into a control calculation. A discrete frequency steer is calculated by minimizing a quadratic cost function that is dependent on both the time and frequency errors and the control effort. Different penalties, chosen by the designer, are assessed by the controller as the time and frequency errors and control effort vary from zero. With this feature, controllers can be designed to force the time and frequency differences between two standards to zero, either more or less aggressively depending on the application.
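For readers unfamiliar with the machinery, here is a minimal sketch of the control half of LQG: the steady-state LQR gain obtained from the discrete Riccati recursion, applied to Kalman-filtered time and frequency errors. The two-state clock model and the cost weights are illustrative assumptions, not the USNO steering parameters:

    import numpy as np

    def lqr_gain(A, B, Q, R, iters=500):
        """Steady-state discrete-time LQR gain via Riccati iteration."""
        P = Q.copy()
        for _ in range(iters):
            K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
            P = Q + A.T @ P @ (A - B @ K)
        return K

    # Two-state clock model: x = [time error (s), frequency error], tau = step (s).
    tau = 3600.0
    A = np.array([[1.0, tau], [0.0, 1.0]])
    B = np.array([[0.0], [1.0]])              # the steer adjusts frequency only
    Q = np.diag([1.0, 1e6])                   # penalties on time/frequency error
    R = np.array([[1e8]])                     # penalty on control effort

    K = lqr_gain(A, B, Q, R)
    x_hat = np.array([50e-9, 1e-13])          # Kalman-filtered error estimates
    u = -(K @ x_hat)                          # discrete frequency steer to apply
    print(K, u)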
DOE Office of Scientific and Technical Information (OSTI.GOV)
Callister, Stephen J.; Barry, Richard C.; Adkins, Joshua N.
2006-02-01
Central tendency, linear regression, locally weighted regression, and quantile techniques were investigated for normalization of peptide abundance measurements obtained from high-throughput liquid chromatography-Fourier transform ion cyclotron resonance mass spectrometry (LC-FTICR MS). Arbitrary abundances of peptides were obtained from three sample sets, including a standard protein sample, two Deinococcus radiodurans samples taken from different growth phases, and two mouse striatum samples from control and methamphetamine-stressed mice (strain C57BL/6). The selected normalization techniques were evaluated in both the absence and presence of biological variability by estimating extraneous variability prior to and following normalization. Prior to normalization, replicate runs from each sample set were observed to be statistically different, while following normalization replicate runs were no longer statistically different. Although all techniques reduced systematic bias, assigned ranks among the techniques revealed significant trends. For most LC-FTICR MS analyses, linear regression normalization ranked either first or second among the four techniques, suggesting that this technique was more generally suitable for reducing systematic biases.
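A minimal sketch of linear-regression normalization, the technique that ranked highly here (the choice of a median reference profile and the log-abundance convention are my assumptions): regress each run against a reference profile and remove the fitted linear systematic bias:

    import numpy as np

    def linear_regression_normalize(runs):
        """Normalize replicate runs of log abundances (rows = peptides, cols = runs).

        Illustrative sketch: each run is regressed on the median reference profile
        and the fitted linear systematic bias is subtracted.
        """
        runs = np.asarray(runs, dtype=float)
        ref = np.nanmedian(runs, axis=1)                   # reference profile
        out = np.empty_like(runs)
        for j in range(runs.shape[1]):
            mask = ~np.isnan(runs[:, j]) & ~np.isnan(ref)
            slope, intercept = np.polyfit(ref[mask], runs[mask, j], 1)
            fitted = slope * ref + intercept
            out[:, j] = runs[:, j] - fitted + ref          # remove systematic bias
        return out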
Graph partitions and cluster synchronization in networks of oscillators
Schaub, Michael T.; O’Clery, Neave; Billeh, Yazan N.; Delvenne, Jean-Charles; Lambiotte, Renaud; Barahona, Mauricio
2017-01-01
Synchronization over networks depends strongly on the structure of the coupling between the oscillators. When the coupling presents certain regularities, the dynamics can be coarse-grained into clusters by means of External Equitable Partitions of the network graph and their associated quotient graphs. We exploit this graph-theoretical concept to study the phenomenon of cluster synchronization, in which different groups of nodes converge to distinct behaviors. We derive conditions and properties of networks in which such clustered behavior emerges, and show that the ensuing dynamics is the result of the localization of the eigenvectors of the associated graph Laplacians linked to the existence of invariant subspaces. The framework is applied to both linear and non-linear models, first for the standard case of networks with positive edges, before being generalized to the case of signed networks with both positive and negative interactions. We illustrate our results with examples of both signed and unsigned graphs for consensus dynamics and for partial synchronization of oscillator networks under the master stability function as well as Kuramoto oscillators. PMID:27781454
Arnaud, J; Chappuis, P; Zawislak, R; Houot, O; Jaudon, M C; Bienvenu, F; Bureau, F
1993-02-01
An interlaboratory collaborative trial was conducted on the determination of serum copper using two different methods, based on colorimetry (test combination Copper, Boehringer Mannheim, Mannheim, Germany) and flame atomic absorption spectrometry (FAAS). The general performance of the colorimetric method was below that of FAAS, except for sensitivity and linear range, as assessed by detection limit (0.44 versus 1.32 μmol/L) and upper limit of linearity (150 versus 50 μmol/L). The range of the between-run CVs and the recovery of standard additions were, respectively, 2.3-11.9% and 92-127% for the colorimetric method and 1.1-6.0% and 93-101% for the FAAS method. Interferences were minimal with both methods. The two techniques correlated satisfactorily (the correlation coefficients ranged from 0.945-0.970 among laboratories) but the colorimetric assay exhibited slightly higher results than the FAAS method. Each method was transferable among laboratories.
Evaluating a linearized Euler equations model for strong turbulence effects on sound propagation.
Ehrhardt, Loïc; Cheinet, Sylvain; Juvé, Daniel; Blanc-Benon, Philippe
2013-04-01
Sound propagation outdoors is strongly affected by atmospheric turbulence. Under strongly perturbed conditions or long propagation paths, the sound fluctuations reach their asymptotic behavior, e.g., the intensity variance progressively saturates. The present study evaluates the ability of a numerical propagation model, based on finite-difference time-domain solving of the linearized Euler equations, to quantitatively reproduce the wave statistics under strong and saturated intensity fluctuations. It is the continuation of a previous study in which weak intensity fluctuations were considered. The numerical propagation model is presented and tested with two-dimensional harmonic sound propagation over long paths and strong atmospheric perturbations. The results are compared to quantitative theoretical or numerical predictions available for the wave statistics, including the log-amplitude variance and the probability density functions of the complex acoustic pressure. The match is excellent for the evaluated source frequencies and all sound fluctuation strengths. Hence, this model captures these many aspects of strong atmospheric turbulence effects on sound propagation. Finally, the model results for the intensity probability density function are compared with a standard fit by a generalized gamma function.
Symmetries and integrability of a fourth-order Euler-Bernoulli beam equation
NASA Astrophysics Data System (ADS)
Bokhari, Ashfaque H.; Mahomed, F. M.; Zaman, F. D.
2010-05-01
The complete symmetry group classification of the fourth-order Euler-Bernoulli ordinary differential equation, where the elastic modulus and the area moment of inertia are constants and the applied load is a function of the normal displacement, is obtained. We perform the Lie and Noether symmetry analysis of this problem. In the Lie analysis, the principal Lie algebra which is one dimensional extends in four cases, viz. the linear, exponential, general power law, and a negative fractional power law. It is further shown that two cases arise in the Noether classification with respect to the standard Lagrangian. That is, the linear case for which the Noether algebra dimension is one less than the Lie algebra dimension as well as the negative fractional power law. In the latter case the Noether algebra is three dimensional and is isomorphic to the Lie algebra which is sl(2,R). This exceptional case, although admitting the nonsolvable algebra sl(2,R), remarkably allows for a two-parameter family of exact solutions via the Noether integrals. The Lie reduction gives a second-order ordinary differential equation which has nonlocal symmetry.
An M-estimator for reduced-rank system identification.
Chen, Shaojie; Liu, Kai; Yang, Yuguang; Xu, Yuting; Lee, Seonjoo; Lindquist, Martin; Caffo, Brian S; Vogelstein, Joshua T
2017-01-15
High-dimensional time-series data from a wide variety of domains, such as neuroscience, are being generated every day. Fitting statistical models to such data, to enable parameter estimation and time-series prediction, is an important computational primitive. Existing methods, however, are unable to cope with the high-dimensional nature of these data, due to both computational and statistical reasons. We mitigate both kinds of issues by proposing an M-estimator for Reduced-rank System IDentification (MR. SID). A combination of low-rank approximations, ℓ1 and ℓ2 penalties, and some numerical linear algebra tricks yields an estimator that is computationally efficient and numerically stable. Simulations and real data examples demonstrate the usefulness of this approach in a variety of problems. In particular, we demonstrate that MR. SID can accurately estimate spatial filters, connectivity graphs, and time-courses from native resolution functional magnetic resonance imaging data. MR. SID therefore enables big time-series data to be analyzed using standard methods, readying the field for further generalizations including non-linear and non-Gaussian state-space models.
Quaternion-valued echo state networks.
Xia, Yili; Jahanchahi, Cyrus; Mandic, Danilo P
2015-04-01
Quaternion-valued echo state networks (QESNs) are introduced to cater for 3-D and 4-D processes, such as those observed in the context of renewable energy (3-D wind modeling) and human-centered computing (3-D inertial body sensors). The introduction of QESNs is made possible by the recent emergence of quaternion nonlinear activation functions with local analytic properties, required by nonlinear gradient descent training algorithms. To make QESNs second-order optimal for the generality of quaternion signals (both circular and noncircular), we employ augmented quaternion statistics to introduce widely linear QESNs. To that end, the standard widely linear model is modified so as to suit the properties of the dynamical reservoir, typically realized by recurrent neural networks. This allows for full exploitation of the second-order information in the data, contained both in the covariance and pseudocovariances, and a rigorous account of second-order noncircularity (improperness) and the corresponding power mismatch and coupling between the data components. Simulations in the prediction setting on both benchmark circular and noncircular signals and on noncircular real-world 3-D body motion data support the analysis.
Low-Loss Materials for Josephson Qubits
2014-10-09
[Fragmentary excerpt: the recoverable text discusses how standard electrical-circuit results are recovered for a linear circuit, justifying the use of linear concepts for a weakly non-linear device such as the transmon; the use of a double-sided noise spectrum; the loss tangent of a large-area pad junction; and an effective linearized circuit for the double junction, characterized by an admittance Y and junction inductance L_j.]
Composite Linear Models | Division of Cancer Prevention
By Stuart G. Baker The composite linear models software is a matrix approach to compute maximum likelihood estimates and asymptotic standard errors for models for incomplete multinomial data. It implements the method described in Baker SG. Composite linear models for incomplete multinomial data. Statistics in Medicine 1994;13:609-622. The software includes a library of thirty
Linear systems with structure group and their feedback invariants
NASA Technical Reports Server (NTRS)
Martin, C.; Hermann, R.
1977-01-01
A general method described by Hermann and Martin (1976) for the study of the feedback invariants of linear systems is considered. It is shown that this method, which makes use of ideas of topology and algebraic geometry, is very useful in the investigation of feedback problems for which the classical methods are not suitable. The transfer function as a curve in the Grassmanian is examined. The general concepts studied in the context of specific systems and applications are organized in terms of the theory of Lie groups and algebraic geometry. Attention is given to linear systems which have a structure group, linear mechanical systems, and feedback invariants. The investigation shows that Lie group techniques are powerful and useful tools for analysis of the feedback structure of linear systems.
Confidence Intervals for Assessing Heterogeneity in Generalized Linear Mixed Models
ERIC Educational Resources Information Center
Wagler, Amy E.
2014-01-01
Generalized linear mixed models are frequently applied to data with clustered categorical outcomes. The effect of clustering on the response is often difficult to practically assess partly because it is reported on a scale on which comparisons with regression parameters are difficult to make. This article proposes confidence intervals for…
Estimation of Complex Generalized Linear Mixed Models for Measurement and Growth
ERIC Educational Resources Information Center
Jeon, Minjeong
2012-01-01
Maximum likelihood (ML) estimation of generalized linear mixed models (GLMMs) is technically challenging because of the intractable likelihoods that involve high dimensional integrations over random effects. The problem is magnified when the random effects have a crossed design and thus the data cannot be reduced to small independent clusters. A…
A General Linear Model (GLM) was used to evaluate the deviation of predicted values from expected values for a complex environmental model. For this demonstration, we used the default level interface of the Regional Mercury Cycling Model (R-MCM) to simulate epilimnetic total mer...
Modeling containment of large wildfires using generalized linear mixed-model analysis
Mark Finney; Isaac C. Grenfell; Charles W. McHugh
2009-01-01
Billions of dollars are spent annually in the United States to contain large wildland fires, but the factors contributing to suppression success remain poorly understood. We used a regression model (generalized linear mixed-model) to model containment probability of individual fires, assuming that containment was a repeated-measures problem (fixed effect) and...
Posterior propriety for hierarchical models with log-likelihoods that have norm bounds
Michalak, Sarah E.; Morris, Carl N.
2015-07-17
Statisticians often use improper priors to express ignorance or to provide good frequency properties, requiring that posterior propriety be verified. Our paper addresses generalized linear mixed models, GLMMs, when Level I parameters have Normal distributions, with many commonly-used hyperpriors. It provides easy-to-verify sufficient posterior propriety conditions based on dimensions, matrix ranks, and exponentiated norm bounds, ENBs, for the Level I likelihood. Since many familiar likelihoods have ENBs, which is often verifiable via log-concavity and MLE finiteness, our novel use of ENBs permits unification of posterior propriety results and posterior MGF/moment results for many useful Level I distributions, including those commonly used with multilevel generalized linear models, e.g., GLMMs and hierarchical generalized linear models, HGLMs. Furthermore, those who need to verify existence of posterior distributions or of posterior MGFs/moments for a multilevel generalized linear model given a proper or improper multivariate F prior as in Section 1 should find the required results in Sections 1 and 2 and Theorem 3 (GLMMs), Theorem 4 (HGLMs), or Theorem 5 (posterior MGFs/moments).
Huang, Jian; Zhang, Cun-Hui
2013-01-01
The ℓ1-penalized method, or the Lasso, has emerged as an important tool for the analysis of large data sets. Many important results have been obtained for the Lasso in linear regression which have led to a deeper understanding of high-dimensional statistical problems. In this article, we consider a class of weighted ℓ1-penalized estimators for convex loss functions of a general form, including the generalized linear models. We study the estimation, prediction, selection and sparsity properties of the weighted ℓ1-penalized estimator in sparse, high-dimensional settings where the number of predictors p can be much larger than the sample size n. Adaptive Lasso is considered as a special case. A multistage method is developed to approximate concave regularized estimation by applying an adaptive Lasso recursively. We provide prediction and estimation oracle inequalities for single- and multi-stage estimators, a general selection consistency theorem, and an upper bound for the dimension of the Lasso estimator. Important models including the linear regression, logistic regression and log-linear models are used throughout to illustrate the applications of the general results. PMID:24348100
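As a small illustration of the adaptive Lasso special case (the weights, simulated data, and the use of scikit-learn are my own choices, not the paper's): a weighted ℓ1 penalty can be implemented by rescaling each column by the reciprocal of its weight, fitting an ordinary Lasso, and rescaling the coefficients back:

    import numpy as np
    from sklearn.linear_model import Lasso, LinearRegression

    def adaptive_lasso(X, y, alpha=0.1, gamma=1.0):
        """Weighted l1 (adaptive Lasso) via column rescaling of a plain Lasso."""
        b_init = LinearRegression().fit(X, y).coef_        # initial estimator (OLS here)
        w = 1.0 / (np.abs(b_init) ** gamma + 1e-8)         # adaptive weights
        Xw = X / w                                          # scale column j by 1/w_j
        model = Lasso(alpha=alpha, max_iter=10000).fit(Xw, y)
        return model.coef_ / w                              # rescale coefficients back

    rng = np.random.default_rng(0)
    X = rng.standard_normal((100, 20))
    beta = np.zeros(20); beta[:3] = [2.0, -1.5, 1.0]
    y = X @ beta + 0.5 * rng.standard_normal(100)
    print(np.round(adaptive_lasso(X, y), 2))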
NASA Astrophysics Data System (ADS)
Bičák, Jiří; Schmidt, Josef
2016-01-01
The question of the uniqueness of energy-momentum tensors in the linearized general relativity and in the linear massive gravity is analyzed without using variational techniques. We start from a natural ansatz for the form of the tensor (for example, that it is a linear combination of the terms quadratic in the first derivatives), and require it to be conserved as a consequence of field equations. In the case of the linear gravity in a general gauge we find a four-parametric system of conserved second-rank tensors which contains a unique symmetric tensor. This turns out to be the linearized Landau-Lifshitz pseudotensor employed often in full general relativity. We elucidate the relation of the four-parametric system to the expression proposed recently by Butcher et al. "on physical grounds" in harmonic gauge, and we show that the results coincide in the case of high-frequency waves in vacuum after a suitable averaging. In the massive gravity we show how one can arrive at the expression which coincides with the "generalized linear symmetric Landau-Lifshitz" tensor. However, there exists another uniquely given simpler symmetric tensor which can be obtained by adding the divergence of a suitable superpotential to the canonical energy-momentum tensor following from the Fierz-Pauli action. In contrast to the symmetric tensor derived by the Belinfante procedure which involves the second derivatives of the field variables, this expression contains only the field and its first derivatives. It is simpler than the generalized Landau-Lifshitz tensor but both yield the same total quantities since they differ by the divergence of a superpotential. We also discuss the role of the gauge conditions in the proofs of the uniqueness. In the Appendix, the symbolic tensor manipulation software cadabra is briefly described. It is very effective in obtaining various results which would otherwise require lengthy calculations.
Feingold, Alan
2009-01-01
The use of growth-modeling analysis (GMA)--including Hierarchical Linear Models, Latent Growth Models, and General Estimating Equations--to evaluate interventions in psychology, psychiatry, and prevention science has grown rapidly over the last decade. However, an effect size associated with the difference between the trajectories of the intervention and control groups that captures the treatment effect is rarely reported. This article first reviews two classes of formulas for effect sizes associated with classical repeated-measures designs that use the standard deviation of either change scores or raw scores for the denominator. It then broadens the scope to subsume GMA, and demonstrates that the independent groups, within-subjects, pretest-posttest control-group, and GMA designs all estimate the same effect size when the standard deviation of raw scores is uniformly used. Finally, it is shown that the correct effect size for treatment efficacy in GMA--the difference between the estimated means of the two groups at end of study (determined from the coefficient for the slope difference and length of study) divided by the baseline standard deviation--is not reported in clinical trials. PMID:19271847
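The recommended GMA effect size can be written compactly; the symbols below are my notation for the quantities the article describes (the group-by-time slope-difference coefficient, the length of the study, and the baseline standard deviation):

    d_{\mathrm{GMA}} \;=\; \frac{\hat{\beta}_{11}\,(t_{\mathrm{end}} - t_{0})}{SD_{\mathrm{baseline}}}

where \hat{\beta}_{11} is the estimated fixed-effect coefficient for the group-by-time interaction, so the numerator is the model-implied difference between the group means at the end of the study.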
Stability analysis of spacecraft power systems
NASA Technical Reports Server (NTRS)
Halpin, S. M.; Grigsby, L. L.; Sheble, G. B.; Nelms, R. M.
1990-01-01
The problems in applying standard electric utility models, analyses, and algorithms to the study of the stability of spacecraft power conditioning and distribution systems are discussed. Both single-phase and three-phase systems are considered. Of particular concern are the load and generator models that are used in terrestrial power system studies, as well as the standard assumptions of load and topological balance that lead to the use of the positive sequence network. The standard assumptions regarding relative speeds of subsystem dynamic responses that are made in the classical transient stability algorithm, which forms the backbone of utility-based studies, are examined. The applicability of these assumptions to a spacecraft power system stability study is discussed in detail. In addition to the classical indirect method, the applicability of Liapunov's direct methods to the stability determination of spacecraft power systems is discussed. It is pointed out that while the proposed method uses a solution process similar to the classical algorithm, the models used for the sources, loads, and networks are, in general, more accurate. Some preliminary results are given for a linear-graph, state-variable-based modeling approach to the study of the stability of space-based power distribution networks.
Standard electrode potential, Tafel equation, and the solvation thermodynamics.
Matyushov, Dmitry V
2009-06-21
Equilibrium in the electronic subsystem across the solution-metal interface is considered to connect the standard electrode potential to the statistics of localized electronic states in solution. We argue that a correct derivation of the Nernst equation for the electrode potential requires a careful separation of the relevant time scales. An equation for the standard metal potential is derived linking it to the thermodynamics of solvation. The Anderson-Newns model for electronic delocalization between the solution and the electrode is combined with a bilinear model of solute-solvent coupling introducing nonlinear solvation into the theory of heterogeneous electron transfer. We therefore are capable of addressing the question of how nonlinear solvation affects electrochemical observables. The transfer coefficient of electrode kinetics is shown to be equal to the derivative of the free energy, or generalized force, required to shift the unoccupied electronic level in the bulk. The transfer coefficient thus directly quantifies the extent of nonlinear solvation of the redox couple. The current model allows the transfer coefficient to deviate from the value of 0.5 of the linear solvation models at zero electrode overpotential. The electrode current curves become asymmetric in respect to the change in the sign of the electrode overpotential.
NASA Astrophysics Data System (ADS)
Kanisch, G.
2017-05-01
The concepts of ISO 11929 (2010) are applied to the evaluation of radionuclide activities from more complex multi-nuclide gamma-ray spectra. From net peak areas estimated by peak fitting, activities and their standard uncertainties are calculated by a weighted linear least-squares method with an additional step in which uncertainties of the design matrix elements are taken into account. A numerical treatment of the standard's uncertainty function, based on ISO 11929 Annex C.5, leads to a procedure for deriving decision threshold and detection limit values. The methods shown allow interferences between radionuclide activities to be resolved, also when calculating detection limits, where they can improve the latter by including more than one gamma line per radionuclide. The common single-nuclide weighted mean is extended to an interference-corrected (generalized) weighted mean, which, combined with the least-squares method, allows faster detection limit calculations. In addition, a new grouped uncertainty budget was introduced, which for each radionuclide gives uncertainty budgets from seven main variables, such as net count rates, peak efficiencies, gamma emission intensities and others; grouping refers to summation over lists of peaks per radionuclide.
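A minimal numerical sketch of the core step (the design matrix, weights, and values are illustrative, and the published procedure additionally propagates the uncertainty of the design matrix itself): activities are obtained from net peak count rates by weighted linear least squares, with the covariance of the estimates from the usual normal-equations expression:

    import numpy as np

    # y: net peak count rates; u_y: their standard uncertainties (illustrative values).
    y = np.array([120.0, 35.0, 80.0])
    u_y = np.array([12.0, 6.0, 9.0])

    # X[i, k]: expected count rate in peak i per unit activity of nuclide k
    # (efficiency times emission intensity), including interference terms.
    X = np.array([
        [0.9, 0.1],
        [0.0, 0.5],
        [0.3, 0.7],
    ])

    W = np.diag(1.0 / u_y**2)                 # weights = 1 / variance
    cov_A = np.linalg.inv(X.T @ W @ X)        # covariance of the activity estimates
    A = cov_A @ (X.T @ W @ y)                 # weighted least-squares activities
    u_A = np.sqrt(np.diag(cov_A))
    print(A, u_A)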
Thermodynamic properties of 5(nitrophenyl) furan-2-carbaldehyde isomers.
Dibrivnyi, Volodymyr; Sobechko, Iryna; Puniak, Marian; Horak, Yuriy; Obushak, Mykola; Van-Chin-Syan, Yuriy; Andriy, Marshalek; Velychkivska, Nadiia
2015-01-01
The aim of the current work was to determine the thermodynamic properties of 5-(2-nitrophenyl)furan-2-carbaldehyde, 5-(3-nitrophenyl)furan-2-carbaldehyde and 5-(4-nitrophenyl)furan-2-carbaldehyde. The temperature dependence of the saturated vapor pressure of the three compounds was determined by Knudsen's effusion method. The results are presented by the Clapeyron-Clausius equation in linear form, and via this form the standard enthalpies, entropies and Gibbs energies of sublimation and evaporation of the compounds were calculated at 298.15 K. The standard molar formation enthalpies of the compounds in the crystalline state at 298.15 K were determined indirectly from the corresponding standard molar combustion enthalpies, obtained using bomb calorimetry. Determination of the thermodynamic properties of these compounds may contribute to solving practical problems pertaining to the optimization of their synthesis, purification and application, and will also provide more thorough insight into their nature. Graphical abstract: Generalized structural formula of the investigated compounds and their formation enthalpy determination scheme in the gaseous state.
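The linear form referred to is presumably the usual integrated Clausius-Clapeyron relation; the expression below is the textbook form, not quoted from the paper, and assumes the sublimation enthalpy is approximately constant over the measured temperature range:

    \ln\!\left(\frac{p}{p^{\circ}}\right) \;=\; \frac{\Delta_{\mathrm{sub}}S^{\circ}}{R} \;-\; \frac{\Delta_{\mathrm{sub}}H^{\circ}}{R}\,\frac{1}{T}

so that the slope of ln p against 1/T gives the sublimation enthalpy and the intercept the sublimation entropy.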
Grol, R
1990-01-01
The Nederlands Huisartsen Genootschap (NHG), the college of general practitioners in the Netherlands, has begun a national programme of standard setting for the quality of care in general practice. When the standards have been drawn up and assessed, they are disseminated via the journal Huisarts en Wetenschap. In a survey carried out among a randomized sample of 10% of all general practitioners, attitudes towards national standard setting in general and towards the first set of standards (diabetes care) were studied. The response was 70% (453 doctors). A majority of the respondents said they were well informed about the national standard setting initiatives instigated by the NHG (71%) and about the content of the first standards (77%). The general practitioners had a positive attitude towards the setting of national standards for quality of care, and this was particularly true for doctors who were members of the NHG. Although a large majority of doctors said they agreed with most of the guidelines in the diabetes standards, fewer respondents were actually working to the guidelines, and some of the standards are certain to meet with a lot of resistance. Better knowledge of the standards and a more positive attitude to the process of national standard setting correlated with a more positive attitude to the guidelines formulated in the diabetes standards. The results could serve as a starting point for an exchange of views about standard setting in general practice in other countries. PMID:2265001
Fogolari, Federico; Corazza, Alessandra; Esposito, Gennaro
2015-04-05
The generalized Born model in the Onufriev, Bashford, and Case implementation (Onufriev et al., Proteins: Struct Funct Genet 2004, 55, 383) has emerged as one of the best compromises between accuracy and speed of computation. For simulations of nucleic acids, however, a number of issues should be addressed: (1) the generalized Born model is based on a linear model, and the linearization of the reference Poisson-Boltzmann equation may be questioned for highly charged systems such as nucleic acids; (2) although much attention has been given to potentials, solvation forces could be much less sensitive to linearization than the potentials; and (3) the accuracy of the Onufriev-Bashford-Case (OBC) model for nucleic acids depends on fine tuning of parameters. Here, we show that the linearization of the Poisson-Boltzmann equation has mild effects on computed forces, and that with an optimal choice of the OBC model parameters, solvation forces, essential for molecular dynamics simulations, agree well with those computed using the reference Poisson-Boltzmann model.
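For context, the quantity the OBC parameters tune is the set of effective Born radii R_i entering the standard Still-type generalized Born solvation energy; this is the textbook form, without salt screening, and is not quoted from this paper:

    \Delta G_{\mathrm{GB}} \;=\; -\frac{1}{2}\left(\frac{1}{\epsilon_{\mathrm{in}}} - \frac{1}{\epsilon_{\mathrm{out}}}\right) \sum_{i,j} \frac{q_i q_j}{f_{\mathrm{GB}}(r_{ij})},
    \qquad f_{\mathrm{GB}}(r_{ij}) \;=\; \sqrt{\,r_{ij}^{2} + R_i R_j \exp\!\left(-\frac{r_{ij}^{2}}{4 R_i R_j}\right)}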
Asymptotic aspect of derivations in Banach algebras.
Roh, Jaiok; Chang, Ick-Soon
2017-01-01
We prove that every approximate linear left derivation on a semisimple Banach algebra is continuous. Also, we consider linear derivations on Banach algebras and we first study the conditions for a linear derivation on a Banach algebra. Then we examine the functional inequalities related to a linear derivation and their stability. We finally take central linear derivations with radical ranges on semiprime Banach algebras and a continuous linear generalized left derivation on a semisimple Banach algebra.
Al-Ekrish, Asma'a A; Al-Shawaf, Reema; Schullian, Peter; Al-Sadhan, Ra'ed; Hörmann, Romed; Widmann, Gerlig
2016-10-01
To assess the comparability of linear measurements of dental implant sites recorded from multidetector computed tomography (MDCT) images obtained using a standard-dose filtered backprojection (FBP) technique with those from various ultralow doses combined with FBP, adaptive statistical iterative reconstruction (ASIR), and model-based iterative reconstruction (MBIR) techniques. The results of the study may contribute to MDCT dose optimization for dental implant site imaging. MDCT scans of two cadavers were acquired using a standard reference protocol and four ultralow-dose test protocols (TP). The volume CT dose index of the different dose protocols ranged from a maximum of 30.48-36.71 mGy to a minimum of 0.44-0.53 mGy. All scans were reconstructed using FBP, ASIR-50, ASIR-100, and MBIR, and either a bone or standard reconstruction kernel. Linear measurements were recorded from standardized images of the jaws by two examiners. Intra- and inter-examiner reliability of the measurements was analyzed using Cronbach's alpha and inter-item correlation. Agreement between the measurements obtained with the reference-dose/FBP protocol and each of the test protocols was determined with Bland-Altman plots and linear regression. Statistical significance was set at a P-value of 0.05. No systematic variation was found between the linear measurements obtained with the reference protocol and the other imaging protocols. The only exceptions were TP3/ASIR-50 (bone kernel) and TP4/ASIR-100 (bone and standard kernels). The mean measurement differences between these three protocols and the reference protocol were within ±0.1 mm, with the 95% confidence interval limits being within the range of ±1.15 mm. A nearly 97.5% reduction in dose did not significantly affect the height and width measurements of edentulous jaws, regardless of the reconstruction algorithm used.
Cooley, Richard L.
1983-01-01
This paper investigates factors influencing the degree of improvement in estimates of parameters of a nonlinear regression groundwater flow model by incorporating prior information of unknown reliability. Consideration of expected behavior of the regression solutions and results of a hypothetical modeling problem lead to several general conclusions. First, if the parameters are properly scaled, linearized expressions for the mean square error (MSE) in parameter estimates of a nonlinear model will often behave very nearly as if the model were linear. Second, by using prior information, the MSE in properly scaled parameters can be reduced greatly over the MSE of ordinary least squares estimates of parameters. Third, plots of estimated MSE and the estimated standard deviation of MSE versus an auxiliary parameter (the ridge parameter) specifying the degree of influence of the prior information on regression results can help determine the potential for improvement of parameter estimates. Fourth, proposed criteria can be used to make appropriate choices for the ridge parameter and another parameter expressing degree of overall bias in the prior information. Results of a case study of Truckee Meadows, Reno-Sparks area, Washoe County, Nevada, conform closely to the results of the hypothetical problem. In the Truckee Meadows case, incorporation of prior information did not greatly change the parameter estimates from those obtained by ordinary least squares. However, the analysis showed that both sets of estimates are more reliable than suggested by the standard errors from ordinary least squares.
NASA Astrophysics Data System (ADS)
Salazar, William
2003-01-01
The Standard Advanced Dewar Assembly (SADA) is the critical module in the Department of Defense (DoD) standardization effort of scanning second-generation thermal imaging systems. DoD has established a family of SADA's to address requirements for high performance (SADA I), mid-to-high performance (SADA II), and compact class (SADA III) systems. SADA's consist of the Infrared Focal Plane Array (IRFPA), Dewar, Command and Control Electronics (C&CE), and the cryogenic cooler. SADA's are used in weapons systems such as Comanche and Apache helicopters, the M1 Abrams Tank, the M2 Bradley Fighting Vehicle, the Line of Sight Antitank (LOSAT) system, the Improved Target Acquisition System (ITAS), and Javelin's Command Launch Unit (CLU). DOD has defined a family of tactical linear drive coolers in support of the family of SADA's. The Stirling linear drive cryo-coolers are utilized to cool the SADA's Infrared Focal Plane Arrays (IRFPAs) to their operating cryogenic temperatures. These linear drive coolers are required to meet strict cool-down time requirements along with lower vibration output, lower audible noise, and higher reliability than currently fielded rotary coolers. This paper will (1) outline the characteristics of each cooler, (2) present the status and results of qualification tests, and (3) present the status and test results of efforts to increase linear drive cooler reliability.
Iterative algorithms for a non-linear inverse problem in atmospheric lidar
NASA Astrophysics Data System (ADS)
Denevi, Giulia; Garbarino, Sara; Sorrentino, Alberto
2017-08-01
We consider the inverse problem of retrieving aerosol extinction coefficients from Raman lidar measurements. In this problem the unknown and the data are related through the exponential of a linear operator, the unknown is non-negative, and the data follow the Poisson distribution. Standard methods work on the log-transformed data and solve the resulting linear inverse problem, but neglect to take into account the noise statistics. In this study we show that proper modelling of the noise distribution can improve substantially the quality of the reconstructed extinction profiles. To achieve this goal, we consider the non-linear inverse problem with a non-negativity constraint, and propose two iterative algorithms derived using the Karush-Kuhn-Tucker conditions. We validate the algorithms with synthetic and experimental data. As expected, the proposed algorithms outperform standard methods in terms of sensitivity to noise and reliability of the estimated profile.
General linear methods and friends: Toward efficient solutions of multiphysics problems
NASA Astrophysics Data System (ADS)
Sandu, Adrian
2017-07-01
Time-dependent multiphysics partial differential equations are of great practical importance, as they model diverse phenomena that appear in mechanical and chemical engineering, aeronautics, astrophysics, meteorology and oceanography, financial modeling, environmental sciences, etc. There is no single best time discretization for the complex multiphysics systems of practical interest. We discuss "multimethod" approaches that combine different time steps and discretizations using the rigorous frameworks provided by Partitioned General Linear Methods and Generalized-Structure Additive Runge-Kutta Methods.
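A minimal sketch of the "multimethod" idea at its simplest (first-order implicit-explicit Euler on an additively split problem; the splitting, step size, and example system are illustrative, and practical partitioned GLM/ARK schemes are higher order):

    import numpy as np

    def imex_euler(y0, L, g, h, n_steps):
        """First-order IMEX Euler: (I - h L) y_{n+1} = y_n + h g(y_n).

        The stiff linear part L is treated implicitly, the non-stiff part g
        explicitly: the simplest instance of a partitioned/additive method.
        """
        I = np.eye(len(y0))
        M_inv = np.linalg.inv(I - h * L)          # factor once for a linear stiff part
        y = np.array(y0, dtype=float)
        out = [y.copy()]
        for _ in range(n_steps):
            y = M_inv @ (y + h * g(y))
            out.append(y.copy())
        return np.array(out)

    # Example: fast linear decay (stiff) plus a mild nonlinear coupling (non-stiff).
    L = np.array([[-1000.0, 0.0], [0.0, -0.5]])
    g = lambda y: np.array([0.0, 0.1 * y[0] * y[1]])
    traj = imex_euler([1.0, 1.0], L, g, h=0.01, n_steps=100)
    print(traj[-1])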
An Application to the Prediction of LOD Change Based on General Regression Neural Network
NASA Astrophysics Data System (ADS)
Zhang, X. H.; Wang, Q. J.; Zhu, J. J.; Zhang, H.
2011-07-01
Traditional prediction of the LOD (length of day) change was based on linear models, such as the least square model and the autoregressive technique, etc. Due to the complex non-linear features of the LOD variation, the performances of the linear model predictors are not fully satisfactory. This paper applies a non-linear neural network - general regression neural network (GRNN) model to forecast the LOD change, and the results are analyzed and compared with those obtained with the back propagation neural network and other models. The comparison shows that the performance of the GRNN model in the prediction of the LOD change is efficient and feasible.
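A minimal sketch of a GRNN predictor (the Gaussian kernel bandwidth and the embedding of past values as inputs are illustrative assumptions): the prediction is a kernel-weighted average of the training targets, i.e. Nadaraya-Watson regression:

    import numpy as np

    def grnn_predict(X_train, y_train, X_query, sigma=0.5):
        """General regression neural network: Gaussian-kernel-weighted average of targets."""
        X_train = np.atleast_2d(X_train)
        X_query = np.atleast_2d(X_query)
        d2 = ((X_query[:, None, :] - X_train[None, :, :]) ** 2).sum(-1)
        K = np.exp(-d2 / (2.0 * sigma**2))                 # Gaussian kernel weights
        return (K @ y_train) / K.sum(axis=1)

    # Example: predict the next value of a series from the previous 3 values.
    rng = np.random.default_rng(0)
    series = np.sin(np.linspace(0, 20, 200)) + 0.05 * rng.standard_normal(200)
    X = np.array([series[i:i + 3] for i in range(len(series) - 3)])
    y = series[3:]
    print(grnn_predict(X[:-1], y[:-1], X[-1:], sigma=0.3), y[-1])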
NASA Astrophysics Data System (ADS)
Lee, Hyunho; Jeong, Seonghoon; Jo, Yunhui; Yoon, Myonggeun
2015-07-01
Quality assurance (QA) for medical linear accelerators is indispensable for appropriate cancer treatment. Some international organizations and advanced Western countries have provided QA guidelines for linear accelerators. Currently, QA regulations for linear accelerators in Korean hospitals specify a system in which each hospital stipulates its independent hospital-based protocols for QA procedures (HP_QAPs) and conducts QA based on those HP_QAPs, while regulatory authorities verify whether items under those HP_QAPs have been performed. However, because this regulatory method cannot guarantee the quality of universal treatment, and QA items and tolerance criteria differ among hospitals, the presentation of standardized QA items and tolerance criteria is essential. In this study, QA items in HP_QAPs from various hospitals were compared with those presented by international organizations, such as the International Atomic Energy Agency, the European Union, and the American Association of Physicists in Medicine, and by advanced Western countries, such as the USA, the UK, and Canada. Concordance rates between the QA items for linear accelerators presented by these organizations and those currently implemented in Korean hospitals were 50% for daily QA, 22% for weekly QA, 43% for monthly QA, and 65% for annual QA, with an overall concordance across all QA items of approximately 48%. In the comparison between QA items implemented in Korean hospitals and those implemented in advanced Western countries, the concordance rates were 50% for daily QA, 33% for weekly QA, 60% for monthly QA, and 67% for annual QA, with an overall concordance of approximately 57%. These results indicate that the HP_QAPs currently implemented by Korean hospitals as QA standards for linear accelerators used in radiation therapy do not meet international standards. To solve this problem, nationally standardized QA items and procedures for linear accelerators need to be developed.
Single toxin dose-response models revisited
DOE Office of Scientific and Technical Information (OSTI.GOV)
Demidenko, Eugene, E-mail: eugened@dartmouth.edu
The goal of this paper is to offer a rigorous analysis of the sigmoid-shaped single toxin dose-response relationship. The toxin efficacy function is introduced, and four special points, including the point of maximum toxin efficacy and the inflection points, are defined on the dose-response curve. The special points define three phases of the toxin effect on mortality: (1) toxin concentrations smaller than the first inflection point or (2) larger than the second inflection point imply a low mortality rate, and (3) concentrations between the first and the second inflection points imply a high mortality rate. Probabilistic interpretation and mathematical analysis are provided for each of the four models: Hill, logit, probit, and Weibull. Two general model extensions are introduced: (1) a multi-target hit model that accounts for the existence of several vital receptors affected by the toxin, and (2) a model with nonzero mortality at zero concentration to account for natural mortality. Special attention is given to statistical estimation in the framework of the generalized linear model with a binomial dependent variable (the mortality count in each experiment), in contrast to the widespread nonlinear regression approach that treats the mortality rate as a continuous variable. The models are illustrated using standard EPA Daphnia acute (48 h) toxicity tests with mortality as a function of NiCl or CuSO4 toxin. Highlights: • The paper offers a rigorous study of a sigmoid dose-response relationship. • The concentration with the highest mortality rate is rigorously defined. • A table with four special points for five mortality curves is presented. • Two new sigmoid dose-response models have been introduced. • The generalized linear model is advocated for estimation of the sigmoid dose-response relationship.
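The estimation framework advocated above, a generalized linear model with a binomial response, can be sketched as follows; the concentrations, mortality counts, and the default (logit) link are illustrative assumptions, not the paper's data or final model.

```python
# A minimal sketch (not the paper's code) of fitting a sigmoid dose-response curve as a
# generalized linear model with a binomial response (mortality counts out of n exposed).
# A probit or complementary log-log link could be substituted in the same way.
import numpy as np
import statsmodels.api as sm

conc = np.array([0.5, 1.0, 2.0, 4.0, 8.0, 16.0])   # toxin concentrations (hypothetical)
dead = np.array([1, 3, 8, 14, 18, 20])              # number dead out of n (hypothetical)
n = np.full_like(dead, 20)

X = sm.add_constant(np.log(conc))                   # regress on log-concentration
model = sm.GLM(np.column_stack([dead, n - dead]), X, family=sm.families.Binomial())
result = model.fit()
print(result.summary())
```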
Partial polygon pruning of hydrographic features in automated generalization
Stum, Alexander K.; Buttenfield, Barbara P.; Stanislawski, Larry V.
2017-01-01
This paper demonstrates a working method to automatically detect and prune portions of waterbody polygons to support creation of a multi-scale hydrographic database. Water features are known to be sensitive to scale change; and thus multiple representations are required to maintain visual and geographic logic at smaller scales. Partial pruning of polygonal features—such as long and sinuous reservoir arms, stream channels that are too narrow at the target scale, and islands that begin to coalesce—entails concurrent management of the length and width of polygonal features as well as integrating pruned polygons with other generalized point and linear hydrographic features to maintain stream network connectivity. The implementation follows data representation standards developed by the U.S. Geological Survey (USGS) for the National Hydrography Dataset (NHD). Portions of polygonal rivers, streams, and canals are automatically characterized for width, length, and connectivity. This paper describes an algorithm for automatic detection and subsequent processing, and shows results for a sample of NHD subbasins in different landscape conditions in the United States.
Quantum demolition filtering and optimal control of unstable systems.
Belavkin, V P
2012-11-28
A brief account of the quantum information dynamics and dynamical programming methods for optimal control of quantum unstable systems is given to both open loop and feedback control schemes corresponding respectively to deterministic and stochastic semi-Markov dynamics of stable or unstable systems. For the quantum feedback control scheme, we exploit the separation theorem of filtering and control aspects as in the usual case of quantum stable systems with non-demolition observation. This allows us to start with the Belavkin quantum filtering equation generalized to demolition observations and derive the generalized Hamilton-Jacobi-Bellman equation using standard arguments of classical control theory. This is equivalent to a Hamilton-Jacobi equation with an extra linear dissipative term if the control is restricted to Hamiltonian terms in the filtering equation. An unstable controlled qubit is considered as an example throughout the development of the formalism. Finally, we discuss optimum observation strategies to obtain a pure quantum qubit state from a mixed one.
Generalized hydrodynamics and non-equilibrium steady states in integrable many-body quantum systems
NASA Astrophysics Data System (ADS)
Vasseur, Romain; Bulchandani, Vir; Karrasch, Christoph; Moore, Joel
The long-time dynamics of thermalizing many-body quantum systems can typically be described in terms of a conventional hydrodynamics picture that results from the decay of all but a few slow modes associated with standard conservation laws (such as particle number, energy, or momentum). However, hydrodynamics is expected to fail for integrable systems that are characterized by an infinite number of conservation laws, leading to unconventional transport properties and to complex non-equilibrium states beyond the traditional dogma of statistical mechanics. In this talk, I will describe recent attempts to understand such stationary states far from equilibrium using a generalized hydrodynamics picture. I will discuss the consistency of ``Bethe-Boltzmann'' kinetic equations with linear response Drude weights and with density-matrix renormalization group calculations. This work was supported by the Department of Energy through the Quantum Materials program (R. V.), NSF DMR-1206515, AFOSR MURI and a Simons Investigatorship (J. E. M.), DFG through the Emmy Noether program KA 3360/2-1 (C. K.).
4-wave dynamics in kinetic wave turbulence
NASA Astrophysics Data System (ADS)
Chibbaro, Sergio; Dematteis, Giovanni; Rondoni, Lamberto
2018-01-01
A general Hamiltonian wave system with quartic resonances is considered, in the standard kinetic limit of a continuum of weakly interacting dispersive waves with random phases. The evolution equation for the multimode characteristic function Z is obtained within an "interaction representation" and a perturbation expansion in the small nonlinearity parameter. A frequency renormalization is performed to remove linear terms that do not appear in the 3-wave case. Feynman-Wyld diagrams are used to average over phases, leading to a first-order differential evolution equation for Z. A hierarchy of equations, analogous to the Boltzmann hierarchy for low-density gases, is derived, which preserves in time the property of random phases and amplitudes. This amounts to a general formalism for both the N-mode and the 1-mode PDF equations for 4-wave turbulent systems, suitable for numerical simulations and for investigating intermittency. Some of the main results, which are developed here in detail, have been tested numerically in a recent work.
TI-59 Programs for Multiple Regression.
1980-05-01
For the general linear hypothesis model of full rank [Graybill, 1961], written as Y = Xβ + ε, ε ~ N(0, σ²I), where Y is the n×1 vector of observations, X is the n×k design matrix, β is the k×1 vector of coefficients, and ε is the n×1 error vector, the programs calculate the least-squares estimates, a "reduced model" solution, and confidence intervals for linear functions of the coefficients, obtained using (X'X)⁻¹ and the estimate of σ² and based on the t distribution.
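A minimal sketch of these calculations, assuming illustrative data rather than anything from the original report: least-squares estimates, the residual variance, and a t-based confidence interval for a linear function c'β of the coefficients.

```python
# A minimal sketch of estimation and a confidence interval in the general linear hypothesis
# model Y = X*beta + eps; the design matrix, coefficients, and noise level are illustrative.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n, k = 30, 3
X = np.column_stack([np.ones(n), rng.normal(size=(n, k - 1))])   # design matrix with intercept
beta_true = np.array([1.0, 2.0, -0.5])
Y = X @ beta_true + rng.normal(scale=0.3, size=n)

XtX_inv = np.linalg.inv(X.T @ X)
beta_hat = XtX_inv @ X.T @ Y
resid = Y - X @ beta_hat
s2 = resid @ resid / (n - k)                                      # unbiased estimate of sigma^2

c = np.array([0.0, 1.0, -1.0])                                    # linear function c'beta of interest
est = c @ beta_hat
se = np.sqrt(s2 * c @ XtX_inv @ c)
t_crit = stats.t.ppf(0.975, df=n - k)
print(est - t_crit * se, est + t_crit * se)                       # 95% confidence interval
```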
NASA Astrophysics Data System (ADS)
Inc, Mustafa; Aliyu, Aliyu Isa; Yusuf, Abdullahi; Baleanu, Dumitru
2017-12-01
This paper addresses the nonlinear Schrödinger-type equation (NLSE) in (2+1) dimensions which describes the nonlinear spin dynamics of Heisenberg ferromagnetic spin chains (HFSC) with anisotropic and bilinear interactions in the semiclassical limit. Two integration schemes are employed to study the equation: the complex envelope function ansatz and the generalized tanh methods. Dark, dark-bright or combined optical and singular soliton solutions of the equation are derived. Furthermore, the modulational instability (MI) is studied based on the standard linear-stability analysis and the MI gain is obtained. Numerical simulations of the obtained results are analyzed, with figures showing the physical meaning of the solutions.
Optimized parameter estimation in the presence of collective phase noise
NASA Astrophysics Data System (ADS)
Altenburg, Sanah; Wölk, Sabine; Tóth, Géza; Gühne, Otfried
2016-11-01
We investigate phase and frequency estimation with different measurement strategies under the effect of collective phase noise. First, we consider the standard linear estimation scheme and present an experimentally realizable optimization of the initial probe states by collective rotations. We identify the optimal rotation angle for different measurement times. Second, we show that subshot noise sensitivity—up to the Heisenberg limit—can be reached in presence of collective phase noise by using differential interferometry, where one part of the system is used to monitor the noise. For this, not only Greenberger-Horne-Zeilinger states but also symmetric Dicke states are suitable. We investigate the optimal splitting for a general symmetric Dicke state at both inputs and discuss possible experimental realizations of differential interferometry.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bartlett, Roscoe
2010-03-31
GlobiPack contains a small collection of optimization globalization algorithms. These algorithms are used by optimization and nonlinear equation solver algorithms as the line-search procedure with Newton and quasi-Newton methods. They are standard published 1-D line-search algorithms such as those described in the book by Nocedal and Wright, Numerical Optimization, 2nd edition, 2006. One set of algorithms was copied and refactored from the existing open-source Trilinos package MOOCHO, where the line-search code is used to globalize SQP methods. This software is generic to any mathematical optimization problem where smooth derivatives exist. There is no specific connection or mention whatsoever to any specific application, period. You cannot find more general mathematical software.
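As an illustration of the kind of algorithm in this collection, the sketch below implements a standard backtracking (Armijo) line search in the style described by Nocedal and Wright; it is not the GlobiPack code, and the objective, parameters, and search direction are illustrative assumptions.

```python
# A minimal sketch of a standard backtracking (Armijo) line search for globalizing
# Newton/quasi-Newton steps; purely illustrative, not the GlobiPack implementation.
import numpy as np

def backtracking_line_search(f, grad, x, p, alpha0=1.0, c1=1e-4, rho=0.5, max_iter=50):
    """Shrink the step length alpha until the sufficient-decrease (Armijo) condition holds."""
    fx, gx = f(x), grad(x)
    alpha = alpha0
    for _ in range(max_iter):
        if f(x + alpha * p) <= fx + c1 * alpha * gx @ p:
            return alpha
        alpha *= rho
    return alpha

# Toy usage: one descent step on a simple quadratic objective.
f = lambda x: 0.5 * x @ x
grad = lambda x: x
x = np.array([3.0, -4.0])
p = -grad(x)                       # descent direction
print(backtracking_line_search(f, grad, x, p))
```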
Microgravity vibration isolation: Optimal preview and feedback control
NASA Technical Reports Server (NTRS)
Hampton, R. D.; Knospe, C. R.; Grodsinsky, C. M.; Allaire, P. E.; Lewis, D. W.
1992-01-01
In order to achieve adequate low-frequency vibration isolation for certain space experiments, an active control is needed due to inherent passive-isolator limitations. Proposed here are five possible state-space models for a one-dimensional vibration isolation system with a quadratic performance index. The five models are subsets of a general set of nonhomogeneous state-space equations which includes disturbance terms. An optimal control is determined, using a differential equations approach, for this class of problems. This control is expressed in terms of constant Linear Quadratic Regulator (LQR) feedback gains and constant feedforward (preview) gains. The gains can be easily determined numerically. They result in a robust controller and offer substantial improvements over a control that uses standard LQR feedback alone.
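A minimal sketch of computing constant LQR feedback gains numerically is given below; the one-dimensional isolator model, the weighting matrices, and the omission of the feedforward (preview) gains are illustrative assumptions, not the models proposed in the paper.

```python
# A minimal sketch, with a hypothetical one-dimensional isolator model, of computing constant
# LQR feedback gains; the preview/feedforward gains discussed above are omitted here.
import numpy as np
from scipy.linalg import solve_continuous_are

# State x = [position, velocity]; control u = actuator force per unit mass (illustrative values).
A = np.array([[0.0, 1.0],
              [-1.0, -0.2]])
B = np.array([[0.0],
              [1.0]])
Q = np.diag([10.0, 1.0])           # state weighting in the quadratic performance index
R = np.array([[0.1]])              # control weighting

P = solve_continuous_are(A, B, Q, R)       # solve the algebraic Riccati equation
K = np.linalg.solve(R, B.T @ P)            # constant LQR feedback gain, u = -K x
print(K)
```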
A General Linear Model Approach to Adjusting the Cumulative GPA.
ERIC Educational Resources Information Center
Young, John W.
A general linear model (GLM), using least-squares techniques, was used to develop a criterion measure to replace freshman year grade point average (GPA) in college admission predictive validity studies. Problems with the use of GPA include those associated with the combination of grades from different courses and disciplines into a single measure,…
ERIC Educational Resources Information Center
Chen, Haiwen
2012-01-01
In this article, linear item response theory (IRT) observed-score equating is compared under a generalized kernel equating framework with Levine observed-score equating for nonequivalent groups with anchor test design. Interestingly, these two equating methods are closely related despite being based on different methodologies. Specifically, when…
Yang, Qichun; Zhang, Xuesong; Xu, Xingya; ...
2017-05-29
Riverine carbon cycling is an important, but insufficiently investigated component of the global carbon cycle. Analyses of environmental controls on riverine carbon cycling are critical for improved understanding of mechanisms regulating carbon processing and storage along the terrestrial-aquatic continuum. Here, we compile and analyze riverine dissolved organic carbon (DOC) concentration data from 1402 United States Geological Survey (USGS) gauge stations to examine the spatial variability and environmental controls of DOC concentrations in the United States (U.S.) surface waters. DOC concentrations exhibit high spatial variability, with an average of 6.42 ± 6.47 mg C/L (mean ± standard deviation). In general, high DOC concentrations occur in the Upper Mississippi River basin and the Southeastern U.S., while low concentrations are mainly distributed in the Western U.S. Single-factor analysis indicates that the slope of drainage areas, wetlands, forests, percentage of first-order streams, and instream nutrients (such as nitrogen and phosphorus) markedly influence DOC concentrations, but the explanatory power of each bivariate model is lower than 35%. Analyses based on general multi-linear regression models suggest DOC concentrations are jointly impacted by multiple factors. Soil properties mainly show positive correlations with DOC concentrations; forest and shrub lands have positive correlations with DOC concentrations, but urban areas and croplands demonstrate negative impacts; total instream phosphorus and dam density correlate positively with DOC concentrations. Notably, the relative importance of these environmental controls varies substantially across major U.S. water resource regions. In addition, DOC concentrations and environmental controls also show significant variability from small streams to large rivers, which may be caused by changing carbon sources and removal rates across river orders. In sum, our results reveal that a general multi-linear regression analysis of twenty-one terrestrial and aquatic environmental factors can partially explain (56%) the variation in DOC concentration. In conclusion, this study highlights the complexity of the interactions among these environmental factors in determining DOC concentrations and thus calls for process-based, non-linear methodologies to constrain uncertainties in riverine DOC cycling.
Solution of the Generalized Noah's Ark Problem.
Billionnet, Alain
2013-01-01
The phylogenetic diversity (PD) of a set of species is a measure of the evolutionary distance among the species in the collection, based on a phylogenetic tree. Such a tree is composed of a root, internal nodes, and leaves that correspond to the set of taxa under study. With each edge of the tree is associated a non-negative branch length (evolutionary distance). If a particular survival probability is associated with each taxon, the PD measure becomes the expected PD measure. In the Noah's Ark Problem (NAP) introduced by Weitzman (1998), these survival probabilities can be increased at some cost. The problem is to determine how best to allocate a limited amount of resources to maximize the expected PD of the considered species. It is easy to formulate the NAP as a (difficult) nonlinear 0-1 programming problem. The aim of this article is to show that a general version of the NAP (GNAP) can be solved simply and efficiently with any set of edge weights and any set of survival probabilities by using standard mixed-integer linear programming software. The crucial point in moving from a nonlinear program in binary variables to a mixed-integer linear program is to approximate the logarithmic function by the lower envelope of a set of tangents to the curve; a sketch of this step is given below. Solving the resulting mixed-integer linear program provides not only a near-optimal solution but also an upper bound on the value of the optimal solution. We also applied this approach to a generalization of the nature reserve problem (GNRP) that consists of selecting a set of regions to be conserved so that the expected PD of the set of species present in these regions is maximized. In this case, the survival probabilities of different taxa are not independent of each other. Computational results are presented to illustrate the potential of the approach. Near-optimal solutions with hypothetical phylogenetic trees comprising about 4000 taxa are obtained in a few seconds or minutes of computing time for the GNAP, and in about 30 min for the GNRP. In all cases, the average guarantee varies from 0% to 1.20%.
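A minimal sketch of that linearization step, under illustrative assumptions (the breakpoints and probability values are arbitrary, and no MILP solver is invoked), is:

```python
# A minimal sketch of the key linearization step: the concave function log(p) is replaced by the
# lower envelope of tangent lines at a set of breakpoints, so the objective can be handled by
# standard mixed-integer *linear* programming software. Breakpoints below are illustrative.
import numpy as np

def log_tangent_envelope(p, breakpoints):
    """Value at p of the lower envelope of tangents to log at the given breakpoints."""
    # Tangent to log at point b:  log(b) + (p - b) / b
    tangents = [np.log(b) + (p - b) / b for b in breakpoints]
    return min(tangents)

# Because log is concave, every tangent lies above the curve, so the envelope over-estimates
# log(p); in the full MILP this is what yields an upper bound on the optimal value.
breaks = np.linspace(0.05, 1.0, 10)
for p in (0.07, 0.3, 0.8):
    print(p, np.log(p), log_tangent_envelope(p, breaks))
```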
Ideal Standards, Acceptance, and Relationship Satisfaction: Latitudes of Differential Effects
Buyukcan-Tetik, Asuman; Campbell, Lorne; Finkenauer, Catrin; Karremans, Johan C.; Kappen, Gesa
2017-01-01
We examined whether the relations of consistency between ideal standards and perceptions of a current romantic partner with partner acceptance and relationship satisfaction level off, or decelerate, above a threshold. We tested our hypothesis using a 3-year longitudinal data set collected from heterosexual newlywed couples. We used two indicators of consistency: pattern correspondence (within-person correlation between ideal standards and perceived partner ratings) and mean-level match (difference between ideal standards score and perceived partner score). Our results revealed that pattern correspondence had no relation with partner acceptance, but a positive linear/exponential association with relationship satisfaction. Mean-level match had a significant positive association with actor’s acceptance and relationship satisfaction up to the point where perceived partner score equaled ideal standards score. Partner effects did not show a consistent pattern. The results suggest that the consistency between ideal standards and perceived partner attributes has a non-linear association with acceptance and relationship satisfaction, although the results were more conclusive for mean-level match. PMID:29033876
NASA Astrophysics Data System (ADS)
Abbondanza, Claudio; Altamimi, Zuheir; Chin, Toshio; Collilieux, Xavier; Dach, Rolf; Gross, Richard; Heflin, Michael; König, Rolf; Lemoine, Frank; Macmillan, Dan; Parker, Jay; van Dam, Tonie; Wu, Xiaoping
2014-05-01
The International Terrestrial Reference Frame (ITRF) adopts a piece-wise linear model to parameterize regularized station positions and velocities. The space-geodetic (SG) solutions from VLBI, SLR, GPS and DORIS used as input in the ITRF combination process account for tidal loading deformations, but ignore the non-tidal part. As a result, the non-linear signal observed in the time series of SG-derived station positions in part reflects non-tidal loading displacements not introduced in the SG data reduction. In this analysis, we assess the impact of non-tidal atmospheric loading (NTAL) corrections on the TRF computation. Focusing on the a-posteriori approach, (i) the NTAL model derived from the National Centre for Environmental Prediction (NCEP) surface pressure is removed from the SINEX files of the SG solutions used as inputs to the TRF determinations; (ii) adopting a Kalman-filter based approach, two distinct linear TRFs are estimated combining the 4 SG solutions with (corrected TRF solution) and without the NTAL displacements (standard TRF solution). Linear fits (offset and atmospheric velocity) of the NTAL displacements removed during step (i) are estimated accounting for the station position discontinuities introduced in the SG solutions and adopting different weighting strategies. The NTAL-derived (atmospheric) velocity fields are compared to those obtained from the TRF reductions during step (ii). The consistency between the atmospheric and the TRF-derived velocity fields is examined. We show how the presence of station position discontinuities in SG solutions degrades the agreement between the velocity fields and compare the effect of different weighting structure adopted while estimating the linear fits to the NTAL displacements. Finally, we evaluate the effect of restoring the atmospheric velocities determined through the linear fits of the NTAL displacements to the single-technique linear reference frames obtained by stacking the standard SG SINEX files. Differences between the velocity fields obtained restoring the NTAL displacements and the standard stacked linear reference frames are discussed.
Two Computer Programs for the Statistical Evaluation of a Weighted Linear Composite.
ERIC Educational Resources Information Center
Sands, William A.
1978-01-01
Two computer programs (one batch, one interactive) are designed to provide statistics for a weighted linear combination of several component variables. Both programs provide mean, variance, standard deviation, and a validity coefficient. (Author/JKS)
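A minimal sketch of the statistics these programs report, assuming synthetic component scores, a criterion variable, and arbitrary weights (none of which come from the original programs):

```python
# A minimal sketch (not the original programs) of the statistics for a weighted linear composite:
# its mean, variance, standard deviation, and validity coefficient (correlation with a criterion).
import numpy as np

rng = np.random.default_rng(1)
components = rng.normal(size=(100, 3))                 # scores on three component variables
criterion = components @ np.array([0.5, 0.3, 0.2]) + rng.normal(scale=0.5, size=100)
weights = np.array([2.0, 1.0, 1.0])                    # analyst-chosen composite weights

composite = components @ weights
print("mean:", composite.mean())
print("variance:", composite.var(ddof=1))
print("std dev:", composite.std(ddof=1))
print("validity:", np.corrcoef(composite, criterion)[0, 1])
```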
Finite-time H∞ filtering for non-linear stochastic systems
NASA Astrophysics Data System (ADS)
Hou, Mingzhe; Deng, Zongquan; Duan, Guangren
2016-09-01
This paper describes the robust H∞ filtering analysis and the synthesis of general non-linear stochastic systems with finite settling time. We assume that the system dynamic is modelled by Itô-type stochastic differential equations of which the state and the measurement are corrupted by state-dependent noises and exogenous disturbances. A sufficient condition for non-linear stochastic systems to have the finite-time H∞ performance with gain less than or equal to a prescribed positive number is established in terms of a certain Hamilton-Jacobi inequality. Based on this result, the existence of a finite-time H∞ filter is given for the general non-linear stochastic system by a second-order non-linear partial differential inequality, and the filter can be obtained by solving this inequality. The effectiveness of the obtained result is illustrated by a numerical example.
Convex set and linear mixing model
NASA Technical Reports Server (NTRS)
Xu, P.; Greeley, R.
1993-01-01
A major goal of optical remote sensing is to determine surface compositions of the earth and other planetary objects. For assessment of composition, single pixels in multi-spectral images usually record a mixture of the signals from various materials within the corresponding surface area. In this report, we introduce a closed and bounded convex set as a mathematical model for linear mixing. This model has a clear geometric implication because the closed and bounded convex set is a natural generalization of a triangle in n-space. The endmembers are extreme points of the convex set. Every point in the convex closure of the endmembers is a linear mixture of those endmembers, which is exactly how linear mixing is defined. With this model, some general criteria for selecting endmembers could be described. This model can lead to a better understanding of linear mixing models.
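A minimal sketch of estimating mixing fractions under this convex-set model, i.e., a least-squares fit constrained so that the abundances are nonnegative and sum to one; the endmember spectra and the observed pixel are synthetic assumptions.

```python
# A minimal sketch of the linear mixing model: a pixel spectrum is modeled as a convex
# combination of endmember spectra, so abundances are nonnegative and sum to one.
import numpy as np
from scipy.optimize import minimize

E = np.array([[0.1, 0.8, 0.3],
              [0.2, 0.7, 0.5],
              [0.6, 0.2, 0.4],
              [0.9, 0.1, 0.2]])            # columns: endmember spectra over 4 bands (synthetic)
true_frac = np.array([0.6, 0.3, 0.1])
pixel = E @ true_frac + 0.01 * np.random.default_rng(2).normal(size=4)

objective = lambda a: np.sum((E @ a - pixel) ** 2)
constraints = ({"type": "eq", "fun": lambda a: a.sum() - 1.0},)   # abundances sum to one
bounds = [(0.0, 1.0)] * 3                                          # and are nonnegative

res = minimize(objective, x0=np.full(3, 1 / 3), bounds=bounds, constraints=constraints)
print(res.x)   # estimated abundances: a point inside the convex hull of the endmembers
```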
41 CFR 50-204.2 - General safety and health standards.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 41 Public Contracts and Property Management 1 2010-07-01 2010-07-01 true General safety and health... Public Contracts PUBLIC CONTRACTS, DEPARTMENT OF LABOR 204-SAFETY AND HEALTH STANDARDS FOR FEDERAL SUPPLY CONTRACTS General Safety and Health Standards § 50-204.2 General safety and health standards. (a) Every...
Kim, Dong Jin; Kim, Wook; Lee, Jun Hyun
2017-08-01
Intra-corporeal esophagojejunostomy (EJ) using a linear stapler creates a stapler entry hole that requires secure closure during the totally laparoscopic total gastrectomy (TLTG) procedure for gastric cancer. Since a standard method has not been established yet, the feasibility of using V-loc 180 (Covidien, Mansfield, MA, USA) suture material was evaluated in this study. From January 2012 to March 2015, 25 patients who underwent linear-stapled EJ and V-loc 180 closure of the remaining enterotomy were included in this study. Basic clinico-pathological characteristics, surgical outcomes, and short-term complications were analyzed. The mean patient age was 60.4 ± 8.5 years. Nineteen males and six females were included in this study. The mean body mass index was 25.3 ± 2.3 kg/m2. There were 22 stage-I, 2 stage-II, and 1 stage-III gastric cancer patients. The mean operation time was 240.5 ± 44.6 min, and the time for anastomosis was 38.8 ± 11.2 min. The procedures were successfully performed in all cases without any intra-operative complications. There was one case of EJ leakage that occurred at the corner of the EJ staple line and not at the enterotomy closure site. The closure of the remaining enterotomy site using V-loc 180 suture following linear-stapler EJ is technically feasible and safe during the TLTG procedure. However, further experience and results from other surgeons are necessary to generalize this procedure.
Casero-Alonso, V; López-Fidalgo, J; Torsney, B
2017-01-01
Binary response models are used in many real applications. For these models the Fisher information matrix (FIM) is proportional to the FIM of a weighted simple linear regression model. The same is also true when the weight function has a finite integral. Thus, optimal designs for one binary model are also optimal for the corresponding weighted linear regression model. The main objective of this paper is to provide a tool for the construction of MV-optimal designs, minimizing the maximum of the variances of the estimates, for a general design space. MV-optimality is a potentially difficult criterion because of its nondifferentiability at equal variance designs. A methodology for obtaining MV-optimal designs where the design space is a compact interval [a, b] will be given for several standard weight functions. The methodology will allow us to build a user-friendly computer tool based on Mathematica to compute MV-optimal designs. Some illustrative examples will show a representation of MV-optimal designs in the Euclidean plane, taking a and b as the axes. The applet will be explained using two relevant models. In the first one the case of a weighted linear regression model is considered, where the weight function is directly chosen from a typical family. In the second example a binary response model is assumed, where the probability of the outcome is given by a typical probability distribution. Practitioners can use the provided applet to identify the solution and to know the exact support points and design weights. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
Lead-lag relationships between stock and market risk within linear response theory
NASA Astrophysics Data System (ADS)
Borysov, Stanislav; Balatsky, Alexander
2015-03-01
We study historical correlations and lead-lag relationships between individual stock risks (standard deviation of daily stock returns) and market risk (standard deviation of daily returns of a market-representative portfolio) in the US stock market. We consider the cross-correlation functions averaged over stocks, using historical stock prices from the Standard & Poor's 500 index for 1994-2013. The observed historical dynamics suggests that the dependence between the risks was almost linear during the US stock market downturn of 2002 and after the US housing bubble in 2007, remaining at that level until 2013. Moreover, the averaged cross-correlation function often had an asymmetric shape with respect to zero lag in the periods of high correlation. We develop the analysis by the application of the linear response formalism to study underlying causal relations. The calculated response functions suggest the presence of characteristic regimes near financial crashes, when individual stock risks affect market risk and vice versa. This work was supported by VR 621-2012-2983.
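The lead-lag analysis can be sketched as follows with synthetic return series standing in for the S&P 500 data; the rolling window length and the lag range are illustrative assumptions.

```python
# A minimal sketch of the lead-lag analysis: cross-correlation at lag k between a stock's risk
# series and the market risk series, both measured as rolling standard deviations of returns.
# The return series below are synthetic stand-ins, not historical market data.
import numpy as np
import pandas as pd

rng = np.random.default_rng(3)
market_ret = pd.Series(rng.normal(0, 0.01, 2000))
stock_ret = 0.8 * market_ret + pd.Series(rng.normal(0, 0.01, 2000))

window = 21                                        # roughly one trading month
market_risk = market_ret.rolling(window).std().dropna()
stock_risk = stock_ret.rolling(window).std().dropna()

def cross_corr(x, y, lag):
    """Correlation between x shifted by `lag` days and y (positive lag: x leads y)."""
    return x.shift(lag).corr(y)

for lag in range(-5, 6):
    print(lag, round(cross_corr(stock_risk, market_risk, lag), 3))
```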
Peterson, David; Stofleth, Jerome H.; Saul, Venner W.
2017-07-11
Linear shaped charges are described herein. In a general embodiment, the linear shaped charge has an explosive with an elongated arrowhead-shaped profile. The linear shaped charge also has an elongated v-shaped liner that is inset into a recess of the explosive. Another linear shaped charge includes an explosive that is shaped as a star-shaped prism. Liners are inset into crevices of the explosive, where the explosive acts as a tamper.
Genetic parameters for racing records in trotters using linear and generalized linear models.
Suontama, M; van der Werf, J H J; Juga, J; Ojala, M
2012-09-01
Heritability and repeatability and genetic and phenotypic correlations were estimated for trotting race records with linear and generalized linear models using 510,519 records on 17,792 Finnhorses and 513,161 records on 25,536 Standardbred trotters. Heritability and repeatability were estimated for single racing time and earnings traits with linear models, and logarithmic scale was used for racing time and fourth-root scale for earnings to correct for nonnormality. Generalized linear models with a gamma distribution were applied for single racing time and with a multinomial distribution for single earnings traits. In addition, genetic parameters for annual earnings were estimated with linear models on the observed and fourth-root scales. Racing success traits of single placings, winnings, breaking stride, and disqualifications were analyzed using generalized linear models with a binomial distribution. Estimates of heritability were greatest for racing time, which ranged from 0.32 to 0.34. Estimates of heritability were low for single earnings with all distributions, ranging from 0.01 to 0.09. Annual earnings were closer to normal distribution than single earnings. Heritability estimates were moderate for annual earnings on the fourth-root scale, 0.19 for Finnhorses and 0.27 for Standardbred trotters. Heritability estimates for binomial racing success variables ranged from 0.04 to 0.12, being greatest for winnings and least for breaking stride. Genetic correlations among racing traits were high, whereas phenotypic correlations were mainly low to moderate, except correlations between racing time and earnings were high. On the basis of a moderate heritability and moderate to high repeatability for racing time and annual earnings, selection of horses for these traits is effective when based on a few repeated records. Because of high genetic correlations, direct selection for racing time and annual earnings would also result in good genetic response in racing success.
Seasonal control skylight glazing panel with passive solar energy switching
DOE Office of Scientific and Technical Information (OSTI.GOV)
Miller, J.V.
1983-10-25
A substantially transparent one-piece glazing panel is provided for generally horizontal mounting in a skylight. The panel is composed of a repeated pattern of two alternating and contiguous linear optical elements: the first optical element is an upstanding, generally right-triangular linear prism, and the second optical element is an upward-facing plano-cylindrical lens in which the planar surface is reflectively opaque and lies generally in the same plane as the base of the triangular prism.
Liang, Gaozhen; Dong, Chunwang; Hu, Bin; Zhu, Hongkai; Yuan, Haibo; Jiang, Yongwen; Hao, Guoshuang
2018-05-18
Withering is the first step in the processing of congou black tea. To address the deficiencies of traditional water content detection methods, a machine vision-based NDT (non-destructive testing) method was established to detect the moisture content of withered leaves. First, a computer vision system collected visible-light images of tea leaf surfaces in time sequence, and color and texture characteristics were extracted from the spatial changes of colors. Then, quantitative prediction models for moisture content detection of withered tea leaves were established through linear PLS (partial least squares) and non-linear SVM (support vector machine) methods. The results showed correlation coefficients higher than 0.8 between the water contents and the green component mean value (G), lightness component mean value (L*), and uniformity (U), which means that the extracted characteristics have great potential to predict the water contents. The performance parameters of the SVM prediction model, namely the correlation coefficient of the prediction set (Rp), the root-mean-square error of prediction (RMSEP), and the relative standard deviation (RPD), are 0.9314, 0.0411 and 1.8004, respectively. The non-linear modeling method better describes the quantitative analytical relations between the image and the water content. With superior generalization and robustness, the method provides a new approach and theoretical basis for online water content monitoring in automated black tea production.
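A minimal sketch of the two model families compared above, linear PLS and a non-linear SVM, using synthetic features standing in for the color/texture statistics (G, L*, U, etc.); the data, hyperparameters, and train/test split are illustrative assumptions.

```python
# A minimal sketch (with synthetic data standing in for the image features) of fitting a linear
# PLS regression and a non-linear support vector machine to predict moisture content.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.svm import SVR
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(4)
features = rng.normal(size=(120, 6))                       # e.g., G, L*, U and texture statistics
moisture = (0.7 - 0.1 * features[:, 0] + 0.05 * np.tanh(features[:, 1])
            + 0.02 * rng.normal(size=120))                 # synthetic moisture content

X_tr, X_te, y_tr, y_te = train_test_split(features, moisture, random_state=0)

pls = PLSRegression(n_components=3).fit(X_tr, y_tr)
svm = SVR(kernel="rbf", C=10.0, epsilon=0.01).fit(X_tr, y_tr)

for name, model in [("PLS", pls), ("SVM", svm)]:
    pred = np.ravel(model.predict(X_te))
    rmsep = np.sqrt(np.mean((pred - y_te) ** 2))
    rp = np.corrcoef(pred, y_te)[0, 1]
    print(name, "Rp =", round(rp, 3), "RMSEP =", round(rmsep, 4))
```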
An improved method for polarimetric image restoration in interferometry
NASA Astrophysics Data System (ADS)
Pratley, Luke; Johnston-Hollitt, Melanie
2016-11-01
Interferometric radio astronomy data require the effects of limited coverage in the Fourier plane to be accounted for via a deconvolution process. For the last 40 years this process, known as `cleaning', has been performed almost exclusively on all Stokes parameters individually as if they were independent scalar images. However, here we demonstrate that, for the case of the linear polarization P, this approach fails to properly account for the complex vector nature, resulting in a process which is dependent on the axes under which the deconvolution is performed. We present here an improved method, `Generalized Complex CLEAN', which properly accounts for the complex vector nature of polarized emission and is invariant under rotations of the deconvolution axes. We use two Australia Telescope Compact Array data sets to test standard and complex CLEAN versions of the Högbom and SDI (Steer-Dewdney-Ito) CLEAN algorithms. We show that in general the complex CLEAN version of each algorithm produces more accurate clean components with fewer spurious detections and lower computation cost due to reduced iterations than the current methods. In particular, we find that the complex SDI CLEAN produces the best results for diffuse polarized sources as compared with standard CLEAN algorithms and other complex CLEAN algorithms. Given the move to wide-field, high-resolution polarimetric imaging with future telescopes such as the Square Kilometre Array, we suggest that Generalized Complex CLEAN should be adopted as the deconvolution method for all future polarimetric surveys and in particular that the complex version of an SDI CLEAN should be used.
Liquid-chromatographic determination of cephalosporins and chloramphenicol in serum.
Danzer, L A
1983-05-01
A "high-performance" liquid-chromatographic technique involving a radial compression module is used for measuring chloramphenicol and five cephalosporin antibiotics: cefotaxime, cefoxitin, cephapirin, and cefamandol. Serum proteins are precipitated with acetonitrile solution containing 4'-nitroacetanilide as the internal standard. The drugs are eluted with a mobile phase of methanol/acetate buffer (30/70 by vol), pH 5.5. Absorbance of the cephalosporins is monitored at 254 nm. Standard curves are linear to at least 100 mg/L. The absorbance of chloramphenicol is monitored at 254 nm and 280 nm, and its standard curve is linear to at least 50 mg/L. The elution times for various other drugs were also determined, to check for potential interferents.
Asymptotic structure of space-time with a positive cosmological constant
NASA Astrophysics Data System (ADS)
Kesavan, Aruna
In general relativity a satisfactory framework for describing isolated systems exists when the cosmological constant Lambda is zero. The detailed analysis of the asymptotic structure of the gravitational field, which constitutes the framework of asymptotic flatness, lays the foundation for research in diverse areas in gravitational science. However, the framework is incomplete in two respects. First, asymptotic flatness provides well-defined expressions for physical observables such as energy and momentum as 'charges' of asymptotic symmetries at null infinity, ℐ+. But the asymptotic symmetry group, called the Bondi-Metzner-Sachs group, is infinite-dimensional and a tensorial expression for the 'charge' integral of an arbitrary BMS element is missing. We address this issue by providing a charge formula which is a 2-sphere integral over fields local to the 2-sphere and refers to no extraneous structure. The second, and more significant shortcoming is that observations have established that Lambda is not zero but positive in our universe. Can the framework describing isolated systems and their gravitational radiation be extended to incorporate this fact? In this dissertation we show that, unfortunately, the standard framework does not extend from the Lambda = 0 case to the Lambda > 0 case in a physically useful manner. In particular, we do not have an invariant notion of gravitational waves in the non-linear regime, nor an analog of the Bondi 'news tensor', nor positive energy theorems. In addition, we argue that the stronger boundary condition of conformal flatness of the intrinsic metric on ℐ+, which reduces the asymptotic symmetry group from Diff(ℐ) to the de Sitter group, is insufficient to characterize gravitational fluxes and is physically unreasonable. To obtain guidance for the full non-linear theory with Lambda > 0, linearized gravitational waves in de Sitter space-time are analyzed in detail. i) We show explicitly that conformal flatness of the boundary removes half the degrees of freedom of the gravitational field by hand and is not justified by physical considerations; ii) We obtain gauge invariant expressions of energy-momentum and angular momentum fluxes carried by gravitational waves in terms of fields defined at ℐ+; iii) We demonstrate that the flux formulas reduce to the familiar ones in Minkowski spacetime in spite of the fact that the limit Lambda → 0 is discontinuous (since, in particular, ℐ+ changes its space-like character to null in the limit); iv) We obtain a generalization of Einstein's 1918 quadrupole formula for power emission by a linearized source to include a positive Lambda; and, finally v) We show that, although energy of linearized gravitational waves can be arbitrarily negative in general, gravitational waves emitted by physically reasonable sources carry positive energy.
Linear regression techniques for use in the EC tracer method of secondary organic aerosol estimation
NASA Astrophysics Data System (ADS)
Saylor, Rick D.; Edgerton, Eric S.; Hartsell, Benjamin E.
A variety of linear regression techniques and simple slope estimators are evaluated for use in the elemental carbon (EC) tracer method of secondary organic carbon (OC) estimation. Linear regression techniques based on ordinary least squares are not suitable for situations where measurement uncertainties exist in both regressed variables. In the past, regression based on the method of Deming [1943. Statistical Adjustment of Data. Wiley, London] has been the preferred choice for EC tracer method parameter estimation. In agreement with Chu [2005. Stable estimate of primary OC/EC ratios in the EC tracer method. Atmospheric Environment 39, 1383-1392], we find that in the limited case where primary non-combustion OC (OC non-comb) is assumed to be zero, the ratio of averages (ROA) approach provides a stable and reliable estimate of the primary OC-EC ratio, (OC/EC) pri. In contrast with Chu [2005. Stable estimate of primary OC/EC ratios in the EC tracer method. Atmospheric Environment 39, 1383-1392], however, we find that the optimal use of Deming regression (and the more general York et al. [2004. Unified equations for the slope, intercept, and standard errors of the best straight line. American Journal of Physics 72, 367-375] regression) provides excellent results as well. For the more typical case where OC non-comb is allowed to obtain a non-zero value, we find that regression based on the method of York is the preferred choice for EC tracer method parameter estimation. In the York regression technique, detailed information on uncertainties in the measurement of OC and EC is used to improve the linear best fit to the given data. If only limited information is available on the relative uncertainties of OC and EC, then Deming regression should be used. On the other hand, use of ROA in the estimation of secondary OC, and thus the assumption of a zero OC non-comb value, generally leads to an overestimation of the contribution of secondary OC to total measured OC.
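A minimal sketch of the Deming slope estimator discussed above, with synthetic OC and EC data; delta is the ratio of the OC to EC error variances, and all data and noise levels are illustrative assumptions (the York method additionally uses per-point uncertainties and is not shown).

```python
# A minimal sketch of Deming regression for the case where both variables carry measurement
# error; delta is the ratio of the y-error variance to the x-error variance (delta = 1 gives
# orthogonal regression). The OC/EC data below are synthetic illustrations.
import numpy as np

def deming_fit(x, y, delta=1.0):
    """Closed-form Deming regression slope and intercept."""
    mx, my = x.mean(), y.mean()
    sxx = np.sum((x - mx) ** 2)
    syy = np.sum((y - my) ** 2)
    sxy = np.sum((x - mx) * (y - my))
    b = (syy - delta * sxx + np.sqrt((syy - delta * sxx) ** 2 + 4 * delta * sxy ** 2)) / (2 * sxy)
    return b, my - b * mx

rng = np.random.default_rng(5)
ec_true = rng.uniform(0.2, 2.0, 200)
ec = ec_true + rng.normal(0, 0.05, 200)             # measured EC with error
oc = 2.5 * ec_true + 0.4 + rng.normal(0, 0.1, 200)  # measured OC = (OC/EC)_pri * EC + OC_non-comb

slope, intercept = deming_fit(ec, oc, delta=(0.1 / 0.05) ** 2)
print("(OC/EC)_pri estimate:", round(slope, 3), " OC_non-comb estimate:", round(intercept, 3))
```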
45 CFR 164.306 - Security standards: General rules.
Code of Federal Regulations, 2012 CFR
2012-10-01
... RELATED REQUIREMENTS SECURITY AND PRIVACY Security Standards for the Protection of Electronic Protected Health Information § 164.306 Security standards: General rules. (a) General requirements. Covered... covered entity to reasonably and appropriately implement the standards and implementation specifications...
45 CFR 164.306 - Security standards: General rules.
Code of Federal Regulations, 2013 CFR
2013-10-01
... RELATED REQUIREMENTS SECURITY AND PRIVACY Security Standards for the Protection of Electronic Protected Health Information § 164.306 Security standards: General rules. (a) General requirements. Covered... and appropriately implement the standards and implementation specifications as specified in this...
An interlaboratory comparison study on the measurement of elements in PM10
NASA Astrophysics Data System (ADS)
Yatkin, Sinan; Belis, Claudio A.; Gerboles, Michel; Calzolai, Giulia; Lucarelli, Franco; Cavalli, Fabrizia; Trzepla, Krystyna
2016-01-01
An inter-laboratory comparison study was conducted to measure elemental loadings on PM10 samples, collected in Ispra, a regional background/rural site in Italy, using three different XRF (X-ray Fluorescence) methods, namely Epsilon 5 by linear calibration, Quant'X by the standardless analysis, and PIXE (Particle Induced X-ray Emission) with linear calibration. A subset of samples was also analyzed by ICP-MS (Inductively Coupled Plasma-Mass Spectrometry). Several metrics including method detection limits (MDLs), precision, bias from a NIST standard reference material (SRM 2783) quoted values, relative absolute difference, orthogonal regression and the ratio of the absolute difference between the methods to claimed uncertainty were used to compare the laboratories. The MDLs were found to be comparable for many elements. Precision estimates were less than 10% for the majority of the elements. Absolute biases from SRM 2783 remained less than 20% for the majority of certified elements. The regression results of PM10 samples showed that the three XRF laboratories measured very similar mass loadings for S, K, Ti, Mn, Fe, Cu, Br, Sr and Pb with slopes within 20% of unity. The ICP-MS results confirmed the agreement and discrepancies between XRF laboratories for Al, K, Ca, Ti, V, Cu, Sr and Pb. The ICP-MS results are inconsistent with the XRF laboratories for Fe and Zn. The absolute differences between the XRF laboratories generally remained within their claimed uncertainties, showing a pattern generally consistent with the orthogonal regression results.
Leveraging prognostic baseline variables to gain precision in randomized trials
Colantuoni, Elizabeth; Rosenblum, Michael
2015-01-01
We focus on estimating the average treatment effect in a randomized trial. If baseline variables are correlated with the outcome, then appropriately adjusting for these variables can improve precision. An example is the analysis of covariance (ANCOVA) estimator, which applies when the outcome is continuous, the quantity of interest is the difference in mean outcomes comparing treatment versus control, and a linear model with only main effects is used. ANCOVA is guaranteed to be at least as precise as the standard unadjusted estimator, asymptotically, under no parametric model assumptions and also is locally semiparametric efficient. Recently, several estimators have been developed that extend these desirable properties to more general settings that allow any real-valued outcome (e.g., binary or count), contrasts other than the difference in mean outcomes (such as the relative risk), and estimators based on a large class of generalized linear models (including logistic regression). To the best of our knowledge, we give the first simulation study in the context of randomized trials that compares these estimators. Furthermore, our simulations are not based on parametric models; instead, our simulations are based on resampling data from completed randomized trials in stroke and HIV in order to assess estimator performance in realistic scenarios. We provide practical guidance on when these estimators are likely to provide substantial precision gains and describe a quick assessment method that allows clinical investigators to determine whether these estimators could be useful in their specific trial contexts. PMID:25872751
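A minimal sketch of the ANCOVA estimator described above, using simulated data rather than the stroke or HIV trial data; the effect size, covariate strength, and sample size are illustrative assumptions.

```python
# A minimal sketch of the ANCOVA estimator: regress the outcome on treatment and baseline
# covariates with main effects only; the treatment coefficient estimates the mean difference.
# The simulated data below are illustrative, not resampled trial data.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(6)
n = 400
baseline = rng.normal(size=n)                      # prognostic baseline variable
treat = rng.integers(0, 2, size=n)                 # 1:1 randomization
outcome = 0.5 * treat + 1.2 * baseline + rng.normal(size=n)

# Unadjusted estimate: simple difference in means.
unadjusted = outcome[treat == 1].mean() - outcome[treat == 0].mean()

# ANCOVA: adjust for the baseline covariate.
X = sm.add_constant(np.column_stack([treat, baseline]))
fit = sm.OLS(outcome, X).fit()
print("unadjusted:", round(unadjusted, 3))
print("ANCOVA estimate:", round(fit.params[1], 3), "SE:", round(fit.bse[1], 3))
```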
Privacy-preserving outlier detection through random nonlinear data distortion.
Bhaduri, Kanishka; Stefanski, Mark D; Srivastava, Ashok N
2011-02-01
Consider a scenario in which the data owner has some private or sensitive data and wants a data miner to access them for studying important patterns without revealing the sensitive information. Privacy-preserving data mining aims to solve this problem by randomly transforming the data prior to their release to the data miners. Previous works only considered the case of linear data perturbations--additive, multiplicative, or a combination of both--for studying the usefulness of the perturbed output. In this paper, we discuss nonlinear data distortion using potentially nonlinear random data transformation and show how it can be useful for privacy-preserving anomaly detection from sensitive data sets. We develop bounds on the expected accuracy of the nonlinear distortion and also quantify privacy by using standard definitions. The highlight of this approach is to allow a user to control the amount of privacy by varying the degree of nonlinearity. We show how our general transformation can be used for anomaly detection in practice for two specific problem instances: a linear model and a popular nonlinear model using the sigmoid function. We also analyze the proposed nonlinear transformation in full generality and then show that, for specific cases, it is distance preserving. A main contribution of this paper is the discussion between the invertibility of a transformation and privacy preservation and the application of these techniques to outlier detection. The experiments conducted on real-life data sets demonstrate the effectiveness of the approach.
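One way to picture the approach is the sketch below: a random linear projection followed by a sigmoid whose steepness controls the degree of non-linearity (and hence the privacy level). This is an illustration of the general idea under assumed parameters, not the paper's exact transformation.

```python
# A minimal sketch of a random non-linear data distortion: a random (private) linear projection
# followed by a sigmoid of adjustable steepness; larger `nonlinearity` means stronger distortion.
# Purely illustrative, not the scheme analyzed in the paper.
import numpy as np

def nonlinear_distort(X, nonlinearity=1.0, seed=0):
    """Randomly project the data and squash it with a sigmoid of adjustable steepness."""
    rng = np.random.default_rng(seed)
    R = rng.normal(size=(X.shape[1], X.shape[1]))          # random projection kept by the data owner
    Z = X @ R
    return 1.0 / (1.0 + np.exp(-nonlinearity * Z))

rng = np.random.default_rng(7)
data = rng.normal(size=(500, 4))
data[::100] += 6.0                                          # inject a few outliers
released = nonlinear_distort(data, nonlinearity=0.5)       # distorted data handed to the miner
print(released.shape)
```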
42 CFR 493.1239 - Standard: General laboratory systems quality assessment.
Code of Federal Regulations, 2011 CFR
2011-10-01
... 42 Public Health 5 2011-10-01 2011-10-01 false Standard: General laboratory systems quality... for Nonwaived Testing General Laboratory Systems § 493.1239 Standard: General laboratory systems... laboratory systems requirements specified at §§ 493.1231 through 493.1236. (b) The general laboratory systems...
Regression Is a Univariate General Linear Model Subsuming Other Parametric Methods as Special Cases.
ERIC Educational Resources Information Center
Vidal, Sherry
Although the concept of the general linear model (GLM) has existed since the 1960s, other univariate analyses such as the t-test and the analysis of variance models have remained popular. The GLM produces an equation that minimizes the mean differences of independent variables as they are related to a dependent variable. From a computer printout…
Parameter Recovery for the 1-P HGLLM with Non-Normally Distributed Level-3 Residuals
ERIC Educational Resources Information Center
Kara, Yusuf; Kamata, Akihito
2017-01-01
A multilevel Rasch model using a hierarchical generalized linear model is one approach to multilevel item response theory (IRT) modeling and is referred to as a one-parameter hierarchical generalized linear logistic model (1-P HGLLM). Although it has the flexibility to model nested structure of data with covariates, the model assumes the normality…
Extending Local Canonical Correlation Analysis to Handle General Linear Contrasts for fMRI Data
Jin, Mingwu; Nandy, Rajesh; Curran, Tim; Cordes, Dietmar
2012-01-01
Local canonical correlation analysis (CCA) is a multivariate method that has been proposed to more accurately determine activation patterns in fMRI data. In its conventional formulation, CCA has several drawbacks that limit its usefulness in fMRI. A major drawback is that, unlike the general linear model (GLM), a test of general linear contrasts of the temporal regressors has not been incorporated into the CCA formalism. To overcome this drawback, a novel directional test statistic was derived using the equivalence of multivariate multiple regression (MVMR) and CCA. This extension will allow CCA to be used for inference of general linear contrasts in more complicated fMRI designs without reparameterization of the design matrix and without reestimating the CCA solutions for each particular contrast of interest. With the proper constraints on the spatial coefficients of CCA, this test statistic can yield a more powerful test on the inference of evoked brain regional activations from noisy fMRI data than the conventional t-test in the GLM. The quantitative results from simulated and pseudoreal data and activation maps from fMRI data were used to demonstrate the advantage of this novel test statistic. PMID:22461786
NASA Astrophysics Data System (ADS)
Fan, Zuhui
2000-01-01
The linear bias of the dark halos from a model under the Zeldovich approximation is derived and compared with the fitting formula of simulation results. While qualitatively similar to the Press-Schechter formula, this model gives a better description for the linear bias around the turnaround point. This advantage, however, may be compromised by the large uncertainty of the actual behavior of the linear bias near the turnaround point. For a broad class of structure formation models in the cold dark matter framework, a general relation exists between the number density and the linear bias of dark halos. This relation can be readily tested by numerical simulations. Thus, instead of laboriously checking these models one by one, numerical simulation studies can falsify a whole category of models. The general validity of this relation is important in identifying key physical processes responsible for the large-scale structure formation in the universe.
40 CFR 63.7887 - What are the general standards I must meet for my affected equipment leak sources?
Code of Federal Regulations, 2014 CFR
2014-07-01
... ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) NATIONAL EMISSION STANDARDS FOR HAZARDOUS... Pollutants: Site Remediation General Standards § 63.7887 What are the general standards I must meet for my...
40 CFR 63.7887 - What are the general standards I must meet for my affected equipment leak sources?
Code of Federal Regulations, 2010 CFR
2010-07-01
... ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) NATIONAL EMISSION STANDARDS FOR HAZARDOUS... Pollutants: Site Remediation General Standards § 63.7887 What are the general standards I must meet for my...
40 CFR 63.7887 - What are the general standards I must meet for my affected equipment leak sources?
Code of Federal Regulations, 2012 CFR
2012-07-01
... ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) NATIONAL EMISSION STANDARDS FOR HAZARDOUS... Pollutants: Site Remediation General Standards § 63.7887 What are the general standards I must meet for my...
40 CFR 63.7887 - What are the general standards I must meet for my affected equipment leak sources?
Code of Federal Regulations, 2013 CFR
2013-07-01
... ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) NATIONAL EMISSION STANDARDS FOR HAZARDOUS... Pollutants: Site Remediation General Standards § 63.7887 What are the general standards I must meet for my...
40 CFR 63.7887 - What are the general standards I must meet for my affected equipment leak sources?
Code of Federal Regulations, 2011 CFR
2011-07-01
... ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) NATIONAL EMISSION STANDARDS FOR HAZARDOUS... Pollutants: Site Remediation General Standards § 63.7887 What are the general standards I must meet for my...
Application of General Regression Neural Network to the Prediction of LOD Change
NASA Astrophysics Data System (ADS)
Zhang, Xiao-Hong; Wang, Qi-Jie; Zhu, Jian-Jun; Zhang, Hao
2012-01-01
Traditional methods for predicting the change in length of day (LOD change) are mainly based on linear models, such as the least square model and the autoregression model. However, the LOD change comprises complicated non-linear factors, and the prediction performance of the linear models is not ideal. Thus, a non-linear neural network, the general regression neural network (GRNN) model, is applied to the prediction of the LOD change, and the result is compared with the predicted results obtained with the BP (back propagation) neural network model and other models. The comparison shows that the application of the GRNN to the prediction of the LOD change is highly effective and feasible.
Type testing of the Siemens Plessey electronic personal dosemeter.
Hirning, C R; Yuen, P S
1995-07-01
This paper presents the results of a laboratory assessment of the performance of a new type of personal dosimeter, the Electronic Personal Dosemeter made by Siemens Plessey Controls Limited. Twenty pre-production dosimeters and a reader were purchased by Ontario Hydro for the assessment. Tests were performed on radiological performance, including reproducibility, accuracy, linearity, detection threshold, energy response, angular response, neutron response, and response time. There were also tests on the effects of a variety of environmental factors, such as temperature, humidity, pulsed magnetic and electric fields, low- and high-frequency electromagnetic fields, light exposure, drop impact, vibration, and splashing. Other characteristics that were tested were alarm volume, clip force, and battery life. The test results were compared with the relevant requirements of three standards: an Ontario Hydro standard for personal alarming dosimeters, an International Electrotechnical Commission draft standard for direct reading personal dose monitors, and an International Electrotechnical Commission standard for thermoluminescence dosimetry systems for personal monitoring. In general, the performance of the Electronic Personal Dosemeter was found to be quite acceptable: it met most of the relevant requirements of the three standards. However, the following deficiencies were found: slow response time; sensitivity to high-frequency electromagnetic fields; poor resistance to dropping; and an alarm that was not loud enough. In addition, the response of the electronic personal dosimeter to low-energy beta rays may be too low for some applications. Problems were experienced with the reliability of operation of the pre-production dosimeters used in these tests.
Modulation Transfer Function (MTF) measurement techniques for lenses and linear detector arrays
NASA Technical Reports Server (NTRS)
Schnabel, J. J., Jr.; Kaishoven, J. E., Jr.; Tom, D.
1984-01-01
The application is the determination of the Modulation Transfer Function (MTF) of linear detector arrays. Setting up such a system requires knowledge of the MTF of the imaging lens, and a procedure for this measurement using standard optical laboratory equipment is described. Given this information, several possible approaches to MTF measurement for linear arrays are described. The knife-edge method is then described in detail.
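In the knife-edge approach, the sampled edge-spread function is differentiated to obtain the line-spread function, whose Fourier-transform magnitude gives the MTF. The sketch below assumes a synthetic, already-extracted edge profile; dividing the result by the separately measured lens MTF would isolate the detector-array contribution.

```python
import numpy as np

def mtf_from_edge(esf, dx=1.0):
    """Estimate the MTF from an edge-spread function (ESF) sampled at pitch dx:
    differentiate to get the line-spread function (LSF), window it, take |FFT|."""
    lsf = np.gradient(esf, dx)
    lsf = lsf * np.hanning(len(lsf))         # taper to suppress truncation ripple
    mtf = np.abs(np.fft.rfft(lsf))
    mtf /= mtf[0]                            # normalize to unity at zero frequency
    freqs = np.fft.rfftfreq(len(lsf), d=dx)  # cycles per pixel (or per mm if dx in mm)
    return freqs, mtf

# Synthetic blurred edge standing in for measured knife-edge data.
x = np.arange(-32, 32)
esf = 0.5 * (1 + np.tanh(x / 2.0))
freqs, mtf = mtf_from_edge(esf)
print(freqs[:5], mtf[:5])
```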
Mighell, Alan D.
2001-01-01
In theory, physical crystals can be represented by idealized mathematical lattices. Under appropriate conditions, these representations can be used for a variety of purposes such as identifying, classifying, and understanding the physical properties of materials. Critical to these applications is the ability to construct a unique representation of the lattice. The vital link that enabled this theory to be realized in practice was provided by the 1970 paper on the determination of reduced cells. This seminal paper led to a mathematical approach to lattice analysis initially based on systematic reduction procedures and the use of standard cells. Subsequently, the process evolved to a matrix approach based on group theory and linear algebra that offered a more abstract and powerful way to look at lattices and their properties. Application of the reduced cell to both database work and laboratory research at NIST was immediately successful. Currently, this cell and/or procedures based on reduction are widely and routinely used by the general scientific community: (i) for calculating standard cells for the reporting of crystalline materials, (ii) for classifying materials, (iii) in crystallographic database work, (iv) in routine x-ray and neutron diffractometry, and (v) in general crystallographic research. Especially important is its use in symmetry determination and in identification. The focus herein is on the role of the reduced cell in lattice symmetry determination. PMID:27500059
Hospital costs estimation and prediction as a function of patient and admission characteristics.
Ramiarina, Robert; Almeida, Renan Mvr; Pereira, Wagner Ca
2008-01-01
The present work analyzed the association between hospital costs and patient admission characteristics in a general public hospital in the city of Rio de Janeiro, Brazil. The unit costs method was used to estimate inpatient day costs associated with specific hospital clinics. With this aim, three "cost centers" were defined in order to group direct and indirect expenses pertaining to the clinics. After the costs were estimated, a standard linear regression model was developed for correlating cost units and their putative predictors (the patient's gender and age, the admission type (urgency/elective), ICU admission (yes/no), blood transfusion (yes/no), the admission outcome (death/no death), the complexity of the medical procedures performed, and a risk-adjustment index). Data were collected for 3100 patients, January 2001-January 2003. Average inpatient costs across clinics ranged from (US$) 1135 [Orthopedics] to 3101 [Cardiology]. Costs increased with the risk-adjustment index in all clinics, and the index was statistically significant in all clinics except Urology, General surgery, and Clinical medicine. The occupation rate was inversely correlated with costs, and age had no association with costs. The adjusted percentage of explained variance varied between 36.3% [Clinical medicine] and 55.1% [Thoracic surgery clinic]. The estimates are an important step towards the standardization of hospital costs calculation, especially for countries that lack formal hospital accounting systems.
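A standard linear regression of per-admission cost on the listed predictors takes only a few lines to set up; the synthetic data, variable names, and coefficients below are placeholders invented for illustration, not the study's data.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Hypothetical admissions data mirroring the predictors described in the abstract.
rng = np.random.default_rng(1)
n = 500
df = pd.DataFrame({
    "age": rng.integers(18, 90, n),
    "male": rng.integers(0, 2, n),
    "urgent": rng.integers(0, 2, n),
    "icu": rng.integers(0, 2, n),
    "transfusion": rng.integers(0, 2, n),
    "death": rng.integers(0, 2, n),
    "complexity": rng.normal(1.0, 0.3, n),
    "risk_index": rng.normal(0.0, 1.0, n),
})
# Synthetic cost driven mainly by the risk index and complexity, echoing the reported findings.
df["cost"] = 1500 + 800 * df["risk_index"] + 600 * df["complexity"] + rng.normal(0, 400, n)

X = sm.add_constant(df.drop(columns="cost"))
model = sm.OLS(df["cost"], X).fit()
print(model.rsquared_adj)   # adjusted share of explained variance
print(model.params)
```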
Satisfying Friendship Maintenance Expectations: The Role of Friendship Standards and Biological Sex
ERIC Educational Resources Information Center
Hall, Jeffrey A.; Larson, Kiley A.; Watts, Amber
2011-01-01
The ideal standards model predicts linear relationships among friendship standards, expectation fulfillment, and relationship satisfaction. Using a diary method, 197 participants reported on expectation fulfillment in interactions with one best, one close, and one casual friend (N = 591) over five days (2,388 interactions). Using multilevel…
A Thermodynamic Theory Of Solid Viscoelasticity. Part 1: Linear Viscoelasticity.
NASA Technical Reports Server (NTRS)
Freed, Alan D.; Leonov, Arkady I.
2002-01-01
The present series of three consecutive papers develops a general theory for linear and finite solid viscoelasticity. Because the most important objects for nonlinear studies are rubber-like materials, the general approach is specified in a form convenient for solving problems important for many industries that involve rubber-like materials. General linear and nonlinear theories for non-isothermal deformations of viscoelastic solids are developed based on the quasi-linear approach of non-equilibrium thermodynamics. In this, the first paper of the series, we analyze non-isothermal linear viscoelasticity, which is applicable in a range of small strains not only to all synthetic polymers and bio-polymers but also to some non-polymeric materials. Although the linear case seems to be well developed, there still are some reasons to implement a thermodynamic derivation of constitutive equations for solid-like, non-isothermal, linear viscoelasticity. The most important is the thermodynamic modeling of thermo-rheological complexity, i.e., different temperature dependences of relaxation parameters in various parts of the relaxation spectrum. A special structure of interaction matrices is established for the different physical mechanisms contributing to the normal relaxation modes. This structure seems to be in accord with observations, and creates a simple mathematical framework for both continuum and molecular theories of thermo-rheologically complex relaxation phenomena. Finally, a unified approach is briefly discussed that, in principle, allows combining both the long time (discrete) and short time (continuous) descriptions of relaxation behaviors for polymers in the rubbery and glassy regions.
General job stress: a unidimensional measure and its non-linear relations with outcome variables.
Yankelevich, Maya; Broadfoot, Alison; Gillespie, Jennifer Z; Gillespie, Michael A; Guidroz, Ashley
2012-04-01
This article aims to examine the non-linear relations between a general measure of job stress [Stress in General (SIG)] and two outcome variables: intentions to quit and job satisfaction. In so doing, we also re-examine the factor structure of the SIG and determine that, as a two-factor scale, it obscures non-linear relations with outcomes. Thus, in this research, we not only test for non-linear relations between stress and outcome variables but also present an updated version of the SIG scale. Using two distinct samples of working adults (sample 1, N = 589; sample 2, N = 4322), results indicate that a more parsimonious eight-item SIG has better model-data fit than the 15-item two-factor SIG and that the eight-item SIG has non-linear relations with job satisfaction and intentions to quit. Specifically, the revised SIG has an inverted curvilinear J-shaped relation with job satisfaction such that job satisfaction drops precipitously after a certain level of stress; the SIG has a J-shaped curvilinear relation with intentions to quit such that turnover intentions increase exponentially after a certain level of stress. Copyright © 2011 John Wiley & Sons, Ltd.
NASA Astrophysics Data System (ADS)
Dima, Alexandru; Vernizzi, Filippo
2018-05-01
Screening mechanisms are essential features of dark energy models mediating a fifth force on large scales. We study the regime of strong scalar field nonlinearities, known as Vainshtein screening, in the most general scalar-tensor theories propagating a single scalar degree of freedom. We first develop an effective approach to parametrize cosmological perturbations beyond linear order for these theories. In the quasistatic limit, the fully nonlinear effective Lagrangian contains six independent terms, one of which starts at cubic order in perturbations. We compute the two gravitational potentials around a spherical body. Outside and near the body, screening reproduces standard gravity, with a modified gravitational coupling. Inside the body, the two potentials are different and depend on the density profile, signalling the breaking of the Vainshtein screening. We provide the most general expressions for these modifications, revising and extending previous results. We apply our findings to show that the combination of the GW170817 event, the Hulse-Taylor pulsar, and stellar structure physics constrains the parameters of these general theories at the level of 10^-1, and of Gleyzes-Langlois-Piazza-Vernizzi theories at the level of 10^-2.
40 CFR 439.3 - General pretreatment standards.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 40 Protection of Environment 29 2010-07-01 2010-07-01 false General pretreatment standards. 439.3 Section 439.3 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) EFFLUENT GUIDELINES AND STANDARDS PHARMACEUTICAL MANUFACTURING POINT SOURCE CATEGORY General § 439.3 General pretreatment...
Rönnegård, L; Felleki, M; Fikse, W F; Mulder, H A; Strandberg, E
2013-04-01
Trait uniformity, or micro-environmental sensitivity, may be studied through individual differences in residual variance. These differences appear to be heritable, and the need exists, therefore, to fit models to predict breeding values explaining differences in residual variance. The aim of this paper is to estimate breeding values for micro-environmental sensitivity (vEBV) in milk yield and somatic cell score, and their associated variance components, on a large dairy cattle data set having more than 1.6 million records. Estimation of variance components, ordinary breeding values, and vEBV was performed using standard variance component estimation software (ASReml), applying the methodology for double hierarchical generalized linear models. Estimation using ASReml took less than 7 d on a Linux server. The genetic standard deviations for residual variance were 0.21 and 0.22 for somatic cell score and milk yield, respectively, which indicate moderate genetic variance for residual variance and imply that a standard deviation change in vEBV for one of these traits would alter the residual variance by 20%. This study shows that estimation of variance components, estimated breeding values and vEBV, is feasible for large dairy cattle data sets using standard variance component estimation software. The possibility to select for uniformity in Holstein dairy cattle based on these estimates is discussed. Copyright © 2013 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
Yoon, Donhee; Lee, Dongkun; Lee, Jong-Hyeon; Cha, Sangwon; Oh, Han Bin
2015-01-30
Quantifying polymers by matrix-assisted laser desorption/ionization time-of-flight mass spectrometry (MALDI-TOFMS) with a conventional crystalline matrix generally suffers from poor sample-to-sample or shot-to-shot reproducibility. An ionic-liquid matrix has been demonstrated to mitigate these reproducibility issues by providing a homogeneous sample surface, which is useful for quantifying polymers. In the present study, we evaluated the use of an ionic-liquid matrix, i.e., 1-methylimidazolium α-cyano-4-hydroxycinnamate (1-MeIm-CHCA), to quantify polyhexamethylene guanidine (PHMG) samples, which pose a critical health hazard when inhaled in the form of droplets. MALDI-TOF mass spectra were acquired for PHMG oligomers using a variety of ionic-liquid matrices including 1-MeIm-CHCA. Calibration curves were constructed by plotting the sum of the PHMG oligomer peak areas versus PHMG sample concentration with a variety of peptide internal standards. Compared with the conventional crystalline matrix, the 1-MeIm-CHCA ionic-liquid matrix had much better reproducibility (lower standard deviations). Furthermore, by using an internal peptide standard, good linear calibration plots could be obtained over a range of PHMG concentrations of at least 4 orders of magnitude. This study successfully demonstrated that PHMG samples can be quantitatively characterized by MALDI-TOFMS with an ionic-liquid matrix and an internal standard. Copyright © 2014 John Wiley & Sons, Ltd.
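The calibration step described above amounts to fitting the internal-standard-normalized peak-area ratio against concentration. A minimal sketch follows, with invented numbers standing in for measured peak areas.

```python
import numpy as np

# Hypothetical calibration data: PHMG concentration (ng/mL) spanning four orders
# of magnitude, summed oligomer peak areas, and the internal-standard peptide area.
conc = np.array([1, 10, 100, 1000, 10000], dtype=float)
phmg_area = np.array([2.1e3, 2.0e4, 2.2e5, 1.9e6, 2.1e7])
istd_area = np.array([1.0e5, 1.1e5, 0.9e5, 1.0e5, 1.05e5])

ratio = phmg_area / istd_area                     # normalizes shot-to-shot variation
slope, intercept = np.polyfit(np.log10(conc), np.log10(ratio), 1)
pred = slope * np.log10(conc) + intercept
resid = np.log10(ratio) - pred
r2 = 1 - np.sum(resid ** 2) / np.sum((np.log10(ratio) - np.log10(ratio).mean()) ** 2)
print(f"log-log slope = {slope:.3f}, R^2 = {r2:.4f}")

def quantify(sample_area, sample_istd_area):
    """Back-calculate a sample concentration from its area ratio."""
    return 10 ** ((np.log10(sample_area / sample_istd_area) - intercept) / slope)

print(quantify(5.0e5, 1.0e5))   # estimated concentration of an unknown sample
```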
Microbioassay of Antimicrobial Agents
Simon, Harold J.; Yin, E. Jong
1970-01-01
A previously described agar-diffusion technique for microbioassay of antimicrobial agents has been modified to increase sensitivity of the technique and to extend the range of antimicrobial agents to which it is applicable. This microtechnique requires only 0.02 ml of an unknown test sample for assay, and is capable of measuring minute concentrations of antibiotics in buffer, serum, and urine. In some cases, up to a 20-fold increase in sensitivity is gained relative to other published standardized methods, and the error of this method is less than ±5%. Buffer standard curves have been established for this technique, concurrently with serum standard curves, yielding information on antimicrobial serum-binding and demonstrating linearity of the data points relative to the estimated regression line for the microconcentration ranges covered by this technique. This microassay technique is particularly well suited for pediatric research and for other investigations where sample volumes are small and quantitative accuracy is desired. Dilution of clinical samples to attain concentrations falling within the range of this assay makes the technique readily adaptable and suitable for general clinical pharmacological studies. The microassay technique has been standardized in buffer solutions and in normal human serum pools for the following antimicrobials: ampicillin, methicillin, penicillin G, oxacillin, cloxacillin, dicloxacillin, cephaloglycin, cephalexin, cephaloridine, cephalothin, erythromycin, rifamycin amino methyl piperazine, kanamycin, neomycin, streptomycin, colistin, polymyxin B, doxycycline, minocycline, oxytetracycline, tetracycline, and chloramphenicol. PMID:4986725
Fernandes-Monteiro, Alice G; Trindade, Gisela F; Yamamura, Anna MY; Moreira, Otacilio C; de Paula, Vanessa S; Duarte, Ana Cláudia M; Britto, Constança; Lima, Sheila Maria B
2015-01-01
The development and production of viral vaccines, in general, involve several steps that require monitoring of the viral load throughout the entire process. Applying a 2-step quantitative reverse transcription real time PCR assay (RT-qPCR), viral load can be measured and monitored in a few hours. In this context, the development, standardization and validation of an RT-qPCR test to quickly and efficiently quantify yellow fever virus (YFV) in all stages of vaccine production are extremely important. To serve this purpose we used a plasmid construction containing the NS5 region from 17DD YFV to generate the standard curve and to evaluate parameters such as linearity, precision and specificity against other flaviviruses. Furthermore, we defined the limit of detection as 25 copies/reaction and the limit of quantification as 100 copies/reaction for the test. To ensure the quality of the method, reference controls were established in order to avoid false negative results. The qRT-PCR technique based on the use of TaqMan probes herein standardized proved to be effective for determining yellow fever viral load both in vivo and in vitro, thus becoming a very important tool to assure quality control of vaccine production and the evaluation of viremia after vaccination or YF disease. PMID:26011746
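Viral load is read off a standard curve of quantification cycle (Ct) versus log10 copy number built from serial dilutions of the plasmid standard. The sketch below uses invented Ct values purely to illustrate the calculation of slope, amplification efficiency, and back-calculated copy number.

```python
import numpy as np

# Hypothetical standard-curve data from serial dilutions of the NS5 plasmid standard.
copies = np.array([1e2, 1e3, 1e4, 1e5, 1e6, 1e7])
ct = np.array([33.1, 29.8, 26.4, 23.0, 19.7, 16.3])

slope, intercept = np.polyfit(np.log10(copies), ct, 1)
efficiency = 10 ** (-1.0 / slope) - 1.0          # ~1.0 corresponds to 100% efficiency
print(f"slope = {slope:.2f}, efficiency = {efficiency:.1%}")

def copies_from_ct(sample_ct):
    """Interpolate viral load (copies/reaction) for an unknown sample."""
    return 10 ** ((sample_ct - intercept) / slope)

print(copies_from_ct(25.0))
```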
NASA Technical Reports Server (NTRS)
Park, K. C.; Belvin, W. Keith
1990-01-01
A general form for the first-order representation of the continuous second-order linear structural-dynamics equations is introduced to derive a corresponding form of first-order continuous Kalman filtering equations. Time integration of the resulting equations is carried out via a set of linear multistep integration formulas. It is shown that a judicious combined selection of computational paths and the undetermined matrices introduced in the general form of the first-order linear structural systems leads to a class of second-order discrete Kalman filtering equations involving only symmetric sparse N x N solution matrices.
Second-order discrete Kalman filtering equations for control-structure interaction simulations
NASA Technical Reports Server (NTRS)
Park, K. C.; Belvin, W. Keith; Alvin, Kenneth F.
1991-01-01
A general form for the first-order representation of the continuous, second-order linear structural dynamics equations is introduced in order to derive a corresponding form of first-order Kalman filtering equations (KFE). Time integration of the resulting first-order KFE is carried out via a set of linear multistep integration formulas. It is shown that a judicious combined selection of computational paths and the undetermined matrices introduced in the general form of the first-order linear structural systems leads to a class of second-order discrete KFE involving only symmetric, N x N solution matrix.
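For context, the sketch below runs a conventional first-order discrete Kalman filter on a single structural mode (a mass-spring-damper with a noisy displacement measurement). It is not the authors' symmetric second-order formulation, only the baseline it reorganizes; the model matrices, discretization, and noise levels are illustrative.

```python
import numpy as np

dt, wn, zeta = 0.01, 2 * np.pi, 0.05
A = np.array([[0.0, 1.0], [-wn**2, -2 * zeta * wn]])
F = np.eye(2) + dt * A                 # simple forward-Euler discretization
H = np.array([[1.0, 0.0]])             # displacement measurement
Q = 1e-6 * np.eye(2)                   # process-noise covariance (assumed)
R = np.array([[1e-3]])                 # measurement-noise covariance (assumed)

rng = np.random.default_rng(2)
x_true = np.array([1.0, 0.0])
x_hat, P = np.zeros(2), np.eye(2)
for _ in range(1000):
    x_true = F @ x_true                                   # truth propagation
    z = H @ x_true + rng.normal(0, np.sqrt(R[0, 0]))      # noisy displacement sample
    # Predict
    x_hat = F @ x_hat
    P = F @ P @ F.T + Q
    # Update
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x_hat = x_hat + K @ (z - H @ x_hat)
    P = (np.eye(2) - K @ H) @ P
print(x_true, x_hat)
```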
Implementing Linear Algebra Related Algorithms on the TI-92+ Calculator.
ERIC Educational Resources Information Center
Alexopoulos, John; Abraham, Paul
2001-01-01
Demonstrates a less utilized feature of the TI-92+: its natural and powerful programming language. Shows how to implement several linear algebra related algorithms including the Gram-Schmidt process, Least Squares Approximations, Wronskians, Cholesky Decompositions, and Generalized Linear Least Square Approximations with QR Decompositions.…
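As an example of the kind of routine discussed, a classical Gram-Schmidt orthonormalization takes only a few lines; it is written here in Python rather than the TI-92+ language, purely as a sketch of the algorithm.

```python
import numpy as np

def gram_schmidt(A):
    """Classical Gram-Schmidt: return an orthonormal basis for the columns of A."""
    Q = np.zeros_like(A, dtype=float)
    for j in range(A.shape[1]):
        v = A[:, j].astype(float)
        for i in range(j):
            v -= (Q[:, i] @ A[:, j]) * Q[:, i]   # remove components along earlier vectors
        Q[:, j] = v / np.linalg.norm(v)
    return Q

A = np.array([[1.0, 1.0], [1.0, 0.0], [0.0, 1.0]])
Q = gram_schmidt(A)
print(np.round(Q.T @ Q, 10))   # should be the 2x2 identity
```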
Linear and Nonlinear Thinking: A Multidimensional Model and Measure
ERIC Educational Resources Information Center
Groves, Kevin S.; Vance, Charles M.
2015-01-01
Building upon previously developed and more general dual-process models, this paper provides empirical support for a multidimensional thinking style construct comprised of linear thinking and multiple dimensions of nonlinear thinking. A self-report assessment instrument (Linear/Nonlinear Thinking Style Profile; LNTSP) is presented and…
ERIC Educational Resources Information Center
Stohlmann, Micah Stephen
2012-01-01
This case study explored the impact of a standards-based mathematics and pedagogy class on preservice elementary teachers' beliefs and conceptual subject matter knowledge of linear functions. The framework for the standards-based mathematics and pedagogy class in this study involved the National Council of Teachers of Mathematics Standards,…
A GPS Phase-Locked Loop Performance Metric Based on the Phase Discriminator Output
Stevanovic, Stefan; Pervan, Boris
2018-01-01
We propose a novel GPS phase-lock loop (PLL) performance metric based on the standard deviation of tracking error (defined as the discriminator’s estimate of the true phase error), and explain its advantages over the popular phase jitter metric using theory, numerical simulation, and experimental results. We derive an augmented GPS phase-lock loop (PLL) linear model, which includes the effect of coherent averaging, to be used in conjunction with this proposed metric. The augmented linear model allows more accurate calculation of tracking error standard deviation in the presence of additive white Gaussian noise (AWGN) as compared to traditional linear models. The standard deviation of tracking error, with a threshold corresponding to half of the arctangent discriminator pull-in region, is shown to be a more reliable/robust measure of PLL performance under interference conditions than the phase jitter metric. In addition, the augmented linear model is shown to be valid up until this threshold, which facilitates efficient performance prediction, so that time-consuming direct simulations and costly experimental testing can be reserved for PLL designs that are much more likely to be successful. The effect of varying receiver reference oscillator quality on the tracking error metric is also considered. PMID:29351250
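The proposed metric is the sample standard deviation of the discriminator output compared against a fixed threshold. The sketch below uses a two-quadrant arctangent discriminator on simulated prompt correlator outputs; the signal model, noise level, and the pi/4 threshold (half of the +/- pi/2 pull-in region) are simplifying assumptions, not the paper's exact setup.

```python
import numpy as np

def tracking_error_std(i_prompt, q_prompt):
    """Standard deviation of the two-quadrant arctangent discriminator output
    (the loop's own estimate of residual phase error), in radians."""
    return np.std(np.arctan(q_prompt / i_prompt))

# Hypothetical prompt correlator outputs under additive white Gaussian noise.
rng = np.random.default_rng(3)
true_phase_err = 0.05 * rng.standard_normal(2000)   # small residual phase error (rad)
amp, noise = 100.0, 15.0
i_p = amp * np.cos(true_phase_err) + noise * rng.standard_normal(2000)
q_p = amp * np.sin(true_phase_err) + noise * rng.standard_normal(2000)

sigma = tracking_error_std(i_p, q_p)
threshold = np.pi / 4   # half of the +/- pi/2 pull-in region of the atan discriminator
print(f"sigma = {sigma:.3f} rad, loss-of-lock flag = {sigma > threshold}")
```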
Linearization instability for generic gravity in AdS spacetime
NASA Astrophysics Data System (ADS)
Altas, Emel; Tekin, Bayram
2018-01-01
In general relativity, perturbation theory about a background solution fails if the background spacetime has a Killing symmetry and a compact spacelike Cauchy surface. This failure, dubbed linearization instability, shows itself as non-integrability of the perturbative infinitesimal deformation to a finite deformation of the background. Namely, the linearized field equations have spurious solutions which cannot be obtained from the linearization of exact solutions. In practice, one can show the failure of the linear perturbation theory by showing that a certain quadratic (integral) constraint on the linearized solutions is not satisfied. For non-compact Cauchy surfaces the situation is different; for example, Minkowski space, having a non-compact Cauchy surface, is linearization stable. Here we study the linearization instability in generic metric theories of gravity where Einstein's theory is modified with additional curvature terms. We show that, unlike the case of general relativity, some modified theories show linearization instability about their anti-de Sitter backgrounds even when the Cauchy surface is non-compact. The recent D-dimensional critical and three-dimensional chiral gravity theories are two such examples. This observation sheds light on the paradoxical behavior of vanishing conserved charges (mass, angular momenta) for non-vacuum solutions, such as black holes, in these theories.
42 CFR 493.1469 - Standard: Cytology general supervisor qualifications.
Code of Federal Regulations, 2010 CFR
2010-10-01
... 42 Public Health 5 2010-10-01 2010-10-01 false Standard: Cytology general supervisor... Nonwaived Testing Laboratories Performing High Complexity Testing § 493.1469 Standard: Cytology general supervisor qualifications. The cytology general supervisor must be qualified to supervise cytology services...
42 CFR 493.1469 - Standard: Cytology general supervisor qualifications.
Code of Federal Regulations, 2011 CFR
2011-10-01
... 42 Public Health 5 2011-10-01 2011-10-01 false Standard: Cytology general supervisor... Nonwaived Testing Laboratories Performing High Complexity Testing § 493.1469 Standard: Cytology general supervisor qualifications. The cytology general supervisor must be qualified to supervise cytology services...
42 CFR 493.1469 - Standard: Cytology general supervisor qualifications.
Code of Federal Regulations, 2014 CFR
2014-10-01
... 42 Public Health 5 2014-10-01 2014-10-01 false Standard: Cytology general supervisor... Nonwaived Testing Laboratories Performing High Complexity Testing § 493.1469 Standard: Cytology general supervisor qualifications. The cytology general supervisor must be qualified to supervise cytology services...
42 CFR 493.1469 - Standard: Cytology general supervisor qualifications.
Code of Federal Regulations, 2012 CFR
2012-10-01
... 42 Public Health 5 2012-10-01 2012-10-01 false Standard: Cytology general supervisor... Nonwaived Testing Laboratories Performing High Complexity Testing § 493.1469 Standard: Cytology general supervisor qualifications. The cytology general supervisor must be qualified to supervise cytology services...
42 CFR 493.1469 - Standard: Cytology general supervisor qualifications.
Code of Federal Regulations, 2013 CFR
2013-10-01
... 42 Public Health 5 2013-10-01 2013-10-01 false Standard: Cytology general supervisor... Nonwaived Testing Laboratories Performing High Complexity Testing § 493.1469 Standard: Cytology general supervisor qualifications. The cytology general supervisor must be qualified to supervise cytology services...
Neoclassical transport including collisional nonlinearity.
Candy, J; Belli, E A
2011-06-10
In the standard δf theory of neoclassical transport, the zeroth-order (Maxwellian) solution is obtained analytically via the solution of a nonlinear equation. The first-order correction δf is subsequently computed as the solution of a linear, inhomogeneous equation that includes the linearized Fokker-Planck collision operator. This equation admits analytic solutions only in extreme asymptotic limits (banana, plateau, Pfirsch-Schlüter), and so must be solved numerically for realistic plasma parameters. Recently, numerical codes have appeared which attempt to compute the total distribution f more accurately than in the standard ordering by retaining some nonlinear terms related to finite-orbit width, while simultaneously reusing some form of the linearized collision operator. In this work we show that higher-order corrections to the distribution function may be unphysical if collisional nonlinearities are ignored.
Study on sampling of continuous linear system based on generalized Fourier transform
NASA Astrophysics Data System (ADS)
Li, Huiguang
2003-09-01
In the research of signals and systems, a signal's spectrum and a system's frequency characteristics can be discussed through the Fourier Transform (FT) and the Laplace Transform (LT). However, some singular signals, such as the impulse function and the signum signal, are not Riemann or Lebesgue integrable; in mathematics they are called generalized functions. This paper introduces a new definition, the Generalized Fourier Transform (GFT), and discusses generalized functions, the Fourier Transform, and the Laplace Transform within a unified framework. For a sampled continuous linear system, the paper proposes a new method to judge whether the spectrum overlaps after the generalized Fourier transform. Causal and non-causal systems are studied, and a sampling method that maintains the system's dynamic performance is presented. The results apply to both ordinary sampling and non-Nyquist sampling, and they have practical significance for research on the discretization of continuous linear systems and on non-Nyquist sampling of signals and systems. In particular, the condition for ensuring controllability and observability of MIMO continuous systems in references 13 and 14 is an example of an application of this work.
Raymond L. Czaplewski
1973-01-01
A generalized, non-linear population dynamics model of an ecosystem is used to investigate the direction of selective pressures upon a mutant by studying the competition between parent and mutant populations. The model has the advantages of considering selection as operating on the phenotype, of retaining the interaction of the mutant population with the ecosystem as a...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pribram-Jones, Aurora; Grabowski, Paul E.; Burke, Kieron
The van Leeuwen proof of linear-response time-dependent density functional theory (TDDFT) is generalized to thermal ensembles. This allows generalization to finite temperatures of the Gross-Kohn relation, the exchange-correlation kernel of TDDFT, and the fluctuation-dissipation theorem for DFT. Finally, this produces a natural method for generating new thermal exchange-correlation approximations.
ERIC Educational Resources Information Center
Bashaw, W. L., Ed.; Findley, Warren G., Ed.
This volume contains the five major addresses and subsequent discussion from the Symposium on the General Linear Models Approach to the Analysis of Experimental Data in Educational Research, which was held in 1967 in Athens, Georgia. The symposium was designed to produce systematic information, including new methodology, for dissemination to the…
ERIC Educational Resources Information Center
Dimitrov, Dimiter M.; Raykov, Tenko; AL-Qataee, Abdullah Ali
2015-01-01
This article is concerned with developing a measure of general academic ability (GAA) for high school graduates who apply to colleges, as well as with the identification of optimal weights of the GAA indicators in a linear combination that yields a composite score with maximal reliability and maximal predictive validity, employing the framework of…
Pribram-Jones, Aurora; Grabowski, Paul E.; Burke, Kieron
2016-06-08
The van Leeuwen proof of linear-response time-dependent density functional theory (TDDFT) is generalized to thermal ensembles. This allows generalization to finite temperatures of the Gross-Kohn relation, the exchange-correlation kernel of TDDFT, and the fluctuation-dissipation theorem for DFT. Finally, this produces a natural method for generating new thermal exchange-correlation approximations.
Kim, Sunny Jung; Marsch, Lisa A; Guarino, Honoria; Acosta, Michelle C; Aponte-Melendez, Yesenia
2015-12-01
Although empirical evidence for the effectiveness of technology-mediated interventions for substance use disorders is rapidly growing, the role of baseline characteristics of patients in predicting treatment outcomes of a technology-based therapy is largely unknown. Participants were randomly assigned to either standard methadone maintenance treatment or reduced standard treatment combined with the computer-based therapeutic education system (TES). An array of demographic and behavioral characteristics of participants (N=160) was measured at baseline. Opioid abstinence and treatment retention were measured weekly for a 52-week intervention period. Generalized linear models and Cox regression were used to estimate the predictive roles of baseline characteristics in predicting treatment outcomes. We found significant predictors of opioid abstinence and treatment retention within and across conditions. Among 21 baseline characteristics of participants, employment status, anxiety, and ambivalent attitudes toward substance use predicted better opioid abstinence in the reduced-standard-plus-TES condition compared to standard treatment. Participants who had used cocaine/crack in the past 30 days at baseline showed lower dropout rates in standard treatment, whereas those who had not used exhibited lower dropout rates in the reduced-standard-plus-TES condition. This study is the first randomized controlled trial evaluating, over a 12-month period, how various aspects of participant characteristics impact outcomes for treatments that do or do not include technology-based therapy. Compared to standard treatment alone, including TES as part of care was preferable for patients who were employed, highly anxious, and ambivalent about substance use, and it did not produce worse outcomes for any subgroup of participants. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
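Testing whether a baseline characteristic predicts differential benefit of the TES condition amounts to fitting an outcome model with a treatment-by-baseline interaction. The sketch below uses a logistic regression on a synthetic binary abstinence outcome; the variable names and effect sizes are invented for illustration and do not reproduce the study's GLM or Cox specifications.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical participant-level data mirroring the design: a binary condition
# indicator (reduced standard + TES vs. standard), two baseline covariates, and a
# binary end-of-treatment opioid-abstinence outcome.
rng = np.random.default_rng(4)
n = 160
df = pd.DataFrame({
    "tes": rng.integers(0, 2, n),
    "employed": rng.integers(0, 2, n),
    "anxiety": rng.normal(0.0, 1.0, n),
})
logit_p = -0.5 + 0.4 * df.tes + 0.3 * df.employed + 0.6 * df.tes * df.employed
df["abstinent"] = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

# The treatment-by-baseline interaction is what tests whether a characteristic
# (here, employment) predicts a differential benefit of the TES condition.
fit = smf.logit("abstinent ~ tes * employed + anxiety", data=df).fit(disp=False)
print(fit.params)
```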
1993-01-31
[Table-of-contents fragment from a report on neural-network linear models: Controllability and Observability; Separation of Learning and Control; Linearization via Transformation of Coordinates and Nonlinear Feedback (Main Result, Discussion); Basic Structure of a NLM; General Structure of NNLM; Linear System.]
A new line-of-sight approach to the non-linear Cosmic Microwave Background
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fidler, Christian; Koyama, Kazuya; Pettinari, Guido W., E-mail: christian.fidler@port.ac.uk, E-mail: kazuya.koyama@port.ac.uk, E-mail: guido.pettinari@gmail.com
2015-04-01
We develop the transport operator formalism, a new line-of-sight integration framework to calculate the anisotropies of the Cosmic Microwave Background (CMB) at the linear and non-linear level. This formalism utilises a transformation operator that removes all inhomogeneous propagation effects acting on the photon distribution function, thus achieving a split between perturbative collisional effects at recombination and non-perturbative line-of-sight effects at later times. The former can be computed in the framework of standard cosmological perturbation theory with a second-order Boltzmann code such as SONG, while the latter can be treated within a separate perturbative scheme allowing the use of non-linear Newtonian potentials. We thus provide a consistent framework to compute all physical effects contained in the Boltzmann equation and to combine the standard remapping approach with Boltzmann codes at any order in perturbation theory, without assuming that all sources are localised at recombination.
Steering of Frequency Standards by the Use of Linear Quadratic Gaussian Control Theory
NASA Technical Reports Server (NTRS)
Koppang, Paul; Leland, Robert
1996-01-01
Linear quadratic Gaussian control is a technique that uses Kalman filtering to estimate a state vector used for input into a control calculation. A control correction is calculated by minimizing a quadratic cost function that is dependent on both the state vector and the control amount. Different penalties, chosen by the designer, are assessed by the controller as the state vector and control amount vary from given optimal values. With this feature controllers can be designed to force the phase and frequency differences between two standards to zero either more or less aggressively depending on the application. Data will be used to show how using different parameters in the cost function analysis affects the steering and the stability of the frequency standards.
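The steering aggressiveness is set by the relative weights in the quadratic cost. A minimal sketch of the control-gain design is shown below for a two-state phase/frequency model with the state assumed known; in the full LQG scheme the gain would act on the Kalman-filter estimate of the phase and frequency differences. The matrices and weights are illustrative.

```python
import numpy as np

# State = [phase offset, frequency offset]; control adds a frequency correction each step.
dt = 1.0
A = np.array([[1.0, dt], [0.0, 1.0]])
B = np.array([[0.0], [1.0]])

def lqr_gain(A, B, Q, R, iters=500):
    """Steady-state discrete LQR gain via backward iteration of the Riccati recursion."""
    P = Q.copy()
    for _ in range(iters):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ (A - B @ K)
    return K

Q = np.diag([1.0, 1.0])          # penalties on phase and frequency error
for r in (0.1, 10.0):            # penalty on control effort: small = aggressive steering
    K = lqr_gain(A, B, Q, np.array([[r]]))
    print(f"R = {r}: gain = {K.ravel()}")
```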
DOE Office of Scientific and Technical Information (OSTI.GOV)
Barnich, Glenn; Troessaert, Cedric
2009-04-15
In the reduced phase space of electromagnetism, the generator of duality rotations in the usual Poisson bracket is shown to generate Maxwell's equations in a second, much simpler Poisson bracket. This gives rise to a hierarchy of bi-Hamiltonian evolution equations in the standard way. The result can be extended to linearized Yang-Mills theory, linearized gravity, and massless higher spin gauge fields.
Mössbauer spectra linearity improvement by sine velocity waveform followed by linearization process
NASA Astrophysics Data System (ADS)
Kohout, Pavel; Frank, Tomas; Pechousek, Jiri; Kouril, Lukas
2018-05-01
This note reports the development of a new method for linearizing Mössbauer spectra recorded with a sine drive velocity signal. Spectrum linearity is a critical parameter in determining Mössbauer spectrometer accuracy. Measuring spectra with a sine velocity axis and subsequently linearizing them increases the linearity of the spectra over a wider frequency range of the drive signal, since harmonic motion is natural for velocity transducers. The data obtained demonstrate that linearized sine spectra have lower nonlinearity and line-width parameters than spectra measured using a traditional triangular velocity signal.
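One simple way to carry out such a linearization is to assign each time channel its instantaneous sine velocity and interpolate the counts onto a uniform velocity grid. The sketch below does exactly that for one monotonic half-period; it ignores the unequal dwell time per velocity bin, which a full treatment would account for, and the numbers are synthetic.

```python
import numpy as np

def linearize_sine_spectrum(counts, v_max):
    """Map a spectrum recorded in channels uniform in time, under a sinusoidal drive
    velocity, onto a uniform velocity grid (one monotonic half-period)."""
    n = len(counts)
    t = (np.arange(n) + 0.5) / n                   # fractional time within the half-period
    v_sine = v_max * np.sin(np.pi * (t - 0.5))     # monotonic from -v_max to +v_max
    v_lin = np.linspace(-v_max, v_max, n)
    return v_lin, np.interp(v_lin, v_sine, counts)

# Synthetic single-line spectrum recorded against the sine velocity axis.
v_max, n = 10.0, 512                               # mm/s, channels
t = (np.arange(n) + 0.5) / n
v_sine = v_max * np.sin(np.pi * (t - 0.5))
counts = 1e5 * (1 - 0.2 / (1 + ((v_sine - 1.0) / 0.15) ** 2))   # Lorentzian absorption line
v_lin, counts_lin = linearize_sine_spectrum(counts, v_max)
print("line position on linear axis:", v_lin[np.argmin(counts_lin)], "mm/s")
```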
Zietze, Stefan; Müller, Rainer H; Brecht, René
2008-03-01
In order to set up a batch-to-batch-consistency analytical scheme for N-glycosylation analysis, several sample preparation steps, including enzyme digestions and fluorophore labelling, and two HPLC methods were established. The whole method scheme was standardized, evaluated and validated according to the requirements for analytical testing in early clinical drug development, using a recombinantly produced reference glycoprotein (RGP). The standardization of the methods was performed through clearly defined standard operating procedures. During evaluation of the methods, the major interest was in determining the loss of oligosaccharides within the analytical scheme. Validation of the methods was performed with respect to specificity, linearity, repeatability, LOD and LOQ. Because reference N-glycan standards were not available, a statistical approach was chosen to derive accuracy from the linearity data. After finishing the validation procedure, defined limits for method variability could be calculated, and differences observed in consistency analysis could be separated into significant and incidental ones.
Query construction, entropy, and generalization in neural-network models
NASA Astrophysics Data System (ADS)
Sollich, Peter
1994-05-01
We study query construction algorithms, which aim at improving the generalization ability of systems that learn from examples by choosing optimal, nonredundant training sets. We set up a general probabilistic framework for deriving such algorithms from the requirement of optimizing a suitable objective function; specifically, we consider the objective functions entropy (or information gain) and generalization error. For two learning scenarios, the high-low game and the linear perceptron, we evaluate the generalization performance obtained by applying the corresponding query construction algorithms and compare it to training on random examples. We find qualitative differences between the two scenarios due to the different structure of the underlying rules (nonlinear and "noninvertible" versus linear); in particular, for the linear perceptron, random examples lead to the same generalization ability as a sequence of queries in the limit of an infinite number of examples. We also investigate learning algorithms which are ill matched to the learning environment and find that, in this case, minimum entropy queries can in fact yield a lower generalization ability than random examples. Finally, we study the efficiency of single queries and its dependence on the learning history, i.e., on whether the previous training examples were generated randomly or by querying, and the difference between globally and locally optimal query construction.
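For the high-low game mentioned above, the maximum-information-gain query is simply the midpoint of the interval of thresholds still consistent with the labels seen so far, so the advantage of query construction over random examples can be illustrated in a few lines; the simulation parameters below are arbitrary.

```python
import numpy as np

def run(n_examples, strategy, theta, rng):
    """High-low game: the learner keeps the interval of thresholds consistent with the
    labels seen so far; generalization error is proportional to the interval width."""
    lo, hi = 0.0, 1.0
    for _ in range(n_examples):
        x = 0.5 * (lo + hi) if strategy == "query" else rng.uniform(0.0, 1.0)
        if x > theta:          # labeled "high": the threshold must lie below x
            hi = min(hi, x)
        else:                  # labeled "low": the threshold must lie above x
            lo = max(lo, x)
    return hi - lo

rng = np.random.default_rng(5)
for strategy in ("random", "query"):
    widths = [run(20, strategy, rng.uniform(0, 1), rng) for _ in range(2000)]
    print(strategy, np.mean(widths))   # bisection queries shrink the interval exponentially faster
```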
41 CFR 50-204.2 - General safety and health standards.
Code of Federal Regulations, 2013 CFR
2013-07-01
... health standards. 50-204.2 Section 50-204.2 Public Contracts and Property Management Other Provisions Relating to Public Contracts PUBLIC CONTRACTS, DEPARTMENT OF LABOR 204-SAFETY AND HEALTH STANDARDS FOR FEDERAL SUPPLY CONTRACTS General Safety and Health Standards § 50-204.2 General safety and health...
Key-Generation Algorithms for Linear Piece In Hand Matrix Method
NASA Astrophysics Data System (ADS)
Tadaki, Kohtaro; Tsujii, Shigeo
The linear Piece In Hand (PH, for short) matrix method with random variables was proposed in our former work. It is a general prescription which can be applicable to any type of multivariate public-key cryptosystems for the purpose of enhancing their security. Actually, we showed, in an experimental manner, that the linear PH matrix method with random variables can certainly enhance the security of HFE against the Gröbner basis attack, where HFE is one of the major variants of multivariate public-key cryptosystems. In 1998 Patarin, Goubin, and Courtois introduced the plus method as a general prescription which aims to enhance the security of any given MPKC, just like the linear PH matrix method with random variables. In this paper we prove the equivalence between the plus method and the primitive linear PH matrix method, which is introduced by our previous work to explain the notion of the PH matrix method in general in an illustrative manner and not for a practical use to enhance the security of any given MPKC. Based on this equivalence, we show that the linear PH matrix method with random variables has the substantial advantage over the plus method with respect to the security enhancement. In the linear PH matrix method with random variables, the three matrices, including the PH matrix, play a central role in the secret-key and public-key. In this paper, we clarify how to generate these matrices and thus present two probabilistic polynomial-time algorithms to generate these matrices. In particular, the second one has a concise form, and is obtained as a byproduct of the proof of the equivalence between the plus method and the primitive linear PH matrix method.
... morphea, linear scleroderma, and scleroderma en coup de sabre. Each type can be subdivided further and some ... described for morphea. Linear scleroderma en coup de sabre is the term generally applied when children have ...
Research on Standard Errors of Equating Differences. Research Report. ETS RR-10-25
ERIC Educational Resources Information Center
Moses, Tim; Zhang, Wenmin
2010-01-01
In this paper, the "standard error of equating difference" (SEED) is described in terms of originally proposed kernel equating functions (von Davier, Holland, & Thayer, 2004) and extended to incorporate traditional linear and equipercentile functions. These derivations expand on prior developments of SEEDs and standard errors of equating and…
Using structural equation modeling for network meta-analysis.
Tu, Yu-Kang; Wu, Yun-Chun
2017-07-14
Network meta-analysis overcomes the limitations of traditional pair-wise meta-analysis by incorporating all available evidence into a general statistical framework for simultaneous comparisons of several treatments. Currently, network meta-analyses are undertaken either within Bayesian hierarchical linear models or within frequentist generalized linear mixed models. Structural equation modeling (SEM) is a statistical method originally developed for modeling causal relations among observed and latent variables. As the random effect is explicitly modeled as a latent variable in SEM, it is very flexible for analysts to specify complex random effect structures and to place linear and nonlinear constraints on parameters. The aim of this article is to show how to undertake a network meta-analysis within the statistical framework of SEM. We used an example dataset to demonstrate that the standard fixed and random effect network meta-analysis models can be easily implemented in SEM. It contains results of 26 studies that directly compared three treatment groups A, B and C for prevention of first bleeding in patients with liver cirrhosis. We also showed that a new approach to network meta-analysis based on the technique of the unrestricted weighted least squares (UWLS) method can be undertaken using SEM. For both the fixed and random effect network meta-analysis, SEM yielded coefficients and confidence intervals similar to those reported in the previous literature. The point estimates of the two UWLS models were identical to those in the fixed effect model, but the confidence intervals were greater. This is consistent with results from the traditional pairwise meta-analyses. Compared to the UWLS model with a common variance adjustment factor, the UWLS model with a unique variance adjustment factor has greater confidence intervals when the heterogeneity is larger in the pairwise comparison; it thus reflects the difference in heterogeneity within each comparison. SEM provides a very flexible framework for univariate and multivariate meta-analysis, and its potential as a powerful tool for advanced meta-analysis is still to be explored.
Generalized Clifford Algebras as Algebras in Suitable Symmetric Linear Gr-Categories
NASA Astrophysics Data System (ADS)
Cheng, Tao; Huang, Hua-Lin; Yang, Yuping
2016-01-01
By viewing Clifford algebras as algebras in some suitable symmetric Gr-categories, Albuquerque and Majid were able to give a new derivation of some well known results about Clifford algebras and to generalize them. Along the same line, Bulacu observed that Clifford algebras are weak Hopf algebras in the aforementioned categories and obtained other interesting properties. The aim of this paper is to study generalized Clifford algebras in a similar manner and extend the results of Albuquerque, Majid and Bulacu to the generalized setting. In particular, by taking full advantage of the gauge transformations in symmetric linear Gr-categories, we derive the decomposition theorem and provide categorical weak Hopf structures for generalized Clifford algebras in a conceptual and simpler manner.
A Block-LU Update for Large-Scale Linear Programming
1990-01-01
linear programming problems. Results are given from runs on the Cray Y-MP. 1. Introduction. We wish to use the simplex method [Dan63] to solve the... standard linear program: minimize c^T x subject to Ax = b, l <= x <= u, where A is an m-by-n matrix and c, x, l, u, and b are of appropriate dimension. The simplex... the identity matrix. The basis is used to solve for the search direction y and the dual variables π in the following linear systems: B_k y = a_q (1.2) and
Linear and non-linear perturbations in dark energy models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Escamilla-Rivera, Celia; Casarini, Luciano; Fabris, Júlio C.
2016-11-01
In this work we discuss observational aspects of three time-dependent parameterisations of the dark energy equation of state w(z). In order to determine the dynamics associated with these models, we calculate their background evolution and perturbations in a scalar field representation. After performing a complete treatment of linear perturbations, we also show that the non-linear contribution of the selected w(z) parameterisations to the matter power spectra is almost the same for all scales, with no significant difference from the predictions of the standard ΛCDM model.
The Joker: A Custom Monte Carlo Sampler for Binary-star and Exoplanet Radial Velocity Data
NASA Astrophysics Data System (ADS)
Price-Whelan, Adrian M.; Hogg, David W.; Foreman-Mackey, Daniel; Rix, Hans-Walter
2017-03-01
Given sparse or low-quality radial velocity measurements of a star, there are often many qualitatively different stellar or exoplanet companion orbit models that are consistent with the data. The consequent multimodality of the likelihood function leads to extremely challenging search, optimization, and Markov chain Monte Carlo (MCMC) posterior sampling over the orbital parameters. Here we create a custom Monte Carlo sampler for sparse or noisy radial velocity measurements of two-body systems that can produce posterior samples for orbital parameters even when the likelihood function is poorly behaved. The six standard orbital parameters for a binary system can be split into four nonlinear parameters (period, eccentricity, argument of pericenter, phase) and two linear parameters (velocity amplitude, barycenter velocity). We capitalize on this by building a sampling method in which we densely sample the prior probability density function (pdf) in the nonlinear parameters and perform rejection sampling using a likelihood function marginalized over the linear parameters. With sparse or uninformative data, the sampling obtained by this rejection sampling is generally multimodal and dense. With informative data, the sampling becomes effectively unimodal but too sparse: in these cases we follow the rejection sampling with standard MCMC. The method produces correct samplings in orbital parameters for data that include as few as three epochs. The Joker can therefore be used to produce proper samplings of multimodal pdfs, which are still informative and can be used in hierarchical (population) modeling. We give some examples that show how the posterior pdf depends sensitively on the number and time coverage of the observations and their uncertainties.
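The core idea above, dense prior sampling over the nonlinear parameters with the linear parameters handled analytically, can be illustrated with a toy circular-orbit model in which only the period is nonlinear. The sketch below profiles the linear amplitudes by least squares instead of marginalizing them, uses an invented log-uniform period prior, and is in no way the actual implementation of The Joker.

```python
import numpy as np

# Toy model: v(t) = A*sin(2*pi*t/P) + B*cos(2*pi*t/P) + v0, with (A, B, v0) linear.
rng = np.random.default_rng(6)
t = np.sort(rng.uniform(0, 200, 5))                  # five sparse epochs (days)
P_true, A_true, B_true, v0_true, sig = 23.4, 2.0, 1.0, 10.0, 0.3
v = A_true * np.sin(2 * np.pi * t / P_true) + B_true * np.cos(2 * np.pi * t / P_true) + v0_true
v += sig * rng.standard_normal(t.size)

n_prior = 50_000
P_samples = np.exp(rng.uniform(np.log(1.0), np.log(500.0), n_prior))   # log-uniform period prior
chi2 = np.empty(n_prior)
for i, P in enumerate(P_samples):
    M = np.column_stack([np.sin(2 * np.pi * t / P), np.cos(2 * np.pi * t / P), np.ones_like(t)])
    resid = v - M @ np.linalg.lstsq(M, v, rcond=None)[0]   # best-fit linear amplitudes
    chi2[i] = np.sum((resid / sig) ** 2)

accept = rng.uniform(size=n_prior) < np.exp(-0.5 * (chi2 - chi2.min()))
posterior_P = P_samples[accept]
print(posterior_P.size, "accepted period samples; typically multimodal when the data are sparse")
```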
40 CFR 439.4 - General limitation or standard for pH.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 40 Protection of Environment 29 2010-07-01 2010-07-01 false General limitation or standard for pH. 439.4 Section 439.4 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) EFFLUENT GUIDELINES AND STANDARDS PHARMACEUTICAL MANUFACTURING POINT SOURCE CATEGORY General § 439.4 General...
Typical Werner states satisfying all linear Bell inequalities with dichotomic measurements
NASA Astrophysics Data System (ADS)
Luo, Ming-Xing
2018-04-01
Quantum entanglement as a special resource inspires various distinct applications in quantum information processing. Unfortunately, it is NP-hard to detect general quantum entanglement using Bell testing. Our goal is to investigate quantum entanglement with white noises that appear frequently in experiment and quantum simulations. Surprisingly, for almost all multipartite generalized Greenberger-Horne-Zeilinger states there are entangled noisy states that satisfy all linear Bell inequalities consisting of full correlations with dichotomic inputs and outputs of each local observer. This result shows generic undetectability of mixed entangled states in contrast to Gisin's theorem of pure bipartite entangled states in terms of Bell nonlocality. We further provide an accessible method to show a nontrivial set of noisy entanglement with small number of parties satisfying all general linear Bell inequalities. These results imply typical incompleteness of special Bell theory in explaining entanglement.
The generation of gravitational waves. I - Weak-field sources
NASA Technical Reports Server (NTRS)
Thorne, K. S.; Kovacs, S. J.
1975-01-01
This paper derives and summarizes a 'plug-in-and-grind' formalism for calculating the gravitational waves emitted by any system with weak internal gravitational fields. If the internal fields have negligible influence on the system's motions, the formalism reduces to standard 'linearized theory'. Independent of the effects of gravity on the motions, the formalism reduces to the standard 'quadrupole-moment formalism' if the motions are slow and internal stresses are weak. In the general case, the formalism expresses the radiation in terms of a retarded Green's function for slightly curved spacetime and breaks the Green's function integral into five easily understood pieces: direct radiation, produced directly by the motions of the source; whump radiation, produced by the 'gravitational stresses' of the source; transition radiation, produced by a time-changing time delay ('Shapiro effect') in the propagation of the nonradiative 1/r field of the source; focusing radiation, produced when one portion of the source focuses, in a time-dependent way, the nonradiative field of another portion of the source; and tail radiation, produced by 'back-scatter' of the nonradiative field in regions of focusing.
The generation of gravitational waves. 1. Weak-field sources: A plug-in-and-grind formalism
NASA Technical Reports Server (NTRS)
Thorne, K. S.; Kovacs, S. J.
1974-01-01
A plug-in-and-grind formalism is derived for calculating the gravitational waves emitted by any system with weak internal gravitational fields. If the internal fields have negligible influence on the system's motions, then the formalism reduces to standard linearized theory. Whether or not gravity affects the motions, if the motions are slow and internal stresses are weak, then the new formalism reduces to the standard quadrupole-moment formalism. In the general case the new formalism expresses the radiation in terms of a retarded Green's function for slightly curved spacetime, and then breaks the Green's-function integral into five easily understood pieces: direct radiation, produced directly by the motions of the sources; whump radiation, produced by the gravitational stresses of the source; transition radiation, produced by a time-changing time delay (Shapiro effect) in the propagation of the nonradiative, 1/r field of the source; focussing radiation, produced when one portion of the source focusses, in a time-dependent way, the nonradiative field of another portion of the source; and tail radiation, produced by backscatter of the nonradiative field in regions of focussing.
Karaçelik, Ayça Aktaş; Küçük, Murat; İskefiyeli, Zeynep; Aydemir, Sezgin; De Smet, Seppe; Miserez, Bram; Sandra, Patrick
2015-05-15
The antioxidant activity of the juice, and of the seed and skin extracts prepared with methanol, acetonitrile, and water, of Viburnum opulus L. grown in the Eastern Black Sea Region was studied with an on-line HPLC-ABTS method and with off-line antioxidant methods, among which a positive linear correlation was observed. The fruit extracts were analysed with the HPLC-UV method optimised with 14 standard phenolics. Identification of the phenolic components in the juice was made using an HPLC-UV-ESI-MS method. Nineteen phenolic compounds in the juice were identified by comparing the retention times and mass spectra with those of the standards and the phenolics reported in the literature. The major peaks in the juice belonged to coumaroyl-quinic acid, chlorogenic acid, procyanidin B2, and a procyanidin trimer. Quite different antioxidant composition profiles were obtained from the extracts with solvents of different polarities. The antioxidant activities of the seed extracts were in general higher than those of the skin extracts. Copyright © 2014 Elsevier Ltd. All rights reserved.
Performance evaluation of BC-3200 hematology analyzer in a university hospital.
Peng, L; Bai, L; Nie, L; Wu, Z; Yan, C
2008-06-01
The BC-3200 automated hematology analyzer was evaluated and compared with the Beckman-Coulter AcT (Ac.T diff 2) 3-part differential hematology analyzer. The BC-3200 was evaluated according to guidelines published by the International Committee for Standardization in Hematology (ICSH), Clinical and Laboratory Standards Institute (CLSI), and Department of Food and Drug Administration (FDA). The results demonstrated no background, minimal carryover (<0.5%), and excellent linearity for hemoglobin (Hb) level, white blood cell (WBC), red blood cell (RBC), and platelet (PLT) counts (>0.998). Precision was generally acceptable for all complete blood count (CBC) parameters; coefficients of variation (CVs) were within the manufacturer's claims and CVs of CBC parameters, including WBC, RBC and PLT counts, Hb and mean corpuscular volume, were <6%. Correlation between the BC-3200 and Ac.T diff 2 was excellent (r > 0.98) for all major CBC parameters (WBC, RBC, and PLT counts and Hb). We conclude that the overall performance of the BC-3200 is excellent and compares well with that of the Coulter Ac.T diff 2.
Trägårdh, M; Lindén, D; Ploj, K; Johansson, A; Turnbull, A; Carlsson, B; Antonsson, M
2017-01-01
In this study, we present the translational modeling used in the discovery of AZD1979, a melanin‐concentrating hormone receptor 1 (MCHr1) antagonist aimed for treatment of obesity. The model quantitatively connects the relevant biomarkers and thereby closes the scaling path from rodent to man, as well as from dose to effect level. The complexity of individual modeling steps depends on the quality and quantity of data as well as the prior information; from semimechanistic body‐composition models to standard linear regression. Key predictions are obtained by standard forward simulation (e.g., predicting effect from exposure), as well as non‐parametric input estimation (e.g., predicting energy intake from longitudinal body‐weight data), across species. The work illustrates how modeling integrates data from several species, fills critical gaps between biomarkers, and supports experimental design and human dose‐prediction. We believe this approach can be of general interest for translation in the obesity field, and might inspire translational reasoning more broadly. PMID:28556607