Generalized Linear Covariance Analysis
NASA Technical Reports Server (NTRS)
Carpenter, James R.; Markley, F. Landis
2014-01-01
This talk presents a comprehensive approach to filter modeling for generalized covariance analysis of both batch least-squares and sequential estimators. We review and extend in two directions the results of prior work that allowed for partitioning of the state space into "solve-for" and "consider" parameters, accounted for differences between the formal values and the true values of the measurement noise, process noise, and a priori solve-for and consider covariances, and explicitly partitioned the errors into subspaces containing only the influence of the measurement noise, process noise, and solve-for and consider covariances. In this work, we explicitly add sensitivity analysis to this prior work, and relax an implicit assumption that the batch estimator's epoch time occurs prior to the definitive span. We also apply the method to an integrated orbit and attitude problem, in which gyro and accelerometer errors, though not estimated, influence the orbit determination performance. We illustrate our results using two graphical presentations, which we call the "variance sandpile" and the "sensitivity mosaic," and we compare the linear covariance results to confidence intervals associated with ensemble statistics from a Monte Carlo analysis.
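The closing comparison between linear covariance predictions and Monte Carlo ensemble statistics can be illustrated with a minimal scalar sketch. This is not the authors' formulation; the dynamics coefficient, noise levels, and sample size below are invented for illustration. For a linear propagation x' = f x + w, linear covariance predicts var' = f^2 var + q, which should agree with the sample variance of a simulated ensemble.

```python
import random

# Scalar linear propagation x' = f*x + w, with w ~ N(0, q) (illustrative values).
f, q, var0 = 0.9, 0.04, 1.0

# Linear covariance prediction for the propagated variance.
var_lin = f**2 * var0 + q

# Monte Carlo ensemble: propagate samples drawn from the initial distribution.
random.seed(1)
N = 200_000
samples = [f * random.gauss(0, var0 ** 0.5) + random.gauss(0, q ** 0.5)
           for _ in range(N)]
mean = sum(samples) / N
var_mc = sum((s - mean) ** 2 for s in samples) / (N - 1)

print(var_lin, var_mc)  # 0.85 vs. a Monte Carlo estimate near 0.85
```

For linear dynamics with Gaussian noise the two agree to sampling error; the point of a generalized covariance analysis is to predict the ensemble behavior without running the Monte Carlo at all.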
Generalized adjustment by least squares (GALS).
Elassal, A.A.
1983-01-01
The least-squares principle is universally accepted as the basis for adjustment procedures in the allied fields of geodesy, photogrammetry and surveying. A prototype software package for Generalized Adjustment by Least Squares (GALS) is described. The package is designed to perform all least-squares-related functions in a typical adjustment program. GALS is capable of supporting development of adjustment programs of any size or degree of complexity. -Author
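GALS itself is not documented here beyond this summary, but the least-squares principle it builds on reduces, in the simplest dense case, to solving the normal equations. A minimal sketch with hypothetical observation data (production adjustment packages typically use numerically stabler factorizations such as QR):

```python
def lstsq_normal(A, b):
    """Least-squares solution via the normal equations N x = t,
    with N = A^T A and t = A^T b, solved by Gauss-Jordan elimination."""
    m, n = len(A), len(A[0])
    N = [[sum(A[k][i] * A[k][j] for k in range(m)) for j in range(n)]
         for i in range(n)]
    t = [sum(A[k][i] * b[k] for k in range(m)) for i in range(n)]
    # Augmented elimination with partial pivoting.
    M = [row[:] + [t[i]] for i, row in enumerate(N)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(n):
            if r != c:
                fac = M[r][c] / M[c][c]
                M[r] = [a - fac * v for a, v in zip(M[r], M[c])]
    return [M[i][n] / M[i][i] for i in range(n)]

# Hypothetical adjustment: fit a line to three observations.
A = [[1.0, 0.0], [1.0, 1.0], [1.0, 2.0]]
b = [1.0, 3.0, 5.0]
x = lstsq_normal(A, b)
print(x)  # fitted intercept and slope: approximately 1 and 2
```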
Quantization of general linear electrodynamics
Rivera, Sergio; Schuller, Frederic P.
2011-03-15
General linear electrodynamics allow for an arbitrary linear constitutive relation between the field strength 2-form and induction 2-form density if crucial hyperbolicity and energy conditions are satisfied, which render the theory predictive and physically interpretable. Taking into account the higher-order polynomial dispersion relation and associated causal structure of general linear electrodynamics, we carefully develop its Hamiltonian formulation from first principles. Canonical quantization of the resulting constrained system then results in a quantum vacuum which is sensitive to the constitutive tensor of the classical theory. As an application we calculate the Casimir effect in a birefringent linear optical medium.
Simple circuit provides adjustable voltage with linear temperature variation
NASA Technical Reports Server (NTRS)
Moede, L. W.
1964-01-01
A bridge circuit giving an adjustable output voltage that varies linearly with temperature is formed with temperature-compensating diodes in one leg. A resistor voltage divider adjusts the temperature range across the bridge. The circuit is satisfactory over the temperature range of minus 20 degrees centigrade to plus 80 degrees centigrade.
Adjusting power for a baseline covariate in linear models
Glueck, Deborah H.; Muller, Keith E.
2009-01-01
The analysis of covariance provides a common approach to adjusting for a baseline covariate in medical research. With Gaussian errors, adding random covariates does not change either the theory or the computations of general linear model data analysis. However, adding random covariates does change the theory and computation of power analysis. Many data analysts fail to fully account for this complication in planning a study. We present our results in five parts. (i) A review of published results helps document the importance of the problem and the limitations of available methods. (ii) A taxonomy for general linear multivariate models and hypotheses allows identifying a particular problem. (iii) We describe how random covariates introduce the need to consider quantiles and conditional values of power. (iv) We provide new exact and approximate methods for power analysis of a range of multivariate models with a Gaussian baseline covariate, for both small and large samples. The new results apply to the Hotelling-Lawley test and the four tests in the "univariate" approach to repeated measures (unadjusted, Huynh-Feldt, Geisser-Greenhouse, Box). The techniques allow rapid calculation and an interactive, graphical approach to sample size choice. (v) Calculating power for a clinical trial of a treatment for increasing bone density illustrates the new methods. We particularly recommend using quantile power with a new Satterthwaite-style approximation. PMID:12898543
Adjustable permanent quadrupoles for the next linear collider
James T. Volk et al.
2001-06-22
The proposed Next Linear Collider (NLC) will require over 1400 adjustable quadrupoles between the main linacs' accelerator structures. These 12.7 mm bore quadrupoles will have a range of integrated strength from 0.6 to 138 Tesla, with a maximum gradient of 141 Tesla per meter, an adjustment range of +0 to -20% and effective lengths from 324 mm to 972 mm. The magnetic center must remain stable to within 1 micron during the 20% adjustment. In an effort to reduce costs and increase reliability, several designs using hybrid permanent magnets have been developed. Four different prototypes have been built. All magnets have iron poles and use Samarium Cobalt to provide the magnetic fields. Two use rotating permanent magnetic material to vary the gradient, one uses a sliding shunt to vary the gradient and the fourth uses counter rotating magnets. Preliminary data on gradient strength, temperature stability, and magnetic center position stability are presented. These data are compared to an equivalent electromagnetic prototype.
Semi-Parametric Generalized Linear Models.
1985-08-01
is nonsingular, upper triangular, and of full rank r. It is known (Dongarra et al., 1979) that G^(-1) F^T is the Moore-Penrose inverse of L. (Fragment; Mathematics Research Center, University of Wisconsin-Madison, August 1985.)
Linear and nonlinear generalized Fourier transforms.
Pelloni, Beatrice
2006-12-15
This article presents an overview of a transform method for solving linear and integrable nonlinear partial differential equations. This new transform method, proposed by Fokas, yields a generalization and unification of various fundamental mathematical techniques and, in particular, it yields an extension of the Fourier transform method.
Extended Generalized Linear Latent and Mixed Model
ERIC Educational Resources Information Center
Segawa, Eisuke; Emery, Sherry; Curry, Susan J.
2008-01-01
The generalized linear latent and mixed modeling (GLLAMM framework) includes many models such as hierarchical and structural equation models. However, GLLAMM cannot currently accommodate some models because it does not allow some parameters to be random. GLLAMM is extended to overcome the limitation by adding a submodel that specifies a…
Alternative approach to general coupled linear optics
Wolski, Andrzej
2005-11-29
The Twiss parameters provide a convenient description of beam optics in uncoupled linear beamlines. For coupled beamlines, a variety of approaches are possible for describing the linear optics; here, we propose an approach and notation that naturally generalizes the familiar Twiss parameters to the coupled case in three degrees of freedom. Our approach is based on an eigensystem analysis of the matrix of second-order beam moments, or alternatively (in the case of a storage ring) on an eigensystem analysis of the linear single-turn map. The lattice functions that emerge from this approach have an interpretation that is conceptually very simple: in particular, the lattice functions directly relate the beam distribution in phase space to the invariant emittances. To emphasize the physical significance of the coupled lattice functions, we develop the theory from first principles, using only the assumption of linear symplectic transport. We also give some examples of the application of this approach, demonstrating its advantages of conceptual and notational simplicity.
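The key property the abstract relies on, that the invariant emittance computed from the second-moment matrix is preserved under linear symplectic transport, can be checked numerically in one degree of freedom. A sketch with invented Twiss values (the paper itself treats the full three-degree-of-freedom coupled case):

```python
import math

def mat2_mul(A, B):
    """2x2 matrix product."""
    return [[A[0][0]*B[0][0] + A[0][1]*B[1][0], A[0][0]*B[0][1] + A[0][1]*B[1][1]],
            [A[1][0]*B[0][0] + A[1][1]*B[1][0], A[1][0]*B[0][1] + A[1][1]*B[1][1]]]

def emittance(S):
    # invariant emittance from the second-moment (beam) matrix: eps = sqrt(det S)
    return math.sqrt(S[0][0]*S[1][1] - S[0][1]*S[1][0])

# Beam matrix for invented Twiss values beta = 2, alpha = 0.5, eps = 1e-6:
# Sigma = eps * [[beta, -alpha], [-alpha, gamma]] with gamma = (1 + alpha^2)/beta.
eps0, beta, alpha = 1e-6, 2.0, 0.5
gamma = (1 + alpha**2) / beta
S = [[eps0 * beta, -eps0 * alpha], [-eps0 * alpha, eps0 * gamma]]

# A drift of length 3 is linear and symplectic (det M = 1): Sigma -> M Sigma M^T.
M = [[1.0, 3.0], [0.0, 1.0]]
Mt = [[M[0][0], M[1][0]], [M[0][1], M[1][1]]]
S2 = mat2_mul(mat2_mul(M, S), Mt)

print(emittance(S), emittance(S2))  # equal: emittance is invariant under the transport
```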
Army General Fund Adjustments Not Adequately Documented or Supported
2016-07-26
statements were unreliable and lacked an adequate audit trail. Furthermore, DoD and Army managers could not rely on the data in their accounting... risk that AGF financial statements will be materially misstated and the Army will not achieve audit readiness by the congressionally mandated... and $6.5 trillion in yearend adjustments made to Army General Fund data during FY 2015 financial statement compilation. We conducted this audit in
Permutation inference for the general linear model
Winkler, Anderson M.; Ridgway, Gerard R.; Webster, Matthew A.; Smith, Stephen M.; Nichols, Thomas E.
2014-01-01
Permutation methods can provide exact control of false positives and allow the use of non-standard statistics, making only weak assumptions about the data. With the availability of fast and inexpensive computing, their main limitation would be some lack of flexibility to work with arbitrary experimental designs. In this paper we report results on approximate permutation methods that are more flexible with respect to the experimental design and nuisance variables, and conduct detailed simulations to identify the best method for settings that are typical for imaging research scenarios. We present a generic framework for permutation inference for complex general linear models (GLMs) when the errors are exchangeable and/or have a symmetric distribution, and show that, even in the presence of nuisance effects, these permutation inferences are powerful while providing excellent control of false positives in a wide range of common and relevant imaging research scenarios. We also demonstrate how the inference on GLM parameters, originally intended for independent data, can be used in certain special but useful cases in which independence is violated. Detailed examples of common neuroimaging applications are provided, as well as a complete algorithm, the "randomise" algorithm, for permutation inference with the GLM. PMID:24530839
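The core idea behind permutation inference, recomputing a test statistic over relabelings of exchangeable data, can be shown in miniature. The following is a generic exhaustive two-sample test on made-up numbers, not the "randomise" algorithm itself:

```python
from itertools import combinations

def perm_test_mean_diff(x, y):
    """Exact two-sided permutation p-value for a difference in group means,
    enumerating all relabelings of the pooled (exchangeable) observations."""
    pooled = x + y
    obs = abs(sum(x) / len(x) - sum(y) / len(y))
    count = total = 0
    for idx in combinations(range(len(pooled)), len(x)):
        chosen = set(idx)
        xs = [pooled[i] for i in chosen]
        ys = [pooled[i] for i in range(len(pooled)) if i not in chosen]
        d = abs(sum(xs) / len(xs) - sum(ys) / len(ys))
        count += d >= obs - 1e-12   # relabelings at least as extreme as observed
        total += 1
    return count / total

p = perm_test_mean_diff([1.0, 2.0, 3.0], [6.0, 7.0, 8.0])
print(p)  # 2/20 = 0.1: only the observed labeling and its mirror are as extreme
```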
Evaluating the double Poisson generalized linear model.
Zou, Yaotian; Geedipally, Srinivas Reddy; Lord, Dominique
2013-10-01
The objectives of this study are to: (1) examine the applicability of the double Poisson (DP) generalized linear model (GLM) for analyzing motor vehicle crash data characterized by over- and under-dispersion and (2) compare the performance of the DP GLM with the Conway-Maxwell-Poisson (COM-Poisson) GLM in terms of goodness-of-fit and theoretical soundness. The DP distribution has seldom been investigated and applied since its first introduction two decades ago. The hurdle for applying the DP is related to its normalizing constant (or multiplicative constant) which is not available in closed form. This study proposed a new method to approximate the normalizing constant of the DP with high accuracy and reliability. The DP GLM and COM-Poisson GLM were developed using two observed over-dispersed datasets and one observed under-dispersed dataset. The modeling results indicate that the DP GLM with its normalizing constant approximated by the new method can handle crash data characterized by over- and under-dispersion. Its performance is comparable to the COM-Poisson GLM in terms of goodness-of-fit (GOF), although COM-Poisson GLM provides a slightly better fit. For the over-dispersed data, the DP GLM performs similar to the NB GLM. Considering the fact that the DP GLM can be easily estimated with inexpensive computation and that it is simpler to interpret coefficients, it offers a flexible and efficient alternative for researchers to model count data.
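The normalizing-constant difficulty the abstract mentions can be made concrete: the double Poisson density (Efron, 1986) only sums to one approximately, and the constant can be estimated by direct truncated summation. A sketch with illustrative parameters; this brute-force summation is not the approximation method proposed in the paper:

```python
import math

def dp_unnormalized(y, mu, phi):
    """Unnormalized double-Poisson pmf term (Efron, 1986), in log space
    to avoid overflow; phi = 1 recovers the ordinary Poisson."""
    if y == 0:
        return math.sqrt(phi) * math.exp(-phi * mu)
    logf = (0.5 * math.log(phi) - phi * mu
            - y + y * math.log(y) - math.lgamma(y + 1)
            + phi * y * (1.0 + math.log(mu) - math.log(y)))
    return math.exp(logf)

def dp_normalizing_sum(mu, phi, tol=1e-12, max_y=10000):
    """Approximate 1/c(mu, phi) by summing terms until they are negligible."""
    total, y = 0.0, 0
    while y <= max_y:
        term = dp_unnormalized(y, mu, phi)
        total += term
        if y > mu * max(1.0, 1.0 / phi) + 20 and term < tol:
            break
        y += 1
    return total

mu, phi = 4.0, 0.5          # over-dispersed case (phi < 1), illustrative values
s = dp_normalizing_sum(mu, phi)
# Efron's closed-form approximation to 1/c for comparison:
approx = 1 + (1 - phi) / (12 * mu * phi) * (1 + 1 / (mu * phi))
print(s, approx)
```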
Inverting Glacial Isostatic Adjustment beyond linear viscoelasticity using Burgers rheology
NASA Astrophysics Data System (ADS)
Caron, L.; Greff-Lefftz, M.; Fleitout, L.; Metivier, L.; Rouby, H.
2014-12-01
In Glacial Isostatic Adjustment (GIA) inverse modeling, the usual assumption for the mantle rheology is the Maxwell model, which exhibits constant viscosity over time. However, mineral physics experiments and post-seismic observations show evidence of a transient component in the deformation of the shallow mantle, with a short-term viscosity lower than the long-term one. In these studies, the resulting rheology is modeled by a Burgers material: such rheology is indeed expected as the mantle is a mixture of materials with different viscosities. We propose to apply this rheology for the whole viscoelastic mantle, and, using a Bayesian MCMC inverse formalism for GIA during the last glacial cycle, study its impact on estimations of viscosity values, elastic thickness of the lithosphere, and ice distribution. To perform this inversion, we use a global dataset of sea level records, the geological constraints of ice-sheet margins, and present-day GPS data as well as satellite gravimetry. Our ambition is to present not only the best fitting model, but also the range of possible solutions (within the explored space of parameters) with their respective probability of explaining the data. Our first results indicate that compared to the Maxwell models, the Burgers models involve a larger lower mantle viscosity and thicker ice over Fennoscandia and Canada.
An evaluation of bias in propensity score-adjusted non-linear regression models.
Wan, Fei; Mitra, Nandita
2016-04-19
Propensity score methods are commonly used to adjust for observed confounding when estimating the conditional treatment effect in observational studies. One popular method, covariate adjustment of the propensity score in a regression model, has been empirically shown to be biased in non-linear models. However, no compelling underlying theoretical reason has been presented. We propose a new framework to investigate bias and consistency of propensity score-adjusted treatment effects in non-linear models that uses a simple geometric approach to forge a link between the consistency of the propensity score estimator and the collapsibility of non-linear models. Under this framework, we demonstrate that adjustment of the propensity score in an outcome model results in the decomposition of observed covariates into the propensity score and a remainder term. Omission of this remainder term from a non-collapsible regression model leads to biased estimates of the conditional odds ratio and conditional hazard ratio, but not for the conditional rate ratio. We further show, via simulation studies, that the bias in these propensity score-adjusted estimators increases with larger treatment effect size, larger covariate effects, and increasing dissimilarity between the coefficients of the covariates in the treatment model versus the outcome model.
ERIC Educational Resources Information Center
Cheong, Yuk Fai; Kamata, Akihito
2013-01-01
In this article, we discuss and illustrate two centering and anchoring options available in differential item functioning (DIF) detection studies based on the hierarchical generalized linear and generalized linear mixed modeling frameworks. We compared and contrasted the assumptions of the two options, and examined the properties of their DIF…
Covariate-Adjusted Linear Mixed Effects Model with an Application to Longitudinal Data
Nguyen, Danh V.; Şentürk, Damla; Carroll, Raymond J.
2009-01-01
Linear mixed effects (LME) models are useful for longitudinal data/repeated measurements. We propose a new class of covariate-adjusted LME models for longitudinal data that nonparametrically adjusts for a normalizing covariate. The proposed approach involves fitting a parametric LME model to the data after adjusting for the nonparametric effects of a baseline confounding covariate. In particular, the effect of the observable covariate on the response and predictors of the LME model is modeled nonparametrically via smooth unknown functions. In addition to covariate-adjusted estimation of fixed/population parameters and random effects, an estimation procedure for the variance components is also developed. Numerical properties of the proposed estimators are investigated with simulation studies. The consistency and convergence rates of the proposed estimators are also established. An application to a longitudinal data set on calcium absorption, accounting for baseline distortion from body mass index, illustrates the proposed methodology. PMID:19266053
The Non-linear Logarithm Method (NLLM) to adjust the color deviation of fluorescent images
NASA Astrophysics Data System (ADS)
Chen, Yi-Ju; Chang, Han-Chao; Huang, Kuo-Cheng; Chang, Chung-Hsing
2013-06-01
Fluorescent objects can be excited by ultraviolet (UV) light and emit light of a specific, longer wavelength in biomedical experiments. However, UV light causes a blue-violet color deviation in fluorescent images. Therefore, this study presents a color deviation adjustment method that recovers the color of a fluorescent image to the hue observed under normal white light, while retaining the UV-excited fluorescent area in the reconstructed image. Based on the Gray World Method, we propose a non-linear logarithm method (NLLM) to restore the color deviation of fluorescent images, using a yellow filter attached to the front of a digital camera lens in the experiment. Subsequently, the luminance data of objects can be divided into red, green, and blue (R/G/B) components, which determine the appropriate intensity of the chromatic colors. In general, fluorescent image data transformed into the CIE 1931 color space can be used to evaluate the quality of reconstructed images through the distribution of x-y coordinates. In the experiment, the proposed NLLM recovers more than 90% of the color deviation, and the reconstructed images approach the real color of the fluorescent object as illuminated by white light.
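The Gray World assumption underlying the proposed NLLM can be sketched in its basic linear form: scale each channel so that its mean matches the overall gray mean. The paper's non-linear logarithm refinement is not reproduced here, and the pixel values below are invented:

```python
def gray_world_balance(pixels):
    """Gray World white balance: scale each RGB channel so its mean
    matches the mean of all three channels (the assumed 'gray')."""
    n = len(pixels)
    means = [sum(p[c] for p in pixels) / n for c in range(3)]
    gray = sum(means) / 3.0
    gains = [gray / m if m > 0 else 1.0 for m in means]
    # clip to the 8-bit range after applying the per-channel gains
    return [tuple(min(255.0, p[c] * gains[c]) for c in range(3)) for p in pixels]

# A blue-tinted patch: the blue channel mean is double the other two.
img = [(60.0, 60.0, 120.0)] * 4
balanced = gray_world_balance(img)
print(balanced[0])  # all three channels pulled toward the common gray level of 80
```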
Adaptive Error Estimation in Linearized Ocean General Circulation Models
NASA Technical Reports Server (NTRS)
Chechelnitsky, Michael Y.
1999-01-01
representation error, i.e. the dominance of the mesoscale eddies in the T/P signal, which are not part of the 21 by 1" GCM. Therefore, the impact of the observations on the assimilation is very small even after the adjustment of the error statistics. This work demonstrates that simultaneous estimation of the model and measurement error statistics for data assimilation with global ocean data sets and linearized GCMs is possible. However, the error covariance estimation problem is in general highly underdetermined, much more so than the state estimation problem. In other words there exist a very large number of statistical models that can be made consistent with the available data. Therefore, methods for obtaining quantitative error estimates, powerful though they may be, cannot replace physical insight. Used in the right context, as a tool for guiding the choice of a small number of model error parameters, covariance matching can be a useful addition to the repertory of tools available to oceanographers.
NASA Astrophysics Data System (ADS)
Lin, Jin-Yuan; Lu, Yu-Sheng; Chen, Jian-Shiang
A novel global sliding-mode control (GSMC) scheme with adjustable robustness is presented in this article. The proposed scheme offers a switching function together with unperturbed system dynamics to weigh the contribution from SMC such that all of the closed-loop poles can be located within predefined regions to provide design flexibility, and the robustness of the system can thus be adjusted. By this scheme, the maximal control effort and chattering level can be reduced directly according to the designer's specifications. Since the switching function can initially be made equal to zero, the adjustable performance during the entire response can be guaranteed, and the reaching condition is thus lifted. The efficacy of this scheme is demonstrated via successful implementation on a linear variable reluctance motor (LVRM) servo system. Both simulation and experimental studies further demonstrate its feasibility and effectiveness.
From linear to generalized linear mixed models: A case study in repeated measures
Technology Transfer Automated Retrieval System (TEKTRAN)
Compared to traditional linear mixed models, generalized linear mixed models (GLMMs) can offer better correspondence between response variables and explanatory models, yielding more efficient estimates and tests in the analysis of data from designed experiments. Using proportion data from a designed...
Generalized perceptual linear prediction features for animal vocalization analysis.
Clemins, Patrick J; Johnson, Michael T
2006-07-01
A new feature extraction model, generalized perceptual linear prediction (gPLP), is developed to calculate a set of perceptually relevant features for digital signal analysis of animal vocalizations. The gPLP model is a generalized adaptation of the perceptual linear prediction model, popular in human speech processing, which incorporates perceptual information such as frequency warping and equal loudness normalization into the feature extraction process. Since such perceptual information is available for a number of animal species, this new approach integrates that information into a generalized model to extract perceptually relevant features for a particular species. To illustrate, qualitative and quantitative comparisons are made between the species-specific model, generalized perceptual linear prediction (gPLP), and the original PLP model using a set of vocalizations collected from captive African elephants (Loxodonta africana) and wild beluga whales (Delphinapterus leucas). The models that incorporate perceptual information outperform the original human-based models in both visualization and classification tasks.
A general non-linear multilevel structural equation mixture model
Kelava, Augustin; Brandt, Holger
2014-01-01
In the past 2 decades latent variable modeling has become a standard tool in the social sciences. In the same time period, traditional linear structural equation models have been extended to include non-linear interaction and quadratic effects (e.g., Klein and Moosbrugger, 2000), and multilevel modeling (Rabe-Hesketh et al., 2004). We present a general non-linear multilevel structural equation mixture model (GNM-SEMM) that combines recent semiparametric non-linear structural equation models (Kelava and Nagengast, 2012; Kelava et al., 2014) with multilevel structural equation mixture models (Muthén and Asparouhov, 2009) for clustered and non-normally distributed data. The proposed approach allows for semiparametric relationships at the within and at the between levels. We present examples from the educational science to illustrate different submodels from the general framework. PMID:25101022
Generalized Linear Multi-Frequency Imaging in VLBI
NASA Astrophysics Data System (ADS)
Likhachev, S.; Ladygin, V.; Guirin, I.
2004-07-01
In VLBI, generalized Linear Multi-Frequency Imaging (MFI) consists of multi-frequency synthesis (MFS) and multi-frequency analysis (MFA) of the VLBI data obtained from observations on various frequencies. A set of linear deconvolution MFI algorithms is described. The algorithms make it possible to obtain high quality images interpolated on any given frequency inside any given bandwidth, and to derive reliable estimates of spectral indexes for radio sources with continuum spectrum.
Linear equations in general purpose codes for stiff ODEs
Shampine, L. F.
1980-02-01
It is noted that it is possible to improve significantly the handling of linear problems in a general-purpose code with very little trouble to the user or change to the code. In such situations analytical evaluation of the Jacobian is a lot cheaper than numerical differencing. A slight change in the point at which the Jacobian is evaluated results in a more accurate Jacobian in linear problems. (RWR)
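The observation about linear problems can be demonstrated directly: for y' = Ay + g(t), the Jacobian of the right-hand side with respect to y is exactly A, so a single analytical evaluation replaces the repeated numerical differencing a general-purpose code would otherwise perform. A sketch with an invented stiff 2x2 system:

```python
import math

def f(t, y, A, g):
    # right-hand side of the linear ODE system y' = A y + g(t)
    n = len(y)
    return [sum(A[i][j] * y[j] for j in range(n)) + g(t)[i] for i in range(n)]

def numerical_jacobian(t, y, A, g, h=1e-6):
    # forward-difference Jacobian, as a general-purpose stiff solver forms it
    n = len(y)
    base = f(t, y, A, g)
    J = [[0.0] * n for _ in range(n)]
    for j in range(n):
        yp = list(y)
        yp[j] += h
        col = f(t, yp, A, g)
        for i in range(n):
            J[i][j] = (col[i] - base[i]) / h
    return J

A = [[0.0, 1.0], [-100.0, -101.0]]   # invented stiff 2x2 system
g = lambda t: [0.0, math.sin(t)]
Jnum = numerical_jacobian(0.3, [1.0, -1.0], A, g)
print(Jnum)  # matches A to rounding error: for linear problems the Jacobian is A itself
```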
A Matrix Approach for General Higher Order Linear Recurrences
2011-01-01
properties of linear recurrences (such as the well-known Fibonacci and Pell sequences). In [2], Er defined k linear recurring sequences of order k, with u^i_n denoting the nth term of the ith generalized order-k Fibonacci sequence. In [6], the author gave the generalized order-k Fibonacci and Pell (F-P) sequence as follows: for m ≥ 0, n > 0 and 1 ≤ i ≤ k, u^i_n = 2^m u^i_(n-1) + u^i_(n-2).
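For the two-term case, the recurrence u_n = 2^m u_(n-1) + u_(n-2) can be evaluated through powers of its companion matrix, which is the matrix approach the title refers to. A sketch (initial values chosen so that m = 0 reproduces the Fibonacci numbers and m = 1 the Pell numbers):

```python
def recurrence_term(n, m, u0=0, u1=1):
    """n-th term of u_n = 2**m * u_(n-1) + u_(n-2), computed by fast
    exponentiation of the companion matrix [[2**m, 1], [1, 0]]."""
    a = 2 ** m

    def mul(X, Y):
        return [[X[0][0]*Y[0][0] + X[0][1]*Y[1][0], X[0][0]*Y[0][1] + X[0][1]*Y[1][1]],
                [X[1][0]*Y[0][0] + X[1][1]*Y[1][0], X[1][0]*Y[0][1] + X[1][1]*Y[1][1]]]

    P, M, k = [[1, 0], [0, 1]], [[a, 1], [1, 0]], n
    while k:
        if k & 1:
            P = mul(P, M)
        M = mul(M, M)
        k >>= 1
    # [u_(n+1), u_n]^T = (companion matrix)^n [u1, u0]^T; read off u_n.
    return P[1][0] * u1 + P[1][1] * u0

print([recurrence_term(n, 0) for n in range(8)])  # Fibonacci: [0, 1, 1, 2, 3, 5, 8, 13]
print([recurrence_term(n, 1) for n in range(6)])  # Pell: [0, 1, 2, 5, 12, 29]
```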
NASA Astrophysics Data System (ADS)
Tian, J. J.; Yao, Y.
2011-03-01
We report an experimental demonstration of a multiwavelength erbium-doped fiber laser with an adjustable wavelength number, based on a power-symmetric nonlinear optical loop mirror (NOLM) in a linear cavity. The intensity-dependent loss (IDL) induced by the NOLM is used to suppress mode competition and realize stable multiwavelength oscillation. Control of the wavelength number is achieved by adjusting the strength of the IDL, which depends on the pump power. As the pump power increases from 40 to 408 mW, 1-7 lasing lines at fixed wavelengths around 1601 nm are obtained. The output power stability is also investigated. The maximum power fluctuation of a single wavelength is less than 0.9 dB as the wavelength number is increased from 1 to 7.
Solution of generalized shifted linear systems with complex symmetric matrices
NASA Astrophysics Data System (ADS)
Sogabe, Tomohiro; Hoshi, Takeo; Zhang, Shao-Liang; Fujiwara, Takeo
2012-07-01
We develop the shifted COCG method [R. Takayama, T. Hoshi, T. Sogabe, S.-L. Zhang, T. Fujiwara, Linear algebraic calculation of Green's function for large-scale electronic structure theory, Phys. Rev. B 73 (165108) (2006) 1-9] and the shifted WQMR method [T. Sogabe, T. Hoshi, S.-L. Zhang, T. Fujiwara, On a weighted quasi-residual minimization strategy of the QMR method for solving complex symmetric shifted linear systems, Electron. Trans. Numer. Anal. 31 (2008) 126-140] for solving generalized shifted linear systems with complex symmetric matrices that arise from the electronic structure theory. The complex symmetric Lanczos process with a suitable bilinear form plays an important role in the development of the methods. The numerical examples indicate that the methods are highly attractive when the inner linear systems can efficiently be solved.
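The systems in question have the form (A + sigma*I)x = b for many shifts sigma. The shifted COCG and WQMR methods themselves are beyond a short sketch, but the problem and its residual check look like this with a toy complex symmetric matrix (a naive per-shift direct solve, i.e. exactly the repeated cost those shifted Krylov methods are designed to avoid):

```python
def solve(M, b):
    """Gauss-Jordan elimination with partial pivoting; works for complex entries."""
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(M)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(n):
            if r != c:
                f = M[r][c] / M[c][c]
                M[r] = [x - f * y for x, y in zip(M[r], M[c])]
    return [M[i][n] / M[i][i] for i in range(n)]

# Invented complex symmetric A (A == A^T but not Hermitian), as arises for
# Green's function calculations, and a set of shifts.
A = [[2 + 1j, 0.5], [0.5, 1 - 0.5j]]
b = [1.0, 0.0]
for s in [0.0, 0.5j, 1.0 + 0.2j]:
    As = [[A[i][j] + (s if i == j else 0) for j in range(2)] for i in range(2)]
    x = solve(As, b)
    r = [sum(As[i][j] * x[j] for j in range(2)) - b[i] for i in range(2)]
    print(s, max(abs(v) for v in r))  # residual norms near machine precision
```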
Beam envelope calculations in general linear coupled lattices
Chung, Moses; Qin, Hong; Groening, Lars; Xiao, Chen; Davidson, Ronald C.
2015-01-15
The envelope equations and Twiss parameters (β and α) provide important bases for uncoupled linear beam dynamics. For sophisticated beam manipulations, however, coupling elements between two transverse planes are intentionally introduced. The recently developed generalized Courant-Snyder theory offers an effective way of describing the linear beam dynamics in such coupled systems with a remarkably similar mathematical structure to the original Courant-Snyder theory. In this work, we present numerical solutions to the symmetrized matrix envelope equation for β which removes the gauge freedom in the matrix envelope equation for w. Furthermore, we construct the transfer and beam matrices in terms of the generalized Twiss parameters, which enables calculation of the beam envelopes in arbitrary linear coupled systems.
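In the uncoupled one-degree-of-freedom case that the generalized theory reduces to, the Twiss parameters can be read directly off the 2x2 second-moment matrix. A sketch with invented beam moments (the paper's matrix envelope equation generalizes exactly this relation to coupled systems):

```python
import math

def twiss_from_sigma(S):
    """Recover (eps, beta, alpha, gamma) from a 2x2 second-moment matrix
    Sigma = eps * [[beta, -alpha], [-alpha, gamma]]."""
    eps = math.sqrt(S[0][0] * S[1][1] - S[0][1] * S[1][0])
    return eps, S[0][0] / eps, -S[0][1] / eps, S[1][1] / eps

# Invented beam moments (units of m^2, m*rad, rad^2 scaled by the emittance).
S = [[4.0e-6, -1.0e-6], [-1.0e-6, 0.5e-6]]
eps, beta, alpha, gamma = twiss_from_sigma(S)
print(eps, beta, alpha, gamma)
print(beta * gamma - alpha**2)  # Twiss identity: equals 1
```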
Confidence Intervals for Assessing Heterogeneity in Generalized Linear Mixed Models
ERIC Educational Resources Information Center
Wagler, Amy E.
2014-01-01
Generalized linear mixed models are frequently applied to data with clustered categorical outcomes. The effect of clustering on the response is often difficult to practically assess partly because it is reported on a scale on which comparisons with regression parameters are difficult to make. This article proposes confidence intervals for…
Generalized linear mixed models for meta-analysis.
Platt, R W; Leroux, B G; Breslow, N
1999-03-30
We examine two strategies for meta-analysis of a series of 2 x 2 tables with the odds ratio modelled as a linear combination of study level covariates and random effects representing between-study variation. Penalized quasi-likelihood (PQL), an approximate inference technique for generalized linear mixed models, and a linear model fitted by weighted least squares to the observed log-odds ratios are used to estimate regression coefficients and dispersion parameters. Simulation results demonstrate that both methods perform adequate approximate inference under many conditions, but that neither method works well in the presence of highly sparse data. Under certain conditions with small cell frequencies the PQL method provides better inference.
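The weighted-least-squares strategy applied to observed log-odds ratios reduces, with no covariates and fixed study effects, to the classical inverse-variance pooled estimate. A sketch with made-up study data (the models in the paper additionally include study-level covariates and random effects):

```python
import math

def fixed_effect_meta(log_or, var):
    """Inverse-variance weighted pooled log-odds-ratio and its standard error."""
    w = [1.0 / v for v in var]
    pooled = sum(wi * yi for wi, yi in zip(w, log_or)) / sum(w)
    return pooled, math.sqrt(1.0 / sum(w))

# Three hypothetical studies: log odds ratios and their sampling variances.
log_or = [0.4, 0.6, 0.2]
var = [0.04, 0.08, 0.02]
est, se = fixed_effect_meta(log_or, var)
print(round(est, 4), round(se, 4))  # precise studies dominate the pooled estimate
```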
Credibility analysis of risk classes by generalized linear model
NASA Astrophysics Data System (ADS)
Erdemir, Ovgucan Karadag; Sucu, Meral
2016-06-01
In this paper, the generalized linear model (GLM) and credibility theory, which are frequently used in non-life insurance pricing, are combined for credibility analysis. Using the full credibility standard, the GLM is associated with the limited fluctuation credibility approach. Comparison criteria such as asymptotic variance and credibility probability are used to analyze the credibility of risk classes. An application is performed using one-year claim frequency data from a Turkish insurance company, and the results for credible risk classes are interpreted.
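The limited fluctuation (full credibility) approach mentioned above has a simple square-root form: with n_full claims required for full credibility, the credibility factor is Z = min(1, sqrt(n / n_full)). A sketch with conventional illustrative values (z = 1.96 for 95% probability, 5% tolerance; the paper's GLM-based analysis is more elaborate):

```python
def credibility_factor(n, z=1.96, k=0.05):
    """Limited-fluctuation (square-root rule) credibility factor.
    n_full = (z/k)**2 is the claim count needed for full credibility
    at probability level z and relative tolerance k (Poisson claim counts)."""
    n_full = (z / k) ** 2
    return min(1.0, (n / n_full) ** 0.5)

n_full = (1.96 / 0.05) ** 2           # about 1537 claims for full credibility
for n in (100, 800, 2000):
    Z = credibility_factor(n)
    # blended premium: Z * class experience + (1 - Z) * overall mean
    print(n, round(Z, 3))
```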
A general theory of linear cosmological perturbations: bimetric theories
NASA Astrophysics Data System (ADS)
Lagos, Macarena; Ferreira, Pedro G.
2017-01-01
We implement the method developed in [1] to construct the most general parametrised action for linear cosmological perturbations of bimetric theories of gravity. Specifically, we consider perturbations around a homogeneous and isotropic background, and identify the complete form of the action invariant under diffeomorphism transformations, as well as the number of free parameters characterising this cosmological class of theories. We discuss, in detail, the case without derivative interactions, and compare our results with those found in massive bigravity.
Residuals analysis of the generalized linear models for longitudinal data.
Chang, Y C
2000-05-30
The generalized estimating equation (GEE) method, one of the generalized linear models for longitudinal data, has been used widely in medical research. However, the related sensitivity analysis problem has not been explored intensively. One of the possible reasons for this is the correlated structure within the same subject. We showed that the conventional residual plots for model diagnosis in longitudinal data could mislead a researcher into trusting the fitted model. A non-parametric method, the Wald-Wolfowitz run test, was proposed to check the residual plots both quantitatively and graphically. The rationale proposed in this paper is well illustrated with two real clinical studies in Taiwan.
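The Wald-Wolfowitz run test on the signs of residuals can be sketched directly: too few runs signal the kind of systematic residual pattern the authors warn about. Illustrative residuals only; the normal approximation below assumes moderately large groups:

```python
import math

def runs_test(residuals):
    """Wald-Wolfowitz runs test on residual signs.
    Returns (number of runs, z statistic under the randomness hypothesis)."""
    signs = [r >= 0 for r in residuals]
    runs = 1 + sum(1 for a, b in zip(signs, signs[1:]) if a != b)
    n1 = sum(signs)
    n2 = len(signs) - n1
    mu = 1 + 2 * n1 * n2 / (n1 + n2)
    var = (2 * n1 * n2 * (2 * n1 * n2 - n1 - n2)) / ((n1 + n2) ** 2 * (n1 + n2 - 1))
    return runs, (runs - mu) / math.sqrt(var)

# Strongly patterned residuals: all negatives first, then all positives -> 2 runs.
r, z = runs_test([-3, -2, -1, -2, -1, 1, 2, 3, 2, 1])
print(r, round(z, 2))  # far fewer runs than expected under randomness (z well below 0)
```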
Linear spin-2 fields in most general backgrounds
NASA Astrophysics Data System (ADS)
Bernard, Laura; Deffayet, Cédric; Schmidt-May, Angnis; von Strauss, Mikael
2016-04-01
We derive the full perturbative equations of motion for the most general background solutions in ghost-free bimetric theory in its metric formulation. Clever field redefinitions at the level of fluctuations enable us to circumvent the problem of varying a square-root matrix appearing in the theory. This greatly simplifies the expressions for the linear variation of the bimetric interaction terms. We show that these field redefinitions exist and are uniquely invertible if and only if the variation of the square-root matrix itself has a unique solution, which is a requirement for the linearized theory to be well defined. As an application of our results we examine the constraint structure of ghost-free bimetric theory at the level of linear equations of motion for the first time. We identify a scalar combination of equations which is responsible for the absence of the Boulware-Deser ghost mode in the theory. The bimetric scalar constraint is in general not manifestly covariant in its nature. However, in the massive gravity limit the constraint assumes a covariant form when one of the interaction parameters is set to zero. For that case our analysis provides an alternative and almost trivial proof of the absence of the Boulware-Deser ghost. Our findings generalize previous results in the metric formulation of massive gravity and also agree with studies of its vielbein version.
Comparative Study of Algorithms for Automated Generalization of Linear Objects
NASA Astrophysics Data System (ADS)
Azimjon, S.; Gupta, P. K.; Sukhmani, R. S. G. S.
2014-11-01
Automated generalization, rooted in conventional cartography, has become an increasing concern in both geographic information system (GIS) and mapping fields. All geographic phenomena and processes are bound to scale, since it is impossible to observe the Earth and the processes within it without reducing its scale. To obtain optimal results, cartographers and map-making agencies have developed sets of rules and constraints; however, these rules remain under consideration and have been a research topic up to the present day. Developing automated map generalization algorithms can reduce map production time and lend objectivity to the process (McMaster and Shea, 1988). Scale modification has traditionally been a manual process that requires the knowledge of an expert cartographer and depends on the experience of the user, which makes it highly subjective, as different users may generate different maps from the same requirements. Automating generalization based on cartographic rules and constraints, in contrast, can give consistent results; developing an automated system for map generation is also a demand of this rapidly changing world. The research reported here considers only the generalization of roads, as they are one of the indispensable parts of a map. Dehradun city, in the Uttarakhand state of India, was selected as the study area. The study compares the generalization software, operations, and algorithms currently available, and considers the advantages and drawbacks of existing software used worldwide. The research concludes with the development of a road network generalization tool, implemented in the open-source Python programming language, and a final generalized road map of the study area, comparing different road network generalization algorithms. The paper thus discusses alternative solutions for the automated generalization of linear objects using GIS technologies.
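One classic line-simplification algorithm commonly included in such comparisons is Douglas-Peucker; a minimal sketch follows (the specific algorithms compared in the study are not named in the abstract, so this is illustrative only):

```python
import math

def douglas_peucker(points, epsilon):
    """Simplify a polyline: recursively keep any point that deviates more
    than epsilon from the chord joining the current endpoints."""
    if len(points) < 3:
        return list(points)
    (x1, y1), (x2, y2) = points[0], points[-1]
    dx, dy = x2 - x1, y2 - y1
    norm = math.hypot(dx, dy)
    # Find the interior point farthest (perpendicular) from the chord
    dmax, idx = 0.0, 0
    for i in range(1, len(points) - 1):
        px, py = points[i]
        if norm == 0:
            d = math.hypot(px - x1, py - y1)
        else:
            d = abs(dy * (px - x1) - dx * (py - y1)) / norm
        if d > dmax:
            dmax, idx = d, i
    if dmax > epsilon:
        # Split at the farthest point and simplify each half
        left = douglas_peucker(points[:idx + 1], epsilon)
        right = douglas_peucker(points[idx:], epsilon)
        return left[:-1] + right
    return [points[0], points[-1]]
```

The tolerance epsilon plays the role of the scale-dependent constraint: larger values produce more aggressive generalization of the road geometry.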
Parametrizing linear generalized Langevin dynamics from explicit molecular dynamics simulations
NASA Astrophysics Data System (ADS)
Gottwald, Fabian; Karsten, Sven; Ivanov, Sergei D.; Kühn, Oliver
2015-06-01
Fundamental understanding of complex dynamics in many-particle systems on the atomistic level is of utmost importance. Often the systems of interest are of macroscopic size but can be partitioned into a few important degrees of freedom, which are treated most accurately, and others which constitute a thermal bath. Particular attention in this respect is attracted by the linear generalized Langevin equation, which can be rigorously derived by means of a linear projection technique. Within this framework, a complicated interaction with the bath can be reduced to a single memory kernel. This memory kernel in turn is parametrized for the particular system under study, usually by means of time-domain methods based on explicit molecular dynamics data. Here, we argue that this task is more naturally achieved in the frequency domain and develop a Fourier-based parametrization method that outperforms its time-domain analogues. Very surprisingly, the widely used rigid bond method turns out to be inappropriate in general. Importantly, we show that the rigid bond approach leads to a systematic overestimation of relaxation times, unless the system under study consists of a harmonic bath bi-linearly coupled to the relevant degrees of freedom.
Method of Individual Adjustment for 3D CT Analysis: Linear Measurement.
Kim, Dong Kyu; Choi, Dong Hun; Lee, Jeong Woo; Yang, Jung Dug; Chung, Ho Yun; Cho, Byung Chae; Choi, Kang Young
2016-01-01
Introduction. We aim to regularize measurement values in three-dimensional (3D) computed tomography (CT) reconstructed images for higher-precision 3D analysis, focusing on length-based 3D cephalometric examinations. Methods. We measure the linear distances between points on different skull models using Vernier calipers (real values). We use 10 differently tilted CT scans for 3D CT reconstruction of the models and measure the same linear distances from the picture archiving and communication system (PACS). In both cases, each measurement is performed three times by three doctors, yielding nine measurements. The real values are compared with the PACS values. Each PACS measurement is then revised based on the display field of view (DFOV) values and compared with the real values. Results. The real values and the changes in PACS measurements according to tilt value show no significant correlation (p > 0.05). However, significant correlations appear between the real values and the DFOV-adjusted PACS measurements (p < 0.001). Hence, we obtain a correlation expression that can yield real physical values from PACS measurements. The DFOV value intervals for various age groups are also verified. Conclusion. Precise confirmation of individual preoperative lengths and precise analysis of postoperative improvements through 3D analysis are possible, which is helpful for symmetry correction in facial bone surgery.
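The DFOV-based calibration described above amounts to a linear regression of real lengths on adjusted PACS readings. A minimal sketch with invented paired measurements (the numbers below are illustrative, not the study's data):

```python
import numpy as np

# Hypothetical paired measurements (mm): Vernier calipers on the skull
# model (real values) vs. PACS readings rescaled by display field of view.
real_mm = np.array([38.2, 45.1, 52.7, 61.3, 70.8, 84.5])
pacs_dfov_mm = np.array([37.5, 44.0, 51.8, 60.1, 69.6, 83.0])

# Least-squares calibration line: real ≈ a * PACS_adjusted + b
a, b = np.polyfit(pacs_dfov_mm, real_mm, 1)
predicted = a * pacs_dfov_mm + b

# Pearson correlation, analogous to the significant correlation reported
r = np.corrcoef(pacs_dfov_mm, real_mm)[0, 1]
```

Once fitted, the coefficients (a, b) convert any new DFOV-adjusted PACS measurement into an estimate of the true physical length.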
NASA Astrophysics Data System (ADS)
Brix, H.; Menemenlis, D.; Hill, C.; Dutkiewicz, S.; Jahn, O.; Wang, D.; Bowman, K.; Zhang, H.
2015-11-01
The NASA Carbon Monitoring System (CMS) Flux Project aims to attribute changes in the atmospheric accumulation of carbon dioxide to spatially resolved fluxes by utilizing the full suite of NASA data, models, and assimilation capabilities. For the oceanic part of this project, we introduce ECCO2-Darwin, a new ocean biogeochemistry general circulation model based on combining the following pre-existing components: (i) a full-depth, eddying, global-ocean configuration of the Massachusetts Institute of Technology general circulation model (MITgcm), (ii) an adjoint-method-based estimate of ocean circulation from the Estimating the Circulation and Climate of the Ocean, Phase II (ECCO2) project, (iii) the MIT ecosystem model "Darwin", and (iv) a marine carbon chemistry model. Air-sea gas exchange coefficients and initial conditions of dissolved inorganic carbon, alkalinity, and oxygen are adjusted using a Green's Functions approach in order to optimize modeled air-sea CO2 fluxes. Data constraints include observations of carbon dioxide partial pressure (pCO2) for 2009-2010, global air-sea CO2 flux estimates, and the seasonal cycle of the Takahashi et al. (2009) Atlas. The model sensitivity experiments (or Green's Functions) include simulations that start from different initial conditions as well as experiments that perturb air-sea gas exchange parameters and the ratio of particulate inorganic to organic carbon. The Green's Functions approach yields a linear combination of these sensitivity experiments that minimizes model-data differences. The resulting initial conditions and gas exchange coefficients are then used to integrate the ECCO2-Darwin model forward. Despite the small number (six) of control parameters, the adjusted simulation is significantly closer to the data constraints (37% cost function reduction, i.e., reduction in the model-data difference, relative to the baseline simulation) and to independent observations (e.g., alkalinity). The adjusted air-sea gas
The rotational feedback on linear-momentum balance in glacial isostatic adjustment
NASA Astrophysics Data System (ADS)
Martinec, Zdenek; Hagedoorn, Jan
2015-04-01
The influence of changes in surface ice-mass redistribution and the associated viscoelastic response of the Earth, known as glacial-isostatic adjustment (GIA), on the Earth's rotational dynamics has long been known. Equally important is the effect of the changes in the rotational dynamics on the viscoelastic deformation of the Earth. This signal, known as the rotational feedback, or more precisely, the rotational feedback on the sea-level equation, has been mathematically described by the sea-level equation extended by a term that is proportional to the perturbation in the centrifugal potential and the second-degree tidal Love number. The perturbation in the centrifugal force due to changes in the Earth's rotational dynamics enters not only into the sea-level equation, but also into the conservation law of linear momentum, such that the internal viscoelastic force, the perturbation in the gravitational force, and the perturbation in the centrifugal force are in balance. Adding the centrifugal-force perturbation to the linear-momentum balance creates an additional rotational feedback on the viscoelastic deformations of the Earth. We term this feedback mechanism the rotational feedback on the linear-momentum balance. We extend both the time-domain method for modelling the GIA response of laterally heterogeneous earth models and the traditional Laplace-domain method for modelling the GIA-induced rotational response to surface loading by considering the rotational feedback on the linear-momentum balance. The correctness of the mathematical extensions of the methods is validated numerically by comparing the polar motion response to the GIA process and the rotationally induced degree 2 and order 1 spherical harmonic component of the surface vertical displacement and gravity field. We present the difference between the case where the rotational feedback on the linear-momentum balance is considered and the case where it is not. Numerical simulations show that the resulting difference
Extracting Embedded Generalized Networks from Linear Programming Problems.
1984-09-01
Extracting Embedded Generalized Networks from Linear Programming Problems, by Gerald G. Brown, Richard D. McBride, and R. Kevin Wood. Gerald G. Brown and R. Kevin Wood, Naval Postgraduate School, Monterey, California 93943; Richard D. McBride, University of Southern California, Los Angeles.
Generalization of continuous-variable quantum cloning with linear optics
Zhai Zehui; Guo Juan; Gao Jiangrui
2006-05-15
We propose an asymmetric quantum cloning scheme. Based on the proposal and experiment by Andersen et al. [Phys. Rev. Lett. 94, 240503 (2005)], we generalize it to two asymmetric cases: quantum cloning with asymmetry between output clones and between quadrature variables. These optical implementations also employ linear elements and homodyne detection only. Finally, we also compare the utility of symmetric and asymmetric cloning in an analysis of a squeezed-state quantum key distribution protocol and find that the asymmetric one is more advantageous.
Generalized space and linear momentum operators in quantum mechanics
Costa, Bruno G. da
2014-06-15
We propose a modification of a recently introduced generalized translation operator, obtained by including a q-exponential factor, which implies the definition of a Hermitian deformed linear momentum operator p̂_q and its canonically conjugate deformed position operator x̂_q. A canonical transformation maps the Hamiltonian of a particle with position-dependent mass to the Hamiltonian of a particle with constant mass in a conservative force field of a deformed phase space. The equation of motion for the classical phase space may be expressed in terms of the generalized dual q-derivative. A particle with position-dependent mass confined in an infinite square potential well is presented as an example. Uncertainty and correspondence principles are analyzed.
General quantum constraints on detector noise in continuous linear measurements
NASA Astrophysics Data System (ADS)
Miao, Haixing
2017-01-01
In quantum sensing and metrology, an important class of measurement is the continuous linear measurement, in which the detector is coupled to the system of interest linearly and continuously in time. One key aspect involved is the quantum noise of the detector, arising from quantum fluctuations in the detector input and output. It determines how fast we acquire information about the system and also influences the system evolution in terms of measurement backaction. We therefore often categorize it as the so-called imprecision noise and quantum backaction noise. There is a general Heisenberg-like uncertainty relation that constrains the magnitude of and the correlation between these two types of quantum noise. The main result of this paper is to show that, when the detector becomes ideal, i.e., at the quantum limit with minimum uncertainty, not only does the uncertainty relation take the equal sign as expected, but two additional equalities also hold. This general result is illustrated using the typical cavity QED setup with the system being either a qubit or a mechanical oscillator. In particular, the dispersive readout of a qubit state and the measurement of mechanical motional sideband asymmetry are considered.
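The Heisenberg-like constraint referred to above is commonly written in terms of symmetrized noise spectral densities; the notation below is a conventional sketch, not taken from the paper itself:

```latex
\bar{S}_{ZZ}(\Omega)\,\bar{S}_{FF}(\Omega)-\left|\bar{S}_{ZF}(\Omega)\right|^{2}\;\ge\;\frac{\hbar^{2}}{4}
```

Here $\bar{S}_{ZZ}$ is the imprecision-noise spectrum, $\bar{S}_{FF}$ the backaction-noise spectrum, and $\bar{S}_{ZF}$ their cross-correlation; an ideal detector at the quantum limit saturates the inequality.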
Generalized linear mixed model for segregation distortion analysis
2011-01-01
Background Segregation distortion is a phenomenon in which the observed genotypic frequencies of a locus fall outside the expected Mendelian segregation ratio. The main cause of segregation distortion is viability selection on linked marker loci. These viability selection loci can be mapped using genome-wide marker information. Results We developed a generalized linear mixed model (GLMM) under the liability model to jointly map all viability selection loci of the genome. Using a hierarchical generalized linear mixed model, we can handle a number of loci several times larger than the sample size. We used a dataset from an F2 mouse family derived from the cross of two inbred lines to test the model and detected a major segregation distortion locus contributing 75% of the variance of the underlying liability. Replicated simulation experiments confirm that the power of viability locus detection is high and the false positive rate is low. Conclusions Not only can the method be used to detect segregation distortion loci, but it can also be used for mapping quantitative trait loci of disease traits using case-only data in humans and selected populations in plants and animals. PMID:22078575
A new family of gauges in linearized general relativity
NASA Astrophysics Data System (ADS)
Esposito, Giampiero; Stornaiolo, Cosimo
2000-05-01
For vacuum Maxwell theory in four dimensions, a supplementary condition exists (due to Eastwood and Singer) which is invariant under conformal rescalings of the metric, in agreement with the conformal symmetry of the Maxwell equations. Thus, starting from the de Donder gauge, which is not conformally invariant but is the gravitational counterpart of the Lorenz gauge, one can consider, led by formal analogy, a new family of gauges in general relativity, which involve fifth-order covariant derivatives of metric perturbations. The admissibility of such gauges in the classical theory is first proven in the cases of linearized theory about flat Euclidean space or flat Minkowski spacetime. In the former, the general solution of the equation for the fulfillment of the gauge condition after infinitesimal diffeomorphisms involves a 3-harmonic 1-form and an inverse Fourier transform. In the latter, one needs instead the kernel of powers of the wave operator, and a contour integral. The analysis is also used to put restrictions on the dimensionless parameter occurring in the DeWitt supermetric, while the proof of admissibility is generalized to a suitable class of curved Riemannian backgrounds. Eventually, a non-local construction of the tensor field is obtained which makes it possible to achieve conformal invariance of the above gauges.
Optimization in generalized linear models: A case study
NASA Astrophysics Data System (ADS)
Silva, Eliana Costa e.; Correia, Aldina; Lopes, Isabel Cristina
2016-06-01
The maximum likelihood method is usually chosen to estimate the regression parameters of Generalized Linear Models (GLM), and it also underlies hypothesis testing and goodness-of-fit tests. The classical method for estimating GLM parameters is Fisher scoring. In this work we propose to compute the estimates of the parameters with two alternative methods: a derivative-based optimization method, namely the BFGS method, which is one of the most popular quasi-Newton algorithms, and the PSwarm derivative-free optimization method, which combines features of a pattern search optimization method with a global Particle Swarm scheme. As a case study we use a dataset of biological parameters (phytoplankton) and chemical and environmental parameters of the water column of a Portuguese reservoir. The results show that, for this dataset, the BFGS and PSwarm methods provided a better fit than the Fisher scoring method, and can be good alternatives for finding the estimates of the parameters of a GLM.
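The comparison between Fisher scoring and a quasi-Newton optimizer can be sketched on a synthetic Poisson GLM (the reservoir dataset and the PSwarm method are not reproduced here; SciPy's BFGS stands in for the derivative-based option, and the data below are simulated):

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(42)

# Simulated Poisson-GLM data with a log link (stand-in for the case-study data)
n = 500
X = np.column_stack([np.ones(n), rng.normal(size=n)])
beta_true = np.array([0.5, -0.8])
y = rng.poisson(np.exp(X @ beta_true))

def negloglik(beta):
    """Negative Poisson log-likelihood, up to the constant sum(log y!)."""
    eta = X @ beta
    return -np.sum(y * eta - np.exp(eta))

def grad(beta):
    """Analytic gradient of the negative log-likelihood."""
    return -(X.T @ (y - np.exp(X @ beta)))

# Quasi-Newton estimate (BFGS)
bfgs = minimize(negloglik, x0=np.zeros(2), jac=grad, method="BFGS").x

# Fisher scoring (IRLS): for the canonical log link the weights are W = mu
beta = np.zeros(2)
for _ in range(25):
    mu = np.exp(X @ beta)
    z = X @ beta + (y - mu) / mu                  # working response
    beta = np.linalg.solve(X.T @ (mu[:, None] * X), X.T @ (mu * z))
```

For a well-conditioned likelihood both routes converge to the same maximum likelihood estimate; differences between methods show up mainly in robustness and in how many function evaluations are needed.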
NASA Astrophysics Data System (ADS)
Caron, L.; Métivier, L.; Greff-Lefftz, M.; Fleitout, L.; Rouby, H.
2017-02-01
Glacial Isostatic Adjustment (GIA) models commonly assume a mantle with a viscoelastic Maxwell rheology and a fixed ice history model. Here, we use a Bayesian Monte Carlo approach with a Markov Chain formalism to invert the global GIA signal simultaneously for the mechanical properties of the mantle and the volumes of the ice sheets, using as starting ice models two previously published ice histories. Two stress-relaxation rheologies are considered: Burgers and Maxwell linear viscoelasticities. A total of 5720 global paleo sea-level records are used, covering the last 35 kyr. Our goal is not only to seek the model best fitting this data set, but also to determine and display the range of possible solutions with their respective probabilities of explaining the data. In all cases our a posteriori probability maps exhibit the classic character of solutions for GIA-determined mantle viscosity, with two distinct peaks. What is new in our treatment is the presence of the bi-viscous Burgers rheology and the fact that we invert rheology jointly with ice history, in combination with the greatly expanded paleo sea-level records. The solutions tend to be characterized by an upper mantle viscosity of around 5 × 10^20 Pa s, with one preferred lower mantle viscosity at 3 × 10^21 Pa s and another at more than 2 × 10^22 Pa s, a rather classical pairing. Best-fitting models depend upon the starting ice history and the stress-relaxation law. The first peak (P1) has the highest probability only in the case with a Maxwell rheology and an ice history based on ICE-5G, while the second peak (P2) is favoured for the ANU-based ice history or Burgers stress relaxation. The latter solution may also satisfy lower mantle viscosity inferences from long-term geodynamics and gravity gradient anomalies over Laurentia. P2 is also consistent with large Laurentian and Fennoscandian ice-sheet volumes at the Last Glacial Maximum (LGM) and a smaller LGM Antarctic ice volume than in either ICE-5G or ANU. Exploration of a bi
Process Setting through General Linear Model and Response Surface Method
NASA Astrophysics Data System (ADS)
Senjuntichai, Angsumalin
2010-10-01
The objective of this study is to improve the efficiency of the flow-wrap packaging process in the soap industry through the reduction of defectives. At the 95% confidence level, regression analysis shows the sealing temperature and the temperatures of the upper and lower crimpers to be the significant factors in the flow-wrap process with respect to the number/percentage of defectives. Twenty-seven experiments were designed and performed according to three levels of each controllable factor. With the general linear model (GLM), the suggested values for the sealing temperature and the temperatures of the upper and lower crimpers are 185, 85 and 85 °C, respectively, while the response surface method (RSM) provides the optimal process conditions at 186, 89 and 88 °C. Because the two methods make different assumptions about the relationship between the percentage of defectives and the three temperature parameters, their suggested conditions differ slightly. Fortunately, the estimated percentage of defectives of 5.51% under the GLM process condition and the predicted percentage of 4.62% under the RSM process condition are not significantly different. At the 95% confidence level, however, the percentage of defectives under the RSM condition can be as low as approximately 2.16%, lower than under the GLM condition, in accordance with its wider variation. Lastly, the percentages of defectives under the conditions suggested by GLM and RSM are reduced by 55.81% and 62.95%, respectively.
Kizilkaya, Kadir; Tempelman, Robert J
2005-01-01
We propose a general Bayesian approach to heteroskedastic error modeling for generalized linear mixed models (GLMM) in which linked functions of conditional means and residual variances are specified as separate linear combinations of fixed and random effects. We focus on the linear mixed model (LMM) analysis of birth weight (BW) and the cumulative probit mixed model (CPMM) analysis of calving ease (CE). The deviance information criterion (DIC) was demonstrated to be useful in correctly choosing between homoskedastic and heteroskedastic error GLMM for both traits when data was generated according to a mixed model specification for both location parameters and residual variances. Heteroskedastic error LMM and CPMM were fitted, respectively, to BW and CE data on 8847 Italian Piemontese first parity dams in which residual variances were modeled as functions of fixed calf sex and random herd effects. The posterior mean residual variance for male calves was over 40% greater than that for female calves for both traits. Also, the posterior means of the standard deviation of the herd-specific variance ratios (relative to a unitary baseline) were estimated to be 0.60 ± 0.09 for BW and 0.74 ± 0.14 for CE. For both traits, the heteroskedastic error LMM and CPMM were chosen over their homoskedastic error counterparts based on DIC values. PMID:15588567
A general protocol to afford enantioenriched linear homoprenylic amines.
Bosque, Irene; Foubelo, Francisco; Gonzalez-Gomez, Jose C
2013-11-21
The reaction of a readily obtained chiral branched homoprenylamonium salt with a range of aldehydes, including aliphatic substrates, affords the corresponding linear isomers in good yields and enantioselectivities.
Generalized signaling for control: evidence from postconflict and posterror performance adjustments.
Cho, Raymond Y; Orr, Joseph M; Cohen, Jonathan D; Carter, Cameron S
2009-08-01
Goal-directed behavior requires cognitive control to effect online adjustments in response to ongoing processing demands. How signaling for these adjustments occurs has been a question of much interest. A basic question regarding the architecture of the cognitive control system is whether such signaling for control is specific to task context or generalizes across contexts. In this study, the authors explored this issue using a stimulus-response compatibility paradigm. They examined trial-to-trial adjustments, specifically, the findings that incompatible trials elicit improved performance on subsequent incompatible trials and that responses are slower after errors. The critical question was, Do such control effects-typically observed within a single task context-occur across task contexts? The paradigm involved 2 orthogonal, stimulus-response sets: Stimuli in the horizontal direction mapped only to responses in the horizontal direction, and likewise for the vertical direction. Cues indicated that either compatible (same direction as stimulus) or incompatible (opposite to stimulus) responses were required. The results showed that trial-to-trial adjustments exist for both direction-repeat and direction-switch trials, demonstrating that signaling for control adjustments can extend beyond the task context within which they arise.
Connections between Generalizing and Justifying: Students' Reasoning with Linear Relationships
ERIC Educational Resources Information Center
Ellis, Amy B.
2007-01-01
Research investigating algebra students' abilities to generalize and justify suggests that they experience difficulty in creating and using appropriate generalizations and proofs. Although the field has documented students' errors, less is known about what students do understand to be general and convincing. This study examines the ways in which…
Abad, Cesar C. C.; Barros, Ronaldo V.; Bertuzzi, Romulo; Gagliardi, João F. L.; Lima-Silva, Adriano E.; Lambert, Mike I.
2016-01-01
The aim of this study was to verify the power of VO2max, peak treadmill running velocity (PTV), and running economy (RE), unadjusted or allometrically adjusted, in predicting 10 km running performance. Eighteen male endurance runners performed: 1) an incremental test to exhaustion to determine VO2max and PTV; 2) a constant submaximal run at 12 km·h^-1 on an outdoor track for RE determination; and 3) a 10 km running race. Unadjusted (VO2max, PTV and RE) and adjusted variables (VO2max^0.72, PTV^0.72 and RE^0.60) were investigated through independent multiple regression models to predict 10 km running race time. There were no significant correlations between 10 km running time and either the adjusted or unadjusted VO2max. Significant correlations (p < 0.01) were found between 10 km running time and both the adjusted and unadjusted RE and PTV, providing models with effect size > 0.84 and power > 0.88. The allometrically adjusted predictive model was composed of PTV^0.72 and RE^0.60 and explained 83% of the variance in 10 km running time with a standard error of the estimate (SEE) of 1.5 min. The unadjusted model, composed of PTV alone, accounted for 72% of the variance in 10 km running time (SEE of 1.9 min). Both regression models provided powerful estimates of 10 km running time; however, the unadjusted PTV may provide an uncomplicated estimation. PMID:28149382
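The allometrically adjusted regression form described above can be illustrated with a small synthetic example (the runner values below are invented for illustration and are not the study's measurements):

```python
import numpy as np

# Hypothetical runner data: peak treadmill velocity (km/h), running
# economy (ml/kg/min at 12 km/h), and 10 km race time (min).
ptv = np.array([18.5, 19.2, 20.1, 17.8, 21.0, 19.7, 18.1, 20.5])
re  = np.array([44.0, 42.5, 40.8, 46.1, 39.5, 41.9, 45.2, 40.1])
t10 = np.array([38.9, 37.6, 35.8, 40.7, 34.5, 36.9, 39.8, 35.2])

# Allometric adjustment as in the abstract: PTV^0.72 and RE^0.60
Xadj = np.column_stack([np.ones_like(ptv), ptv**0.72, re**0.60])
coef, *_ = np.linalg.lstsq(Xadj, t10, rcond=None)
pred = Xadj @ coef

# Coefficient of determination of the fitted model
r2 = 1 - np.sum((t10 - pred)**2) / np.sum((t10 - np.mean(t10))**2)
```

The explained-variance figure (83% in the study) corresponds to this R² computed from the allometrically transformed predictors.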
Rossi, D J; Kress, D D; Tess, M W; Burfening, P J
1992-05-01
Standard linear adjustment of weaning weight to a constant age has been shown to introduce bias in the adjusted weight due to nonlinear growth from birth to weaning of beef calves. Ten years of field records from the five strains of Beefbooster Cattle Alberta Ltd. seed stock herds were used to investigate the use of correction factors to adjust standard 180-d weight (WT180) for this bias. Statistical analyses were performed within strain and followed three steps: 1) the full data set was split into an estimation set (ES) and a validation set (VS), 2) WT180 from the ES was used to develop estimates of correction factors using a model including herd (H), year (YR), age of dam (DA), sex of calf (S), all two and three-way interactions, and any significant linear and quadratic covariates of calf age at weaning deviated from 180 d (DEVCA) and interactions between DEVCA and DA, S or DA x S, and 3) significant DEVCA coefficients were used to correct WT180 from the VS, then WT180 and the corrected weight (WTCOR) from the VS were analyzed with the same model as in Step 2 and significance of DEVCA terms were compared. Two types of data splitting were used. Adjusted R2 was calculated to describe the proportion of total variation of DEVCA terms explained for WT180 from the ES. The DEVCA terms explained .08 to 1.54% of the total variation for the five strains. Linear and quadratic correction factors were both positive and negative. Bias in WT180 from the ES within 180 +/- 35 d of age ranged from 2.8 to 21.7 kg.(ABSTRACT TRUNCATED AT 250 WORDS)
Adaptive adjustment of the generalization-discrimination balance in larval Drosophila.
Mishra, Dushyant; Louis, Matthieu; Gerber, Bertram
2010-09-01
Learnt predictive behavior faces a dilemma: predictive stimuli will never 'replay' exactly as during the learning event, requiring generalization. In turn, minute differences can become meaningful, prompting discrimination. To provide a study case for an adaptive adjustment of this generalization-discrimination balance, the authors ask whether Drosophila melanogaster larvae are able to either generalize or discriminate between two odors (1-octen-3-ol and 3-octanol), depending on the task. The authors find that after discriminatively rewarding one but not the other odor, larvae show conditioned preference for the rewarded odor. On the other hand, no odor specificity is observed after nondiscriminative training, even if the test involves a choice between both odors. Thus, for this odor pair at least, discrimination training is required to confer an odor-specific memory trace. This requires that there is at least some difference in processing between the two odors already at the beginning of the training. Therefore, as a default, there is a small yet salient difference in processing between 1-octen-3-ol and 3-octanol; this difference is ignored after nondiscriminative training (generalization), whereas it is accentuated by odor-specific reinforcement (discrimination). Given that, as the authors show, both faculties are lost in anosmic Or83b(1) mutants, this indicates an adaptive adjustment of the generalization-discrimination balance in larval Drosophila, taking place downstream of Or83b-expressing sensory neurons.
Generalizing a Categorization of Students' Interpretations of Linear Kinematics Graphs
ERIC Educational Resources Information Center
Bollen, Laurens; De Cock, Mieke; Zuza, Kristina; Guisasola, Jenaro; van Kampen, Paul
2016-01-01
We have investigated whether and how a categorization of responses to questions on linear distance-time graphs, based on a study of Irish students enrolled in an algebra-based course, could be adopted and adapted to responses from students enrolled in calculus-based physics courses at universities in Flanders, Belgium (KU Leuven) and the Basque…
A General Linear Method for Equating with Small Samples
ERIC Educational Resources Information Center
Albano, Anthony D.
2015-01-01
Research on equating with small samples has shown that methods with stronger assumptions and fewer statistical estimates can lead to decreased error in the estimated equating function. This article introduces a new approach to linear observed-score equating, one which provides flexible control over how form difficulty is assumed versus estimated…
On the Feasibility of a Generalized Linear Program
1989-03-01
generalized linear program by applying the same algorithm to a "phase-one" problem, without requiring that the initial basic feasible solution to the latter be non-degenerate.
Guisan, A.; Edwards, T.C.; Hastie, T.
2002-01-01
An important statistical development of the last 30 years has been the advance in regression analysis provided by generalized linear models (GLMs) and generalized additive models (GAMs). Here we introduce a series of papers prepared within the framework of an international workshop entitled "Advances in GLMs/GAMs modeling: from species distribution to environmental management," held in Riederalp, Switzerland, 6-11 August 2001. We first discuss some general uses of statistical models in ecology, as well as provide a short review of several key examples of the use of GLMs and GAMs in ecological modeling efforts. We next present an overview of GLMs and GAMs, and discuss some of their related statistics used for predictor selection, model diagnostics, and evaluation. Included is a discussion of several new approaches applicable to GLMs and GAMs, such as ridge regression, an alternative to stepwise selection of predictors, and methods for the identification of interactions by a combined use of regression trees and several other approaches. We close with an overview of the papers and how we feel they advance our understanding of their application to ecological modeling. © 2002 Elsevier Science B.V. All rights reserved.
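The GLM machinery these papers review is conventionally fitted by iteratively reweighted least squares (IRLS). Below is a minimal, self-contained sketch for a Poisson GLM with log link; the data and coefficients are hypothetical, and this is an illustration of the standard algorithm, not any software from the workshop:

```python
import numpy as np

def poisson_irls(X, y, n_iter=25):
    """Fit a Poisson GLM with log link by Fisher scoring / IRLS.
    X: (n, p) design matrix including a column of ones for the intercept."""
    beta = np.zeros(X.shape[1])
    beta[0] = np.log(y.mean() + 1e-8)       # start near the marginal mean
    for _ in range(n_iter):
        mu = np.exp(X @ beta)               # inverse link
        W = mu                              # Poisson variance function: Var(Y) = mu
        # Scoring step: solve (X' W X) delta = X' (y - mu)
        delta = np.linalg.solve(X.T @ (W[:, None] * X), X.T @ (y - mu))
        beta = beta + delta
    return beta

rng = np.random.default_rng(0)
n = 5000
x = rng.normal(size=n)
X = np.column_stack([np.ones(n), x])
y = rng.poisson(np.exp(0.5 + 0.8 * x))      # true coefficients (0.5, 0.8)
print(poisson_irls(X, y))                   # estimates close to [0.5, 0.8]
```

A GAM replaces the linear predictor `X @ beta` with a sum of smooth functions of the predictors, fitted by the same reweighting scheme with an added smoothing step.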
Generalizing a categorization of students' interpretations of linear kinematics graphs
NASA Astrophysics Data System (ADS)
Bollen, Laurens; De Cock, Mieke; Zuza, Kristina; Guisasola, Jenaro; van Kampen, Paul
2016-06-01
We have investigated whether and how a categorization of responses to questions on linear distance-time graphs, based on a study of Irish students enrolled in an algebra-based course, could be adopted and adapted to responses from students enrolled in calculus-based physics courses at universities in Flanders, Belgium (KU Leuven) and the Basque Country, Spain (University of the Basque Country). We discuss how we adapted the categorization to accommodate a much more diverse student cohort and explain how the prior knowledge of students may account for many differences in the prevalence of approaches and success rates. Although calculus-based physics students make fewer mistakes than algebra-based physics students, they encounter similar difficulties that are often related to incorrectly dividing two coordinates. We verified that a qualitative understanding of kinematics is an important but not sufficient condition for students to determine a correct value for the speed. When comparing responses to questions on linear distance-time graphs with responses to isomorphic questions on linear water level versus time graphs, we observed that the context of a question influences the approach students use. Neither qualitative understanding nor an ability to find the slope of a context-free graph proved to be a reliable predictor for the approach students use when they determine the instantaneous speed.
Hobbs, Brian P; Sargent, Daniel J; Carlin, Bradley P
2012-08-28
Assessing between-study variability in the context of conventional random-effects meta-analysis is notoriously difficult when incorporating data from only a small number of historical studies. In order to borrow strength, historical and current data are often assumed to be fully homogeneous, but this can have drastic consequences for power and Type I error if the historical information is biased. In this paper, we propose empirical and fully Bayesian modifications of the commensurate prior model (Hobbs et al., 2011) extending Pocock (1976), and evaluate their frequentist and Bayesian properties for incorporating patient-level historical data using general and generalized linear mixed regression models. Our proposed commensurate prior models lead to preposterior admissible estimators that facilitate bias-variance trade-offs beyond those offered by pre-existing methodologies for incorporating historical data from a small number of historical studies. We also provide a sample analysis of a colon cancer trial comparing time-to-disease progression using a Weibull regression model.
Generalized linear IgA dermatosis with palmar involvement.
Norris, Ivy N; Haeberle, M Tye; Callen, Jeffrey P; Malone, Janine C
2015-09-17
Linear IgA bullous dermatosis (LABD) is a sub-epidermal blistering disorder characterized by deposition of IgA along the basement membrane zone (BMZ) as detected by immunofluorescence microscopy. The diagnosis is made by clinicopathologic correlation with immunofluorescence confirmation. Differentiation from other bullous dermatoses is important because therapeutic measures differ. Prompt initiation of the appropriate therapies can have a major impact on outcomes. We present three cases with prominent palmar involvement to alert the clinician of this potential physical exam finding and to consider LABD in the right context.
Johnson, Glen D; Mesler, Kristine; Kacica, Marilyn A
2017-02-06
Objective The objective is to estimate community needs with respect to risky adolescent sexual behavior in a way that is risk-adjusted for multiple community factors. Methods Generalized linear mixed modeling was applied for estimating teen pregnancy and sexually transmitted disease (STD) incidence by postal ZIP code in New York State, in a way that adjusts for other community covariables and residual spatial autocorrelation. A community needs index was then obtained by summing the risk-adjusted estimates of pregnancy and STD cases. Results Poisson regression with a spatial random effect was chosen among competing modeling approaches. Both the risk-adjusted caseloads and rates were computed for ZIP codes, which allowed risk-based prioritization to help guide funding decisions for a comprehensive adolescent pregnancy prevention program. Conclusions This approach provides quantitative evidence of community needs with respect to risky adolescent sexual behavior, while adjusting for other community-level variables and stabilizing estimates in areas with small populations. Therefore, it was well accepted by the affected groups and proved valuable for program planning. This methodology may also prove valuable for follow up program evaluation. Current research is directed towards further improving the statistical modeling approach and applying to different health and behavioral outcomes, along with different predictor variables.
NASA Astrophysics Data System (ADS)
Rudy, Ashley C. A.; Lamoureux, Scott F.; Treitz, Paul; van Ewijk, Karin Y.
2016-07-01
To effectively assess and mitigate risk of permafrost disturbance, disturbance-prone areas can be predicted through the application of susceptibility models. In this study we developed regional susceptibility models for permafrost disturbances using a field disturbance inventory to test the transferability of the model to a broader region in the Canadian High Arctic. Resulting maps of susceptibility were then used to explore the effect of terrain variables on the occurrence of disturbances within this region. To account for a large range of landscape characteristics, the model was calibrated using two locations: Sabine Peninsula, Melville Island, NU, and Fosheim Peninsula, Ellesmere Island, NU. Spatial patterns of disturbance were predicted with a generalized linear model (GLM) and generalized additive model (GAM), each calibrated using disturbed and randomized undisturbed locations from both locations and GIS-derived terrain predictor variables including slope, potential incoming solar radiation, wetness index, topographic position index, elevation, and distance to water. Each model was validated for the Sabine and Fosheim Peninsulas using independent data sets while the transferability of the model to an independent site was assessed at Cape Bounty, Melville Island, NU. The regional GLM and GAM validated well for both calibration sites (Sabine and Fosheim) with the area under the receiver operating curves (AUROC) > 0.79. Both models were applied directly to Cape Bounty without calibration and validated equally with AUROC's of 0.76; however, each model predicted disturbed and undisturbed samples differently. Additionally, the sensitivity of the transferred model was assessed using data sets with different sample sizes. Results indicated that models based on larger sample sizes transferred more consistently and captured the variability within the terrain attributes in the respective study areas. Terrain attributes associated with the initiation of disturbances were
Item Purification in Differential Item Functioning Using Generalized Linear Mixed Models
ERIC Educational Resources Information Center
Liu, Qian
2011-01-01
For this dissertation, four item purification procedures were implemented onto the generalized linear mixed model for differential item functioning (DIF) analysis, and the performance of these item purification procedures was investigated through a series of simulations. Among the four procedures, forward and generalized linear mixed model (GLMM)…
Inverting Glacial Isostatic Adjustment beyond linear viscoelasticity using the Burgers rheology
NASA Astrophysics Data System (ADS)
Caron, Lambert; Greff-Lefftz, Marianne; Fleitout, Luce; Métivier, Laurent; Rouby, Hélène
2015-04-01
In Glacial Isostatic Adjustment (GIA) inverse modeling, the usual assumption for the mantle rheology is the Maxwell model, which exhibits constant viscosity over time. However, mineral physics experiments and post-seismic observations show evidence of a transient component in the deformation of the shallow mantle, with a short-term viscosity lower than the long-term one. In these studies, the resulting rheology is modeled by a Burgers material: such rheology is indeed expected as the mantle is a mixture of materials with different viscosities. We propose to apply this rheology for the whole viscoelastic mantle, and, using a Bayesian MCMC inverse formalism for GIA during the last glacial cycle, study its impact on estimations of viscosity values, elastic thickness of the lithosphere, and ice distribution. To perform this inversion, we use a global dataset of sea level records, the geological constraints of ice-sheet margins, and present-day GPS data as well as satellite gravimetry. Our ambition is to present not only the best fitting model, but also the range of possible solutions (within the explored space of parameters) with their respective probability of explaining the data. Our results show that the Burgers model is able to fit the dataset as well as the Maxwell model, but would imply a larger lower mantle viscosity, thicker ice sheets over Fennoscandia and Canada, and thinner ice sheets over Antarctica and Greenland.
Computer analysis of general linear networks using digraphs.
NASA Technical Reports Server (NTRS)
Mcclenahan, J. O.; Chan, S.-P.
1972-01-01
Investigation of the application of digraphs in analyzing general electronic networks, and development of a computer program based on a particular digraph method developed by Chen. The Chen digraph method is a topological method for solution of networks and serves as a shortcut when hand calculations are required. The advantage offered by this method of analysis is that the results are in symbolic form. It is limited, however, by the size of network that may be handled. Usually hand calculations become too tedious for networks larger than about five nodes, depending on how many elements the network contains. Direct determinant expansion for a five-node network is a very tedious process also.
Bayesian generalized linear mixed modeling of Tuberculosis using informative priors
Woldegerima, Woldegebriel Assefa
2017-01-01
TB is rated as one of the world's deadliest diseases, and South Africa ranks 9th among the 22 countries hardest hit by TB. Although much research has been carried out on this subject, this paper goes further by incorporating past knowledge into the model, using a Bayesian approach with an informative prior. The Bayesian approach is becoming popular in data analysis, but most applications of Bayesian inference are limited to situations with non-informative priors, where there is no solid external information about the distribution of the parameter of interest. The main aim of this study is to profile people living with TB in South Africa. In this paper, identical regression models are fitted under the classical approach and under the Bayesian approach with both non-informative and informative priors, using South African General Household Survey (GHS) data for the year 2014. For the Bayesian model with an informative prior, the South African General Household Survey datasets for the years 2011 to 2013 are used to construct priors for the 2014 model. PMID:28257437
Bayesian generalized linear mixed modeling of Tuberculosis using informative priors.
Ojo, Oluwatobi Blessing; Lougue, Siaka; Woldegerima, Woldegebriel Assefa
2017-01-01
TB is rated as one of the world's deadliest diseases, and South Africa ranks 9th among the 22 countries hardest hit by TB. Although much research has been carried out on this subject, this paper goes further by incorporating past knowledge into the model, using a Bayesian approach with an informative prior. The Bayesian approach is becoming popular in data analysis, but most applications of Bayesian inference are limited to situations with non-informative priors, where there is no solid external information about the distribution of the parameter of interest. The main aim of this study is to profile people living with TB in South Africa. In this paper, identical regression models are fitted under the classical approach and under the Bayesian approach with both non-informative and informative priors, using South African General Household Survey (GHS) data for the year 2014. For the Bayesian model with an informative prior, the South African General Household Survey datasets for the years 2011 to 2013 are used to construct priors for the 2014 model.
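The core idea of building an informative prior from earlier survey waves and updating it with the current wave can be shown with the simplest conjugate case. This Beta-Binomial sketch uses hypothetical counts, not the paper's GHS figures or its GLMM:

```python
# Conjugate Beta-Binomial illustration of an informative prior built from
# historical data, then updated with current data. All counts are hypothetical.

# Historical waves (2011-2013-style data): k_hist cases out of n_hist respondents.
k_hist, n_hist = 120, 9000
a0, b0 = 1.0, 1.0                         # flat Beta(1, 1) base prior
a_prior = a0 + k_hist                     # informative prior parameters
b_prior = b0 + n_hist - k_hist

# Current wave (2014-style data): conjugate update of the informative prior.
k, n = 15, 1000
a_post, b_post = a_prior + k, b_prior + n - k

prior_mean = a_prior / (a_prior + b_prior)
mle = k / n
post_mean = a_post / (a_post + b_post)
# The posterior mean is a compromise between the historical prior mean
# and the current-wave maximum likelihood estimate.
print(prior_mean, mle, post_mean)
```

In the paper's setting the same borrowing happens inside a GLMM rather than for a single proportion, but the shrinkage of the 2014 estimate toward the 2011-2013 information is the same mechanism.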
Determinants of hospital closure in South Korea: use of a hierarchical generalized linear model.
Noh, Maengseok; Lee, Youngjo; Yun, Sung-Cheol; Lee, Sang-Il; Lee, Moo-Song; Khang, Young-Ho
2006-11-01
Understanding causes of hospital closure is important if hospitals are to survive and continue to fulfill their missions as the center for health care in their neighborhoods. Knowing which hospitals are most susceptible to closure can be of great use for hospital administrators and others interested in hospital performance. Although prior studies have identified a range of factors associated with increased risk of hospital closure, most are US-based and do not directly relate to health care systems in other countries. We examined determinants of hospital closure in a nationally representative sample: 805 hospitals established in South Korea before 1996 were examined-hospitals established in 1996 or after were excluded. Major organizational changes (survival vs. closure) were followed for all South Korean hospitals from 1996 through 2002. With the use of a hierarchical generalized linear model, a frailty model was used to control correlation among repeated measurements for risk factors for hospital closure. Results showed that ownership and hospital size were significantly associated with hospital closure. Urban hospitals were less likely to close than rural hospitals. However, the urban location of a hospital was not associated with hospital closure after adjustment for the proportion of elderly. Two measures for hospital competition (competitive beds and 1-Hirshman--Herfindalh index) were positively associated with risk of hospital closure before and after adjustment for confounders. In addition, annual 10% change in competitive beds was significantly predictive of hospital closure. In conclusion, yearly trends in hospital competition as well as the level of hospital competition each year affected hospital survival. Future studies need to examine the contribution of internal factors such as management strategies and financial status to hospital closure in South Korea.
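The competition measure used above (one minus the Hirschman-Herfindahl index) has a direct closed form. A minimal sketch with made-up bed counts:

```python
def competition_index(beds):
    """1 - Hirschman-Herfindahl index over hospitals' bed shares in a market.
    Values near 1 indicate a fragmented, highly competitive market;
    0 indicates a monopoly. (Illustrative; bed counts are hypothetical.)"""
    total = sum(beds)
    hhi = sum((b / total) ** 2 for b in beds)
    return 1.0 - hhi

print(competition_index([100, 100, 100, 100]))  # 0.75 (four equal hospitals)
print(competition_index([400]))                 # 0.0  (monopoly)
```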
Sigurdson, J F; Wallander, J; Sund, A M
2014-10-01
The aim was to examine prospectively associations between bullying involvement at 14-15 years of age and self-reported general health and psychosocial adjustment in young adulthood, at 26-27 years of age. A large representative sample (N=2,464) was recruited and assessed in two counties in Mid-Norway in 1998 (T1) and 1999/2000 (T2) when the respondents had a mean age of 13.7 and 14.9, respectively, leading to classification as being bullied, bully-victim, being aggressive toward others or non-involved. Information about general health and psychosocial adjustment was gathered at a follow-up in 2012 (T4) (N=1,266) with a respondent mean age of 27.2. Logistic regression and ANOVA analyses showed that groups involved in bullying of any type in adolescence had increased risk for lower education as young adults compared to those non-involved. The group aggressive toward others also had a higher risk of being unemployed and receiving any kind of social help. Compared with the non-involved, those being bullied and bully-victims had increased risk of poor general health and high levels of pain. Bully-victims and those aggressive toward others during adolescence subsequently had increased risk of tobacco use and lower job functioning than non-involved. Further, those being bullied and aggressive toward others had increased risk of illegal drug use. Relations to live-in spouse/partner were poorer among those being bullied. Involvement in bullying, either as victim or perpetrator, has significant social costs even 12 years after the bullying experience. Accordingly, it will be important to provide early intervention for those involved in bullying in adolescence.
Profile local linear estimation of generalized semiparametric regression model for longitudinal data
Sun, Liuquan; Zhou, Jie
2013-01-01
This paper studies the generalized semiparametric regression model for longitudinal data where the covariate effects are constant for some and time-varying for others. Different link functions can be used to allow more flexible modelling of longitudinal data. The nonparametric components of the model are estimated using a local linear estimating equation and the parametric components are estimated through a profile estimating function. The method automatically adjusts for heterogeneity of sampling times, allowing the sampling strategy to depend on the past sampling history as well as possibly time-dependent covariates without specifically modeling such dependence. A K-fold cross-validation bandwidth selection is proposed as a working tool for locating an appropriate bandwidth. A criterion for selecting the link function is proposed to provide a better fit of the data. Large sample properties of the proposed estimators are investigated. Large sample pointwise and simultaneous confidence intervals for the regression coefficients are constructed. Formal hypothesis testing procedures are proposed to check for the covariate effects and whether the effects are time-varying. A simulation study is conducted to examine the finite sample performances of the proposed estimation and hypothesis testing procedures. The methods are illustrated with a data example. PMID:23471814
NASA Astrophysics Data System (ADS)
Fan, Ya-Jing; Cao, Huai-Xin; Meng, Hui-Xian; Chen, Liang
2016-12-01
The uncertainty principle in quantum mechanics is a fundamental relation with different forms, including Heisenberg's uncertainty relation and Schrödinger's uncertainty relation. In this paper, we prove a Schrödinger-type uncertainty relation in terms of generalized metric adjusted skew information and correlation measure by using operator monotone functions, which reads $U_\rho^{(g,f)}(A)\,U_\rho^{(g,f)}(B) \ge \frac{f(0)^2\,l}{k}\,\bigl|\mathrm{Corr}_\rho^{s(g,f)}(A,B)\bigr|^2$ for some operator monotone functions $f$ and $g$, all $n$-dimensional observables $A$, $B$, and a non-singular density matrix $\rho$. As applications, we derive some new uncertainty relations for Wigner-Yanase skew information and Wigner-Yanase-Dyson skew information.
NASA Astrophysics Data System (ADS)
Saltogianni, Vasso; Stiros, Stathis
2012-11-01
The adjustment of systems of highly non-linear, redundant equations, deriving from observations of certain geophysical processes and geodetic data, cannot be based on conventional least-squares techniques and instead relies on various numerical inversion techniques. Still, these techniques lead to solutions trapped in local minima, to correlated estimates and to solutions with poor error control. To overcome these problems, we propose an alternative numerical-topological approach inspired by lighthouse beacon navigation, usually used in 2-D, low-accuracy applications. In our approach, an m-dimensional grid G of points around the real solution (an m-dimensional vector) is first specified. Then, for each equation, an uncertainty is assigned to the corresponding measurement, and the set of grid points that satisfy the equation within this uncertainty is detected. This process is repeated for all equations, and the common intersection A of the sets of grid points is defined. From this set of grid points, which defines a space containing the real solution, we compute its center of weight, which corresponds to an estimate of the solution, and its variance-covariance matrix. An optimal solution can be obtained through optimization of the uncertainty in each observation. The efficiency of the overall process was assessed in comparison with conventional least squares adjustment.
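The set-intersection idea above can be sketched numerically in two dimensions. This toy beacon-ranging example (made-up geometry, not the authors' geodetic data) intersects the grid-point sets satisfying each distance observation within its uncertainty and takes the centroid:

```python
import numpy as np

# Locate an unknown point from distance "observations" to three beacons by
# intersecting per-observation grid-point sets, then taking the centroid.
# (Hypothetical 2-D example illustrating the topological approach.)
beacons = np.array([[0.0, 0.0], [3.0, 0.0], [0.0, 3.0]])
truth = np.array([1.0, 2.0])
obs = np.linalg.norm(beacons - truth, axis=1)   # noise-free distances for clarity
sigma = 0.2                                     # uncertainty assigned per observation

xs = np.arange(0.0, 3.0, 0.05)
grid = np.array([[x, y] for x in xs for y in xs])  # grid G around the solution

mask = np.ones(len(grid), dtype=bool)
for b, d in zip(beacons, obs):
    r = np.linalg.norm(grid - b, axis=1)
    mask &= np.abs(r - d) <= sigma              # points satisfying this equation
estimate = grid[mask].mean(axis=0)              # center of weight of the common set
print(estimate)                                 # close to (1, 2)
```

The spread of the surviving grid points around the centroid is what yields the variance-covariance estimate described in the abstract.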
Doubly robust estimation of generalized partial linear models for longitudinal data with dropouts.
Lin, Huiming; Fu, Bo; Qin, Guoyou; Zhu, Zhongyi
2017-04-03
We develop a doubly robust estimation of generalized partial linear models for longitudinal data with dropouts. Our method extends the highly efficient aggregate unbiased estimating function approach proposed in Qu et al. (2010) to a doubly robust one in the sense that under missing at random (MAR), our estimator is consistent when either the linear conditional mean condition is satisfied or a model for the dropout process is correctly specified. We begin with a generalized linear model for the marginal mean, and then move forward to a generalized partial linear model, allowing for nonparametric covariate effect by using the regression spline smoothing approximation. We establish the asymptotic theory for the proposed method and use simulation studies to compare its finite sample performance with that of Qu's method, the complete-case generalized estimating equation (GEE) and the inverse-probability weighted GEE. The proposed method is finally illustrated using data from a longitudinal cohort study.
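The double-robustness property, in its simplest cross-sectional form, can be sketched with the classical augmented inverse-probability-weighted (AIPW) mean estimator. This toy simulation of MAR dropout is an illustration of the general idea, not the authors' longitudinal estimating equations:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 20000
x = rng.normal(size=n)
y = 2.0 + x + rng.normal(size=n)            # true E[Y] = 2
pi = 1.0 / (1.0 + np.exp(-(0.5 + x)))       # P(observed | x): MAR dropout model
r = rng.uniform(size=n) < pi                # observation indicator

m = 2.0 + x                                 # outcome regression E[Y | X] (correct here)
# AIPW / doubly robust estimator of E[Y]: consistent if EITHER the outcome
# model m(x) OR the dropout model pi(x) is correctly specified.
dr = np.mean(r * y / pi - (r - pi) / pi * m)
naive = y[r].mean()                         # complete-case mean, biased upward here
print(dr, naive)
```

Because dropout is more likely for large x, the complete-case mean overestimates E[Y], while the doubly robust estimate stays near 2 even if one of the two working models were misspecified.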
NASA Technical Reports Server (NTRS)
Bakhtiari-Nejad, Maryam; Nguyen, Nhan T.; Krishnakumar, Kalmanje Srinvas
2009-01-01
This paper presents the application of Bounded Linear Stability Analysis (BLSA) method for metrics driven adaptive control. The bounded linear stability analysis method is used for analyzing stability of adaptive control models, without linearizing the adaptive laws. Metrics-driven adaptive control introduces a notion that adaptation should be driven by some stability metrics to achieve robustness. By the application of bounded linear stability analysis method the adaptive gain is adjusted during the adaptation in order to meet certain phase margin requirements. Analysis of metrics-driven adaptive control is evaluated for a linear damaged twin-engine generic transport model of aircraft. The analysis shows that the system with the adjusted adaptive gain becomes more robust to unmodeled dynamics or time delay.
Generalized Functional Linear Models for Gene-based Case-Control Association Studies
Mills, James L.; Carter, Tonia C.; Lobach, Iryna; Wilson, Alexander F.; Bailey-Wilson, Joan E.; Weeks, Daniel E.; Xiong, Momiao
2014-01-01
By using functional data analysis techniques, we developed generalized functional linear models for testing association between a dichotomous trait and multiple genetic variants in a genetic region while adjusting for covariates. Both fixed and mixed effect models are developed and compared. Extensive simulations show that Rao's efficient score tests of the fixed effect models are very conservative since they generate lower type I errors than nominal levels, and global tests of the mixed effect models generate accurate type I errors. Furthermore, we found that the Rao's efficient score test statistics of the fixed effect models have higher power than the sequence kernel association test (SKAT) and its optimal unified version (SKAT-O) in most cases when the causal variants are both rare and common. When the causal variants are all rare (i.e., minor allele frequencies less than 0.03), the Rao's efficient score test statistics and the global tests have similar or slightly lower power than SKAT and SKAT-O. In practice, it is not known whether rare variants or common variants in a gene are disease-related. All we can assume is that a combination of rare and common variants influences disease susceptibility. Thus, the improved performance of our models when the causal variants are both rare and common shows that the proposed models can be very useful in dissecting complex traits. We compare the performance of our methods with SKAT and SKAT-O on real neural tube defects and Hirschsprung's disease data sets. The Rao's efficient score test statistics and the global tests are more sensitive than SKAT and SKAT-O in the real data analysis. Our methods can be used in either gene-disease genome-wide/exome-wide association studies or candidate gene analyses. PMID:25203683
Generalized functional linear models for gene-based case-control association studies.
Fan, Ruzong; Wang, Yifan; Mills, James L; Carter, Tonia C; Lobach, Iryna; Wilson, Alexander F; Bailey-Wilson, Joan E; Weeks, Daniel E; Xiong, Momiao
2014-11-01
By using functional data analysis techniques, we developed generalized functional linear models for testing association between a dichotomous trait and multiple genetic variants in a genetic region while adjusting for covariates. Both fixed and mixed effect models are developed and compared. Extensive simulations show that Rao's efficient score tests of the fixed effect models are very conservative since they generate lower type I errors than nominal levels, and global tests of the mixed effect models generate accurate type I errors. Furthermore, we found that the Rao's efficient score test statistics of the fixed effect models have higher power than the sequence kernel association test (SKAT) and its optimal unified version (SKAT-O) in most cases when the causal variants are both rare and common. When the causal variants are all rare (i.e., minor allele frequencies less than 0.03), the Rao's efficient score test statistics and the global tests have similar or slightly lower power than SKAT and SKAT-O. In practice, it is not known whether rare variants or common variants in a gene region are disease related. All we can assume is that a combination of rare and common variants influences disease susceptibility. Thus, the improved performance of our models when the causal variants are both rare and common shows that the proposed models can be very useful in dissecting complex traits. We compare the performance of our methods with SKAT and SKAT-O on real neural tube defects and Hirschsprung's disease datasets. The Rao's efficient score test statistics and the global tests are more sensitive than SKAT and SKAT-O in the real data analysis. Our methods can be used in either gene-disease genome-wide/exome-wide association studies or candidate gene analyses.
ERIC Educational Resources Information Center
Schluchter, Mark D.
2008-01-01
In behavioral research, interest is often in examining the degree to which the effect of an independent variable X on an outcome Y is mediated by an intermediary or mediator variable M. This article illustrates how generalized estimating equations (GEE) modeling can be used to estimate the indirect or mediated effect, defined as the amount by…
Zhao, Yingfeng; Liu, Sanyang
2016-01-01
We present a practical branch and bound algorithm for globally solving generalized linear multiplicative programming problem with multiplicative constraints. To solve the problem, a relaxation programming problem which is equivalent to a linear programming is proposed by utilizing a new two-phase relaxation technique. In the algorithm, lower and upper bounds are simultaneously obtained by solving some linear relaxation programming problems. Global convergence has been proved and results of some sample examples and a small random experiment show that the proposed algorithm is feasible and efficient.
Carrasco, Josep L
2010-09-01
The classical concordance correlation coefficient (CCC) to measure agreement among a set of observers assumes data to be distributed as normal and a linear relationship between the mean and the subject and observer effects. Here, the CCC is generalized to afford any distribution from the exponential family by means of the generalized linear mixed models (GLMMs) theory and applied to the case of overdispersed count data. An example of CD34+ cell count data is provided to show the applicability of the procedure. In the latter case, different CCCs are defined and applied to the data by changing the GLMM that fits the data. A simulation study is carried out to explore the behavior of the procedure with a small and moderate sample size.
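For reference, the classical CCC that the paper generalizes has a closed form penalizing both poor correlation and location/scale shifts between observers. A minimal sketch with made-up paired measurements:

```python
import numpy as np

def ccc(x, y):
    """Lin's classical concordance correlation coefficient between two observers:
    2*cov(x, y) / (var(x) + var(y) + (mean(x) - mean(y))**2)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    sxy = np.mean((x - x.mean()) * (y - y.mean()))
    return 2 * sxy / (x.var() + y.var() + (x.mean() - y.mean()) ** 2)

a = np.array([10.0, 12.0, 15.0, 20.0, 23.0])
print(ccc(a, a))        # 1.0: perfect agreement
print(ccc(a, a + 5.0))  # < 1: perfectly correlated but shifted in location
```

The GLMM-based generalization replaces the normal-theory variance components in this formula with those of the fitted exponential-family model, e.g. for overdispersed counts.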
Tsai, Miao-Yu
2015-03-01
The problem of variable selection in generalized linear mixed models (GLMMs) is pervasive in statistical practice. For the purpose of variable selection, many methodologies for determining the best subset of explanatory variables currently exist according to the model complexity and differences between applications. In this paper, we develop a "higher posterior probability model with bootstrap" (HPMB) approach to select explanatory variables without fitting all possible GLMMs involving a small or moderate number of explanatory variables. Furthermore, to save computational load, we propose an efficient approximation approach with Laplace's method and Taylor's expansion to approximate intractable integrals in GLMMs. Simulation studies and an application of HapMap data provide evidence that this selection approach is computationally feasible and reliable for exploring true candidate genes and gene-gene associations, after adjusting for complex structures among clusters.
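Laplace's method, mentioned above for the intractable GLMM integrals, approximates an integral of exp(h(x)) by a Gaussian centered at the mode x0: exp(h(x0)) * sqrt(2*pi / -h''(x0)). A quick sketch checking the approximation against a known Gamma integral (an illustration of the method, not the paper's GLMM computation):

```python
import math

# Laplace's method: integral of exp(h(x)) dx ~ exp(h(x0)) * sqrt(2*pi / -h''(x0)),
# where x0 maximizes h. With h(x) = N*(log x - x) on (0, inf), the exact integral
# is Gamma(N+1) / N**(N+1), so the check reduces to Stirling's formula.
N = 50
x0 = 1.0                              # h'(x) = N*(1/x - 1) = 0  =>  x0 = 1
h0 = N * (math.log(x0) - x0)          # h(x0) = -N
h2 = -N / x0**2                       # h''(x0) = -N
laplace = math.exp(h0) * math.sqrt(2 * math.pi / -h2)
exact = math.factorial(N) / N ** (N + 1)
print(laplace / exact)                # close to 1; relative error about 1/(12N)
```

In the GLMM context the same expansion is applied around the mode of the integrand over the random effects, giving a closed-form approximation to the marginal likelihood.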
de Souza, Juliana Bottoni; Reisen, Valdério Anselmo; Santos, Jane Méri; Franco, Glaura Conceição
2014-01-01
OBJECTIVE To analyze the association between concentrations of air pollutants and admissions for respiratory causes in children. METHODS Ecological time series study. Daily figures for hospital admissions of children aged < 6, and daily concentrations of air pollutants (PM10, SO2, NO2, O3 and CO) were analyzed in the Região da Grande Vitória, ES, Southeastern Brazil, from January 2005 to December 2010. For statistical analysis, two techniques were combined: Poisson regression with generalized additive models and principal component analysis. Those analysis techniques complemented each other and provided more significant estimates in the estimation of relative risk. The models were adjusted for temporal trend, seasonality, day of the week, meteorological factors and autocorrelation. In the final adjustment of the model, it was necessary to include models of the autoregressive moving average (ARMA(p, q)) type for the residuals in order to eliminate the autocorrelation structures present in the components. RESULTS For every 10.49 μg/m³ increase (interquartile range) in levels of the pollutant PM10 there was a 3.0% increase in the relative risk estimated using the generalized additive model with principal component analysis and a seasonal autoregressive component, while in the usual generalized additive model the estimate was 2.0%. CONCLUSIONS Compared to the usual generalized additive model, the proposed combination of generalized additive model and principal component analysis showed, in general, better results in estimating relative risk and quality of fit. PMID:25119940
Linear and nonlinear associations between general intelligence and personality in Project TALENT.
Major, Jason T; Johnson, Wendy; Deary, Ian J
2014-04-01
Research on the relations of personality traits to intelligence has primarily been concerned with linear associations. Yet, there are no a priori reasons why linear relations should be expected over nonlinear ones, which represent a much larger set of all possible associations. Using 2 techniques, quadratic and generalized additive models, we tested for linear and nonlinear associations of general intelligence (g) with 10 personality scales from Project TALENT (PT), a nationally representative sample of approximately 400,000 American high school students from 1960, divided into 4 grade samples (Flanagan et al., 1962). We departed from previous studies, including one with PT (Reeve, Meyer, & Bonaccio, 2006), by modeling latent quadratic effects directly, controlling the influence of the common factor in the personality scales, and assuming a direction of effect from g to personality. On the basis of the literature, we made 17 directional hypotheses for the linear and quadratic associations. Of these, 53% were supported in all 4 male grades and 58% in all 4 female grades. Quadratic associations explained substantive variance above and beyond linear effects (mean R² between 1.8% and 3.6%) for Sociability, Maturity, Vigor, and Leadership in males and Sociability, Maturity, and Tidiness in females; linear associations were predominant for other traits. We discuss how suited current theories of the personality-intelligence interface are to explain these associations, and how research on intellectually gifted samples may provide a unique way of understanding them. We conclude that nonlinear models can provide incremental detail regarding personality and intelligence associations.
Regression Is a Univariate General Linear Model Subsuming Other Parametric Methods as Special Cases.
ERIC Educational Resources Information Center
Vidal, Sherry
Although the concept of the general linear model (GLM) has existed since the 1960s, other univariate analyses such as the t-test and the analysis of variance models have remained popular. The GLM produces an equation that minimizes the mean differences of independent variables as they are related to a dependent variable. From a computer printout…
Estimation of Complex Generalized Linear Mixed Models for Measurement and Growth
ERIC Educational Resources Information Center
Jeon, Minjeong
2012-01-01
Maximum likelihood (ML) estimation of generalized linear mixed models (GLMMs) is technically challenging because of the intractable likelihoods that involve high dimensional integrations over random effects. The problem is magnified when the random effects have a crossed design and thus the data cannot be reduced to small independent clusters. A…
NASA Astrophysics Data System (ADS)
Volk, Wolfram; Suh, Joungsik
2013-12-01
The prediction of formability is one of the most important tasks in sheet metal forming simulation. The common criterion in industrial applications is the Forming Limit Curve (FLC). The big advantage of FLCs is the easy interpretation of simulation or measurement data, in combination with an ISO standard for their experimental determination. However, conventional FLCs are limited to almost linear and unbroken strain paths; deformation histories with non-linear strain increments often lead to large deviations from the FLC prediction. In this paper a phenomenological approach, the so-called Generalized Forming Limit Concept (GFLC), is introduced to predict localized necking for arbitrary deformation histories with an unlimited number of non-linear strain increments. The GFLC consists of the conventional FLC and a manageable number of experiments with bi-linear deformation history. With the newly defined "Principle of Equivalent Pre-Forming," every deformation state built up of two linear strain increments can be transformed to a purely linear strain path with the same used formability of the material. This procedure can be repeated as often as necessary. It therefore allows a robust and cost-effective analysis of incipient instability in Finite Element Analysis (FEA) for arbitrary deformation histories. In addition, the GFLC is fully backward compatible with the established FLC for purely linear strain paths.
ERIC Educational Resources Information Center
Canivez, Gary L.
2006-01-01
Replication of the core syndrome factor structure of the "Adjustment Scales for Children and Adolescents" (ASCA; P.A. McDermott, N.C. Marston, & D.H. Stott, 1993) is reported for a sample of 183 Native American Indian (Ojibwe) children and adolescents from North Central Minnesota. The six ASCA core syndromes produced an identical…
ERIC Educational Resources Information Center
Favez, N.; Reicherts, M.
2008-01-01
The aim of this research is to assess the relative influence of mothers' coping strategies in everyday life and mothers' specific coping acts on toddlers' adjustment behavior to pain and distress during a routine immunization. The population is 41 mothers with toddlers (23 girls, 18 boys; mean age, 22.7 months) undergoing a routine immunization in…
Lai, Zhi-Hui; Leng, Yong-Gang
2015-08-28
A two-dimensional Duffing oscillator which can produce stochastic resonance (SR) is studied in this paper. We introduce its SR mechanism and present a generalized parameter-adjusted SR (GPASR) model of this oscillator to account for the necessity of parameter adjustment. The Kramers rate is chosen as the theoretical basis to establish a criterion function for judging the occurrence of SR in this model, and to analyze and summarize the parameter-adjustment rules under unmatched signal amplitude, frequency, and/or noise intensity. Furthermore, we propose a weak-signal detection approach based on this GPASR model. Finally, we employ two practical examples to demonstrate the feasibility of the proposed approach in practical engineering applications.
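The Kramers rate invoked above has a standard closed form in the SR literature; a small sketch, assuming the usual overdamped quartic double-well system (the GPASR model's own two-dimensional dynamics are not reproduced here):

```python
import numpy as np

def kramers_rate(a, b, D):
    """Kramers escape rate for the overdamped bistable system
    U(x) = -a*x**2/2 + b*x**4/4 driven by noise of intensity D
    (the standard formula from the stochastic-resonance literature)."""
    barrier = a**2 / (4.0 * b)                       # well depth Delta U
    return a / (np.sqrt(2.0) * np.pi) * np.exp(-barrier / D)

# The rate grows monotonically with noise intensity: stronger noise means
# more frequent inter-well hops, the quantity that SR parameter tuning
# matches against the driving frequency.
rates = [kramers_rate(1.0, 1.0, D) for D in (0.05, 0.1, 0.2)]
print(rates)
```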
Implementing general quantum measurements on linear optical and solid-state qubits
NASA Astrophysics Data System (ADS)
Ota, Yukihiro; Ashhab, Sahel; Nori, Franco
2013-03-01
We present a systematic construction for implementing general measurements on a single qubit, including both strong (projective) and weak measurements. We mainly focus on linear optical qubits. The present approach is composed of simple and feasible elements, i.e., beam splitters, wave plates, and polarizing beam splitters. We show how the parameters characterizing the measurement operators are controlled by the linear optical elements. We also propose a method for the implementation of general measurements in solid-state qubits. Furthermore, we show an interesting application of the general measurements: entanglement amplification. YO is partially supported by the SPDR Program, RIKEN. SA and FN acknowledge ARO, NSF grant No. 0726909, JSPS-RFBR contract No. 12-02-92100, Grant-in-Aid for Scientific Research (S), MEXT Kakenhi on Quantum Cybernetics, and the JSPS via its FIRST program.
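A general two-outcome measurement interpolating between weak and projective can be illustrated with a generic Kraus pair on a qubit; this is a hedged numerical sketch (the diagonal parameterization and the strength parameter theta are illustrative assumptions, not the paper's optical construction):

```python
import numpy as np

theta = np.pi / 8          # measurement strength (assumed parameter):
                           # theta = 0 reduces to a projective z-measurement,
                           # theta = pi/4 to no measurement at all
M0 = np.diag([np.cos(theta), np.sin(theta)])
M1 = np.diag([np.sin(theta), np.cos(theta)])

# Completeness: M0^† M0 + M1^† M1 = I  (a valid two-outcome measurement)
completeness = M0.conj().T @ M0 + M1.conj().T @ M1

# Apply to |+> = (|0> + |1>)/sqrt(2): outcome probabilities from the
# norms of the (unnormalized) post-measurement states
plus = np.array([1.0, 1.0]) / np.sqrt(2)
p0 = np.linalg.norm(M0 @ plus) ** 2
p1 = np.linalg.norm(M1 @ plus) ** 2
print(completeness, p0, p1)
```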
The linear stability of plane stagnation-point flow against general disturbances
NASA Astrophysics Data System (ADS)
Brattkus, K.; Davis, S. H.
1991-02-01
The linear-stability theory of plane stagnation-point flow against an infinite flat plate is re-examined. Disturbances are generalized from those of Goertler type to include other types of variations along the plate. It is shown that Hiemenz flow is linearly stable and that the Goertler-type modes are those that decay slowest. This work then rationalizes the use of such self-similar disturbances on Hiemenz flow and shows how questions of disturbance structure can be approached on other self-similar flows.
The linear stability of plane stagnation-point flow against general disturbances
NASA Technical Reports Server (NTRS)
Brattkus, K.; Davis, S. H.
1991-01-01
The linear-stability theory of plane stagnation-point flow against an infinite flat plate is re-examined. Disturbances are generalized from those of Goertler type to include other types of variations along the plate. It is shown that Hiemenz flow is linearly stable and that the Goertler-type modes are those that decay slowest. This work then rationalizes the use of such self-similar disturbances on Hiemenz flow and shows how questions of disturbance structure can be approached on other self-similar flows.
Estimate of influenza cases using generalized linear, additive and mixed models.
Oviedo, Manuel; Domínguez, Ángela; Pilar Muñoz, M
2015-01-01
We investigated the evolution of reported cases of influenza in Catalonia (Spain). Covariates analyzed were: population, age, date of report of influenza, and health region during 2010-2014, using data obtained from the SISAP program (Institut Catala de la Salut - Generalitat of Catalonia). Reported cases were related to the covariates using a descriptive analysis. Generalized Linear Models, Generalized Additive Models and Generalized Additive Mixed Models were used to estimate the evolution of the transmission of influenza. Additive models can estimate non-linear effects of the covariates by smooth functions, and mixed models can account for data dependence and variability in factor variables using correlation structures and random effects, respectively. The incidence rate of influenza was calculated as the incidence per 100 000 people. The mean rate was 13.75 (range 0-27.5) in the winter months (December, January, February) and 3.38 (range 0-12.57) in the remaining months. Statistical analysis showed that Generalized Additive Mixed Models were better adapted to the temporal evolution of influenza (serial correlation 0.59) than classical linear models.
Semiparametric Analysis of Heterogeneous Data Using Varying-Scale Generalized Linear Models.
Xie, Minge; Simpson, Douglas G; Carroll, Raymond J
2008-01-01
This article describes a class of heteroscedastic generalized linear regression models in which a subset of the regression parameters are rescaled nonparametrically, and develops efficient semiparametric inferences for the parametric components of the models. Such models provide a means to adapt for heterogeneity in the data due to varying exposures, varying levels of aggregation, and so on. The class of models considered includes generalized partially linear models and nonparametrically scaled link function models as special cases. We present an algorithm to estimate the scale function nonparametrically, and obtain asymptotic distribution theory for regression parameter estimates. In particular, we establish that the asymptotic covariance of the semiparametric estimator for the parametric part of the model achieves the semiparametric lower bound. We also describe a bootstrap-based goodness-of-scale test. We illustrate the methodology with simulations, published data, and data from collaborative research on ultrasound safety.
A review of linear response theory for general differentiable dynamical systems
NASA Astrophysics Data System (ADS)
Ruelle, David
2009-04-01
The classical theory of linear response applies to statistical mechanics close to equilibrium. Away from equilibrium, one may describe the microscopic time evolution by a general differentiable dynamical system, identify nonequilibrium steady states (NESS) and study how these vary under perturbations of the dynamics. Remarkably, it turns out that for uniformly hyperbolic dynamical systems (those satisfying the 'chaotic hypothesis'), the linear response away from equilibrium is very similar to the linear response close to equilibrium: the Kramers-Kronig dispersion relations hold, and the fluctuation-dissipation theorem survives in a modified form (which takes into account the oscillations around the 'attractor' corresponding to the NESS). If the chaotic hypothesis does not hold, two new phenomena may arise. The first is a violation of linear response in the sense that the NESS does not depend differentiably on parameters (but this nondifferentiability may be hard to see experimentally). The second phenomenon is a violation of the dispersion relations: the susceptibility has singularities in the upper half complex plane. These 'acausal' singularities are actually due to 'energy nonconservation': for a small periodic perturbation of the system, the amplitude of the linear response is arbitrarily large. This means that the NESS of the dynamical system under study is not 'inert' but can give energy to the outside world. An 'active' NESS of this sort is very different from an equilibrium state, and it would be interesting to see what happens to the Gallavotti-Cohen fluctuation theorem for such active states.
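The Kramers-Kronig dispersion relations mentioned above can be checked numerically for a simple causal response; the damped-oscillator susceptibility below is chosen purely for illustration (it is not the NESS susceptibility of the paper):

```python
import numpy as np

# Damped-oscillator susceptibility chi(w) = 1/(w0^2 - w^2 - i*gamma*w),
# an elementary causal response function.
w0, gamma = 1.0, 0.5
dw = 0.001
wp = np.arange(0.0005, 100.0, dw)          # integration grid
im_chi = gamma * wp / ((w0**2 - wp**2)**2 + (gamma * wp)**2)

def re_chi_kk(w):
    """Real part from the Kramers-Kronig relation
    Re chi(w) = (2/pi) P int_0^inf  w' Im chi(w') / (w'^2 - w^2) dw'.
    The grid straddles w symmetrically, so the principal value cancels."""
    integrand = wp * im_chi / (wp**2 - w**2)
    return 2.0 / np.pi * np.sum(integrand) * dw

w = 2.0
exact = (w0**2 - w**2) / ((w0**2 - w**2)**2 + (gamma * w)**2)
print(re_chi_kk(w), exact)
```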
NASA Astrophysics Data System (ADS)
Borzov, V. V.; Damaskinsky, E. V.
2017-02-01
We consider the families of polynomials P = {P_n(x)}_{n=0}^∞ and Q = {Q_n(x)}_{n=0}^∞, orthogonal on the real line with respect to the respective probability measures μ and ν. We assume that {Q_n(x)}_{n=0}^∞ and {P_n(x)}_{n=0}^∞ are connected by linear relations. In the case k = 2, we describe all pairs (P, Q) for which the algebras A_P and A_Q of generalized oscillators generated by {Q_n(x)}_{n=0}^∞ and {P_n(x)}_{n=0}^∞ coincide. We construct generalized oscillators corresponding to the pairs (P, Q) for arbitrary k ≥ 1.
NASA Technical Reports Server (NTRS)
Wiggins, R. A.
1972-01-01
The discrete general linear inverse problem reduces to a set of m equations in n unknowns. There is generally no unique solution, but we can find k linear combinations of parameters for which restraints are determined. The parameter combinations are given by the eigenvectors of the coefficient matrix. The number k is determined by the ratio of the standard deviations of the observations to the allowable standard deviations in the resulting solution. Various linear combinations of the eigenvectors can be used to determine parameter resolution and information distribution among the observations. Thus we can determine where information comes from among the observations and exactly how it constrains the set of possible models. The application of such analyses to surface-wave and free-oscillation observations indicates that (1) phase, group, and amplitude observations for any particular mode provide basically the same type of information about the model; (2) observations of overtones can enhance the resolution considerably; and (3) the degree of resolution has generally been overestimated for many model determinations made from surface waves.
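The eigenvector truncation described above is, in modern terms, a truncated singular value decomposition; a sketch with a synthetic ill-conditioned kernel (all sizes and tolerances are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical discrete linear inverse problem: m equations, n unknowns,
# G @ x = d, with a kernel G whose singular values decay rapidly.
m, n = 12, 8
G = rng.normal(size=(m, n)) @ np.diag(10.0 ** -np.arange(n))

U, s, Vt = np.linalg.svd(G, full_matrices=False)

# Keep the k eigenvector combinations whose singular values rise above
# the noise-to-tolerance ratio; the remaining combinations are
# effectively unconstrained by the data.
noise_over_tol = 1e-3
k = int(np.sum(s > noise_over_tol * s[0]))

# Truncated generalized inverse and the parameter resolution matrix
# R = V_k V_k^T; R = I would mean perfectly resolved parameters.
Vk = Vt[:k].T
G_dag = Vk @ np.diag(1.0 / s[:k]) @ U[:, :k].T
R = Vk @ Vk.T
print(k, np.trace(R))
```

The trace of R equals k, the number of resolved parameter combinations, which is the quantity the abstract says is often overestimated.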
Application of the Hyper-Poisson Generalized Linear Model for Analyzing Motor Vehicle Crashes.
Khazraee, S Hadi; Sáez-Castillo, Antonio Jose; Geedipally, Srinivas Reddy; Lord, Dominique
2015-05-01
The hyper-Poisson distribution can handle both over- and underdispersion, and its generalized linear model formulation allows the dispersion of the distribution to be observation-specific and dependent on model covariates. This study's objective is to examine the potential applicability of a newly proposed generalized linear model framework for the hyper-Poisson distribution in analyzing motor vehicle crash count data. The hyper-Poisson generalized linear model was first fitted to intersection crash data from Toronto, characterized by overdispersion, and then to crash data from railway-highway crossings in Korea, characterized by underdispersion. The results of this study are promising. When fitted to the Toronto data set, the goodness-of-fit measures indicated that the hyper-Poisson model with a variable dispersion parameter provided a statistical fit as good as the traditional negative binomial model. The hyper-Poisson model was also successful in handling the underdispersed data from Korea; the model performed as well as the gamma probability model and the Conway-Maxwell-Poisson model previously developed for the same data set. The advantages of the hyper-Poisson model studied in this article are noteworthy. Unlike the negative binomial model, which has difficulties in handling underdispersed data, the hyper-Poisson model can handle both over- and underdispersed crash data. Although not a major issue for the Conway-Maxwell-Poisson model, the effect of each variable on the expected mean of crashes is easily interpretable in the case of this new model.
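The hyper-Poisson pmf can be written with the confluent hypergeometric function; a sketch assuming the Bardwell-Crow form of the distribution (the paper's GLM, which links covariates to these parameters, is not reproduced here):

```python
import numpy as np
from scipy.special import poch, hyp1f1

def hyper_poisson_pmf(y, lam, beta):
    """Hyper-Poisson pmf (Bardwell-Crow form assumed here):
    P(Y=y) = lam**y / (poch(beta, y) * 1F1(1; beta; lam)).
    beta = 1 recovers the Poisson; beta > 1 gives overdispersion,
    beta < 1 underdispersion."""
    return lam**y / (poch(beta, y) * hyp1f1(1.0, beta, lam))

ys = np.arange(0, 60)
# beta = 1 must reproduce Poisson probabilities exactly
p_hp = hyper_poisson_pmf(ys, 2.0, 1.0)
print(p_hp[:3])
```

The normalization works because sum_y lam^y / (beta)_y is by definition 1F1(1; beta; lam), so the probabilities sum to one for any admissible beta.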
Wang, Chi; Dominici, Francesca; Parmigiani, Giovanni; Zigler, Corwin Matthew
2015-09-01
Confounder selection and adjustment are essential elements of assessing the causal effect of an exposure or treatment in observational studies. Building upon work by Wang et al. (2012, Biometrics 68, 661-671) and Lefebvre et al. (2014, Statistics in Medicine 33, 2797-2813), we propose and evaluate a Bayesian method to estimate average causal effects in studies with a large number of potential confounders, relatively few observations, likely interactions between confounders and the exposure of interest, and uncertainty on which confounders and interaction terms should be included. Our method is applicable across all exposures and outcomes that can be handled through generalized linear models. In this general setting, estimation of the average causal effect is different from estimation of the exposure coefficient in the outcome model due to noncollapsibility. We implement a Bayesian bootstrap procedure to integrate over the distribution of potential confounders and to estimate the causal effect. Our method permits estimation of both the overall population causal effect and effects in specified subpopulations, providing clear characterization of heterogeneous exposure effects that may vary considerably across different covariate profiles. Simulation studies demonstrate that the proposed method performs well in small sample size situations with 100-150 observations and 50 covariates. The method is applied to data on 15,060 US Medicare beneficiaries diagnosed with a malignant brain tumor between 2000 and 2009 to evaluate whether surgery reduces hospital readmissions within 30 days of diagnosis.
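The Bayesian bootstrap step (Dirichlet weights over observations) can be sketched on toy data; this illustrates only the reweighting idea, not the paper's confounder selection or model averaging, and the crude contrast computed below is deliberately unadjusted:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy observational data: binary exposure a, one confounder c, binary
# outcome y (illustrative only).
n = 500
c = rng.normal(size=n)
a = rng.binomial(1, 1.0 / (1.0 + np.exp(-c)))            # exposure depends on c
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-(0.5 * a + c))))

def weighted_mean_diff(w):
    """Crude exposed-vs-unexposed contrast under observation weights w."""
    m1 = np.sum(w * y * a) / np.sum(w * a)
    m0 = np.sum(w * y * (1 - a)) / np.sum(w * (1 - a))
    return m1 - m0

# Bayesian bootstrap: Dirichlet(1, ..., 1) weights over observations
# propagate sampling uncertainty into the estimand.
draws = np.array([weighted_mean_diff(rng.dirichlet(np.ones(n)))
                  for _ in range(400)])
lo, hi = np.percentile(draws, [2.5, 97.5])
print(draws.mean(), (lo, hi))
```

In the paper, each Dirichlet draw would instead be combined with draws over candidate confounder sets, so the interval reflects both sampling and model uncertainty.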
Unified Einstein-Virasoro Master Equation in the General Non-Linear Sigma Model
Boer, J. de; Halpern, M.B.
1996-06-05
The Virasoro master equation (VME) describes the general affine-Virasoro construction $T = L^{ab} J_a J_b + i D^a \partial J_a$ in the operator algebra of the WZW model, where $L^{ab}$ is the inverse inertia tensor and $D^a$ is the improvement vector. In this paper, we generalize this construction to find the general (one-loop) Virasoro construction in the operator algebra of the general non-linear sigma model. The result is a unified Einstein-Virasoro master equation which couples the spacetime spin-two field $L^{ab}$ to the background fields of the sigma model. For a particular solution $L_G^{ab}$, the unified system reduces to the canonical stress tensors and conventional Einstein equations of the sigma model, and the system reduces to the general affine-Virasoro construction and the VME when the sigma model is taken to be the WZW action. More generally, the unified system describes a space of conformal field theories which is presumably much larger than the sum of the general affine-Virasoro construction and the sigma model with its canonical stress tensors. We also discuss a number of algebraic and geometrical properties of the system, including its relation to an unsolved problem in the theory of $G$-structures on manifolds with torsion.
Fitting host-parasitoid models with CV² > 1 using hierarchical generalized linear models.
Perry, J N; Noh, M S; Lee, Y; Alston, R D; Norowi, H M; Powell, W; Rennolls, K
2000-01-01
The powerful general Pacala-Hassell host-parasitoid model for a patchy environment, which allows host density-dependent heterogeneity (HDD) to be distinguished from between-patch, host density-independent heterogeneity (HDI), is reformulated within the class of the generalized linear model (GLM) family. This improves accessibility through the provision of general software within well-known statistical systems, and allows a rich variety of models to be formulated. Covariates such as age class, host density and abiotic factors may be included easily. For the case where there is no HDI, the formulation is a simple GLM. When there is HDI in addition to HDD, the formulation is a hierarchical generalized linear model. Two forms of HDI model are considered, both with between-patch variability: one has binomial variation within patches and one has extra-binomial, overdispersed variation within patches. Examples are given demonstrating parameter estimation with standard errors, and hypothesis testing. For one example given, the extra-binomial component of the HDI heterogeneity in parasitism is itself shown to be strongly density dependent. PMID:11416907
Normality of raw data in general linear models: The most widespread myth in statistics
Kery, Marc; Hatfield, Jeff S.
2003-01-01
In years of statistical consulting for ecologists and wildlife biologists, by far the most common misconception we have come across has been the one about normality in general linear models. These comprise a very large part of the statistical models used in ecology and include t tests, simple and multiple linear regression, polynomial regression, and analysis of variance (ANOVA) and covariance (ANCOVA). There is a widely held belief that the normality assumption pertains to the raw data rather than to the model residuals. We suspect that this error may also occur in countless published studies, whenever the normality assumption is tested prior to analysis. This may lead to the use of nonparametric alternatives (if there are any), when parametric tests would indeed be appropriate, or to use of transformations of raw data, which may introduce hidden assumptions such as multiplicative effects on the natural scale in the case of log-transformed data. Our aim here is to dispel this myth. We very briefly describe relevant theory for two cases of general linear models to show that the residuals need to be normally distributed if tests requiring normality are to be used, such as t and F tests. We then give two examples demonstrating that the distribution of the response variable may be nonnormal, and yet the residuals are well behaved. We do not go into the issue of how to test normality; instead we display the distributions of response variables and residuals graphically.
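The point about residuals versus raw data is easy to demonstrate: a skewed predictor makes the response non-normal while the residuals remain well behaved. A minimal simulation (the Shapiro-Wilk test stands in for any normality check):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

# Strongly skewed predictor: the *response* y = a + b*x + normal error
# is then also skewed, yet the model residuals are exactly normal.
x = rng.exponential(scale=2.0, size=2000)
y = 1.0 + 3.0 * x + rng.normal(scale=1.0, size=2000)

# The raw response fails a normality test ...
p_raw = stats.shapiro(y[:500])[1]

# ... while residuals from the linear fit pass it.
slope, intercept = np.polyfit(x, y, 1)
resid = y - (intercept + slope * x)
p_resid = stats.shapiro(resid[:500])[1]
print(p_raw, p_resid)
```

Testing normality on y before fitting, as the myth prescribes, would wrongly push the analyst toward a transformation or a nonparametric test here.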
Psychosocial Experiences and Adjustment among Adult Swedes with Superior General Mental Ability
ERIC Educational Resources Information Center
Stalnacke, Jannica; Smedler, Ann-Charlotte
2011-01-01
In Sweden, special needs of high-ability individuals have received little attention. For this purpose, adult Swedes with superior general mental ability (GMA; N = 302), defined by an IQ score greater than 130 on tests of abstract reasoning, answered a questionnaire regarding their views of themselves and their giftedness. The participants also…
Random generalized linear model: a highly accurate and interpretable ensemble predictor
2013-01-01
Background Ensemble predictors such as the random forest are known to have superior accuracy but their black-box predictions are difficult to interpret. In contrast, a generalized linear model (GLM) is very interpretable especially when forward feature selection is used to construct the model. However, forward feature selection tends to overfit the data and leads to low predictive accuracy. Therefore, it remains an important research goal to combine the advantages of ensemble predictors (high accuracy) with the advantages of forward regression modeling (interpretability). To address this goal several articles have explored GLM-based ensemble predictors. Since limited evaluations suggested that these ensemble predictors were less accurate than alternative predictors, they have received little attention in the literature. Results Comprehensive evaluations involving hundreds of genomic data sets, the UCI machine learning benchmark data, and simulations are used to give GLM-based ensemble predictors a new and careful look. A novel bootstrap aggregated (bagged) GLM predictor that incorporates several elements of randomness and instability (random subspace method, optional interaction terms, forward variable selection) often outperforms a host of alternative prediction methods including random forests and penalized regression models (ridge regression, elastic net, lasso). This random generalized linear model (RGLM) predictor provides variable importance measures that can be used to define a “thinned” ensemble predictor (involving few features) that retains excellent predictive accuracy. Conclusion RGLM is a state of the art predictor that shares the advantages of a random forest (excellent predictive accuracy, feature importance measures, out-of-bag estimates of accuracy) with those of a forward selected generalized linear model (interpretability). These methods are implemented in the freely available R software package randomGLM. PMID:23323760
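The core RGLM ingredients, bagging plus random feature subspaces over GLMs, can be sketched as follows; forward variable selection within each bag, part of the actual RGLM recipe, is omitted for brevity, and the data set is synthetic:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(4)
X, y = make_classification(n_samples=400, n_features=30, n_informative=5,
                           random_state=0)
X_tr, y_tr, X_te, y_te = X[:300], y[:300], X[300:], y[300:]

n_bags, n_feat = 50, 10
models, subsets = [], []
for _ in range(n_bags):
    rows = rng.integers(0, len(X_tr), len(X_tr))              # bootstrap sample
    cols = rng.choice(X_tr.shape[1], n_feat, replace=False)   # random subspace
    m = LogisticRegression(max_iter=1000).fit(X_tr[rows][:, cols], y_tr[rows])
    models.append(m)
    subsets.append(cols)

# Aggregate by averaging predicted probabilities across the ensemble.
prob = np.mean([m.predict_proba(X_te[:, c])[:, 1]
                for m, c in zip(models, subsets)], axis=0)
acc = np.mean((prob > 0.5) == y_te)
print(acc)
```

Each bag member remains an ordinary, interpretable logistic GLM; averaging how often a feature is selected and how large its coefficients are across bags yields the variable importance measures the abstract describes.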
Chen, Zhe; Purdon, Patrick L; Pierce, Eric T; Harrell, Grace; Walsh, John; Salazar, Andres F; Tavares, Casie L; Brown, Emery N; Barbieri, Riccardo
2009-01-01
Quantitative evaluation of respiratory sinus arrhythmia (RSA) may provide important information in clinical practice of anesthesia and postoperative care. In this paper, we apply a point process method to assess dynamic RSA during propofol general anesthesia. Specifically, an inverse Gaussian probability distribution is used to model the heartbeat interval, whereas the instantaneous mean is identified by a linear or bilinear bivariate regression on the previous R-R intervals and respiratory measures. The estimated second-order bilinear interaction allows us to evaluate the nonlinear component of the RSA. The instantaneous RSA gain and phase can be estimated with an adaptive point process filter. The algorithm's ability to track non-stationary dynamics is demonstrated using one clinical recording. Our proposed statistical indices provide a valuable quantitative assessment of instantaneous cardiorespiratory control and heart rate variability (HRV) during general anesthesia.
Followill, David S; Stovall, Marilyn S; Kry, Stephen F; Ibbott, Geoffrey S
2003-01-01
The shielding calculations for high energy (>10 MV) linear accelerators must include the photoneutron production within the head of the accelerator. Procedures have been described to calculate the treatment room door shielding based on the neutron source strength (Q value) for a specific accelerator and energy combination. Unfortunately, there is currently little data in the literature stating the neutron source strengths for the most widely used linear accelerators. In this study, the neutron fluence for 36 linear accelerators, including models from Varian, Siemens, Elekta/Philips, and General Electric, was measured using gold-foil activation. Several of the model and energy combinations had multiple measurements. The neutron fluence measured in the patient plane was independent of the surface area of the room, suggesting that neutron fluence depends more on the direct neutron fluence from the head of the accelerator than on room scatter. Neutron source strength, Q, was determined from the measured neutron fluences. As expected, Q increased with increasing photon energy. The Q values ranged from 0.02×10¹² neutrons per photon Gy for a 10 MV beam to 1.44×10¹² neutrons per photon Gy for a 25 MV beam. The most comprehensive set of neutron source strength values, Q, for the current accelerators in clinical use is presented for use in calculating room shielding.
Wave packet dynamics in one-dimensional linear and nonlinear generalized Fibonacci lattices.
Zhang, Zhenjun; Tong, Peiqing; Gong, Jiangbin; Li, Baowen
2011-05-01
The spreading of an initially localized wave packet in one-dimensional linear and nonlinear generalized Fibonacci (GF) lattices is studied numerically. The GF lattices can be classified into two classes depending on whether or not the lattice possesses the Pisot-Vijayaraghavan property. For linear GF lattices of the first class, both the second moment and the participation number grow with time. For linear GF lattices of the second class, in the regime of a weak on-site potential, wave packet spreading is close to ballistic diffusion, whereas in the regime of a strong on-site potential, it displays stairlike growth in both the second moment and the participation number. Nonlinear GF lattices are then investigated in parallel. For the first class of nonlinear GF lattices, the second moment of the wave packet still grows with time, but the corresponding participation number does not grow simultaneously. For the second class of nonlinear GF lattices, an analogous phenomenon is observed for the weak on-site potential only. For a strong on-site potential that leads to an enhanced nonlinear self-trapping effect, neither the second moment nor the participation number grows with time. The results can be useful in guiding experiments on the expansion of noninteracting or interacting cold atoms in quasiperiodic optical lattices.
Suldo, Shannon M; Shaunessy, Elizabeth; Thalji, Amanda; Michalowski, Jessica; Shaffer, Emily
2009-01-01
Navigating puberty while developing independent living skills may render adolescents particularly vulnerable to stress, which may ultimately contribute to mental health problems (Compas, Orosan, & Grant, 1993; Elgar, Arlett, & Groves, 2003). The academic transition to high school presents additional challenges as youth are required to interact with a new and larger peer group and manage greater academic expectations. For students enrolled in academically rigorous college preparatory programs, such as the International Baccalaureate (IB) program, the amount of stress perceived may be greater than typical (Suldo, Shaunessy, & Hardesty, 2008). This study investigated the environmental stressors and psychological adjustment of 162 students participating in the IB program and a comparison sample of 157 students in general education. Factor analysis indicated students experience 7 primary categories of stressors, which were examined in relation to students' adjustment specific to academic and psychological functioning. The primary source of stress experienced by IB students was related to academic requirements. In contrast, students in the general education program indicated higher levels of stressors associated with parent-child relations, academic struggles, conflict within family, and peer relations, as well as role transitions and societal problems. Comparisons of correlations between categories of stressors and students' adjustment by curriculum group reveal that students in the IB program reported more symptoms of psychopathology and reduced academic functioning as they experienced higher levels of stress, particularly stressors associated with academic requirements, transitions and societal problems, academic struggles, and extra-curricular activities. Applied implications stem from findings suggesting that students in college preparatory programs are more likely to (a) experience elevated stress related to academic demands as opposed to more typical adolescent
Uga, Minako; Dan, Ippeita; Sano, Toshifumi; Dan, Haruka; Watanabe, Eiju
2014-01-01
An increasing number of functional near-infrared spectroscopy (fNIRS) studies utilize a general linear model (GLM) approach, which serves as a standard statistical method for functional magnetic resonance imaging (fMRI) data analysis. While fMRI solely measures the blood oxygen level dependent (BOLD) signal, fNIRS measures the changes of oxy-hemoglobin (oxy-Hb) and deoxy-hemoglobin (deoxy-Hb) signals at a temporal resolution severalfold higher. This suggests the necessity of adjusting the temporal parameters of a GLM for fNIRS signals. Thus, we devised a GLM-based method utilizing an adaptive hemodynamic response function (HRF). We sought the optimum temporal parameters to best explain the observed time series data during verbal fluency and naming tasks. The peak delay of the HRF was systematically changed to achieve the best-fit model for the observed oxy- and deoxy-Hb time series data. The optimized peak delay showed different values for each Hb signal and task. When the optimized peak delays were adopted, the deoxy-Hb data yielded comparable activations with similar statistical power and spatial patterns to oxy-Hb data. The adaptive HRF method could suitably explain the behaviors of both Hb parameters during tasks with the different cognitive loads during a time course, and thus would serve as an objective method to fully utilize the temporal structures of all fNIRS data. PMID:26157973
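The paper's adaptive-HRF procedure is not given as code here; the following is a minimal sketch of the underlying idea under assumed simplifications (a single-gamma HRF with unit dispersion, a grid search over candidate peak delays, and an ordinary least-squares GLM fit). The function names and the grid-search strategy are illustrative, not the authors' implementation.

```python
import numpy as np

def gamma_hrf(t, peak_delay):
    # Single-gamma hemodynamic response function (unit dispersion);
    # t**k * exp(-t) peaks at t = peak_delay when k = peak_delay
    h = t ** peak_delay * np.exp(-t)
    return h / h.max()

def fit_peak_delay(t, stimulus, signal, delays):
    # Grid-search the HRF peak delay: for each candidate, build the GLM
    # regressor by convolving the stimulus with the HRF, fit by ordinary
    # least squares, and keep the delay with the smallest residual
    best_delay, best_sse = None, np.inf
    for d in delays:
        regressor = np.convolve(stimulus, gamma_hrf(t, d))[: len(t)]
        X = np.column_stack([regressor, np.ones_like(t)])
        coef, *_ = np.linalg.lstsq(X, signal, rcond=None)
        sse = np.sum((signal - X @ coef) ** 2)
        if sse < best_sse:
            best_delay, best_sse = d, sse
    return best_delay
```

In the study itself the optimization is carried out separately for the oxy- and deoxy-Hb time series and per task; the grid search above merely stands in for whatever optimizer was actually used.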
Thermodynamic bounds and general properties of optimal efficiency and power in linear responses
NASA Astrophysics Data System (ADS)
Jiang, Jian-Hua
2014-10-01
We study the optimal exergy efficiency and power for thermodynamic systems with an Onsager-type "current-force" relationship describing the linear response to external influences. We derive, in analytic forms, the maximum efficiency and optimal efficiency for maximum power for a thermodynamic machine described by an N × N symmetric Onsager matrix with arbitrary integer N. The figure of merit is expressed in terms of the largest eigenvalue of the "coupling matrix" which is solely determined by the Onsager matrix. Some simple but general relationships between the power and efficiency at the conditions for (i) maximum efficiency and (ii) optimal efficiency for maximum power are obtained. We show how the second law of thermodynamics bounds the optimal efficiency and the Onsager matrix and relate those bounds together. The maximum power theorem (Jacobi's Law) is generalized to all thermodynamic machines with a symmetric Onsager matrix in the linear-response regime. We also discuss systems with an asymmetric Onsager matrix (such as systems under magnetic field) for a particular situation, and we show that the reversible limit of efficiency can be reached at finite output power. Cooperative effects are found to improve the figure of merit significantly in systems with multiply cross-correlated responses. Application to example systems demonstrates that the theory is helpful in guiding the search for high performance materials and structures in energy research.
Treuer, H; Hoevels, M; Luyken, K; Gierich, A; Kocher, M; Müller, R P; Sturm, V
2000-08-01
We have developed a densitometric method for measuring the isocentric accuracy and the accuracy of marking the isocentre position for linear accelerator based radiosurgery with circular collimators and room lasers. Isocentric shots are used to determine the accuracy of marking the isocentre position with room lasers and star shots are used to determine the wobble of the gantry and table rotation movement, the effect of gantry sag, the stereotactic collimator alignment, and the minimal distance between gantry and table rotation axes. Since the method is based on densitometric measurements, beam spot stability is implicitly tested. The method developed is also suitable for quality assurance and has proved to be useful in optimizing isocentric accuracy. The method is simple to perform and only requires a film box and film scanner for instrumentation. Thus, the method has the potential to become widely available and may therefore be useful in standardizing the description of linear accelerator based radiosurgical systems.
NASA Astrophysics Data System (ADS)
Bucher, I.
1998-11-01
This paper describes the theory and algorithm allowing one to tune a multi-exciter system in order to obtain specified temporal and spatial structural response properties. Considerable effort is devoted to overcoming the practical difficulties and limitations found in real-world systems. The main application envisaged for this algorithm is the creation of travelling vibration waves in structures. Such waves may be useful in testing and diagnostic applications or in ultrasonic motors for generating motion. The proposed method adaptively modifies a set of perturbations applied to the model so that an increasing amount of information is extracted from the system. The algorithm strives to overcome the following difficulties: (a) singular model inversion, (b) poor signal-to-noise ratio, (c) feedback, and (d) certain types of non-linear behaviour. High response levels, exciter-structure coupling and the inherent feedback existing in electro-mechanical systems are demonstrated to cause singularity, poor signal-to-noise levels and, to some extent, non-linear behaviour. These phenomena pose some difficulties under operating conditions commonly encountered during dynamic testing of structures. The tuning of the multi-shaker system is approached in this work as a non-linear optimisation problem where insight into the physical behaviour is emphasised in choosing the algorithmic strategy. The system's unknown model is inverted in an implicit manner using an automatic orthogonal and adaptive search direction. This adaptation uses the measured responses and forces at each step in order to determine the direction of progression during the tuning process. The non-linear behaviour of the exciters is compensated, in this work, by identification of the high-order (Volterra-like) transfer functions. This high-order model is then inverted, allowing one to create a signal that cancels the unwanted harmonics. The proposed approach is analytically shown to converge.
On relating the generalized equivalent uniform dose formalism to the linear-quadratic model.
Djajaputra, David; Wu, Qiuwen
2006-12-01
Two main approaches are commonly used in the literature for computing the equivalent uniform dose (EUD) in radiotherapy. The first approach is based on the cell-survival curve as defined in the linear-quadratic model. The second approach assumes that EUD can be computed as the generalized mean of the dose distribution with an appropriate fitting parameter. We have analyzed the connection between these two formalisms by deriving explicit formulas for the EUD which are applicable to normal distributions. From these formulas we have established an explicit connection between the two formalisms. We found that the EUD parameter has strong dependence on the parameters that characterize the distribution, namely the mean dose and the standard deviation around the mean. By computing the corresponding parameters for clinical dose distributions, which in general do not follow the normal distribution, we have shown that our results are also applicable to actual dose distributions. Our analysis suggests that caution should be exercised in using the generalized EUD approach for reporting and analyzing dose distributions.
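The generalized-mean form of the EUD discussed here has a compact closed form. As a sketch (with `a` the fitting parameter of the generalized mean, and volumes taken as fractional bin volumes of a differential dose-volume histogram):

```python
import numpy as np

def gEUD(doses, volumes, a):
    # Generalized mean of a differential dose-volume histogram:
    # gEUD = (sum_i v_i * d_i**a) ** (1/a), with v_i normalized to sum to 1
    v = np.asarray(volumes, float)
    v = v / v.sum()
    d = np.asarray(doses, float)
    return float((v * d ** a).sum() ** (1.0 / a))
```

For a uniform dose the gEUD equals that dose for every `a`; `a = 1` recovers the mean dose, large positive `a` approaches the maximum dose (hot spots, serial organs), and negative `a` emphasizes cold spots, as used for targets.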
Model Averaging Methods for Weight Trimming in Generalized Linear Regression Models.
Elliott, Michael R
2009-03-01
In sample surveys where units have unequal probabilities of inclusion, associations between the inclusion probability and the statistic of interest can induce bias in unweighted estimates. This is true even in regression models, where the estimates of the population slope may be biased if the underlying mean model is misspecified or the sampling is nonignorable. Weights equal to the inverse of the probability of inclusion are often used to counteract this bias. Highly disproportional sample designs have highly variable weights; weight trimming reduces large weights to a maximum value, reducing variability but introducing bias. Most standard approaches are ad hoc in that they do not use the data to optimize bias-variance trade-offs. This article uses Bayesian model averaging to create "data driven" weight trimming estimators. We extend previous results for linear regression models (Elliott 2008) to generalized linear regression models, developing robust models that approximate fully-weighted estimators when bias correction is of greatest importance, and approximate unweighted estimators when variance reduction is critical.
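As context for the estimators discussed above, here is a minimal sketch of plain deterministic weight trimming, the kind of ad hoc rule the article's Bayesian model-averaging estimators are designed to improve upon. This is one common variant (cap the weights and spread the trimmed excess over the uncapped units so the weighted total is preserved), not the article's procedure; it terminates with all weights at or below the cap only when cap × n ≥ sum(w).

```python
import numpy as np

def trim_weights(w, cap, max_iter=100):
    # Cap weights at `cap` and redistribute the trimmed excess
    # proportionally over the uncapped units, repeating until no
    # weight exceeds the cap, so the weight total is preserved.
    w = np.asarray(w, float).copy()
    for _ in range(max_iter):
        over = w > cap
        if not over.any():
            break
        excess = float((w[over] - cap).sum())
        w[over] = cap
        under = ~over
        w[under] += excess * w[under] / w[under].sum()
    return w
```

Trimming this way reduces the variance contributed by extreme weights at the cost of bias, which is exactly the bias-variance trade-off the data-driven estimators in the article optimize.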
Fan, Yurui; Huang, Guohe; Veawab, Amornvadee
2012-01-01
In this study, a generalized fuzzy linear programming (GFLP) method was developed to deal with uncertainties expressed as fuzzy sets that exist in the constraints and objective function. A stepwise interactive algorithm (SIA) was advanced to solve the GFLP model and generate solutions expressed as fuzzy sets. To demonstrate its application, the developed GFLP method was applied to a regional sulfur dioxide (SO2) control planning model to identify effective SO2 mitigation policies with a minimized system performance cost under uncertainty. The results were obtained to represent the amount of SO2 allocated to different control measures from different sources. Compared with the conventional interval-parameter linear programming (ILP) approach, the solutions obtained through GFLP were expressed as fuzzy sets, which can provide intervals for the decision variables and objective function, as well as related possibilities. Therefore, the decision makers can make a tradeoff between model stability and plausibility based on solutions obtained through GFLP and then identify desired policies for SO2-emission control under uncertainty.
Two-stage method of estimation for general linear growth curve models.
Stukel, T A; Demidenko, E
1997-06-01
We extend the linear random-effects growth curve model (REGCM) (Laird and Ware, 1982, Biometrics 38, 963-974) to study the effects of population covariates on one or more characteristics of the growth curve when the characteristics are expressed as linear combinations of the growth curve parameters. This definition includes the actual growth curve parameters (the usual model) or any subset of these parameters. Such an analysis would be cumbersome using standard growth curve methods because it would require reparameterization of the original growth curve. We implement a two-stage method of estimation based on the two-stage growth curve model used to describe the response. The resulting generalized least squares (GLS) estimator for the population parameters is consistent, asymptotically efficient, and multivariate normal when the number of individuals is large. It is also robust to model misspecification in terms of bias and efficiency of the parameter estimates compared to maximum likelihood with the usual REGCM. We apply the method to a study of factors affecting the growth rate of salmonellae in a cubic growth model, a characteristic that cannot be analyzed easily using standard techniques.
Wu, Jiayang; Cao, Pan; Hu, Xiaofeng; Jiang, Xinhong; Pan, Ting; Yang, Yuxing; Qiu, Ciyuan; Tremblay, Christine; Su, Yikai
2014-10-20
We propose and experimentally demonstrate an all-optical temporal differential-equation solver that can be used to solve ordinary differential equations (ODEs) characterizing general linear time-invariant (LTI) systems. The photonic device implemented by an add-drop microring resonator (MRR) with two tunable interferometric couplers is monolithically integrated on a silicon-on-insulator (SOI) wafer with a compact footprint of ~60 μm × 120 μm. By thermally tuning the phase shifts along the bus arms of the two interferometric couplers, the proposed device is capable of solving first-order ODEs with two variable coefficients. The operation principle is theoretically analyzed, and system testing of solving ODE with tunable coefficients is carried out for 10-Gb/s optical Gaussian-like pulses. The experimental results verify the effectiveness of the fabricated device as a tunable photonic ODE solver.
General linear codes for fault-tolerant matrix operations on processor arrays
NASA Technical Reports Server (NTRS)
Nair, V. S. S.; Abraham, J. A.
1988-01-01
Various checksum codes have been suggested for fault-tolerant matrix computations on processor arrays. Use of these codes is limited due to potential roundoff and overflow errors. Numerical errors may also be misconstrued as errors due to physical faults in the system. In this paper, a set of linear codes is identified which can be used for fault-tolerant matrix operations such as matrix addition, multiplication, transposition, and LU-decomposition, with minimum numerical error. Encoding schemes are given for some of the example codes which fall under the general set of codes. With the help of experiments, a rule of thumb for the selection of a particular code for a given application is derived.
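The checksum idea can be sketched as follows. This is a minimal version of the classic unweighted row/column-checksum scheme for matrix multiplication; the paper's contribution is choosing more general linear codes (weighted checksums) to control numerical error, which this sketch does not attempt.

```python
import numpy as np

def encode(A, B):
    # Append a checksum row (column sums) to A and a checksum
    # column (row sums) to B; their product Ar @ Bc is then the
    # full checksum matrix of A @ B
    Ar = np.vstack([A, A.sum(axis=0)])
    Bc = np.hstack([B, B.sum(axis=1, keepdims=True)])
    return Ar, Bc

def check(C, tol=1e-8):
    # Verify checksum consistency of the (m+1) x (p+1) product;
    # a single corrupted interior entry breaks both checks
    body = C[:-1, :-1]
    row_ok = np.allclose(body.sum(axis=0), C[-1, :-1], atol=tol)
    col_ok = np.allclose(body.sum(axis=1), C[:-1, -1], atol=tol)
    return bool(row_ok and col_ok)
```

The tolerance `tol` is exactly where the roundoff issue raised in the abstract bites: too tight and roundoff is misconstrued as a physical fault, too loose and real faults slip through.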
Generalization of the ordinary state-based peridynamic model for isotropic linear viscoelasticity
NASA Astrophysics Data System (ADS)
Delorme, Rolland; Tabiai, Ilyass; Laberge Lebel, Louis; Lévesque, Martin
2017-02-01
This paper presents a generalization of the original ordinary state-based peridynamic model for isotropic linear viscoelasticity. The viscoelastic material response is represented using the thermodynamically acceptable Prony series approach. It can feature as many Prony terms as required and accounts for viscoelastic spherical and deviatoric components. The model was derived from an equivalence between peridynamic viscoelastic parameters and those appearing in classical continuum mechanics, by equating the free energy densities expressed in both frameworks. The model was simplified to a uni-dimensional expression and implemented to simulate a creep-recovery test. This implementation was finally validated by comparing peridynamic predictions to those predicted from classical continuum mechanics. An exact correspondence between peridynamics and the classical continuum approach was shown when the peridynamic horizon becomes small, meaning peridynamics tends toward classical continuum mechanics. This work provides a clear and direct means to researchers dealing with viscoelastic phenomena to tackle their problem within the peridynamic framework.
NASA Astrophysics Data System (ADS)
Rust, H. W.; Vrac, M.; Lengaigne, M.; Sultan, B.
2012-04-01
Changes in precipitation patterns with potentially less precipitation and an increasing risk for droughts pose a threat to water resources and agricultural yields in Senegal. Precipitation in this region is dominated by the West-African Monsoon, which is active from May to October, a seasonal pattern with inter-annual to decadal variability in the 20th century which is likely to be affected by climate change. We built a generalized linear model for a full spatial description of rainfall in Senegal. The model uses season, location, and a discrete set of weather types as predictors and yields a spatially continuous description of precipitation occurrences and intensities. Weather types have been defined on NCEP/NCAR reanalysis using zonal and meridional winds, as well as relative humidity. This model is suitable for downscaling precipitation, particularly precipitation occurrences relevant for drought risk mapping.
Solving the Linear Balance Equation on the Globe as a Generalized Inverse Problem
NASA Technical Reports Server (NTRS)
Lu, Huei-Iin; Robertson, Franklin R.
1999-01-01
A generalized (pseudo) inverse technique was developed to facilitate a better understanding of the numerical effects of tropical singularities inherent in the spectral linear balance equation (LBE). Depending upon the truncation, various levels of determinacy are manifest. The traditional fully-determined (FD) systems give rise to a strong response, while the under-determined (UD) systems yield a weak response to the tropical singularities. The over-determined (OD) systems result in a modest response and a large residual in the tropics. The FD and OD systems can alternatively be solved by the iterative method. Differences in the solutions of a UD system exist between the inverse technique and the iterative method owing to the non-uniqueness of the problem. A realistic balanced wind was obtained by solving the principal components of the spectral LBE in terms of vorticity at an intermediate resolution. Improved solutions were achieved by including the singular-component solutions which best fit the observed wind data.
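The determinacy behavior described above is conveniently illustrated with the Moore–Penrose pseudo-inverse, which returns the least-squares solution for an over-determined system and the minimum-norm exact solution for an under-determined one. This is a generic illustration with random matrices, not the spectral LBE system itself.

```python
import numpy as np

rng = np.random.default_rng(0)

# Over-determined (OD): more equations than unknowns; the pseudo-inverse
# returns the least-squares solution, leaving a residual
A_od = rng.standard_normal((6, 3))
b_od = rng.standard_normal(6)
x_od = np.linalg.pinv(A_od) @ b_od

# Under-determined (UD): fewer equations than unknowns; solutions are
# non-unique and the pseudo-inverse picks the exact one of minimum norm
A_ud = rng.standard_normal((3, 6))
b_ud = rng.standard_normal(3)
x_ud = np.linalg.pinv(A_ud) @ b_ud
```

The least-squares residual of the OD solve is orthogonal to the column space of the matrix, which is the defining property of the least-squares solution.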
A Bayesian approach for inducing sparsity in generalized linear models with multi-category response
2015-01-01
Background The dimension and complexity of high-throughput gene expression data create many challenges for downstream analysis. Several approaches exist to reduce the number of variables with respect to small sample sizes. In this study, we utilized the Generalized Double Pareto (GDP) prior to induce sparsity in a Bayesian Generalized Linear Model (GLM) setting. The approach was evaluated using a publicly available microarray dataset containing 99 samples corresponding to four different prostate cancer subtypes. Results A hierarchical Sparse Bayesian GLM using GDP prior (SBGG) was developed to take into account the progressive nature of the response variable. We obtained an average overall classification accuracy between 82.5% and 94%, which was higher than Support Vector Machine, Random Forest or a Sparse Bayesian GLM using double exponential priors. Additionally, SBGG outperforms the other 3 methods in correctly identifying pre-metastatic stages of cancer progression, which can prove extremely valuable for therapeutic and diagnostic purposes. Importantly, using Geneset Cohesion Analysis Tool, we found that the top 100 genes produced by SBGG had an average functional cohesion p-value of 2.0E-4 compared to 0.007 to 0.131 produced by the other methods. Conclusions Using GDP in a Bayesian GLM model applied to cancer progression data results in better subclass prediction. In particular, the method identifies pre-metastatic stages of prostate cancer with substantially better accuracy and produces more functionally relevant gene sets. PMID:26423345
Canary, Jana D; Blizzard, Leigh; Barry, Ronald P; Hosmer, David W; Quinn, Stephen J
2016-05-01
Generalized linear models (GLM) with a canonical logit link function are the primary modeling technique used to relate a binary outcome to predictor variables. However, noncanonical links can offer more flexibility, producing convenient analytical quantities (e.g., probit GLMs in toxicology) and desired measures of effect (e.g., relative risk from log GLMs). Many summary goodness-of-fit (GOF) statistics exist for logistic GLM. Their properties make the development of GOF statistics relatively straightforward, but it can be more difficult under noncanonical links. Although GOF tests for logistic GLM with continuous covariates (GLMCC) have been applied to GLMCCs with log links, we know of no GOF tests in the literature specifically developed for GLMCCs that can be applied regardless of link function chosen. We generalize the Tsiatis GOF statistic originally developed for logistic GLMCCs (TG) so that it can be applied under any link function. Further, we show that the algebraically related Hosmer-Lemeshow (HL) and Pigeon-Heyse (J²) statistics can be applied directly. In a simulation study, TG, HL, and J² were used to evaluate the fit of probit, log-log, complementary log-log, and log models, all calculated with a common grouping method. The TG statistic consistently maintained Type I error rates, while those of HL and J² were often lower than expected if terms with little influence were included. Generally, the statistics had similar power to detect an incorrect model. An exception occurred when a log GLMCC was incorrectly fit to data generated from a logistic GLMCC. In this case, TG had more power than HL or J².
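For reference, the Hosmer–Lemeshow statistic mentioned above groups observations by fitted probability and compares observed with expected counts in each group. A minimal sketch, using equal-size groups formed from sorted fitted probabilities, which is one common grouping method and not necessarily the paper's:

```python
import numpy as np

def hosmer_lemeshow(y, p, groups=10):
    # Sort by fitted probability, split into equal-size groups, and
    # accumulate (observed - expected)^2 / variance over the groups;
    # compare to a chi-square with groups - 2 degrees of freedom
    order = np.argsort(p)
    y = np.asarray(y, float)[order]
    p = np.asarray(p, float)[order]
    stat = 0.0
    for idx in np.array_split(np.arange(len(y)), groups):
        n = len(idx)
        obs = y[idx].sum()
        exp = p[idx].sum()
        pbar = exp / n
        stat += (obs - exp) ** 2 / (n * pbar * (1.0 - pbar))
    return stat
```

A well-calibrated model yields a statistic near its degrees of freedom, while gross miscalibration inflates it sharply.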
NASA Astrophysics Data System (ADS)
Asejczyk-Widlicka, M.; Srodka, D. W.; Kasprzak, H.; Iskander, D. R.
In general, visual acuity does not change with variations in intraocular pressure. Experiments in vitro as well as our clinical findings lead us to hypothesise that the eyeball could possess certain mechanical properties enabling it to automatically produce a sharp image on the retina despite variations in intraocular pressure. Previously reported simple biomechanical models of the eye did not confirm this hypothesis. Here, we propose a generalised mechanical model of the eyeball in which we include an appropriate limbus ring that mimics the ciliary body and the iris. The Finite Element Method is used to model the eyeball and to test its behaviour. A set of geometrical and material parameters has been determined for the model so that the postulated function of the eye is preserved. Numerical simulations have confirmed the hypothesis. The anatomically justified inclusion of the limbus ring in the proposed model of the eyeball makes it more realistic than those previously reported.
General linear response formula for non integrable systems obeying the Vlasov equation
NASA Astrophysics Data System (ADS)
Patelli, Aurelio; Ruffo, Stefano
2014-11-01
Long-range interacting N-particle systems get trapped into long-living out-of-equilibrium stationary states called quasi-stationary states (QSS). We study here the response to a small external perturbation when such systems are settled into a QSS. In the N → ∞ limit the system is described by the Vlasov equation and QSS are mapped into stable stationary solutions of such equation. We consider this problem in the context of a model that has recently attracted considerable attention, the Hamiltonian mean field (HMF) model. For such a model, stationary inhomogeneous and homogeneous states determine an integrable dynamics in the mean-field effective potential and an action-angle transformation allows one to derive an exact linear response formula. However, such a result would be of limited interest if restricted to the integrable case. In this paper, we show how to derive a general linear response formula which does not use integrability as a requirement. The presence of conservation laws (mass, energy, momentum, etc.) and of further Casimir invariants can be imposed a posteriori. We perform an analysis of the infinite time asymptotics of the response formula for a specific observable, the magnetization in the HMF model, as a result of the application of an external magnetic field, for two stationary stable distributions: the Boltzmann-Gibbs equilibrium distribution and the Fermi-Dirac one. When compared with numerical simulations the predictions of the theory are very good away from the transition energy from inhomogeneous to homogeneous states. Contribution to the Topical Issue "Theory and Applications of the Vlasov Equation", edited by Francesco Pegoraro, Francesco Califano, Giovanni Manfredi and Philip J. Morrison.
Development and validation of a general purpose linearization program for rigid aircraft models
NASA Technical Reports Server (NTRS)
Duke, E. L.; Antoniewicz, R. F.
1985-01-01
A FORTRAN program that provides the user with a powerful and flexible tool for the linearization of aircraft models is discussed. The program LINEAR numerically determines a linear systems model using nonlinear equations of motion and a user-supplied, nonlinear aerodynamic model. The system model determined by LINEAR consists of matrices for both the state and observation equations. The program has been designed to allow easy selection and definition of the state, control, and observation variables to be used in a particular model. Also included in the report is a comparison of linear and nonlinear models for a high performance aircraft.
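LINEAR itself is a FORTRAN program; a minimal sketch of the core operation it performs, extracting state-space matrices from nonlinear equations of motion by numerical differencing, might look like the following (central differences about a trim point; the step size and interface are illustrative assumptions):

```python
import numpy as np

def linearize(f, x0, u0, eps=1e-6):
    # Numerically linearize xdot = f(x, u) about the trim point (x0, u0):
    # A = df/dx and B = df/du, each column obtained by central differences
    x0 = np.asarray(x0, float)
    u0 = np.asarray(u0, float)
    n, m = x0.size, u0.size
    A = np.zeros((n, n))
    B = np.zeros((n, m))
    for j in range(n):
        dx = np.zeros(n); dx[j] = eps
        A[:, j] = (f(x0 + dx, u0) - f(x0 - dx, u0)) / (2.0 * eps)
    for j in range(m):
        du = np.zeros(m); du[j] = eps
        B[:, j] = (f(x0, u0 + du) - f(x0, u0 - du)) / (2.0 * eps)
    return A, B
```

The observation matrices C and D of the report's observation equation would be obtained the same way from the user-supplied output function.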
Predicting estuarine use patterns of juvenile fish with Generalized Linear Models
NASA Astrophysics Data System (ADS)
Vasconcelos, R. P.; Le Pape, O.; Costa, M. J.; Cabral, H. N.
2013-03-01
Statistical models are key for estimating fish distributions based on environmental variables, and validation is generally advocated as indispensable but seldom applied. Generalized Linear Models were applied to distributions of juvenile Solea solea, Solea senegalensis, Platichthys flesus and Dicentrarchus labrax in response to environmental variables throughout Portuguese estuaries. Species-specific Delta models with two sub-models were used: Binomial (presence/absence); Gamma (density when present). Models were fitted and tested on separate data sets to estimate the accuracy and robustness of predictions. Temperature, salinity and mud content in sediment were included in most models for presence/absence; salinity and depth in most models for density (when present). In Binomial models (presence/absence), goodness-of-fit, accuracy and robustness varied concurrently among species, and fair to high accuracy and robustness were attained for all species, in models with poor to high goodness-of-fit. But in Gamma models (density when present), goodness-of-fit was not indicative of accuracy and robustness. Only for Platichthys flesus were Gamma and also coupled Delta models (density) accurate and robust, despite some moderate bias and inconsistency in predicted density. The accuracy and robustness of final density estimations were defined by the accuracy and robustness of the estimations of presence/absence and density (when present) provided by the sub-models. The mismatches between goodness-of-fit, accuracy and robustness of positive density models, as well as the difference in performance of presence/absence and density models demonstrated the importance of validation procedures in the evaluation of the value of habitat suitability models as predictive tools.
NASA Astrophysics Data System (ADS)
Elliott, J.; de Souza, R. S.; Krone-Martins, A.; Cameron, E.; Ishida, E. E. O.; Hilbe, J.
2015-04-01
Machine learning techniques offer a precious tool box for use within astronomy to solve problems involving so-called big data. They provide a means to make accurate predictions about a particular system without prior knowledge of the underlying physical processes of the data. In this article, and the companion papers of this series, we present the set of Generalized Linear Models (GLMs) as a fast alternative method for tackling general astronomical problems, including the ones related to the machine learning paradigm. To demonstrate the applicability of GLMs to inherently positive and continuous physical observables, we explore their use in estimating the photometric redshifts of galaxies from their multi-wavelength photometry. Using the gamma family with a log link function we predict redshifts from the PHoto-z Accuracy Testing simulated catalogue and a subset of the Sloan Digital Sky Survey from Data Release 10. We obtain fits that result in catastrophic outlier rates as low as ∼1% for simulated and ∼2% for real data. Moreover, we can easily obtain such levels of precision within a matter of seconds on a normal desktop computer and with training sets that contain merely thousands of galaxies. Our software is made publicly available as a user-friendly package developed in Python, R and via an interactive web application. This software allows users to apply a set of GLMs to their own photometric catalogues and generates publication quality plots with minimum effort. By facilitating their ease of use to the astronomical community, this paper series aims to make GLMs widely known and to encourage their implementation in future large-scale projects, such as the Large Synoptic Survey Telescope.
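The gamma-family, log-link GLM used here has a particularly simple iteratively reweighted least-squares (IRLS) fit, because for that family/link pair the IRLS weights are constant. A self-contained numpy sketch (not the authors' package, which wraps standard GLM libraries; it assumes the first column of `X` is the intercept):

```python
import numpy as np

def gamma_glm_log(X, y, iters=50, tol=1e-10):
    # IRLS / Fisher scoring for a Gamma-family GLM with log link.
    # With this pairing the working weights are constant, so each
    # iteration is an ordinary least-squares solve on the working
    # response z = eta + (y - mu) / mu.
    beta = np.zeros(X.shape[1])
    beta[0] = np.log(y.mean())          # start from the intercept-only fit
    for _ in range(iters):
        eta = X @ beta
        mu = np.exp(eta)
        z = eta + (y - mu) / mu
        new = np.linalg.lstsq(X, z, rcond=None)[0]
        done = np.max(np.abs(new - beta)) < tol
        beta = new
        if done:
            break
    return beta
```

In the photometric-redshift application, `y` would be the (positive, continuous) redshift and `X` the multi-wavelength photometry.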
MGMRES: A generalization of GMRES for solving large sparse nonsymmetric linear systems
Young, D.M.; Chen, J.Y.
1994-12-31
The authors are concerned with the solution of the linear system (1): Au = b, where A is a real square nonsingular matrix which is large, sparse, and nonsymmetric. They consider the use of Krylov subspace methods. They first choose an initial approximation u^(0) to the solution ū = A^(-1)b of (1). They also choose an auxiliary nonsingular matrix Z. For n = 1, 2, … they determine u^(n) such that u^(n) − u^(0) ∈ K_n(r^(0), A), where K_n(r^(0), A) is the Krylov subspace spanned by the vectors r^(0), Ar^(0), …, A^(n−1)r^(0) and where r^(0) = b − Au^(0). If ZA is symmetric positive definite (SPD), they also require that (u^(n) − ū, ZA(u^(n) − ū)) be minimized. If, on the other hand, ZA is not SPD, then they require that the Galerkin condition (Zr^(n), v) = 0 be satisfied for all v ∈ K_n(r^(0), A), where r^(n) = b − Au^(n). In this paper the authors consider a generalization of GMRES. This generalized method, which they refer to as MGMRES, is very similar to GMRES except that they let Z = A^T Y, where Y is a nonsingular matrix which is symmetric but not necessarily SPD.
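For comparison with the MGMRES variant described above, a minimal standard GMRES (no restarts, no preconditioning) built on the Arnoldi process is sketched below. In the abstract's framework, plain GMRES corresponds to Z = Aᵀ (i.e., Y = I), since minimizing the ZA-norm of the error with ZA = AᵀA is exactly minimizing the residual norm over the Krylov subspace.

```python
import numpy as np

def gmres(A, b, x0, m=30, tol=1e-10):
    # Build an orthonormal Arnoldi basis Q of K_m(r0, A) with the
    # Hessenberg recurrence A Q_m = Q_{m+1} H, then minimize
    # ||b - A x|| over x0 + K_m via a small least-squares solve.
    r0 = b - A @ x0
    beta = np.linalg.norm(r0)
    Q = np.zeros((len(b), m + 1))
    H = np.zeros((m + 1, m))
    Q[:, 0] = r0 / beta
    for j in range(m):
        w = A @ Q[:, j]
        for i in range(j + 1):          # modified Gram-Schmidt
            H[i, j] = Q[:, i] @ w
            w -= H[i, j] * Q[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        if H[j + 1, j] < tol:           # happy breakdown: exact solve
            m = j + 1
            break
        Q[:, j + 1] = w / H[j + 1, j]
    e1 = np.zeros(m + 1)
    e1[0] = beta
    y = np.linalg.lstsq(H[: m + 1, :m], e1, rcond=None)[0]
    return x0 + Q[:, :m] @ y
```

MGMRES replaces the residual-minimization (or Galerkin) condition tested in the small least-squares step with one weighted through Z = AᵀY.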
Shin, Yongyun; Raudenbush, Stephen W
2013-09-28
This article extends single-level missing data methods to efficient estimation of a Q-level nested hierarchical general linear model given ignorable missing data with a general missing pattern at any of the Q levels. The key idea is to reexpress a desired hierarchical model as the joint distribution of all variables including the outcome that are subject to missingness, conditional on all of the covariates that are completely observed and to estimate the joint model under normal theory. The unconstrained joint model, however, identifies extraneous parameters that are not of interest in subsequent analysis of the hierarchical model and that rapidly multiply as the number of levels, the number of variables subject to missingness, and the number of random coefficients grow. Therefore, the joint model may be extremely high dimensional and difficult to estimate well unless constraints are imposed to avoid the proliferation of extraneous covariance components at each level. Furthermore, the over-identified hierarchical model may produce considerably biased inferences. The challenge is to represent the constraints within the framework of the Q-level model in a way that is uniform without regard to Q; in a way that facilitates efficient computation for any number of Q levels; and also in a way that produces unbiased and efficient analysis of the hierarchical model. Our approach yields Q-step recursive estimation and imputation procedures whose qth-step computation involves only level-q data given higher-level computation components. We illustrate the approach with a study of the growth in body mass index analyzing a national sample of elementary school children.
Generalized linear discriminant analysis: a unified framework and efficient model selection.
Ji, Shuiwang; Ye, Jieping
2008-10-01
High-dimensional data are common in many domains, and dimensionality reduction is the key to cope with the curse-of-dimensionality. Linear discriminant analysis (LDA) is a well-known method for supervised dimensionality reduction. When dealing with high-dimensional and low sample size data, classical LDA suffers from the singularity problem. Over the years, many algorithms have been developed to overcome this problem, and they have been applied successfully in various applications. However, there is a lack of a systematic study of the commonalities and differences of these algorithms, as well as their intrinsic relationships. In this paper, a unified framework for generalized LDA is proposed, which elucidates the properties of various algorithms and their relationships. Based on the proposed framework, we show that the matrix computations involved in LDA-based algorithms can be simplified so that the cross-validation procedure for model selection can be performed efficiently. We conduct extensive experiments using a collection of high-dimensional data sets, including text documents, face images, gene expression data, and gene expression pattern images, to evaluate the proposed theories and algorithms.
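One of the simplest remedies for the singularity problem covered by such frameworks is regularized LDA, which shrinks the within-class scatter toward the identity so that it remains invertible even when the dimension exceeds the sample size. A two-class sketch (the regularization strength `reg` is an assumed tuning parameter, the kind of quantity the paper's efficient model selection is meant to choose):

```python
import numpy as np

def rlda_fit(X, y, reg=1e-3):
    # Two-class regularized LDA: w = (Sw + reg*I)^{-1} (m1 - m0),
    # with the threshold placed midway between the class means
    X0, X1 = X[y == 0], X[y == 1]
    m0, m1 = X0.mean(axis=0), X1.mean(axis=0)
    Sw = (np.cov(X0.T, bias=True) * len(X0)
          + np.cov(X1.T, bias=True) * len(X1))
    Sw += reg * np.eye(X.shape[1])
    w = np.linalg.solve(Sw, m1 - m0)
    b = -w @ (m0 + m1) / 2.0
    return w, b
```

Classification is by the sign of X @ w + b; the unified framework in the paper relates this shrinkage approach to the pseudo-inverse and null-space LDA variants.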
NASA Astrophysics Data System (ADS)
Vandenberg-Rodes, Alexander; Moftakhari, Hamed R.; AghaKouchak, Amir; Shahbaba, Babak; Sanders, Brett F.; Matthew, Richard A.
2016-11-01
Nuisance flooding corresponds to minor and frequent flood events that have significant socioeconomic and public health impacts on coastal communities. Yearly averaged local mean sea level can be used as a proxy to statistically predict the impacts of sea level rise (SLR) on the frequency of nuisance floods (NFs). In this study, we combine generalized linear models (GLMs) and Gaussian process (GP) models to (i) estimate the frequency of NF associated with the change in mean sea level, and (ii) quantify the associated uncertainties via a novel and statistically robust approach. We calibrate our models to the water level data from 18 tide gauges along the coasts of the United States, and after validation, we estimate the frequency of NF associated with the SLR projections in year 2030 (under RCPs 2.6 and 8.5), along with their 90% bands, at each gauge. The historical NF-SLR data are very noisy, and show large changes in variability (heteroscedasticity) with SLR. Prior models in the literature do not properly account for the observed heteroscedasticity, and thus their projected uncertainties are highly suspect. Among the models used in this study, the negative binomial GLM with GP best characterizes the uncertainties associated with NF estimates; on validation data ≈93% of the points fall within the 90% credible limit, showing our approach to be a robust model for uncertainty quantification.
Characterizing the performance of the Conway-Maxwell Poisson generalized linear model.
Francis, Royce A; Geedipally, Srinivas Reddy; Guikema, Seth D; Dhavala, Soma Sekhar; Lord, Dominique; LaRocca, Sarah
2012-01-01
Count data are pervasive in many areas of risk analysis; deaths, adverse health outcomes, infrastructure system failures, and traffic accidents are all recorded as count events, for example. Risk analysts often wish to estimate the probability distribution for the number of discrete events as part of doing a risk assessment. Traditional count data regression models of the type often used in risk assessment for this problem suffer from limitations due to the assumed variance structure. A more flexible model based on the Conway-Maxwell Poisson (COM-Poisson) distribution was recently proposed, a model that has the potential to overcome the limitations of the traditional model. However, the statistical performance of this new model has not yet been fully characterized. This article assesses the performance of a maximum likelihood estimation method for fitting the COM-Poisson generalized linear model (GLM). The objectives of this article are to (1) characterize the parameter estimation accuracy of the MLE implementation of the COM-Poisson GLM, and (2) estimate the prediction accuracy of the COM-Poisson GLM using simulated data sets. The results of the study indicate that the COM-Poisson GLM is flexible enough to model under-, equi-, and overdispersed data sets with different sample mean values. The results also show that the COM-Poisson GLM yields accurate parameter estimates. The COM-Poisson GLM provides a promising and flexible approach for performing count data regression.
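For intuition, the COM-Poisson pmf P(K = k) ∝ λ^k / (k!)^ν can be evaluated by truncating its normalizing constant Z(λ, ν) = Σ_j λ^j / (j!)^ν. The sketch below (truncation length and parameter values are arbitrary illustrative choices) shows how ν controls dispersion relative to the ordinary Poisson case ν = 1:

```python
import math

def com_poisson_pmf(k, lam, nu, terms=100):
    # P(K = k) = lam^k / ((k!)^nu * Z), with Z truncated at `terms`.
    # Work in log space so the factorial powers do not overflow.
    logs = [j * math.log(lam) - nu * math.lgamma(j + 1) for j in range(terms)]
    m = max(logs)
    log_z = m + math.log(sum(math.exp(t - m) for t in logs))
    return math.exp(k * math.log(lam) - nu * math.lgamma(k + 1) - log_z)

def moments(lam, nu, terms=100):
    # Mean and variance of the (truncated) distribution.
    mean = sum(k * com_poisson_pmf(k, lam, nu, terms) for k in range(terms))
    ex2 = sum(k * k * com_poisson_pmf(k, lam, nu, terms) for k in range(terms))
    return mean, ex2 - mean * mean

# nu = 1 recovers the Poisson (variance = mean); nu > 1 gives
# underdispersion (variance < mean); nu < 1 gives overdispersion.
```

This is the flexibility the abstract refers to: a single extra parameter ν lets the same regression family cover under-, equi-, and overdispersed counts.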
Garcia, J M; Teodoro, F; Cerdeira, R; Coelho, L M R; Kumar, Prashant; Carvalho, M G
2016-09-01
A methodology to predict PM10 concentrations in urban outdoor environments is developed based on the generalized linear models (GLMs). The methodology is based on the relationship developed between atmospheric concentrations of air pollutants (i.e. CO, NO2, NOx, VOCs, SO2) and meteorological variables (i.e. ambient temperature, relative humidity (RH) and wind speed) for a city (Barreiro) of Portugal. The model uses air pollution and meteorological data from the Portuguese air quality monitoring station networks. The developed GLM considers PM10 concentrations as a dependent variable, and both the gaseous pollutants and meteorological variables as explanatory independent variables. A logarithmic link function was considered with a Poisson probability distribution. Particular attention was given to cases with air temperatures both below and above 25°C. The best performance for modelled results against the measured data was achieved for the model with values of air temperature above 25°C compared with the model considering all ranges of air temperatures and with the model considering only temperatures below 25°C. The model was also tested with similar data from another Portuguese city, Oporto, and the results were found to behave similarly. It is concluded that this model and the methodology could be adopted for other cities to predict PM10 concentrations when such data are not available from air quality monitoring stations or other acquisition means.
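A log-link Poisson GLM of the kind described is typically fit by Newton-Raphson, which for this family coincides with iteratively reweighted least squares (IRLS). The sketch below uses a single covariate and synthetic noise-free responses, standing in for the Barreiro pollutant and meteorological data, which are not reproduced here:

```python
import math

def fit_poisson_glm(x, y, iters=25):
    """Fit E[y] = exp(b0 + b1*x) by Newton-Raphson (equivalently IRLS)."""
    b0 = math.log(sum(y) / len(y))  # start at the null (intercept-only) model
    b1 = 0.0
    for _ in range(iters):
        mu = [math.exp(b0 + b1 * xi) for xi in x]
        # Score (gradient of the Poisson log-likelihood).
        g0 = sum(yi - mi for yi, mi in zip(y, mu))
        g1 = sum((yi - mi) * xi for yi, mi, xi in zip(y, mu, x))
        # Fisher information, a 2x2 matrix for (b0, b1).
        h00 = sum(mu)
        h01 = sum(mi * xi for mi, xi in zip(mu, x))
        h11 = sum(mi * xi * xi for mi, xi in zip(mu, x))
        det = h00 * h11 - h01 * h01
        b0 += ( h11 * g0 - h01 * g1) / det
        b1 += (-h01 * g0 + h00 * g1) / det
    return b0, b1

# Noise-free check: responses generated from b0 = 0.5, b1 = 0.3 are
# recovered by the fit.
x = [0.0, 1.0, 2.0, 3.0, 4.0]
y = [math.exp(0.5 + 0.3 * xi) for xi in x]
b0, b1 = fit_poisson_glm(x, y)
```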
A generalized linear model for peak calling in ChIP-Seq data.
Xu, Jialin; Zhang, Yu
2012-06-01
Chromatin immunoprecipitation followed by massively parallel sequencing (ChIP-Seq) has become routine for detecting genome-wide protein-DNA interaction. The success of ChIP-Seq data analysis highly depends on the quality of peak calling (i.e., to detect peaks of tag counts at a genomic location and evaluate if the peak corresponds to a real protein-DNA interaction event). The challenges in peak calling include (1) how to combine the forward and the reverse strand tag data to improve the power of peak calling and (2) how to account for the variation of tag data observed across different genomic locations. We introduce a new peak calling method based on the generalized linear model (GLMNB) that utilizes the negative binomial distribution to model the tag count data and account for the variation of background tags that may randomly bind to the DNA sequence at varying levels due to local genomic structures and sequence contents. We allow local shifting of peaks observed on the forward and the reverse strands, such that at each potential binding site, a binding profile representing the pattern of a real peak signal is fitted to best explain the observed tag data with maximum likelihood. Our method can also detect multiple peaks within a local region if there are multiple binding sites in the region.
The overlooked potential of Generalized Linear Models in astronomy, I: Binomial regression
NASA Astrophysics Data System (ADS)
de Souza, R. S.; Cameron, E.; Killedar, M.; Hilbe, J.; Vilalta, R.; Maio, U.; Biffi, V.; Ciardi, B.; Riggs, J. D.
2015-09-01
Revealing hidden patterns in astronomical data is often the path to fundamental scientific breakthroughs; meanwhile the complexity of scientific enquiry increases as more subtle relationships are sought. Contemporary data analysis problems often elude the capabilities of classical statistical techniques, suggesting the use of cutting edge statistical methods. In this light, astronomers have overlooked a whole family of statistical techniques for exploratory data analysis and robust regression, the so-called Generalized Linear Models (GLMs). In this paper-the first in a series aimed at illustrating the power of these methods in astronomical applications-we elucidate the potential of a particular class of GLMs for handling binary/binomial data, the so-called logit and probit regression techniques, from both a maximum likelihood and a Bayesian perspective. As a case in point, we present the use of these GLMs to explore the conditions of star formation activity and metal enrichment in primordial minihaloes from cosmological hydro-simulations including detailed chemistry, gas physics, and stellar feedback. We predict that for a dark mini-halo with metallicity ≈ 1.3 × 10^-4 Z⊙, an increase of 1.2 × 10^-2 in the gas molecular fraction increases the probability of star formation occurrence by a factor of 75%. Finally, we highlight the use of receiver operating characteristic curves as a diagnostic for binary classifiers, and ultimately we use these to demonstrate the competitive predictive performance of GLMs against the popular technique of artificial neural networks.
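The logit case can be sketched with a minimal maximum-likelihood fit by Newton's method; the toy binary responses below are invented for illustration and merely stand in for the star-formation indicators from the simulations:

```python
import math

def fit_logit(x, y, iters=25):
    """Logistic regression P(y=1) = 1/(1+exp(-(b0+b1*x))) via Newton's method."""
    b0, b1 = 0.0, 0.0
    for _ in range(iters):
        p = [1.0 / (1.0 + math.exp(-(b0 + b1 * xi))) for xi in x]
        # Score vector.
        g0 = sum(yi - pi for yi, pi in zip(y, p))
        g1 = sum((yi - pi) * xi for yi, pi, xi in zip(y, p, x))
        # Observed information with weights p(1-p).
        w = [pi * (1.0 - pi) for pi in p]
        h00 = sum(w)
        h01 = sum(wi * xi for wi, xi in zip(w, x))
        h11 = sum(wi * xi * xi for wi, xi in zip(w, x))
        det = h00 * h11 - h01 * h01
        b0 += ( h11 * g0 - h01 * g1) / det
        b1 += (-h01 * g0 + h00 * g1) / det
    return b0, b1

# Overlapping (non-separable) toy data, so the MLE is finite.
x = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0]
y = [0, 0, 1, 0, 1, 1]
b0, b1 = fit_logit(x, y)
prob = lambda xi: 1.0 / (1.0 + math.exp(-(b0 + b1 * xi)))
```

Swapping the logistic function for the normal CDF here would give the probit variant mentioned in the abstract.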
Chen, Gang; Adleman, Nancy E.; Saad, Ziad S.; Leibenluft, Ellen; Cox, Robert W.
2014-01-01
All neuroimaging packages can handle group analysis with t-tests or general linear modeling (GLM). However, they are quite hamstrung when there are multiple within-subject factors or when quantitative covariates are involved in the presence of a within-subject factor. In addition, sphericity is typically assumed for the variance–covariance structure when there are more than two levels in a within-subject factor. To overcome such limitations in the traditional AN(C)OVA and GLM, we adopt a multivariate modeling (MVM) approach to analyzing neuroimaging data at the group level with the following advantages: a) there is no limit on the number of factors as long as sample sizes are deemed appropriate; b) quantitative covariates can be analyzed together with within-subject factors; c) when a within-subject factor is involved, three testing methodologies are provided: traditional univariate testing (UVT) with sphericity assumption (UVT-UC) and with correction when the assumption is violated (UVT-SC), and within-subject multivariate testing (MVT-WS); d) to correct for sphericity violation at the voxel level, we propose a hybrid testing (HT) approach that achieves equal or higher power via combining traditional sphericity correction methods (Greenhouse–Geisser and Huynh–Feldt) with MVT-WS. PMID:24954281
Fast inference in generalized linear models via expected log-likelihoods
Ramirez, Alexandro D.; Paninski, Liam
2015-01-01
Generalized linear models play an essential role in a wide variety of statistical applications. This paper discusses an approximation of the likelihood in these models that can greatly facilitate computation. The basic idea is to replace a sum that appears in the exact log-likelihood by an expectation over the model covariates; the resulting “expected log-likelihood” can in many cases be computed significantly faster than the exact log-likelihood. In many neuroscience experiments the distribution over model covariates is controlled by the experimenter and the expected log-likelihood approximation becomes particularly useful; for example, estimators based on maximizing this expected log-likelihood (or a penalized version thereof) can often be obtained with orders of magnitude computational savings compared to the exact maximum likelihood estimators. A risk analysis establishes that these maximum EL estimators often come with little cost in accuracy (and in some cases even improved accuracy) compared to standard maximum likelihood estimates. Finally, we find that these methods can significantly decrease the computation time of marginal likelihood calculations for model selection and of Markov chain Monte Carlo methods for sampling from the posterior parameter distribution. We illustrate our results by applying these methods to a computationally-challenging dataset of neural spike trains obtained via large-scale multi-electrode recordings in the primate retina. PMID:23832289
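The substitution at the heart of the method can be seen in one dimension: for a Poisson GLM with exponential nonlinearity and covariates x ~ N(0, 1) (a distributional assumption made here purely for illustration), the per-sample average of exp(θx) appearing in the exact log-likelihood has the closed form E[exp(θx)] = exp(θ²/2), so an O(N) sum collapses to a single expression:

```python
import math, random

random.seed(0)
theta = 0.5
n = 100_000

# Empirical average of exp(theta * x) over draws x ~ N(0, 1): this is the
# covariate-sum term the exact log-likelihood must evaluate at every step.
empirical = sum(math.exp(theta * random.gauss(0.0, 1.0)) for _ in range(n)) / n

# The expected-log-likelihood shortcut replaces that O(n) sum with the
# closed-form moment-generating-function value E[exp(theta*x)] = exp(theta^2/2).
closed_form = math.exp(theta ** 2 / 2.0)
```

The two quantities agree to Monte Carlo accuracy, which is why maximizing the expected log-likelihood can deliver the large computational savings described above.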
Friese, Daniel H; Ruud, Kenneth
2016-02-07
We present the theory of three-photon circular dichroism (3PCD), a novel non-linear chiroptical property not yet described in the literature. We derive the observable absorption cross section including the orientational average of the necessary seventh-rank tensors and provide origin-independent expressions for 3PCD using either a velocity-gauge treatment of the electric dipole operator or a length-gauge formulation using London atomic orbitals. We present the first numerical results for hydrogen peroxide, 3-methylcyclopentanone (MCP) and 4-helicene, including also a study of the origin dependence and basis set convergence of 3PCD. We find that for the 3PCD-brightest low-lying Rydberg state of hydrogen peroxide, the dichroism is extremely basis set dependent, with basis set convergence not being reached before a sextuple-zeta basis is used, whereas for the MCP and 4-helicene molecules, the basis set dependence is more moderate and at the triple-zeta level the 3PCD contributions are more or less converged irrespective of whether the considered states are Rydberg states or not. The character of the 3PCD-brightest states in MCP is characterized by a fairly large charge-transfer character from the carbonyl group to the ring system. In general, the quadrupole contributions to 3PCD are found to be very small.
Population decoding of motor cortical activity using a generalized linear model with hidden states.
Lawhern, Vernon; Wu, Wei; Hatsopoulos, Nicholas; Paninski, Liam
2010-06-15
Generalized linear models (GLMs) have been developed for modeling and decoding population neuronal spiking activity in the motor cortex. These models provide reasonable characterizations between neural activity and motor behavior. However, they lack a description of movement-related terms which are not observed directly in these experiments, such as muscular activation, the subject's level of attention, and other internal or external states. Here we propose to include a multi-dimensional hidden state to address these states in a GLM framework where the spike count at each time is described as a function of the hand state (position, velocity, and acceleration), truncated spike history, and the hidden state. The model can be identified by an Expectation-Maximization algorithm. We tested this new method in two datasets where spikes were simultaneously recorded using a multi-electrode array in the primary motor cortex of two monkeys. It was found that this method significantly improves the model-fitting over the classical GLM, for hidden dimensions varying from 1 to 4. This method also provides more accurate decoding of hand state (reducing the mean square error by up to 29% in some cases), while retaining real-time computational efficiency. These improvements on representation and decoding over the classical GLM model suggest that this new approach could contribute as a useful tool to motor cortical decoding and prosthetic applications.
A generalized harmonic balance method for forced non-linear oscillations: the subharmonic cases
NASA Astrophysics Data System (ADS)
Wu, J. J.
1992-12-01
This paper summarizes and extends results in two previous papers, published in conference proceedings, on a variant of the generalized harmonic balance method (GHB) and its application to obtain subharmonic solutions of forced non-linear oscillation problems. This method was introduced as an alternative to the method of multiple scales, and it essentially consists of two parts. First, the part of the multiple scales method used to reduce the problem to a set of differential equations is used to express the solution as a sum of terms of various harmonics with unknown, time dependent coefficients. Second, the form of solution so obtained is substituted into the original equation and the coefficients of each harmonic are set to zero. Key equations of approximations for a subharmonic case are derived for the cases of both "small" damping and excitations, and "large" damping and excitations, which are shown to be identical, in the intended order of approximation, to those obtained by Nayfeh using the method of multiple scales. Detailed numerical formulations, including the derivation of the initial conditions, are presented, as well as some numerical results for the frequency-response relations and the time evolution of various harmonic components. Excellent agreement is demonstrated between results by GHB and by integrating the original differential equation directly. The improved efficiency in obtaining numerical solutions using GHB as compared with integrating the original differential equation is also demonstrated. For the case of large damping and excitations and for non-trivial solutions, it is noted that there exists a threshold value of the force beyond which no subharmonic excitations are possible.
A general linear model-based approach for inferring selection to climate
2013-01-01
Background: Many efforts have been made to detect signatures of positive selection in the human genome, especially those associated with expansion from Africa and subsequent colonization of all other continents. However, most approaches have not directly probed the relationship between the environment and patterns of variation among humans. We have designed a method to identify regions of the genome under selection based on Mantel tests conducted within a general linear model framework, which we call MAntel-GLM to Infer Clinal Selection (MAGICS). MAGICS explicitly incorporates population-specific and genome-wide patterns of background variation as well as information from environmental values to provide an improved picture of selection and its underlying causes in human populations. Results: Our results significantly overlap with those obtained by other published methodologies, but MAGICS has several advantages. These include improvements that: limit false positives by reducing the number of independent tests conducted and by correcting for geographic distance, which we found to be a major contributor to selection signals; yield absolute rather than relative estimates of significance; identify specific geographic regions linked most strongly to particular signals of selection; and detect recent balancing as well as directional selection. Conclusions: We find evidence of selection associated with climate (P < 10^-5) in 354 genes, and among these observe a highly significant enrichment for directional positive selection. Two of our strongest 'hits', however, ADRA2A and ADRA2C, implicated in vasoconstriction in response to cold and pain stimuli, show evidence of balancing selection. Our results clearly demonstrate evidence of climate-related signals of directional and balancing selection. PMID:24053227
Protein structure validation by generalized linear model root-mean-square deviation prediction.
Bagaria, Anurag; Jaravine, Victor; Huang, Yuanpeng J; Montelione, Gaetano T; Güntert, Peter
2012-02-01
Large-scale initiatives for obtaining spatial protein structures by experimental or computational means have accentuated the need for the critical assessment of protein structure determination and prediction methods. These include blind test projects such as the critical assessment of protein structure prediction (CASP) and the critical assessment of protein structure determination by nuclear magnetic resonance (CASD-NMR). An important aim is to establish structure validation criteria that can reliably assess the accuracy of a new protein structure. Various quality measures derived from the coordinates have been proposed. A universal structural quality assessment method should combine multiple individual scores in a meaningful way, which is challenging because of their different measurement units. Here, we present a method based on a generalized linear model (GLM) that combines diverse protein structure quality scores into a single quantity with intuitive meaning, namely the predicted coordinate root-mean-square deviation (RMSD) value between the present structure and the (unavailable) "true" structure (GLM-RMSD). For two sets of structural models from the CASD-NMR and CASP projects, this GLM-RMSD value was compared with the actual accuracy given by the RMSD value to the corresponding, experimentally determined reference structure from the Protein Data Bank (PDB). The correlation coefficients between actual (model vs. reference from PDB) and predicted (model vs. "true") heavy-atom RMSDs were 0.69 and 0.76, for the two datasets from CASD-NMR and CASP, respectively, which is considerably higher than those for the individual scores (-0.24 to 0.68). The GLM-RMSD can thus predict the accuracy of protein structures more reliably than individual coordinate-based quality scores.
NASA Technical Reports Server (NTRS)
Holdaway, Daniel; Kent, James
2015-01-01
The linearity of a selection of common advection schemes is tested and examined with a view to their use in the tangent linear and adjoint versions of an atmospheric general circulation model. The schemes are tested within a simple offline one-dimensional periodic domain as well as using a simplified and complete configuration of the linearised version of NASA's Goddard Earth Observing System version 5 (GEOS-5). All schemes which prevent the development of negative values and preserve the shape of the solution are confirmed to have nonlinear behaviour. The piecewise parabolic method (PPM) with certain flux limiters, including that used by default in GEOS-5, is found to support linear growth near the shocks. This property can cause the rapid development of unrealistically large perturbations within the tangent linear and adjoint models. It is shown that these schemes with flux limiters should not be used within the linearised version of a transport scheme. The results from tests using GEOS-5 show that the current default scheme (a version of PPM) is not suitable for the tangent linear and adjoint model, and that using a linear third-order scheme for the linearised model produces better behaviour. Using the third-order scheme for the linearised model improves the correlations between the linear and non-linear perturbation trajectories for cloud liquid water and cloud liquid ice in GEOS-5.
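Scheme linearity can be tested exactly as described: apply one step of an operator L to u1, u2, and u1 + u2, and compare L(u1 + u2) with L(u1) + L(u2). The sketch below contrasts first-order upwind (linear) with a minmod-limited MUSCL step (nonlinear); both schemes and the grid profiles are simple stand-ins, not the GEOS-5 PPM implementation:

```python
def upwind_step(u, nu):
    # First-order upwind for c > 0 on a periodic grid: exactly linear in u.
    n = len(u)
    return [u[i] - nu * (u[i] - u[i - 1]) for i in range(n)]

def minmod(a, b):
    if a * b <= 0.0:
        return 0.0
    return min(abs(a), abs(b)) * (1.0 if a > 0 else -1.0)

def limited_step(u, nu):
    # MUSCL step with minmod-limited slopes: shape-preserving but nonlinear.
    n = len(u)
    slope = [minmod(u[i] - u[i - 1], u[(i + 1) % n] - u[i]) for i in range(n)]
    flux = [u[i] + 0.5 * (1.0 - nu) * slope[i] for i in range(n)]  # F at i+1/2
    return [u[i] - nu * (flux[i] - flux[i - 1]) for i in range(n)]

u1 = [0.0, 1.0, 2.0, 3.0, 4.0, 3.0, 2.0, 1.0]   # smooth ramp
u2 = [0.0, 0.0, 0.0, 2.0, 0.0, 0.0, 0.0, 0.0]   # isolated spike
nu = 0.5

def defect(step):
    # Max pointwise violation of superposition after one time step.
    both = step([a + b for a, b in zip(u1, u2)], nu)
    sep = [a + b for a, b in zip(step(u1, nu), step(u2, nu))]
    return max(abs(a - b) for a, b in zip(both, sep))

# defect(upwind_step) vanishes; defect(limited_step) does not, which is the
# nonlinearity that corrupts tangent linear and adjoint perturbations.
```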
On the Global and Linear Convergence of the Generalized Alternating Direction Method of Multipliers
2012-08-01
This paper shows that global linear convergence can be guaranteed under assumptions of strong convexity and a Lipschitz gradient on one of the two functions, along with certain … Despite the extensive literature on the ADM and its applications, there are very few results on its rate of convergence until the very recent past. Work [13] shows …
NASA Astrophysics Data System (ADS)
Irmak, Suat; Mutiibwa, Denis
2010-08-01
The 1-D and single layer combination-based energy balance Penman-Monteith (PM) model has limitations in practical application due to the lack of canopy resistance (rc) data for different vegetation surfaces. rc could be estimated by inversion of the PM model if the actual evapotranspiration (ETa) rate is known, but this approach has its own set of issues. Instead, an empirical method of estimating rc is suggested in this study. We investigated the relationships between primary micrometeorological parameters and rc and developed seven models to estimate rc for a nonstressed maize canopy on an hourly time step using a generalized-linear modeling approach. The most complex rc model uses net radiation (Rn), air temperature (Ta), vapor pressure deficit (VPD), relative humidity (RH), wind speed at 3 m (u3), aerodynamic resistance (ra), leaf area index (LAI), and solar zenith angle (Θ). The simplest model requires Rn, Ta, and RH. We present the practical implementation of all models via experimental validation using scaled up rc data obtained from the dynamic diffusion porometer-measured leaf stomatal resistance through an extensive field campaign in 2006. For further validation, we estimated ETa by solving the PM model using the modeled rc from all seven models and compared the PM ETa estimates with the Bowen ratio energy balance system (BREBS)-measured ETa for an independent data set in 2005. The relationships between hourly rc versus Ta, RH, VPD, Rn, incoming shortwave radiation (Rs), u3, wind direction, LAI, Θ, and ra were presented and discussed. We demonstrated the negative impact of exclusion of LAI when modeling rc, whereas exclusion of ra and Θ did not impact the performance of the rc models. Compared to the calibration results, the validation root mean square difference between observed and modeled rc increased by 5 s m^-1 for all rc models developed, ranging from 9.9 s m^-1 for the most complex model to 22.8 s m^-1 for the simplest model, as compared with the
ERIC Educational Resources Information Center
Rogers, Katherine D.; Young, Alys; Lovell, Karina; Campbell, Malcolm; Scott, Paul R.; Kendal, Sarah
2013-01-01
The present study aimed to translate three widely used clinical assessment measures into British Sign Language (BSL), to pilot the BSL versions, and to establish their validity and reliability. These were the Patient Health Questionnaire (PHQ-9), the Generalized Anxiety Disorder 7-item (GAD-7) scale, and the Work and Social Adjustment Scale (WSAS).…
NASA Astrophysics Data System (ADS)
Carniti, P.; Cassina, L.; Gotti, C.; Maino, M.; Pessina, G.
2016-07-01
In this work we present ALDO, an adjustable low drop-out linear regulator designed in AMS 0.35 μm CMOS technology. It is specifically tailored for use in the upgraded LHCb RICH detector in order to improve the power supply noise for the front end readout chip (CLARO). ALDO is designed with radiation-tolerant solutions such as an all-MOS band-gap voltage reference and layout techniques aiming to make it able to operate in harsh environments like High Energy Physics accelerators. It is capable of driving up to 200 mA while keeping an adequate power supply filtering capability in a very wide frequency range from 10 Hz up to 100 MHz. This property allows us to suppress the noise and high frequency spikes that could be generated by a DC/DC regulator, for example. ALDO also shows a very low noise of 11.6 μV RMS in the same frequency range. Its output is protected with over-current and short detection circuits for a safe integration in tightly packed environments. Design solutions and measurements of the first prototype are presented.
Evidence for the conjecture that sampling generalized cat states with linear optics is hard
NASA Astrophysics Data System (ADS)
Rohde, Peter P.; Motes, Keith R.; Knott, Paul A.; Fitzsimons, Joseph; Munro, William J.; Dowling, Jonathan P.
2015-01-01
Boson sampling has been presented as a simplified model for linear optical quantum computing. In the boson-sampling model, Fock states are passed through a linear optics network and sampled via number-resolved photodetection. It has been shown that this sampling problem likely cannot be efficiently classically simulated. This raises the question as to whether there are other quantum states of light for which the equivalent sampling problem is also computationally hard. We present evidence, without using a full complexity proof, that a very broad class of quantum states of light—arbitrary superpositions of two or more coherent states—when evolved via passive linear optics and sampled with number-resolved photodetection, likely implements a classically hard sampling problem.
NASA Astrophysics Data System (ADS)
Shirokov, M. E.
2013-11-01
The method of the complementary channel for analyzing reversibility (sufficiency) of a quantum channel with respect to families of input states (pure states for the most part) is considered and applied to Bosonic linear (quasi-free) channels, in particular to Bosonic Gaussian channels. The obtained reversibility conditions for Bosonic linear channels have a clear physical interpretation, and their sufficiency is also shown by explicit construction of reversing channels. The method of the complementary channel makes it possible to prove the necessity of these conditions and to describe all reversed families of pure states in the Schrödinger representation. Some applications in quantum information theory are considered. Conditions for the existence of discrete classical-quantum subchannels and of completely depolarizing subchannels of a Bosonic linear channel are presented.
NASA Technical Reports Server (NTRS)
Tamma, Kumar K.; Railkar, Sudhir B.
1987-01-01
The present paper describes the development of a new hybrid computational approach applicable to nonlinear/linear thermal structural analysis. The proposed transfinite element approach is a hybrid scheme as it combines the modeling versatility of contemporary finite elements in conjunction with transform methods and the classical Bubnov-Galerkin schemes. Applicability of the proposed formulations for nonlinear analysis is also developed. Several test cases are presented to include nonlinear/linear unified thermal-stress and thermal-stress wave propagations. Comparative results validate the fundamental capabilities of the proposed hybrid transfinite element methodology.
Huppert, Theodore J.
2016-01-01
Functional near-infrared spectroscopy (fNIRS) is a noninvasive neuroimaging technique that uses low levels of light to measure changes in cerebral blood oxygenation levels. In the majority of NIRS functional brain studies, analysis of this data is based on a statistical comparison of hemodynamic levels between a baseline and task or between multiple task conditions by means of a linear regression model: the so-called general linear model. Although these methods are similar to their implementation in other fields, particularly for functional magnetic resonance imaging, the specific application of these methods in fNIRS research differs in several key ways related to the sources of noise and artifacts unique to fNIRS. In this brief communication, we discuss the application of linear regression models in fNIRS and the modifications needed to generalize these models in order to deal with structured (colored) noise due to systemic physiology and noise heteroscedasticity due to motion artifacts. The objective of this work is to present an overview of these noise properties in the context of the linear model as it applies to fNIRS data. This work is aimed at explaining these mathematical issues to the general fNIRS experimental researcher but is not intended to be a complete mathematical treatment of these concepts. PMID:26989756
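A minimal sketch of prewhitening for serially correlated (colored) noise, one of the modifications discussed: fit ordinary least squares, estimate an AR(1) coefficient from the residuals, quasi-difference both sides, and refit. Everything here (single regressor without intercept, synthetic sinusoidal design, first-order AR noise) is a simplifying assumption; real fNIRS pipelines use full design matrices, higher-order AR models, and robust weighting against motion artifacts.

```python
import math, random

random.seed(1)
n, beta_true, rho_true = 2000, 2.0, 0.8

# Synthetic task regressor and AR(1)-correlated "physiological" noise.
x = [math.sin(2.0 * math.pi * t / 8.0) for t in range(n)]
noise = [0.0] * n
for t in range(1, n):
    noise[t] = rho_true * noise[t - 1] + random.gauss(0.0, 1.0)
y = [beta_true * xt + et for xt, et in zip(x, noise)]

def ols(xs, ys):
    # One-regressor least squares without intercept.
    return sum(a * b for a, b in zip(xs, ys)) / sum(a * a for a in xs)

# Step 1: naive OLS fit, then estimate rho from the lag-1 residual
# autocorrelation.
b = ols(x, y)
r = [yt - b * xt for xt, yt in zip(x, y)]
rho = sum(r[t] * r[t - 1] for t in range(1, n)) / sum(v * v for v in r)

# Step 2: prewhiten (quasi-difference) both regressor and data, then refit;
# the whitened residuals are approximately uncorrelated, so the usual GLM
# standard errors become trustworthy.
xw = [x[t] - rho * x[t - 1] for t in range(1, n)]
yw = [y[t] - rho * y[t - 1] for t in range(1, n)]
b_white = ols(xw, yw)
```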
Recent advances toward a general purpose linear-scaling quantum force field.
Giese, Timothy J; Huang, Ming; Chen, Haoyuan; York, Darrin M
2014-09-16
Conspectus: There is a need in the molecular simulation community to develop new quantum mechanical (QM) methods that can be routinely applied to the simulation of large molecular systems in complex, heterogeneous condensed phase environments. Although conventional methods, such as the hybrid quantum mechanical/molecular mechanical (QM/MM) method, are adequate for many problems, there remain other applications that demand a fully quantum mechanical approach. QM methods are generally required in applications that involve changes in electronic structure, such as when chemical bond formation or cleavage occurs, when molecules respond to one another through polarization or charge transfer, or when matter interacts with electromagnetic fields. A full QM treatment, rather than QM/MM, is necessary when these features present themselves over a wide spatial range that, in some cases, may span the entire system. Specific examples include the study of catalytic events that involve delocalized changes in chemical bonds, charge transfer, or extensive polarization of the macromolecular environment; drug discovery applications, where the wide range of nonstandard residues and protonation states is challenging to model with purely empirical MM force fields; and the interpretation of spectroscopic observables. Unfortunately, the enormous computational cost of conventional QM methods limits their practical application to small systems. Linear-scaling electronic structure methods (LSQMs) make possible the calculation of large systems but are still too computationally intensive to be applied with the degree of configurational sampling often required to make meaningful comparison with experiment. In this work, we present advances in the development of a quantum mechanical force field (QMFF) suitable for application to biological macromolecules and condensed phase simulations. QMFFs leverage the benefits provided by the LSQM and QM/MM approaches to produce a fully QM method that is able to
ERIC Educational Resources Information Center
Yan, Jun; Aseltine, Robert H., Jr.; Harel, Ofer
2013-01-01
Comparing regression coefficients between models when one model is nested within another is of great practical interest when two explanations of a given phenomenon are specified as linear models. The statistical problem is whether the coefficients associated with a given set of covariates change significantly when other covariates are added into…
1980-11-01
generalized model described by Eykhoff [1, 2], Astrom and Eykhoff [3], and on pages 209-220 of Eykhoff [4]. The origin of the generalized model can be...aspects of process-parameter estimation," IEEE Trans. Auto. Control, October 1963, pp. 347-357. 3. K. J. Astrom and P. Eykhoff, "System
de Dieu Tapsoba, Jean; Lee, Shen-Ming; Wang, Ching-Yun
2013-01-01
Data collected in many epidemiological or clinical research studies are often contaminated with measurement errors that may be of classical or Berkson error type. The measurement error may also be a combination of both classical and Berkson errors and failure to account for both errors could lead to unreliable inference in many situations. We consider regression analysis in generalized linear models when some covariates are prone to a mixture of Berkson and classical errors and calibration data are available only for some subjects in a subsample. We propose an expected estimating equation approach to accommodate both errors in generalized linear regression analyses. The proposed method can consistently estimate the classical and Berkson error variances based on the available data, without knowing the mixture percentage. Its finite-sample performance is investigated numerically. Our method is illustrated by an application to real data from an HIV vaccine study. PMID:24009099
Fernandes, L.; Friedlander, A.; Guedes, M.; Judice, J.
2001-07-01
This paper addresses a General Linear Complementarity Problem (GLCP) that has found applications in global optimization. It is shown that a solution of the GLCP can be computed by finding a stationary point of a differentiable function over a set defined by simple bounds on the variables. The application of this result to the solution of bilinear programs and LCPs is discussed. Some computational evidence of its usefulness is included in the last part of the paper.
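The flavor of a complementarity problem can be shown on the standard LCP: find z >= 0 with w = Mz + q >= 0 and z'w = 0. The sketch below uses projected Gauss-Seidel, a simple classical iteration, rather than the paper's bound-constrained stationary-point reformulation:

```python
import numpy as np

def lcp_pgs(M, q, n_iter=200):
    """Projected Gauss-Seidel for the LCP: find z >= 0 with
    w = M z + q >= 0 and z.w = 0 (M symmetric positive definite).

    A standard iterative solver, shown only to illustrate the
    complementarity conditions; it is not the GLCP method of the paper.
    """
    n = len(q)
    z = np.zeros(n)
    for _ in range(n_iter):
        for i in range(n):
            w_i = M[i] @ z + q[i]
            # Project each coordinate back onto the bound z_i >= 0
            z[i] = max(0.0, z[i] - w_i / M[i, i])
    return z

M = np.array([[2.0, 1.0], [1.0, 2.0]])
q = np.array([-1.0, -1.0])
z = lcp_pgs(M, q)
w = M @ z + q
```

For this M and q the iteration converges to z = (1/3, 1/3) with w = 0, satisfying both nonnegativity and complementarity exactly.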
A general algorithm for control problems with variable parameters and quasi-linear models
NASA Astrophysics Data System (ADS)
Bayón, L.; Grau, J. M.; Ruiz, M. M.; Suárez, P. M.
2015-12-01
This paper presents an algorithm that is able to solve optimal control problems in which the modelling of the system contains variable parameters, with the added complication that, in certain cases, these parameters can lead to control problems governed by quasi-linear equations. Combining the techniques of Pontryagin's Maximum Principle and the shooting method, an algorithm has been developed that is not affected by the values of the parameters, being able to solve conventional problems as well as cases in which the optimal solution is shown to be bang-bang with singular arcs.
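As a toy illustration of combining Pontryagin's principle with shooting (a far simpler setting than the paper's quasi-linear, bang-bang problems), take J = 1/2 * integral of (x^2 + u^2) over [0,1] with x' = u and x(0) = 1. The optimality conditions give u* = -p and the two-point system x' = -p, p' = -x with p(1) = 0, which shooting solves by adjusting the unknown initial costate p(0):

```python
import math

def shoot(p0, T=1.0, n=1000):
    """Integrate the Pontryagin two-point system x' = -p, p' = -x
    (from J = 1/2 * int(x^2 + u^2), x' = u, u* = -p) with x(0) = 1
    and guessed p(0) = p0, returning the terminal costate p(T).
    Classical fixed-step RK4."""
    x, p = 1.0, p0
    h = T / n
    f = lambda x, p: (-p, -x)
    for _ in range(n):
        k1 = f(x, p)
        k2 = f(x + h / 2 * k1[0], p + h / 2 * k1[1])
        k3 = f(x + h / 2 * k2[0], p + h / 2 * k2[1])
        k4 = f(x + h * k3[0], p + h * k3[1])
        x += h / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0])
        p += h / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1])
    return p

def solve_shooting(lo=0.0, hi=2.0, tol=1e-10):
    """Bisection on the initial costate so that p(T) = 0."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if shoot(lo) * shoot(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

p0 = solve_shooting()
```

The problem has the closed-form optimum x(t) = cosh(1 - t)/cosh(1), so the shooting solution should satisfy p(0) = tanh(1), which provides a direct check.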
Michaelides, Angelos; Liu, Z-P; Zhang, C J; Alavi, Ali; King, David A; Hu, P
2003-04-02
The activation energy to reaction is a key quantity that controls catalytic activity. Having used ab initio calculations to determine an extensive and broad-ranging set of activation energies and enthalpy changes for surface-catalyzed reactions, we show that linear relationships exist between dissociation activation energies and enthalpy changes. Known in the literature as empirical Brønsted-Evans-Polanyi (BEP) relationships, we identify and discuss the physical origin of their presence in heterogeneous catalysis. The key implication is that, merely from knowledge of adsorption energies, the barriers to catalytic elementary reaction steps can be estimated.
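A BEP relationship is simply a linear correlation Ea = alpha * dH + beta across a reaction family, so given computed (dH, Ea) pairs the coefficients follow from least squares. The numbers below are made up for illustration, not taken from the paper:

```python
import numpy as np

# Hypothetical (dH, Ea) pairs placed exactly on Ea = 0.87*dH + 1.34 (eV),
# mimicking a Bronsted-Evans-Polanyi relation for dissociation steps.
dH = np.array([-1.2, -0.6, 0.0, 0.4, 1.1, 1.8])
Ea = 0.87 * dH + 1.34
alpha, beta = np.polyfit(dH, Ea, 1)

# With the fit in hand, a barrier can be estimated from an adsorption
# (enthalpy) calculation alone, e.g. for dH = 0.75 eV:
Ea_pred = alpha * 0.75 + beta
```

This is the practical content of the abstract's key implication: once alpha and beta are calibrated for a reaction class, reaction barriers can be screened from adsorption energies without computing each transition state.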
A substructure coupling procedure applicable to general linear time-invariant dynamic systems
NASA Technical Reports Server (NTRS)
Howsman, T. G.; Craig, R. R., Jr.
1984-01-01
A substructure synthesis procedure applicable to structural systems containing general nonconservative terms is presented. In their final form, the nonself-adjoint substructure equations of motion are cast in state vector form through the use of a variational principle. A reduced-order model for each substructure is implemented by representing the substructure as a combination of a small number of Ritz vectors. For the method presented, the substructure Ritz vectors are identified as a truncated set of substructure eigenmodes, which are typically complex, along with a set of generalized real attachment modes. The formation of the generalized attachment modes does not require any knowledge of the substructure flexible modes; hence, only the eigenmodes used explicitly as Ritz vectors need to be extracted from the substructure eigenproblem. An example problem is presented to illustrate the method.
Marín-Sanguino, Alberto; Torres, Néstor V
2003-08-01
A new method is proposed for the optimization of biochemical systems. The method, based on the separation of the stoichiometric and kinetic aspects of the system, follows the general approach used in the previously presented indirect optimization method (IOM) developed within biochemical systems theory. It is called GMA-IOM because it makes use of the generalized mass action (GMA) as the model system representation form. The GMA representation avoids flux aggregation and thus prevents possible stoichiometric errors. The optimization of a system is used to illustrate and compare the features, advantages and shortcomings of both versions of the IOM method as a general strategy for designing improved microbial strains of biotechnological interest. Special attention has been paid to practical problems for the actual implementation of the new proposed strategy, such as the total protein content of the engineered strain or the deviation from the original steady state and its influence on cell viability.
General theory of spherically symmetric boundary-value problems of the linear transport theory.
NASA Technical Reports Server (NTRS)
Kanal, M.
1972-01-01
A general theory of spherically symmetric boundary-value problems of the one-speed neutron transport theory is presented. The formulation is also applicable to the 'gray' problems of radiative transfer. The Green's function for the purely absorbing medium is utilized in obtaining the normal mode expansion of the angular densities for both interior and exterior problems. As the integral equations for unknown coefficients are regular, a general class of reduction operators is introduced to reduce such regular integral equations to singular ones with a Cauchy-type kernel. Such operators then permit one to solve the singular integral equations by the standard techniques due to Muskhelishvili. We discuss several spherically symmetric problems. However, the treatment is kept sufficiently general to deal with problems lacking azimuthal symmetry. In particular the procedure seems to work for regions whose boundary coincides with one of the coordinate surfaces for which the Helmholtz equation is separable.
NASA Technical Reports Server (NTRS)
Nemeth, Michael P.; Schultz, Marc R.
2012-01-01
A detailed exact solution is presented for laminated-composite circular cylinders with general wall construction and that undergo axisymmetric deformations. The overall solution is formulated in a general, systematic way and is based on the solution of a single fourth-order, nonhomogeneous ordinary differential equation with constant coefficients in which the radial displacement is the dependent variable. Moreover, the effects of general anisotropy are included and positive-definiteness of the strain energy is used to define uniquely the form of the basis functions spanning the solution space of the ordinary differential equation. Loading conditions are considered that include axisymmetric edge loads, surface tractions, and temperature fields. Likewise, all possible axisymmetric boundary conditions are considered. Results are presented for five examples that demonstrate a wide range of behavior for specially orthotropic and fully anisotropic cylinders.
Robust conic generalized partial linear models using RCMARS method - A robustification of CGPLM
NASA Astrophysics Data System (ADS)
Özmen, Ayşe; Weber, Gerhard Wilhelm
2012-11-01
GPLM is a combination of two different regression models, each of which is applied to a different part of the data set. It is also well suited to high-dimensional, non-normal and nonlinear data sets, having the flexibility to reflect all anomalies effectively. In our previous study, Conic GPLM (CGPLM) was introduced using CMARS and logistic regression. According to a comparison with CMARS, CGPLM gives better results. In this study, we incorporate the existence of uncertainty in future scenarios into the CMARS and linear/logit regression parts of CGPLM and robustify it with robust optimization, which deals with data uncertainty. Moreover, we apply RCGPLM to a small data set from the financial sector as a numerical experience.
Iterative solution of general sparse linear systems on clusters of workstations
Lo, Gen-Ching; Saad, Y.
1996-12-31
Solving sparse irregularly structured linear systems on parallel platforms poses several challenges. First, sparsity makes it difficult to exploit data locality, whether in a distributed or shared memory environment. A second, perhaps more serious, challenge is to find efficient ways to precondition the system. Preconditioning techniques which have a large degree of parallelism, such as multicolor SSOR, often have a slower rate of convergence than their sequential counterparts. Finally, a number of other computational kernels, such as inner products, could erode any gains obtained from parallel speed-ups, and this is especially true on workstation clusters, where start-up times may be high. In this paper we discuss these issues and report on our experience with PSPARSLIB, an on-going project for building a library of parallel iterative sparse matrix solvers.
Quasi-Linear Parameter Varying Representation of General Aircraft Dynamics Over Non-Trim Region
NASA Technical Reports Server (NTRS)
Shin, Jong-Yeob
2007-01-01
For applying linear parameter varying (LPV) control synthesis and analysis to a nonlinear system, it is required that a nonlinear system be represented in the form of an LPV model. In this paper, a new representation method is developed to construct an LPV model from a nonlinear mathematical model without the restriction that an operating point must be in the neighborhood of equilibrium points. An LPV model constructed by the new method preserves local stabilities of the original nonlinear system at "frozen" scheduling parameters and also represents the original nonlinear dynamics of a system over a non-trim region. An LPV model of the motion of FASER (Free-flying Aircraft for Subscale Experimental Research) is constructed by the new method.
Linear stability of plane Poiseuille flow over a generalized Stokes layer
NASA Astrophysics Data System (ADS)
Quadrio, Maurizio; Martinelli, Fulvio; Schmid, Peter J.
2011-12-01
Linear stability of plane Poiseuille flow subject to spanwise velocity forcing applied at the wall is studied. The forcing is stationary and sinusoidally distributed along the streamwise direction. The long-term aim of the study is to explore a possible relationship between the modification induced by the wall forcing to the stability characteristics of the unforced Poiseuille flow and the significant capabilities demonstrated by the same forcing in reducing turbulent friction drag. We present in this paper the statement of the mathematical problem, which is considerably more complex than the classic Orr-Sommerfeld-Squire approach, owing to the streamwise-varying boundary condition. We also report some preliminary results which, although not yet conclusive, describe the effects of the wall forcing on the modal and non-modal characteristics of the flow stability.
NASA Technical Reports Server (NTRS)
Kaul, Upender K.
2005-01-01
A three-dimensional numerical solver based on finite-difference solution of three-dimensional elastodynamic equations in generalized curvilinear coordinates has been developed and used to generate data such as radial and tangential stresses over various gear component geometries under rotation. The geometries considered are an annulus, a thin annular disk, and a thin solid disk. The solution is based on first principles and does not involve lumped parameter or distributed parameter systems approach. The elastodynamic equations in the velocity-stress formulation that are considered here have been used in the solution of problems of geophysics where non-rotating Cartesian grids are considered. For arbitrary geometries, these equations along with the appropriate boundary conditions have been cast in generalized curvilinear coordinates in the present study.
Chuang, Chun-Fu; Sun, Yeong-Jeu; Wang, Wen-June
2012-12-01
In this study, exponential finite-time synchronization for generalized Lorenz chaotic systems is investigated. The significant contribution of this paper is that master-slave synchronization is achieved within a pre-specified convergence time and with a simple linear control. The designed linear control consists of two parts: one achieves exponential synchronization, and the other realizes finite-time synchronization within a guaranteed convergence time. Furthermore, the control gain depends on the parameters of the exponential convergence rate, the finite-time convergence rate, the bound of the initial states of the master system, and the system parameter. In addition, the proposed approach can be directly and efficiently applied to secure communication. Finally, four numerical examples are provided to demonstrate the feasibility and correctness of the obtained results.
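A minimal numerical sketch of master-slave synchronization of Lorenz systems under a plain linear coupling is given below. The gain k is an ad hoc choice made large enough for synchronization in this simulation; it does not reproduce the paper's exponential/finite-time control design:

```python
import numpy as np

def lorenz(s, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Right-hand side of the classical Lorenz system."""
    x, y, z = s
    return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

# Master-slave Lorenz systems coupled by u = k * (master - slave);
# k = 50 is an ad hoc gain, not the gain formula derived in the paper.
k, dt, steps = 50.0, 1e-3, 8000
m = np.array([1.0, 1.0, 1.0])       # master initial state
s = np.array([5.0, -5.0, 10.0])     # slave initial state
for _ in range(steps):
    dm = lorenz(m)
    ds = lorenz(s) + k * (m - s)    # simple linear synchronizing control
    m = m + dt * dm                 # forward-Euler integration
    s = s + dt * ds
err = np.linalg.norm(m - s)
```

Despite the chaotic master trajectory, the synchronization error collapses to essentially zero well before the end of the run, which is the qualitative behavior the paper's linear controller guarantees with explicit convergence-time bounds.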
NASA Technical Reports Server (NTRS)
Yedavalli, R. K.
1992-01-01
The problem of analyzing and designing controllers for linear systems subject to real parameter uncertainty is considered. An elegant, unified theory for robust eigenvalue placement is presented for a class of D-regions defined by algebraic inequalities by extending the nominal matrix root clustering theory of Gutman and Jury (1981) to linear uncertain time systems. The author presents explicit conditions for matrix root clustering for different D-regions and establishes the relationship between the eigenvalue migration range and the parameter range. The bounds are all obtained by one-shot computation in the matrix domain and do not need any frequency sweeping or parameter gridding. The method uses the generalized Lyapunov theory for getting the bounds.
Classical and Generalized Solutions of Time-Dependent Linear Differential Algebraic Equations
1993-10-15
matrix pencils, [G59]. The book [GrM86] also contains a treatment of the general system (1.1) utilizing a condition of "transferability" which...C(t) and N(t) are analytic functions of t and N(t) is nilpotent upper (or lower) triangular for all t in J. From the structure of N(t), it follows that...the operator N(t)(d/dt) is nilpotent, so that (1.2b) has the unique solution z = sum_k (-1)^k (N(t)(d/dt))^k g, and (1.2a) is an explicit ODE. But no
Casals, Martí; Girabent-Farrés, Montserrat; Carrasco, Josep L.
2014-01-01
Background: Modeling count and binary data collected in hierarchical designs has increased the use of Generalized Linear Mixed Models (GLMMs) in medicine. This article presents a systematic review of the application and quality of results and information reported from GLMMs in the field of clinical medicine. Methods: A search using the Web of Science database was performed for published original articles in medical journals from 2000 to 2012. The search strategy included the topics "generalized linear mixed models", "hierarchical generalized linear models" and "multilevel generalized linear model", and the research domain was refined to Science Technology. Papers reporting methodological considerations without application, and those that were not involved in clinical medicine or written in English, were excluded. Results: A total of 443 articles were detected, with an increase over time in the number of articles. In total, 108 articles met the inclusion criteria. Of these, 54.6% were declared to be longitudinal studies, whereas 58.3% and 26.9% were defined as repeated measurements and multilevel designs, respectively. Twenty-two articles belonged to environmental and occupational public health, 10 articles to clinical neurology, 8 to oncology, and 7 to infectious diseases and pediatrics. The distribution of the response variable was reported in 88% of the articles, predominantly Binomial (n = 64) or Poisson (n = 22). Most of the useful information about GLMMs was not reported in most cases. Variance estimates of random effects were described in only 8 articles (9.2%). The model validation, the method of covariate selection and the method of goodness of fit were only reported in 8.0%, 36.8% and 14.9% of the articles, respectively. Conclusions: During recent years, the use of GLMMs in medical literature has increased to take into account the correlation of data when modeling qualitative data or counts. According to the current recommendations, the quality of
Chen, Zhe; Putrino, David F; Ba, Demba E; Ghosh, Soumya; Barbieri, Riccardo; Brown, Emery N
2009-01-01
Identification of multiple simultaneously recorded neural spike trains is an important task in understanding neuronal dependency, functional connectivity, and temporal causality in neural systems. An assessment of the functional connectivity in a group of ensemble cells was performed using a regularized point process generalized linear model (GLM) that incorporates temporal smoothness or contiguity of the solution. An efficient convex optimization algorithm was then developed for the regularized solution. The point process model was applied to an ensemble of neurons recorded from the cat motor cortex during a skilled reaching task. The implications of this analysis for the coding of skilled movement in primary motor cortex are discussed.
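The core of such a model can be sketched as a penalized Poisson regression on spike counts. In the sketch below a plain ridge penalty stands in for the paper's temporal-smoothness regularizer, and the fit uses Newton's method rather than the authors' convex optimization algorithm:

```python
import numpy as np

def poisson_glm_ridge(X, y, alpha, n_iter=25):
    """Poisson GLM (log link) with an L2 penalty, fit by Newton's method.

    A plain ridge penalty alpha * ||w||^2 stands in here for the
    temporal-smoothness regularizer used in the paper.
    """
    n, p = X.shape
    w = np.zeros(p)
    for _ in range(n_iter):
        mu = np.exp(X @ w)                       # conditional intensity
        grad = X.T @ (y - mu) - 2 * alpha * w    # penalized score
        H = X.T @ (mu[:, None] * X) + 2 * alpha * np.eye(p)
        w = w + np.linalg.solve(H, grad)         # Newton update
    return w

# Synthetic example: covariates (e.g. spike-history terms) and counts
rng = np.random.default_rng(0)
X = 0.5 * rng.standard_normal((2000, 3))
w_true = np.array([0.5, -0.3, 0.2])
y = rng.poisson(np.exp(X @ w_true))              # synthetic spike counts
w_hat = poisson_glm_ridge(X, y, alpha=0.1)
```

On this synthetic data the penalized fit recovers the generating weights to within sampling error, which is the basic mechanism behind estimating functional coupling filters between recorded neurons.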
NASA Astrophysics Data System (ADS)
Lins, R. M.; Ferreira, M. D. C.; Proença, S. P. B.; Duarte, C. A.
2015-12-01
In this study, a recovery-based a posteriori error estimator originally proposed for the Corrected XFEM is investigated in the framework of the stable generalized FEM (SGFEM). Both Heaviside and branch functions are adopted to enrich the approximations in the SGFEM. Some necessary adjustments to adapt the expressions defining the enhanced stresses in the original error estimator are discussed in the SGFEM framework. Relevant aspects such as effectivity indexes, error distribution, convergence rates and accuracy of the recovered stresses are used in order to highlight the main findings and the effectiveness of the error estimator. Two benchmark problems of 2-D fracture mechanics are selected to assess the robustness of the error estimator investigated here. The main findings of this investigation are that the SGFEM shows higher accuracy than G/XFEM and a reduced sensitivity to blending element issues, and that the error estimator can accurately capture these features of both methods.
Jiang, Fei; Ma, Yanyuan; Wang, Yuanjia
2015-01-01
We propose a generalized partially linear functional single index risk score model for repeatedly measured outcomes where the index itself is a function of time. We fuse the nonparametric kernel method and regression spline method, and modify the generalized estimating equation to facilitate estimation and inference. We use local smoothing kernel to estimate the unspecified coefficient functions of time, and use B-splines to estimate the unspecified function of the single index component. The covariance structure is taken into account via a working model, which provides valid estimation and inference procedure whether or not it captures the true covariance. The estimation method is applicable to both continuous and discrete outcomes. We derive large sample properties of the estimation procedure and show different convergence rate of each component of the model. The asymptotic properties when the kernel and regression spline methods are combined in a nested fashion has not been studied prior to this work even in the independent data case. PMID:26283801
Biohybrid Control of General Linear Systems Using the Adaptive Filter Model of Cerebellum
Wilson, Emma D.; Assaf, Tareq; Pearson, Martin J.; Rossiter, Jonathan M.; Dean, Paul; Anderson, Sean R.; Porrill, John
2015-01-01
The adaptive filter model of the cerebellar microcircuit has been successfully applied to biological motor control problems, such as the vestibulo-ocular reflex (VOR), and to sensory processing problems, such as the adaptive cancelation of reafferent noise. It has also been successfully applied to problems in robotics, such as adaptive camera stabilization and sensor noise cancelation. In previous applications to inverse control problems, the algorithm was applied to the velocity control of a plant dominated by viscous and elastic elements. Naive application of the adaptive filter model to the displacement (as opposed to velocity) control of this plant results in unstable learning and control. To be more generally useful in engineering problems, it is essential to remove this restriction to enable the stable control of plants of any order. We address this problem here by developing a biohybrid model reference adaptive control (MRAC) scheme, which stabilizes the control algorithm for strictly proper plants. We evaluate the performance of this novel cerebellar-inspired algorithm with MRAC scheme in the experimental control of a dielectric electroactive polymer, a class of artificial muscle. The results show that the augmented cerebellar algorithm is able to accurately control the displacement response of the artificial muscle. The proposed solution not only greatly extends the practical applicability of the cerebellar-inspired algorithm, but may also shed light on cerebellar involvement in a wider range of biological control tasks. PMID:26257638
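The learning core of such adaptive-filter cerebellar models is essentially an LMS (delta-rule) filter. The sketch below identifies an unknown FIR "plant" with this rule; it shows only the inner adaptive filter, not the biohybrid MRAC scheme the paper builds around it:

```python
import numpy as np

def lms_identify(x, d, n_taps, mu):
    """LMS adaptive FIR filter: adapt weights w so that w * x tracks d.

    This delta-rule update is the learning core of adaptive-filter
    cerebellar models; the paper wraps it in a model-reference adaptive
    control (MRAC) loop, which is not reproduced here.
    """
    w = np.zeros(n_taps)
    for t in range(n_taps - 1, len(x)):
        u = x[t - n_taps + 1:t + 1][::-1]   # current and past inputs
        e = d[t] - w @ u                    # error (teaching) signal
        w += mu * e * u                     # delta-rule weight update
    return w

rng = np.random.default_rng(0)
x = rng.standard_normal(5000)
w_true = np.array([0.5, -0.3, 0.2])
d = np.convolve(x, w_true)[:len(x)]         # unknown "plant" output
w_hat = lms_identify(x, d, n_taps=3, mu=0.01)
```

With a persistently exciting input the weights converge to the plant coefficients; the instability discussed in the abstract arises when this same rule is naively closed around a plant whose dynamics demand the reference-model augmentation.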
NASA Astrophysics Data System (ADS)
Altaleb, Anas; Saeed, Muhammad Sarwar; Hussain, Iqtadar; Aslam, Muhammad
2017-03-01
The aim of this work is to synthesize 8×8 substitution boxes (S-boxes) for block ciphers. The confusion-creating potential of an S-box depends on its construction technique. In the first step, we have applied the algebraic action of the projective general linear group PGL(2, GF(2^8)) on the Galois field GF(2^8). In step 2 we have used the permutations of the symmetric group S_256 to construct a new kind of S-box. To explain the proposed extension scheme, we have given an example and constructed one new S-box. The strength of the extended S-box is computed, and an insight is given into calculating the confusion-creating potency. To analyze the security of the S-box, some popular algebraic and statistical attacks are performed as well. The proposed S-box has been analyzed by the bit independence criterion, linear approximation probability test, nonlinearity test, strict avalanche criterion, differential approximation probability test, and majority logic criterion. A comparison of the proposed S-box with existing S-boxes shows that the analyses of the extended S-box are comparatively better.
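A bijective S-box from a projective-linear (Mobius) action can be sketched directly. The field polynomial (AES's 0x11B) and the coefficients a, b, c, d below are arbitrary illustrative choices, not those of the paper, and the subsequent S_256 permutation step is omitted:

```python
def gf_mul(a, b, poly=0x11B):
    """Carry-less multiplication in GF(2^8); 0x11B (the AES polynomial)
    is an arbitrary choice of irreducible polynomial for this sketch."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        if a & 0x100:
            a ^= poly
        b >>= 1
    return r

def gf_inv(a):
    """Multiplicative inverse via a^254 = a^(-1) in GF(2^8), a != 0."""
    r = 1
    for _ in range(254):
        r = gf_mul(r, a)
    return r

def mobius_sbox(a, b, c, d):
    """S-box x -> (a*x + b) / (c*x + d) from the PGL(2, GF(2^8)) action.

    Requires c != 0 and a nonzero 'determinant' ad + bc.  The pole
    x = d/c is mapped to a/c (the image of the point at infinity),
    which makes the map a bijection on all 256 byte values.
    """
    assert c != 0 and gf_mul(a, d) ^ gf_mul(b, c) != 0
    box = []
    for x in range(256):
        den = gf_mul(c, x) ^ d
        if den == 0:
            box.append(gf_mul(a, gf_inv(c)))
        else:
            box.append(gf_mul(gf_mul(a, x) ^ b, gf_inv(den)))
    return box

sbox = mobius_sbox(1, 2, 3, 4)
```

Because the Mobius map permutes the projective line, redirecting the single pole to the image of infinity yields a permutation of the 256 bytes, which is the property an S-box must have before any statistical criteria are evaluated.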
Generalized Uncertainty Quantification for Linear Inverse Problems in X-ray Imaging
Fowler, Michael James
2014-04-25
In industrial and engineering applications, X-ray radiography has attained wide use as a data collection protocol for the assessment of material properties in cases where direct observation is not possible. The direct measurement of nuclear materials, particularly when they are under explosive or implosive loading, is not feasible, and radiography can serve as a useful tool for obtaining indirect measurements. In such experiments, high energy X-rays are pulsed through a scene containing material of interest, and a detector records a radiograph by measuring the radiation that is not attenuated in the scene. One approach to the analysis of these radiographs is to model the imaging system as an operator that acts upon the object being imaged to produce a radiograph. In this model, the goal is to solve an inverse problem to reconstruct the values of interest in the object, which are typically material properties such as density or areal density. The primary objective in this work is to provide quantitative solutions with uncertainty estimates for three separate applications in X-ray radiography: deconvolution, Abel inversion, and radiation spot shape reconstruction. For each problem, we introduce a new hierarchical Bayesian model for determining a posterior distribution on the unknowns and develop efficient Markov chain Monte Carlo (MCMC) methods for sampling from the posterior. A Poisson likelihood, based on a noise model for photon counts at the detector, is combined with a prior tailored to each application: an edge-localizing prior for deconvolution; a smoothing prior with non-negativity constraints for spot reconstruction; and a full covariance sampling prior based on a Wishart hyperprior for Abel inversion. After developing our methods in a general setting, we demonstrate each model on both synthetically generated datasets, including those from a well known radiation transport code, and real high energy radiographs taken at two U. S. Department of Energy
The generalized cross-validation method applied to geophysical linear traveltime tomography
NASA Astrophysics Data System (ADS)
Bassrei, A.; Oliveira, N. P.
2009-12-01
The oil industry is the major user of Applied Geophysics methods for subsurface imaging. Among different methods, the so-called seismic (or exploration seismology) methods are the most important. Tomography was originally developed for medical imaging and was introduced in exploration seismology in the 1980s. There are two main classes of geophysical tomography: those that use only the traveltimes between sources and receivers, which is a kinematic approach, and those that use the wave amplitude itself, which is a dynamic approach. Tomography is a kind of inverse problem, and since inverse problems are usually ill-posed, it is necessary to use some method to reduce their deficiencies. These difficulties of the inverse procedure are associated with the fact that the involved matrix is ill-conditioned. To compensate for this shortcoming, it is appropriate to use some technique of regularization. In this work we make use of regularization with derivative matrices, also called smoothing. There is a crucial problem in regularization, which is the selection of the regularization parameter lambda. We use generalized cross validation (GCV) as a tool for the selection of lambda. GCV chooses the regularization parameter associated with the best average prediction for all possible omissions of one datum, corresponding to the minimizer of the GCV function. GCV is used for an application in traveltime tomography, where the objective is to obtain the 2-D velocity distribution from the measured values of the traveltimes between sources and receivers. We present results with synthetic data, using a geological model that simulates different features, like a fault and a reservoir. The results using GCV are very good, including those for data contaminated with noise and for different regularization orders, attesting to the feasibility of this technique.
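For Tikhonov regularization min ||G m - d||^2 + lambda^2 ||L m||^2, the GCV function can be written in terms of the influence (hat) matrix H(lambda). The dense-matrix sketch below is fine for small systems; for large tomographic matrices the trace is typically estimated stochastically instead:

```python
import numpy as np

def gcv_score(G, d, lam, L=None):
    """GCV(lam) = n * ||(I - H) d||^2 / trace(I - H)^2 for the
    Tikhonov solution of G m = d with penalty lam^2 * ||L m||^2,
    where H = G (G^T G + lam^2 L^T L)^{-1} G^T is the hat matrix."""
    n = G.shape[0]
    if L is None:
        L = np.eye(G.shape[1])      # zeroth-order (identity) smoothing
    H = G @ np.linalg.solve(G.T @ G + lam ** 2 * (L.T @ L), G.T)
    r = d - H @ d
    return n * (r @ r) / np.trace(np.eye(n) - H) ** 2

def pick_lambda(G, d, grid):
    """Select the regularization parameter minimizing GCV on a grid."""
    return min(grid, key=lambda lam: gcv_score(G, d, lam))

# Tiny worked example with a mildly ill-conditioned diagonal operator
G = np.diag([1.0, 0.1])
d = np.array([1.0, 1.0])
lam_best = pick_lambda(G, d, np.logspace(-4, 1, 30))
```

For this diagonal G, H(lambda) = diag(1/(1 + lambda^2), 0.01/(0.01 + lambda^2)), so gcv_score can be checked against the closed form by hand.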
Valeri, Linda; Lin, Xihong; VanderWeele, Tyler J
2014-12-10
Mediation analysis is a popular approach to examine the extent to which the effect of an exposure on an outcome is through an intermediate variable (mediator) and the extent to which the effect is direct. When the mediator is mismeasured, the validity of mediation analysis can be severely undermined. In this paper, we first study the bias induced by classical, non-differential measurement error in a continuous mediator on the estimation of direct and indirect causal effects in generalized linear models when the outcome is either continuous or discrete and exposure-mediator interaction may be present. Our theoretical results as well as a numerical study demonstrate that in the presence of non-linearities, the bias of naive estimators for direct and indirect effects that ignore measurement error can take unintuitive directions. We then develop methods to correct for measurement error. Three correction approaches, using the method of moments, regression calibration, and SIMEX, are compared. We apply the proposed method to the Massachusetts General Hospital lung cancer study to evaluate the effect of genetic variants mediated through smoking on lung cancer risk.
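The attenuation caused by classical measurement error, and the simplest of the corrections named above (a regression-calibration-style fix via the reliability ratio), can be illustrated with a toy simulation. Everything below is invented for demonstration, including the assumption that the error variance is known; it is not the paper's code and ignores exposure-mediator interaction.

```python
import random

random.seed(1)
n = 5000
sigma_u = 0.8                        # measurement-error SD, assumed known
M = [random.gauss(0, 1) for _ in range(n)]        # true (unobserved) mediator
W = [m + random.gauss(0, sigma_u) for m in M]     # error-prone measurement
Y = [2.0 * m + random.gauss(0, 0.5) for m in M]   # continuous outcome

def ols_slope(x, y):
    mx, my = sum(x) / len(x), sum(y) / len(y)
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    return sxy / sxx

naive = ols_slope(W, Y)              # attenuated towards zero
mw = sum(W) / n
var_w = sum((w - mw) ** 2 for w in W) / (n - 1)
reliability = (var_w - sigma_u ** 2) / var_w      # lambda = var(M)/var(W)
corrected = naive / reliability      # undo the attenuation
```

With these settings the naive slope is biased towards zero by the reliability ratio, and dividing it back out recovers the true coefficient (2.0) up to sampling noise; the paper's SIMEX and method-of-moments corrections address the same bias in more general settings.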
NASA Astrophysics Data System (ADS)
Lu, Y.; Chatterjee, S.
2014-11-01
Exponential family statistical distributions, including the well-known normal, binomial, Poisson, and exponential distributions, are overwhelmingly used in data analysis. In the presence of covariates, an exponential family distributional assumption for the response random variables results in a generalized linear model. However, it is rarely ensured that the parameters of the assumed distributions are stable through the entire duration of the data collection process. A failure of stability leads to nonsmoothness and nonlinearity in the physical processes that result in the data. In this paper, we propose testing for stability of parameters of exponential family distributions and generalized linear models. A rejection of the hypothesis of stable parameters leads to change detection. We derive the related likelihood ratio test statistic and compare its performance to the popular cumulative sum (Gaussian CUSUM) statistic, which depends on a normal distributional assumption, in change detection problems. We study Atlantic tropical storms using the techniques developed here, so as to understand whether the nature of these storms has remained stable over the last few decades.
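A minimal version of such a likelihood-ratio change test for one exponential family member (a Poisson rate shift at a single change point) can be sketched as follows. The counts and the change point are fabricated for illustration; the paper's framework is more general.

```python
import math

def poisson_loglik(xs, lam):
    # log-likelihood up to the constant sum(log x!), which cancels in the ratio
    return sum(xs) * math.log(lam) - len(xs) * lam

def change_lrt(counts, k):
    """Likelihood-ratio statistic for a shift in the Poisson rate after
    index k (H0: a single common rate for the whole series)."""
    a, b = counts[:k], counts[k:]
    l1 = poisson_loglik(a, sum(a) / len(a)) + poisson_loglik(b, sum(b) / len(b))
    l0 = poisson_loglik(counts, sum(counts) / len(counts))
    return 2.0 * (l1 - l0)

# toy count series with an apparent rate shift after the sixth observation
counts = [2, 3, 1, 2, 3, 2, 8, 7, 9, 8, 10]
stats = {k: change_lrt(counts, k) for k in range(1, len(counts))}
best_k = max(stats, key=stats.get)
# compare stats[best_k] against a chi-square(1) critical value (3.84 at 5%)
```

Scanning the statistic over candidate change points and comparing the maximum to a chi-square reference is the standard likelihood-ratio recipe; maximizing over k inflates the null distribution, which a careful analysis must account for.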
NASA Astrophysics Data System (ADS)
Koss, Hans; Rance, Mark; Palmer, Arthur G.
2017-01-01
Exploration of dynamic processes in proteins and nucleic acids by spin-locking NMR experiments has been facilitated by the development of theoretical expressions for the R1ρ relaxation rate constant covering a variety of kinetic situations. Herein, we present a generalized approximation to the chemical exchange, Rex, component of R1ρ for arbitrary kinetic schemes, assuming the presence of a dominant major site population, derived from the negative reciprocal trace of the inverse Bloch-McConnell evolution matrix. This approximation is equivalent to first-order truncation of the characteristic polynomial derived from the Bloch-McConnell evolution matrix. For three- and four-site chemical exchange, the first-order approximations are sufficient to distinguish different kinetic schemes. We also introduce an approach to calculate R1ρ for linear N-site schemes, using the matrix determinant lemma to reduce the corresponding 3N × 3N Bloch-McConnell evolution matrix to a 3 × 3 matrix. The first- and second-order expansions of the determinant of this 3 × 3 matrix are closely related to previously derived equations for two-site exchange. The second-order approximations for linear N-site schemes can be used to obtain more accurate approximations for non-linear N-site schemes, such as triangular three-site or star four-site topologies. The expressions presented herein provide powerful means for the estimation of Rex contributions for both low (CEST-limit) and high (R1ρ-limit) radiofrequency field strengths, provided that the population of one state is dominant. The general nature of the new expressions allows for consideration of complex kinetic situations in the analysis of NMR spin relaxation data.
Cho, C Y; Cheng, H P; Chang, Y C; Tang, C Y; Chen, Y F
2015-03-23
An energy-adjustable passively Q-switched laser is demonstrated with a composite Nd:YAG/Cr⁴⁺:YAG crystal by applying a wedged interface inside the crystal. A theoretical model of the monolithic laser resonator is developed to show the energy-adjustable feature with different initial transmissions of the saturable absorber along the horizontal axis. By adjusting the pump beam location across the Nd:YAG crystal, the output pulse energy can be flexibly changed from 10.9 μJ to 17.6 μJ while maintaining the same output efficiency. The polarization state of the laser output is found to be aligned with the polarization of the C-mount pump diode. Finally, the behavior of the multi-transverse-mode oscillation is discussed with a view to eliminating instability of the pulse train.
NASA Astrophysics Data System (ADS)
Bauer, H.-S.; Wulfmeyer, V.; Bengtsson, L.
2008-04-01
In this work, a strong cyclone event is simulated by the general circulation model (GCM) ECHAM4 to study the representation of weather systems in a climate model. The system developed along the East Coast of the U.S.A. between the 12th and 14th of March 1993. The GCM simulation was started from climatological conditions and was continuously forced towards the analyzed state by a thermodynamical adjustment based on the Newtonian relaxation technique (nudging). Relaxation terms for vorticity, divergence, temperature, and the logarithm of surface pressure were added at each model level and time step. The necessary forcing files were calculated from the ECMWF re-analysis (ERA15). No nudging terms were added for the components of the water cycle. Using this forcing, the model was able to reproduce the synoptic-scale features and their temporal development realistically after a spin-up period. This is true even for quantities that are not adjusted to the analysis (e.g., humidity). Detailed comparisons of the model simulations with available observations and the forcing ERA15 were performed for the cyclone case. Systematic errors were detected in the simulation of the thermodynamic state of the atmosphere, which can be traced back to deficiencies in model parametrizations. Differences in the representation of the surface fluxes led to systematic deviations in near-surface temperature and wind fields. The general situation is very similar in both model representations. Errors were detected in the simulation of the convective boundary layer behind the cold front. The observed strong convective activity is missed both by the adjusted ECHAM4 simulation and by ERA15. This is most likely caused by weaknesses in the cloud and convection schemes, or by an overly strong downdraft compensating the frontal lifting and suppressing the vertical transport of moisture from the boundary layer to higher levels. For the investigated case, this work demonstrates the value of simulating single weather systems.
NASA Astrophysics Data System (ADS)
Edwards, C. L.; Edwards, M. L.
2009-05-01
MEMS micro-mirror technology offers the opportunity to replace larger optical actuators with smaller, faster ones for lidar, network switching, and other beam steering applications. Recent developments in modeling and simulation of MEMS two-axis (tip-tilt) mirrors have resulted in closed-form solutions that are expressed in terms of physical, electrical, and environmental parameters related to the MEMS device. The closed-form analytical expressions enable dynamic time-domain simulations without excessive computational overhead and are referred to as the Micro-mirror Pointing Model (MPM). Additionally, these first-principles models have been experimentally validated with in-situ static, dynamic, and stochastic measurements illustrating their reliability. These models have assumed that the mirror has a rectangular shape. Because the corners can limit the dynamic operation of a rectangular mirror, it is desirable to shape the mirror, e.g., by mitering the corners. Presented in this paper is the formulation of a generalized electrostatic micromirror (GEM) model with an arbitrary convex piecewise-linear shape that is readily implemented in MATLAB and SIMULINK for steady-state and dynamic simulations. Additionally, such a model permits an arbitrarily shaped mirror to be approximated as a series of linearly tapered segments. Previously, "effective area" arguments were used to model a non-rectangular mirror with an equivalent rectangular one. The GEM model shows the limitations of this approach and provides a pre-fabrication tool for designing mirror shapes.
Lazar, Ann A; Zerbe, Gary O
2011-12-01
Researchers often compare the relationship between an outcome and covariate for two or more groups by evaluating whether the fitted regression curves differ significantly. When they do, researchers need to determine the "significance region," or the values of the covariate where the curves significantly differ. In analysis of covariance (ANCOVA), the Johnson-Neyman procedure can be used to determine the significance region; for the hierarchical linear model (HLM), the Miyazaki and Maier (M-M) procedure has been suggested. However, neither procedure can accommodate nonnormally distributed data. Furthermore, the M-M procedure produces downwardly biased results because it uses the Wald test, does not control the Type I error rate inflated by multiple testing, and requires multiple software packages to determine the significance region. In this article, we address these limitations by proposing solutions for determining the significance region suitable for generalized linear (mixed) models (GLMs or GLMMs). The proposed solutions incorporate test statistics that resolve the bias, control the Type I error rate using Scheffé's method, and use a single statistical software package to determine the significance region.
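The underlying computation can be sketched directly: given per-group coefficient estimates and their covariance matrices, the significance region is the set of covariate values where the standardized difference of fitted means exceeds a simultaneous critical value (e.g. a Scheffé-type bound). All numbers below, including the critical value 2.45, are hypothetical placeholders, not values from the article.

```python
import numpy as np

def significance_region(x_grid, b1, V1, b2, V2, crit):
    """Covariate values where two fitted regression lines differ
    'significantly': |difference| / SE(difference) > crit, with crit a
    simultaneous (e.g. Scheffe-type) critical value chosen by the user."""
    region = []
    for x in x_grid:
        c = np.array([1.0, x])              # design row at covariate value x
        diff = c @ (b1 - b2)                # difference of fitted means
        se = np.sqrt(c @ (V1 + V2) @ c)     # SE, assuming independent groups
        if abs(diff) / se > crit:
            region.append(float(x))
    return region

# hypothetical per-group estimates (intercept, slope) and covariances
b1 = np.array([0.0, 1.0]); V1 = 0.01 * np.eye(2)
b2 = np.array([0.5, 0.2]); V2 = 0.01 * np.eye(2)
grid = np.linspace(-2.0, 4.0, 61)
region = significance_region(grid, b1, V1, b2, V2, crit=2.45)
# the region excludes a neighbourhood of the crossing point x = 0.625
```

Replacing `crit` by a Scheffé value controls the Type I error rate simultaneously over all covariate values, which is the multiplicity correction the article advocates.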
ERIC Educational Resources Information Center
Xu, Xueli; von Davier, Matthias
2008-01-01
The general diagnostic model (GDM) utilizes located latent classes for modeling a multidimensional proficiency variable. In this paper, the GDM is extended by employing a log-linear model for multiple populations that assumes constraints on parameters across multiple groups. This constrained model is compared to log-linear models that assume…
Chen, Vivian Yi-Ju; Yang, Tse-Chuan
2012-08-01
An increasing interest in exploring spatial non-stationarity has generated several specialized analytic software programs; however, few of these programs can be integrated natively into a well-developed statistical environment such as SAS. We not only developed a set of SAS macro programs to fill this gap, but also expanded the geographically weighted generalized linear modeling (GWGLM) by integrating the strengths of SAS into the GWGLM framework. Three features distinguish our work. First, the macro programs of this study provide more kernel weighting functions than the existing programs. Second, with our codes the users are able to better specify the bandwidth selection process compared to the capabilities of existing programs. Third, the development of the macro programs is fully embedded in the SAS environment, providing great potential for future exploration of complicated spatially varying coefficient models in other disciplines. We provided three empirical examples to illustrate the use of the SAS macro programs and demonstrated the advantages explained above.
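As an illustration of the kernel weighting at the heart of geographically weighted regression, the fragment below implements two common kernel choices. The macro programs themselves are SAS; this Python sketch only mirrors the idea, and the function name and kernels shown are our own, not the macros' interface.

```python
import math

def gwr_weights(dists, bandwidth, kernel="gaussian"):
    """Spatial weights for one local regression in GWR, given distances
    from the regression point to each observation."""
    if kernel == "gaussian":
        return [math.exp(-0.5 * (d / bandwidth) ** 2) for d in dists]
    if kernel == "bisquare":   # compact support: zero beyond the bandwidth
        return [(1.0 - (d / bandwidth) ** 2) ** 2 if d < bandwidth else 0.0
                for d in dists]
    raise ValueError("unknown kernel: " + kernel)

w = gwr_weights([0.0, 1.0, 5.0], bandwidth=2.0, kernel="bisquare")
```

Each local fit is then an ordinary weighted GLM with these weights; bandwidth selection (e.g. by cross-validation or AIC) governs how quickly the weights decay with distance.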
NASA Astrophysics Data System (ADS)
Wu, Jingjing; Liu, Wei; Liu, Zhengjun; Liu, Shutian
2015-03-01
We introduce a chosen-plaintext attack scheme on general optical cryptosystems that use linear canonical transforms and phase encoding, based on correlated imaging. The plaintexts are chosen as Gaussian random real-number matrices, and the corresponding ciphertexts are regarded as prior knowledge of the proposed attack method. To reconstruct the secret plaintext, correlated imaging is employed using these known resources. Unlike previously reported attack methods, there is no need to decipher the distribution of the decryption key: the original secret image can be recovered directly by the attack in the absence of the decryption key. In addition, improved cryptosystems that add pixel-scrambling operations are also vulnerable to the proposed attack. Necessary mathematical derivations and numerical simulations are carried out to demonstrate the validity of the proposed attack scheme.
Blumberg, Leonid M; Desmet, Gert
2015-09-25
The separation performance metrics defined in Part 1 of this series are applied to the evaluation of general separation performance of linear solvent strength (LSS) gradient LC. Among the evaluated metrics was the peak capacity of an arbitrary segment of a chromatogram. Also evaluated were the peak width, the separability of two solutes, the utilization of separability, and the speed of analysis, all at an arbitrary point of a chromatogram. The means are provided to express all these metrics as functions of an arbitrary time during LC analysis, as functions of an arbitrary outlet solvent strength changing during the analysis, as functions of parameters of the solutes eluting during the analysis, and as functions of several other factors. The separation performance of gradient LC is compared with the separation performance of temperature-programmed GC evaluated in Part 2.
Mullah, Muhammad Abu Shadeque; Benedetti, Andrea
2016-11-01
Besides being mainly used for analyzing clustered or longitudinal data, generalized linear mixed models can also be used for smoothing via restricting changes in the fit at the knots in regression splines. The resulting models are usually called semiparametric mixed models (SPMMs). We investigate the effect of smoothing using SPMMs on the correlation and variance parameter estimates for serially correlated longitudinal normal, Poisson and binary data. Through simulations, we compare the performance of SPMMs to other simpler methods for estimating the nonlinear association, such as fractional polynomials and a parametric nonlinear function. Simulation results suggest that, in general, the SPMMs recover the true curves very well and yield reasonable estimates of the correlation and variance parameters. However, for binary outcomes, SPMMs produce biased estimates of the variance parameters for highly serially correlated data. We apply these methods to a dataset investigating the association between CD4 cell count and time since seroconversion for HIV infected men enrolled in the Multicenter AIDS Cohort Study.
Rey deCastro, B; Neuberg, Donna
2007-05-30
Biological assays often utilize experimental designs where observations are replicated at multiple levels, and where each level represents a separate component of the assay's overall variance. Statistical analysis of such data usually ignores these design effects, whereas more sophisticated methods would improve the statistical power of assays. This report evaluates the statistical performance of an in vitro MCF-7 cell proliferation assay (E-SCREEN) by identifying the optimal generalized linear mixed model (GLMM) that accurately represents the assay's experimental design and variance components. Our statistical assessment found that 17beta-oestradiol cell culture assay data were best modelled with a GLMM configured with a reciprocal link function, a gamma error distribution, and three sources of design variation: plate-to-plate, well-to-well, and the interaction between plate-to-plate variation and dose. The gamma-distributed random error of the assay was estimated to have a coefficient of variation (COV) of 3.2 per cent, and a variance component score test described by X. Lin found that each of the three variance components was statistically significant. The optimal GLMM also confirmed the estrogenicity of five weakly oestrogenic polychlorinated biphenyls (PCBs 17, 49, 66, 74, and 128). Based on information criteria, the optimal gamma GLMM consistently out-performed equivalent naive normal and log-normal linear models, both with and without random effects terms. Because the gamma GLMM was by far the best model on conceptual and empirical grounds, and requires only trivially more effort to use, we encourage its use and suggest that naive models be avoided when possible.
Hughes, Vanessa K; Langlois, Neil E I
2010-12-01
Bruises can have medicolegal significance such that the age of a bruise may be an important issue. This study sought to determine if colorimetry or reflectance spectrophotometry could be employed to objectively estimate the age of bruises. Based on a previously described method, reflectance spectrophotometric scans were obtained from bruises using a Cary 100 Bio spectrophotometer fitted with a fibre-optic reflectance probe. Measurements were taken from the bruise and a control area. Software was used to calculate the first derivative at 490 and 480 nm; the proportion of oxygenated hemoglobin was calculated using an isobestic point method, and a software application converted the scan data into colorimetry data. In addition, data on factors that might be associated with the determination of the age of a bruise were recorded: subject age, subject sex, degree of trauma, bruise size, skin color, body build, and depth of bruise. From 147 subjects, 233 reflectance spectrophotometry scans were obtained for analysis. The age of the bruises ranged from 0.5 to 231.5 h. A General Linear Model analysis method was used. This revealed that colorimetric measurement of the yellowness of a bruise accounted for 13% of the variation in bruise age. By incorporating the other recorded data (as above), yellowness could predict up to 32% of the age of a bruise, implying that 68% of the variation was dependent on other factors. However, critical appraisal of the model revealed that the colorimetry method of determining the age of a bruise was affected by skin tone and required a measure of the proportion of oxygenated hemoglobin, which is obtained by spectrophotometric methods. Using spectrophotometry, the first derivative at 490 nm alone accounted for 18% of the variation in the bruise age estimate. When additional factors (subject sex, bruise depth, and oxygenation of hemoglobin) were included in the General Linear Model, this increased to 31%, implying that 69% of the variation was dependent on other factors.
NASA Astrophysics Data System (ADS)
Asong, Zilefac E.; Khaliq, M. N.; Wheater, H. S.
2016-11-01
Based on the Generalized Linear Model (GLM) framework, a multisite stochastic modelling approach is developed using daily observations of precipitation and minimum and maximum temperatures from 120 sites located across the Canadian Prairie Provinces: Alberta, Saskatchewan and Manitoba. Temperature is modelled using a two-stage normal-heteroscedastic model by fitting mean and variance components separately. Likewise, precipitation occurrence and conditional precipitation intensity processes are modelled separately. The relationship between precipitation and temperature is accounted for by using transformations of precipitation as covariates to predict temperature fields. Large-scale atmospheric covariates from the National Centers for Environmental Prediction Reanalysis-I, teleconnection indices, geographical site attributes, and observed precipitation and temperature records are used to calibrate these models for the 1971-2000 period. Validation of the developed models is performed on both pre- and post-calibration period data. Results of the study indicate that the developed models are able to capture spatiotemporal characteristics of observed precipitation and temperature fields, such as inter-site and inter-variable correlation structure, and systematic regional variations present in observed sequences. A number of simulated weather statistics ranging from seasonal means to characteristics of temperature and precipitation extremes and some of the commonly used climate indices are also found to be in close agreement with those derived from observed data. This GLM-based modelling approach will be developed further for multisite statistical downscaling of Global Climate Model outputs to explore climate variability and change in this region of Canada.
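The two-stage structure (occurrence, then conditional intensity) can be sketched as a toy simulator. The link functions below follow common GLM weather-generator practice (logit for occurrence, log for gamma intensity), but every coefficient and covariate value is invented, not a fitted value from this study.

```python
import math
import random

random.seed(42)

def simulate_day(covs, occ_coef, int_coef, shape=0.7):
    """One day of precipitation at one site: logistic (logit-link)
    occurrence, then gamma-distributed intensity with a log link.
    Coefficients and covariates are illustrative only."""
    eta = sum(b * x for b, x in zip(occ_coef, covs))
    p_wet = 1.0 / (1.0 + math.exp(-eta))
    if random.random() >= p_wet:
        return 0.0                                   # dry day
    mu = math.exp(sum(b * x for b, x in zip(int_coef, covs)))
    return random.gammavariate(shape, mu / shape)    # mean mu, fixed shape

days = [simulate_day([1.0, 0.3], occ_coef=[-0.5, 1.2], int_coef=[1.0, 0.8])
        for _ in range(1000)]
wet_frac = sum(d > 0 for d in days) / len(days)
```

In a multisite setting, large-scale atmospheric covariates enter both linear predictors, and spatially correlated noise links the sites; the sketch shows only the single-site marginal structure.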
Xiao, Xun; Geyer, Veikko F.; Bowne-Anderson, Hugo; Howard, Jonathon; Sbalzarini, Ivo F.
2016-01-01
Biological filaments, such as actin filaments, microtubules, and cilia, are often imaged using different light-microscopy techniques. Reconstructing the filament curve from the acquired images constitutes the filament segmentation problem. Since filaments have lower dimensionality than the image itself, there is an inherent trade-off between tracing the filament with sub-pixel accuracy and avoiding noise artifacts. Here, we present a globally optimal filament segmentation method based on B-spline vector level-sets and a generalized linear model for the pixel intensity statistics. We show that the resulting optimization problem is convex and can hence be solved with global optimality. We introduce a simple and efficient algorithm to compute such optimal filament segmentations, and provide an open-source implementation as an ImageJ/Fiji plugin. We further derive an information-theoretic lower bound on the filament segmentation error, quantifying how well an algorithm could possibly do given the information in the image. We show that our algorithm asymptotically reaches this bound in the spline coefficients. We validate our method in comprehensive benchmarks, compare with other methods, and show applications from fluorescence, phase-contrast, and dark-field microscopy. PMID:27104582
Planeta, Josef; Karásek, Pavel; Hohnová, Barbora; Sťavíková, Lenka; Roth, Michal
2012-08-10
Biphasic solvent systems composed of an ionic liquid (IL) and supercritical carbon dioxide (scCO(2)) have become frequent in synthesis, extraction, and electrochemistry. In the design of related applications, information on the interphase partitioning of the target organics is essential, and the infinite-dilution partition coefficients of organic solutes in IL-scCO(2) systems can conveniently be obtained by supercritical fluid chromatography. The database of experimental partition coefficients obtained previously in this laboratory has been employed to test a generalized predictive model for the solute partition coefficients. The model is an amended version of that described before by Hiraga et al. (J. Supercrit. Fluids, in press). Because of the difficulty of the problem to be modeled, the model involves several different concepts: linear solvation energy relationships, the density-dependent solvent power of scCO(2), regular solution theory, and the Flory-Huggins theory of athermal solutions. The model shows moderate success in correlating the infinite-dilution solute partition coefficients (K-factors) in individual IL-scCO(2) systems at varying temperature and pressure. However, larger K-factor data sets involving multiple IL-scCO(2) systems appear to be beyond the reach of the model, especially when the ILs involved pertain to different cation classes.
Felleki, M; Lee, D; Lee, Y; Gilmour, A R; Rönnegård, L
2012-12-01
The possibility of breeding for uniform individuals by selecting animals expressing a small response to environment has been studied extensively in animal breeding. Bayesian methods for fitting models with genetic components in the residual variance have been developed for this purpose, but have limitations due to the computational demands. We use the hierarchical (h)-likelihood from the theory of double hierarchical generalized linear models (DHGLM) to derive an estimation algorithm that is computationally feasible for large datasets. Random effects for both the mean and residual variance parts of the model are estimated together with their variance/covariance components. An important feature of the algorithm is that it can fit a correlation between the random effects for mean and variance. An h-likelihood estimator is implemented in the R software, and an iteratively reweighted least squares (IRWLS) approximation of the h-likelihood is implemented using ASReml. The difference in variance component estimates between the two implementations is investigated, as well as the potential bias of the methods, using simulations. IRWLS gives the same results as h-likelihood in simple cases with no severe indication of bias. For more complex cases, only IRWLS could be used, and bias did appear. IRWLS is applied to the pig litter size data previously analysed by Sorensen & Waagepetersen (2003) using Bayesian methodology. The estimates we obtained by using IRWLS are similar to theirs, with the estimated correlation between the random genetic effects being -0.52 for IRWLS and -0.62 in Sorensen & Waagepetersen (2003).
Vock, David M.; Davidian, Marie; Tsiatis, Anastasios A.
2014-01-01
Generalized linear and nonlinear mixed models (GLMMs and NLMMs) are commonly used to represent non-Gaussian or nonlinear longitudinal or clustered data. A common assumption is that the random effects are Gaussian. However, this assumption may be unrealistic in some applications, and misspecification of the random effects density may lead to maximum likelihood parameter estimators that are inconsistent, biased, and inefficient. Because testing if the random effects are Gaussian is difficult, previous research has recommended using a flexible random effects density. However, computational limitations have precluded widespread use of flexible random effects densities for GLMMs and NLMMs. We develop a SAS macro, SNP_NLMM, that overcomes the computational challenges to fit GLMMs and NLMMs where the random effects are assumed to follow a smooth density that can be represented by the seminonparametric formulation proposed by Gallant and Nychka (1987). The macro is flexible enough to allow for any density of the response conditional on the random effects and any nonlinear mean trajectory. We demonstrate the SNP_NLMM macro on a GLMM of the disease progression of toenail infection and on an NLMM of intravenous drug concentration over time. PMID:24688453
Mendes, T M; Guimarães-Okamoto, P T C; Machado-de-Avila, R A; Oliveira, D; Melo, M M; Lobato, Z I; Kalapothakis, E; Chávez-Olórtegui, C
2015-06-01
This communication describes the general characteristics of the venom of the Brazilian scorpion Tityus fasciolatus, an endemic species found in central Brazil (states of Goiás and Minas Gerais) that is responsible for sting accidents in this area. The soluble venom obtained from this scorpion is toxic to mice, with an LD50 of 2.984 mg/kg (subcutaneous). SDS-PAGE of the soluble venom resolved 10 fractions ranging in size from 6 to 80 kDa. Sheep were employed for production of anti-T. fasciolatus venom serum. Western blotting analysis showed that most of these venom proteins are immunogenic. T. fasciolatus anti-venom revealed consistent cross-reactivity with venom antigens from Tityus serrulatus. Using known primers for T. serrulatus toxins, we identified three toxin sequences from T. fasciolatus venom. Linear epitopes of these toxins were localized, and fifty-five overlapping pentadecapeptides covering the complete amino acid sequences of the three toxins were synthesized on cellulose membrane (spot-synthesis technique). The epitopes were located on the 3D structures, and some residues important for structure/function were identified.
Prates, Marcos O; Aseltine, Robert H; Dey, Dipak K; Yan, Jun
2013-11-01
Unhealthy alcohol use is one of the leading causes of morbidity and mortality in the United States. Brief interventions with high-risk drinkers during an emergency department (ED) visit are of great interest due to their possible efficacy and low cost. In a collaborative study with patients recruited at 14 academic EDs across the United States, we examined the self-reported number of drinks per week by each patient following exposure to a brief intervention. Count data with overdispersion have been mostly analyzed with generalized linear mixed models (GLMMs), of which only a limited number of link functions are available. Different choices of link function provide different fit and predictive power for a particular dataset. We propose a class of link functions from an alternative way to incorporate random effects in a GLMM, which encompasses many existing link functions as special cases. The methodology is naturally implemented in a Bayesian framework, with competing links selected with Bayesian model selection criteria such as the conditional predictive ordinate (CPO). In application to the ED intervention study, all models suggest that the intervention was effective in reducing the number of drinks, but some new models are found to significantly outperform the traditional model as measured by CPO. The validity of CPO in link selection is confirmed in a simulation study that shared the same characteristics as the count data from high-risk drinkers. The dataset and the source code for the best fitting model are available in Supporting Information.
NASA Astrophysics Data System (ADS)
de Souza, R. S.; Hilbe, J. M.; Buelens, B.; Riggs, J. D.; Cameron, E.; Ishida, E. E. O.; Chies-Santos, A. L.; Killedar, M.
2015-10-01
In this paper, the third in a series illustrating the power of generalized linear models (GLMs) for the astronomical community, we elucidate the potential of the class of GLMs which handles count data. The size of a galaxy's globular cluster (GC) population (NGC) is a long-standing puzzle in the astronomical literature. It falls into the category of count data analysis, yet it is usually modelled as if it were a continuous response variable. We have developed a Bayesian negative binomial regression model to study the connection between NGC and the following galaxy properties: central black hole mass, dynamical bulge mass, bulge velocity dispersion and absolute visual magnitude. The methodology introduced herein naturally accounts for heteroscedasticity, intrinsic scatter and measurement errors in both axes (either discrete or continuous), and allows modelling the population of GCs on its natural scale as a non-negative integer variable. Prediction intervals of 99 per cent around the trend for the expected NGC comfortably envelope the data, notably including the Milky Way, which has hitherto been considered a problematic outlier. Finally, we demonstrate how random intercept models can incorporate information on each particular galaxy morphological type. Bayesian variable selection methodology allows for automatically identifying galaxy types with different productions of GCs, suggesting that on average S0 galaxies have a GC population 35 per cent smaller than other types with similar brightness.
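The key property exploited by negative binomial regression, overdispersion relative to the Poisson (Var = mu + mu^2/theta), can be demonstrated by drawing counts as a Poisson-gamma mixture. This is a generic illustration of the distributional family, not the Bayesian model of the paper.

```python
import math
import random

random.seed(7)

def neg_binomial(mu, theta):
    """Negative binomial count via the Poisson-gamma mixture:
    rate ~ Gamma(shape=theta, scale=mu/theta), N ~ Poisson(rate),
    giving E[N] = mu and Var[N] = mu + mu**2/theta."""
    rate = random.gammavariate(theta, mu / theta)
    # simple Poisson draw by multiplicative inversion (fine for modest rates)
    limit, k, p = math.exp(-rate), 0, 1.0
    while p >= limit:
        p *= random.random()
        k += 1
    return k - 1

draws = [neg_binomial(20.0, 2.0) for _ in range(4000)]
mean = sum(draws) / len(draws)
var = sum((d - mean) ** 2 for d in draws) / (len(draws) - 1)
# var should sit far above mean; a Poisson model would force var ~= mean
```

In a regression setting, mu is tied to galaxy covariates through a log link while theta absorbs the intrinsic scatter, which is why the counts can be modelled on their natural non-negative integer scale.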
Monda, D.P.; Galat, D.L.; Finger, S.E.; Kaiser, M.S.
1995-01-01
Toxicity of un-ionized ammonia (NH3-N) to the midge Chironomus riparius was compared using laboratory culture (well) water and sewage effluent (≈0.4 mg/L NH3-N) in two 96-h, static-renewal toxicity experiments. A generalized linear model was used for data analysis. For the first and second experiments, respectively, LC50 values were 9.4 mg/L (Test 1A) and 6.6 mg/L (Test 2A) for ammonia in well water, and 7.8 mg/L (Test 1B) and 4.1 mg/L (Test 2B) for ammonia in sewage effluent. Slopes of dose-response curves for Tests 1A and 2A were equal, but mortality occurred at lower NH3-N concentrations in Test 2A (unequal intercepts). Response of C. riparius to NH3 in effluent was not consistent; dose-response curves for Tests 1B and 2B differed in slope and intercept. Nevertheless, C. riparius was more sensitive to ammonia in effluent than in well water in both experiments, indicating a synergistic effect of ammonia in sewage effluent. These results demonstrate the advantages of analyzing the organism's entire range of response, as opposed to generating LC50 values, which represent only one point on the dose-response curve.
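The point that an LC50 is just one derived quantity from a full dose-response fit can be illustrated with a generic binomial GLM (logit link) on log-concentration; this is a sketch under assumed data, not the paper's exact model:

```python
import numpy as np
from scipy.optimize import minimize

def fit_logit(logc, dead, total):
    """Binomial GLM (logit link) of mortality counts on log-concentration."""
    def negll(b):
        eta = b[0] + b[1] * logc
        # binomial log-likelihood up to a constant: y*eta - n*log(1 + e^eta)
        return -(dead * eta - total * np.logaddexp(0.0, eta)).sum()
    return minimize(negll, np.zeros(2), method="Nelder-Mead").x

def lc50(b):
    """Concentration giving 50% mortality: solve b0 + b1*log(c) = 0."""
    return np.exp(-b[0] / b[1])
```

Fitting the whole curve also exposes slope differences between experiments (as in Tests 1B vs. 2B), which a single LC50 value cannot show.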
Gonçalves, Nuno R; Whelan, Robert; Foxe, John J; Lalor, Edmund C
2014-08-15
Noninvasive investigation of human sensory processing with high temporal resolution typically involves repeatedly presenting discrete stimuli and extracting an average event-related response from scalp recorded neuroelectric or neuromagnetic signals. While this approach has been extremely useful, it suffers from two drawbacks: a lack of naturalness in terms of the stimulus and a lack of precision in terms of the cortical response generators. Here we show that a linear modeling approach that exploits functional specialization in sensory systems can be used to rapidly obtain spatiotemporally precise responses to complex sensory stimuli using electroencephalography (EEG). We demonstrate the method by example through the controlled modulation of the contrast and coherent motion of visual stimuli. Regressing the data against these modulation signals produces spatially focal, highly temporally resolved response measures that are suggestive of specific activation of visual areas V1 and V6, respectively, based on their onset latency, their topographic distribution and the estimated location of their sources. We discuss our approach by comparing it with fMRI/MRI informed source analysis methods and, in doing so, we provide novel information on the timing of coherent motion processing in human V6. Generalizing such an approach has the potential to facilitate the rapid, inexpensive spatiotemporal localization of higher perceptual functions in behaving humans.
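Regressing a recorded channel against a continuous modulation signal can be sketched as lagged (ridge-regularized) least squares, a generic temporal-response-function estimate; `n_lags` and the ridge penalty are illustrative parameters, not values from the paper:

```python
import numpy as np

def lagged_design(stim, n_lags):
    """Design matrix whose columns are the stimulus at lags 0..n_lags-1."""
    n = len(stim)
    X = np.zeros((n, n_lags))
    for k in range(n_lags):
        X[k:, k] = stim[:n - k]
    return X

def fit_trf(stim, eeg, n_lags, ridge=1e-3):
    """Least-squares temporal response function mapping the stimulus
    modulation signal to one recorded channel (ridge-regularized)."""
    X = lagged_design(stim, n_lags)
    return np.linalg.solve(X.T @ X + ridge * np.eye(n_lags), X.T @ eeg)
```

The recovered lag weights play the role of an impulse response whose latency and topography (across channels) can then be inspected.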
NASA Astrophysics Data System (ADS)
Asong, Z. E.; Khaliq, M. N.; Wheater, H. S.
2016-08-01
In this study, a multisite multivariate statistical downscaling approach based on the Generalized Linear Model (GLM) framework is developed to downscale daily observations of precipitation and minimum and maximum temperatures from 120 sites located across the Canadian Prairie Provinces: Alberta, Saskatchewan and Manitoba. First, large scale atmospheric covariates from the National Center for Environmental Prediction (NCEP) Reanalysis-I, teleconnection indices, geographical site attributes, and observed precipitation and temperature records are used to calibrate GLMs for the 1971-2000 period. Then the calibrated models are used to generate daily sequences of precipitation and temperature for the 1962-2005 historical (conditioned on NCEP predictors), and future period (2006-2100) using outputs from five CMIP5 (Coupled Model Intercomparison Project Phase-5) Earth System Models corresponding to Representative Concentration Pathway (RCP): RCP2.6, RCP4.5, and RCP8.5 scenarios. The results indicate that the fitted GLMs are able to capture spatiotemporal characteristics of observed precipitation and temperature fields. According to the downscaled future climate, mean precipitation is projected to increase in summer and decrease in winter while minimum temperature is expected to warm faster than the maximum temperature. Climate extremes are projected to intensify with increased radiative forcing.
Wang, Pengwei; Wang, Zhishun; He, Lianghua
2012-03-30
Functional Magnetic Resonance Imaging (fMRI), measuring Blood Oxygen Level-Dependent (BOLD), is a widely used tool to reveal spatiotemporal pattern of neural activity in human brain. Standard analysis of fMRI data relies on a general linear model and the model is constructed by convolving the task stimuli with a hypothesized hemodynamic response function (HRF). To capture possible phase shifts in the observed BOLD response, the informed basis functions including canonical HRF and its temporal derivative, have been proposed to extend the hypothesized hemodynamic response in order to obtain a good fitting model. Different t contrasts are constructed from the estimated model parameters for detecting the neural activity between different task conditions. However, the estimated model parameters corresponding to the orthogonal basis functions have different physical meanings. It remains unclear how to combine the neural features detected by the two basis functions and construct t contrasts for further analyses. In this paper, we have proposed a novel method for representing multiple basis functions in complex domain to model the task-driven fMRI data. Using this method, we can treat each pair of model parameters, corresponding respectively to canonical HRF and its temporal derivative, as one complex number for each task condition. Using the specific rule we have defined, we can conveniently perform arithmetical operations on the estimated model parameters and generate different t contrasts. We validate this method using the fMRI data acquired from twenty-two healthy participants who underwent an auditory stimulation task.
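The pairing of the two basis-function weights can be sketched as follows; the specific arithmetic rule defined in the paper for building t contrasts is not reproduced here, only the generic complex representation (all function names are ours):

```python
import numpy as np

def to_complex(beta_hrf, beta_deriv):
    """Combine the GLM weight for the canonical HRF and the weight for
    its temporal derivative into a single complex number per condition."""
    return beta_hrf + 1j * beta_deriv

def amplitude(z):
    """Magnitude of the response, insensitive to small phase shifts."""
    return np.abs(z)

def phase(z):
    """Phase angle, related to the latency shift of the BOLD response."""
    return np.angle(z)
```

A between-condition comparison then becomes ordinary complex arithmetic, e.g. `to_complex(b1a, b2a) - to_complex(b1b, b2b)`.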
Tian, Fenghua; Liu, Hanli
2014-01-15
One of the main challenges in functional diffuse optical tomography (DOT) is to accurately recover the depth of brain activation, which is even more essential when differentiating true brain signals from task-evoked artifacts in the scalp. Recently, we developed a depth-compensated algorithm (DCA) to minimize the depth localization error in DOT. However, the semi-infinite model that was used in DCA deviated significantly from the realistic human head anatomy. In the present work, we incorporated depth-compensated DOT (DC-DOT) with a standard anatomical atlas of human head. Computer simulations and human measurements of sensorimotor activation were conducted to examine and prove the depth specificity and quantification accuracy of brain atlas-based DC-DOT. In addition, node-wise statistical analysis based on the general linear model (GLM) was also implemented and performed in this study, showing the robustness of DC-DOT that can accurately identify brain activation at the correct depth for functional brain imaging, even when co-existing with superficial artifacts.
NASA Astrophysics Data System (ADS)
Gutiérrez Frez, Luis; Pantoja, José
2015-09-01
We construct a complex linear Weil representation ρ of the generalized special linear group G = SL_*^1(2, A_n), where A_n = K[x]/⟨x^n⟩, K is the quadratic extension of the finite field k of q elements (q odd), and A_n is endowed with a second class involution. After the construction of a specific set of data, the representation is defined on the generators of a Bruhat presentation of G, via linear operators satisfying the relations of the presentation. The structure of a unitary group U associated with G is described. Using this group we obtain a first decomposition of ρ.
2010-01-01
Background Near-infrared spectroscopy (NIRS) is a non-invasive neuroimaging technique that recently has been developed to measure the changes of cerebral blood oxygenation associated with brain activities. To date, for functional brain mapping applications, there is no standard on-line method for analysing NIRS data. Methods In this paper, a novel on-line NIRS data analysis framework taking advantage of both the general linear model (GLM) and the Kalman estimator is devised. The Kalman estimator is used to update the GLM coefficients recursively, and one critical coefficient regarding brain activities is then passed to a t-statistical test. The t-statistical test result is used to update a topographic brain activation map. Meanwhile, a set of high-pass filters is plugged into the GLM to remove very low-frequency noise, and an autoregressive (AR) model is used to account for the temporal correlation caused by physiological noise in NIRS time series. A set of data recorded in finger tapping experiments is studied using the proposed framework. Results The obtained results suggest that the method can effectively track the task-related brain activation areas and suppress noise distortion in the estimation while the experiment is running. Thereby, the potential of the proposed method for real-time NIRS-based brain imaging was demonstrated. Conclusions This paper presents a novel on-line approach for analysing NIRS data for functional brain mapping applications. This approach demonstrates the potential of a real-time-updating topographic brain activation map. PMID:21138595
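The recursive coefficient update at the heart of such a framework is the standard Kalman (recursive least squares) step for a static coefficient vector. A minimal sketch with an assumed scalar observation-noise variance `r`, omitting the high-pass and AR components described above:

```python
import numpy as np

def kalman_glm_update(beta, P, x, y, r=1.0):
    """One recursive update of GLM coefficients beta (covariance P)
    after observing regressor row x and signal value y (noise var r)."""
    x = np.asarray(x, dtype=float)
    S = x @ P @ x + r            # innovation variance
    K = P @ x / S                # Kalman gain
    beta = beta + K * (y - x @ beta)
    P = P - np.outer(K, x @ P)   # covariance downdate
    return beta, P
```

Calling this once per new NIRS sample keeps a running coefficient estimate that can be fed into a t-test at every time step.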
NASA Astrophysics Data System (ADS)
Kim, Y.; Katz, R. W.; Rajagopalan, B.; Podesta, G. P.
2009-12-01
Climate forecasts and climate change scenarios are typically provided in the form of monthly or seasonally aggregated totals or means. But time series of daily weather (e.g., precipitation amount, minimum and maximum temperature) are commonly required for use in agricultural decision-making. Stochastic weather generators constitute one technique to temporally downscale such climate information. The recently introduced approach for stochastic weather generators, based on generalized linear modeling (GLM), is convenient for this purpose, especially with covariates to account for seasonality and teleconnections (e.g., with the El Niño phenomenon). Yet one important limitation of stochastic weather generators is a marked tendency to underestimate the observed interannual variance of seasonally aggregated variables. To reduce this “overdispersion” phenomenon, we incorporate time series of seasonal total precipitation and seasonal mean minimum and maximum temperature in the GLM weather generator as covariates. These seasonal time series are smoothed using locally weighted scatterplot smoothing (LOESS) to avoid introducing underdispersion. Because the aggregate variables appear explicitly in the weather generator, downscaling to daily sequences can be readily implemented. The proposed method is applied to time series of daily weather at Pergamino and Pilar in the Argentine Pampas. Seasonal precipitation and temperature forecasts produced by the International Research Institute for Climate and Society (IRI) are used as prototypes. In conjunction with the GLM weather generator, a resampling scheme is used to translate the uncertainty in the seasonal forecasts (the IRI format only specifies probabilities for three categories: below normal, near normal, and above normal) into the corresponding uncertainty for the daily weather statistics. The method is able to generate potentially useful shifts in the probability distributions of seasonally aggregated precipitation and
NASA Astrophysics Data System (ADS)
Widyaningsih, Yekti; Saefuddin, Asep; Notodiputro, Khairil A.; Wigena, Aji H.
2012-05-01
The objective of this research is to build a nested generalized linear mixed model using an ordinal response variable with some covariates. There are three main jobs in this paper, i.e., the parameter estimation procedure, simulation, and implementation of the model for real data. For the parameter estimation procedure, concepts of threshold, nested random effects, and the computational algorithm are described. The simulated data are built for three conditions to study the effect of different parameter values of the random effect distributions. The last job is the implementation of the model for data about poverty in 9 districts of Java Island. The districts are Kuningan, Karawang, and Majalengka, chosen randomly in West Java; Temanggung, Boyolali, and Cilacap from Central Java; and Blitar, Ngawi, and Jember from East Java. The covariates in this model are province, number of bad nutrition cases, number of farmer families, and number of health personnel. In this modeling, all covariates are grouped on an ordinal scale. The unit of observation in this research is the sub-district (kecamatan) nested in district, and districts (kabupaten) are nested in province. For the simulation results, ARB (Absolute Relative Bias) and RRMSE (Relative Root Mean Square Error) measures are used. They show that the province parameters have the highest bias, but a more stable RRMSE across all conditions. The simulation design needs to be improved by adding other conditions, such as higher correlation between covariates. Furthermore, as the result of the model implementation for the data, only the number of farmer families and the number of medical personnel have significant contributions to the level of poverty in Central Java and East Java provinces, and only district 2 (Karawang) of province 1 (West Java) has a random effect different from the others. The source of the data is PODES (Potensi Desa) 2008 from BPS (Badan Pusat Statistik).
Rogers, Katherine D; Young, Alys; Lovell, Karina; Campbell, Malcolm; Scott, Paul R; Kendal, Sarah
2013-01-01
The present study aimed to translate three widely used clinical assessment measures into British Sign Language (BSL), to pilot the BSL versions, and to establish their validity and reliability. These were the Patient Health Questionnaire (PHQ-9), the Generalized Anxiety Disorder 7-item (GAD-7) scale, and the Work and Social Adjustment Scale (WSAS). The three assessment measures were translated into BSL and piloted with the Deaf signing population in the United Kingdom (n = 113). Participants completed the PHQ-9, GAD-7, WSAS, and Clinical Outcomes in Routine Evaluation-Outcome Measure (CORE-OM) online. The reliability and validity of the BSL versions of the PHQ-9, GAD-7, and WSAS were examined and found to be good. The construct validity analysis of the BSL PHQ-9 did not replicate the single-factor solution found in the hearing population. The BSL versions of the PHQ-9, GAD-7, and WSAS have been produced and can be used with the signing Deaf population in the United Kingdom. This means that accessible mental health assessments are now available for Deaf people who are BSL users, which could assist in the early identification of mental health difficulties.
ERIC Educational Resources Information Center
Chen, Haiwen
2012-01-01
In this article, linear item response theory (IRT) observed-score equating is compared under a generalized kernel equating framework with Levine observed-score equating for nonequivalent groups with anchor test design. Interestingly, these two equating methods are closely related despite being based on different methodologies. Specifically, when…
ERIC Educational Resources Information Center
Bashaw, W. L., Ed.; Findley, Warren G., Ed.
This volume contains the five major addresses and subsequent discussion from the Symposium on the General Linear Models Approach to the Analysis of Experimental Data in Educational Research, which was held in 1967 in Athens, Georgia. The symposium was designed to produce systematic information, including new methodology, for dissemination to the…
ERIC Educational Resources Information Center
Dimitrov, Dimiter M.; Raykov, Tenko; AL-Qataee, Abdullah Ali
2015-01-01
This article is concerned with developing a measure of general academic ability (GAA) for high school graduates who apply to colleges, as well as with the identification of optimal weights of the GAA indicators in a linear combination that yields a composite score with maximal reliability and maximal predictive validity, employing the framework of…
Lewis, Robert Michael (College of William and Mary, Williamsburg, VA); Torczon, Virginia Joanne (College of William and Mary, Williamsburg, VA); Kolda, Tamara Gibson
2006-08-01
We consider the solution of nonlinear programs in the case where derivatives of the objective function and nonlinear constraints are unavailable. To solve such problems, we propose an adaptation of a method due to Conn, Gould, Sartenaer, and Toint that proceeds by approximately minimizing a succession of linearly constrained augmented Lagrangians. Our modification is to use a derivative-free generating set direct search algorithm to solve the linearly constrained subproblems. The stopping criterion proposed by Conn, Gould, Sartenaer and Toint for the approximate solution of the subproblems requires explicit knowledge of derivatives. Such information is presumed absent in the generating set search method we employ. Instead, we show that stationarity results for linearly constrained generating set search methods provide a derivative-free stopping criterion, based on a step-length control parameter, that is sufficient to preserve the convergence properties of the original augmented Lagrangian algorithm.
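The inner-solver idea, a generating set search whose step-length control parameter doubles as the derivative-free stopping criterion, can be sketched on an unconstrained model problem; the linearly constrained, augmented Lagrangian machinery of the paper is omitted here:

```python
import numpy as np

def compass_search(f, x0, step=1.0, tol=1e-6, max_iter=10000):
    """Generating set (compass) search: poll the +/- coordinate
    directions; shrink the step when no direction improves.  The
    step-length parameter itself serves as the stopping criterion,
    requiring no derivative information."""
    x = np.asarray(x0, dtype=float)
    fx = f(x)
    D = np.vstack([np.eye(len(x)), -np.eye(len(x))])
    it = 0
    while step > tol and it < max_iter:
        it += 1
        for d in D:
            trial = x + step * d
            ft = f(trial)
            if ft < fx:          # accept the first improving poll point
                x, fx = trial, ft
                break
        else:
            step *= 0.5          # no improving direction: contract
    return x
```

In the paper's setting the poll directions would be generators of the cone of feasible directions for the linear constraints, and a small final `step` certifies approximate stationarity of the subproblem.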
Cadmium-hazard mapping using a general linear regression model (Irr-Cad) for rapid risk assessment.
Simmons, Robert W; Noble, Andrew D; Pongsakul, P; Sukreeyapongse, O; Chinabut, N
2009-02-01
Research undertaken over the last 40 years has identified the irrefutable relationship between the long-term consumption of cadmium (Cd)-contaminated rice and human Cd disease. In order to protect public health and livelihood security, the ability to accurately and rapidly determine spatial Cd contamination is of high priority. During 2001-2004, a general linear regression model, Irr-Cad, was developed to predict the spatial distribution of soil Cd in a Cd/Zn co-contaminated cascading irrigated rice-based system in Mae Sot District, Tak Province, Thailand (Longitude E 98 degrees 59'-E 98 degrees 63' and Latitude N 16 degrees 67'-16 degrees 66'). The results indicate that Irr-Cad accounted for 98% of the variance in mean Field Order total soil Cd. Preliminary validation indicated that Irr-Cad 'predicted' mean Field Order total soil Cd was significantly (p < 0.001) correlated (R² = 0.92) with 'observed' mean Field Order total soil Cd values. Field Order is determined by a given field's proximity to primary outlets from in-field irrigation channels and subsequent inter-field irrigation flows. This in turn determines Field Order in Irrigation Sequence (Field Order(IS)). Mean Field Order total soil Cd represents the mean total soil Cd (aqua regia-digested) for a given Field Order(IS). In 2004-2005, Irr-Cad was utilized to evaluate the spatial distribution of total soil Cd in a 'high-risk' area of Mae Sot District. Secondary validation on six randomly selected field groups verified that Irr-Cad predicted mean Field Order total soil Cd was significantly (p < 0.001) correlated with the observed mean Field Order total soil Cd, with R² values ranging from 0.89 to 0.97. The practical applicability of Irr-Cad lies in its minimal input requirements, namely the classification of fields in terms of Field Order(IS), strategic sampling of all primary fields, laboratory-based determination of total soil Cd (T-Cd(P)) and the use of a weighted coefficient for Cd (Coeff
NASA Technical Reports Server (NTRS)
Utku, S.
1969-01-01
A general purpose digital computer program for the in-core solution of linear equilibrium problems of structural mechanics is documented. The program requires minimum input for the description of the problem. The solution is obtained by means of the displacement method and the finite element technique. Almost any geometry and structure may be handled because of the availability of linear, triangular, quadrilateral, tetrahedral, hexahedral, conical, triangular torus, and quadrilateral torus elements. The assumption of piecewise linear deflection distribution insures monotonic convergence of the deflections from the stiffer side with decreasing mesh size. The stresses are provided by the best-fit strain tensors in the least squares at the mesh points where the deflections are given. The selection of local coordinate systems whenever necessary is automatic. The core memory is used by means of dynamic memory allocation, an optional mesh-point relabelling scheme and imposition of the boundary conditions during the assembly time.
Fogolari, Federico; Corazza, Alessandra; Esposito, Gennaro
2015-04-05
The generalized Born model in the Onufriev, Bashford, and Case (Onufriev et al., Proteins: Struct Funct Genet 2004, 55, 383) implementation has emerged as one of the best compromises between accuracy and speed of computation. For simulations of nucleic acids, however, a number of issues should be addressed: (1) the generalized Born model is based on a linear model and the linearization of the reference Poisson-Boltzmann equation may be questioned for highly charged systems such as nucleic acids; (2) although much attention has been given to potentials, solvation forces could be much less sensitive to linearization than the potentials; and (3) the accuracy of the Onufriev-Bashford-Case (OBC) model for nucleic acids depends on fine tuning of parameters. Here, we show that the linearization of the Poisson-Boltzmann equation has mild effects on computed forces, and that with optimal choice of the OBC model parameters, solvation forces, essential for molecular dynamics simulations, agree well with those computed using the reference Poisson-Boltzmann model.
1975-04-15
paper is a compromise in the same nature as the 2SLS. We use the Moore-Penrose (MP) generalized inverse to... Moore-Penrose generalized inverse; Indirect Least Squares; Two Stage Least Squares; Instrumental Variables; Limited Information Maximum Likelihood... Abstract - In this paper, we propose a procedure based on the use of the Moore-Penrose inverse of matrices for deriving unique Indirect Least Squares
Energetics of geostrophic adjustment in rotating flow
NASA Astrophysics Data System (ADS)
Fang, J.; Wu, R. S.
2002-09-01
Energetics of geostrophic adjustment in rotating flow is examined in detail with a linear shallow water model. The initial unbalanced flow considered first falls under two classes. The first is similar to that adopted by Gill and is here referred to as a mass imbalance model, for the flow is initially motionless but with a sea surface displacement. The other is the same as that considered by Rossby and is referred to as a momentum imbalance model, since there is only a velocity perturbation in the initial field. The significant feature of the energetics of geostrophic adjustment for the above two extreme models is that although the energy conversion ratio has a large case-to-case variability for different initial conditions, its value is bounded below by 0 and above by 1/2. Based on the discussion of the above extreme models, the energetics of adjustment for an arbitrary initial condition is investigated. It is found that the characteristics of the energetics of geostrophic adjustment mentioned above are also applicable to adjustment of the general unbalanced flow, under the condition that the energy conversion ratio is redefined as the conversion ratio between the change of kinetic energy and potential energy of the deviational fields.
Jäntschi, Lorentz
2016-01-01
Multiple linear regression analysis is widely used to link an outcome with predictors for better understanding of the behaviour of the outcome of interest. Usually, under the assumption that the errors follow a normal distribution, the coefficients of the model are estimated by minimizing the sum of squared deviations. A new approach based on maximum likelihood estimation is proposed for finding the coefficients on linear models with two predictors without any constrictive assumptions on the distribution of the errors. The algorithm was developed, implemented, and tested as proof-of-concept using fourteen sets of compounds by investigating the link between activity/property (as outcome) and structural feature information incorporated by molecular descriptors (as predictors). The results on real data demonstrated that in all investigated cases the power of the error is significantly different by the convenient value of two when the Gauss-Laplace distribution was used to relax the constrictive assumption of the normal distribution of the error. Therefore, the Gauss-Laplace distribution of the error could not be rejected while the hypothesis that the power of the error from Gauss-Laplace distribution is normal distributed also failed to be rejected. PMID:28090215
Kim, Hyunwoo J.; Adluru, Nagesh; Collins, Maxwell D.; Chung, Moo K.; Bendlin, Barbara B.; Johnson, Sterling C.; Davidson, Richard J.; Singh, Vikas
2014-01-01
Linear regression is a parametric model which is ubiquitous in scientific analysis. The classical setup where the observations and responses, i.e., (xi, yi) pairs, are Euclidean is well studied. The setting where yi is manifold-valued is a topic of much interest, motivated by applications in shape analysis, topic modeling, and medical imaging. Recent work gives strategies for max-margin classifiers, principal components analysis, and dictionary learning on certain types of manifolds. For parametric regression specifically, results within the last year provide mechanisms to regress one real-valued parameter, xi ∈ R, against a manifold-valued variable, yi ∈ M. We seek to substantially extend the operating range of such methods by deriving schemes for multivariate multiple linear regression: a manifold-valued dependent variable against multiple independent variables, i.e., f : R^n → M. Our variational algorithm efficiently solves for multiple geodesic bases on the manifold concurrently via gradient updates. This allows us to answer questions such as: what is the relationship of the measurement at voxel y to disease when conditioned on age and gender? We show applications to statistical analysis of diffusion-weighted images, which give rise to regression tasks on the manifold GL(n)/O(n) for diffusion tensor images (DTI) and the Hilbert unit sphere for orientation distribution functions (ODF) from high angular resolution acquisition. The companion open-source code is available on nitrc.org/projects/riem_mglm. PMID:25580070
Harry, H.H.
1988-03-11
Abstract and method for the adjustment and alignment of shafts in high power devices. A plurality of adjacent rotatable angled cylinders are positioned between a base and the shaft to be aligned which when rotated introduce an axial offset. The apparatus is electrically conductive and constructed of a structurally rigid material. The angled cylinders allow the shaft such as the center conductor in a pulse line machine to be offset in any desired alignment position within the range of the apparatus. 3 figs.
Harry, Herbert H.
1989-01-01
Apparatus and method for the adjustment and alignment of shafts in high power devices. A plurality of adjacent rotatable angled cylinders are positioned between a base and the shaft to be aligned which when rotated introduce an axial offset. The apparatus is electrically conductive and constructed of a structurally rigid material. The angled cylinders allow the shaft such as the center conductor in a pulse line machine to be offset in any desired alignment position within the range of the apparatus.
NASA Astrophysics Data System (ADS)
Aragón, J. L.; Vázquez Polo, G.; Gómez, A.
A computational algorithm for the generation of quasiperiodic tiles based on the cut and projection method is presented. The algorithm is capable of projecting any type of lattice embedded in any Euclidean space onto any subspace, making it possible to generate quasiperiodic tiles with any desired symmetry. The simplex method of linear programming and the Moore-Penrose generalized inverse are used to construct the cut (strip) in the higher dimensional space which is to be projected.
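For the simplest case the cut-and-projection construction reduces to a few lines: keeping the points of Z^2 whose orthogonal projection falls inside a strip and projecting them onto a line of slope 1/phi yields the Fibonacci chain with two tile lengths in the golden ratio. This sketch omits the general-lattice machinery (simplex method, Moore-Penrose inverse) described above:

```python
import numpy as np

# 1-D quasiperiodic (Fibonacci) chain from the square lattice Z^2.
phi = (1 + np.sqrt(5)) / 2
e_par = np.array([phi, 1.0]) / np.sqrt(1 + phi**2)    # physical direction
e_perp = np.array([-1.0, phi]) / np.sqrt(1 + phi**2)  # internal direction

pts = np.array([(n, m) for n in range(-30, 31) for m in range(-30, 31)])
perp = pts @ e_perp
w = np.abs(e_perp).sum()          # strip width: unit square projected on e_perp
sel = pts[(perp >= 0) & (perp < w)]
x = np.sort(sel @ e_par)          # projected 1-D quasiperiodic point set
gaps = np.round(np.diff(x), 6)    # exactly two tile lengths appear
```

The two distinct values in `gaps` are the long and short Fibonacci tiles, with length ratio equal to the golden mean.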
NASA Astrophysics Data System (ADS)
Tsuboi, Zengo
2013-05-01
In [1] (Z. Tsuboi, Nucl. Phys. B 826 (2010) 399, arXiv:0906.2039), we proposed Wronskian-like solutions of the T-system for the [M,N]-hook of the general linear superalgebra gl(M|N). We have generalized these Wronskian-like solutions to the ones for the general T-hook, which is a union of an [M1,N1]-hook and an [M2,N2]-hook (M = M1 + M2, N = N1 + N2). These solutions are related to Weyl-type supercharacter formulas of infinite-dimensional unitarizable modules of gl(M|N). Our solutions also include a Wronskian-like solution discussed in [2] (N. Gromov, V. Kazakov, S. Leurent, Z. Tsuboi, JHEP 1101 (2011) 155, arXiv:1010.2720) in relation to the AdS5/CFT4 spectral problem.
NASA Technical Reports Server (NTRS)
Ustino, Eugene A.
2006-01-01
This slide presentation reviews the observable radiances as functions of atmospheric and surface parameters, presents the mathematics of atmospheric weighting functions (WFs) and surface partial derivatives (PDs), and states the equation of the forward radiative transfer (RT) problem. For non-scattering atmospheres this can be done analytically, and all WFs and PDs can be computed analytically using the direct linearization approach. For scattering atmospheres, in the general case, the solution of the forward RT problem can be obtained only numerically, but we need only two numerical solutions, one of the forward RT problem and one of the adjoint RT problem, to compute all WFs and PDs we can think of. In this presentation we discuss applications of both the linearization and adjoint approaches.
NASA Technical Reports Server (NTRS)
Tal-Ezer, Hillel
1987-01-01
During the process of solving a mathematical model numerically, there is often a need to operate on a vector v by an operator which can be expressed as f(A), where A is an N×N matrix (e.g., exp(A), sin(A), A^(-1)). Except for very simple matrices, it is impractical to construct the matrix f(A) explicitly; usually an approximation to it is used. In the present research, an algorithm is developed which uses a polynomial approximation to f(A). It is reduced to a problem of approximating f(z) by a polynomial in z, where z belongs to the domain D in the complex plane which includes all the eigenvalues of A. This problem of approximation is approached by interpolating the function f(z) in a certain set of points which is known to have some maximal properties. The approximation thus achieved is almost best. Implementation of the algorithm for some practical problems is described. Since the solution to a linear system Ax = b is x = A^(-1)b, an iterative solution to it can be regarded as a polynomial approximation to f(A) = A^(-1). Implementing the algorithm in this case is also described.
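The core idea, applying a polynomial in A to v using only matrix-vector products and never forming f(A), can be illustrated with a plain truncated Taylor series for exp(A)v; the near-best interpolation in the complex domain D described above is a refinement of this simple scheme:

```python
import numpy as np

def expm_apply(A, v, terms=30):
    """Approximate exp(A) @ v by the truncated Taylor polynomial,
    using only matrix-vector products (exp(A) is never formed)."""
    A = np.asarray(A, dtype=float)
    result = v.astype(float).copy()
    term = v.astype(float).copy()
    for k in range(1, terms):
        term = (A @ term) / k     # accumulates A^k v / k!
        result += term
    return result
```

Each iteration costs one matrix-vector product, so the total work is O(terms * N^2) for a dense A, versus O(N^3) for forming exp(A) explicitly.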
Lipparini, Filippo; Scalmani, Giovanni; Frisch, Michael J.; Lagardère, Louis; Stamm, Benjamin; Cancès, Eric; Maday, Yvon; Piquemal, Jean-Philip; Mennucci, Benedetta
2014-11-14
We present the general theory and implementation of the Conductor-like Screening Model according to the recently developed ddCOSMO paradigm. The various quantities needed to apply ddCOSMO at different levels of theory, including quantum mechanical descriptions, are discussed in detail, with a particular focus on how to compute the integrals needed to evaluate the ddCOSMO solvation energy and its derivatives. The overall computational cost of a ddCOSMO computation is then analyzed and decomposed into its various steps, and the relative weights of these contributions are discussed for both ddCOSMO and the fastest available alternative discretization of the COSMO equations. Finally, the scaling of the cost of the various steps with respect to the size of the solute is analyzed and discussed, showing how ddCOSMO opens significant new possibilities when cheap or hybrid molecular mechanics/quantum mechanics methods are used to describe the solute.
Agogo, George O
2017-01-01
Measurement error in exposure variables is a serious impediment in epidemiological studies that relate exposures to health outcomes. In nutritional studies, interest could be in the association between long-term dietary intake and disease occurrence. Long-term intake is usually assessed with a food frequency questionnaire (FFQ), which is prone to recall bias. Measurement error in FFQ-reported intakes leads to bias in the parameter estimate that quantifies the association. To adjust for bias in the association, a calibration study is required to obtain unbiased intake measurements using a short-term instrument such as the 24-hour recall (24HR). The 24HR intakes are used as the response in regression calibration to adjust for bias in the association. For foods not consumed daily, 24HR-reported intakes are usually characterized by excess zeroes, right skewness, and heteroscedasticity, posing a serious challenge to regression calibration modeling. We proposed a zero-augmented calibration model to adjust for measurement error in reported intake, while handling excess zeroes, skewness, and heteroscedasticity simultaneously without transforming 24HR intake values. We compared the proposed calibration method with the standard method and with methods that ignore measurement error by estimating long-term intake with 24HR- and FFQ-reported intakes. The comparison was done in real and simulated datasets. With the 24HR, the mean increase in mercury level per ounce of fish intake was about 0.4; with the FFQ intake, the increase was about 1.2. With both calibration methods, the mean increase was about 2.0. A similar trend was observed in the simulation study. In conclusion, the proposed calibration method performs at least as well as the standard method.
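Standard regression calibration, the baseline method the abstract compares against, can be sketched in its simplest linear form on simulated data. The generating model, variable names, and parameter values below are invented for illustration and are not the study's zero-augmented model or data.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000
true_intake = rng.gamma(2.0, 2.0, n)           # long-term intake (unobserved)
ffq = true_intake + rng.normal(0, 2, n)        # FFQ report with measurement error
recall = true_intake + rng.normal(0, 0.5, n)   # 24HR: unbiased reference, still noisy
beta_true = 2.0
outcome = beta_true * true_intake + rng.normal(0, 1, n)

def slope(x, y):
    # least-squares slope of y on x, with an intercept
    X = np.column_stack([np.ones_like(x), x])
    return np.linalg.lstsq(X, y, rcond=None)[0][1]

beta_naive = slope(ffq, outcome)    # attenuated by the error in the FFQ
lam = slope(ffq, recall)            # calibration slope from the 24HR substudy
beta_adjusted = beta_naive / lam    # regression-calibration estimate
print(round(beta_naive, 2), round(beta_adjusted, 2))
```

The naive slope is attenuated toward zero by the FFQ error, and dividing by the calibration slope recovers an estimate near the true association, mirroring the pattern of FFQ versus calibrated estimates reported in the abstract.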
Zheng, Xueying; Qin, Guoyou; Tu, Dongsheng
2017-02-19
Motivated by the analysis of quality of life data from a clinical trial on early breast cancer, we propose in this paper a generalized partially linear mean-covariance regression model for longitudinal proportional data, which are bounded in a closed interval. Cholesky decomposition of the covariance matrix for within-subject responses and generalized estimation equations are used to estimate unknown parameters and the nonlinear function in the model. Simulation studies are performed to evaluate the performance of the proposed estimation procedures. Our new model is also applied to analyze the data from the cancer clinical trial that motivated this research. In comparison with available models in the literature, the proposed model does not require specific parametric assumptions on the density function of the longitudinal responses and the probability function of the boundary values and can capture dynamic changes of time or other interested variables on both mean and covariance of the correlated proportional responses. Copyright © 2017 John Wiley & Sons, Ltd.
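The Cholesky decomposition of the within-subject covariance that underlies this kind of mean-covariance modeling can be illustrated on a small stand-in matrix. The AR(1) structure below is chosen only for the example (the paper estimates these quantities from data): the decomposition turns the covariance into unit lower-triangular autoregressive coefficients plus innovation variances.

```python
import numpy as np

# within-subject covariance of an AR(1) process with rho = 0.8,
# standing in for an estimated longitudinal covariance matrix
t = np.arange(5)
rho = 0.8
Sigma = rho ** np.abs(t[:, None] - t[None, :])

# modified Cholesky decomposition: T @ Sigma @ T.T = D, where T is unit
# lower-triangular and row j of -T holds the coefficients regressing
# response j on responses 0..j-1; D holds the innovation variances
L = np.linalg.cholesky(Sigma)
d = np.diag(L)
C = L / d                      # scale columns to get the unit lower-triangular factor
D = np.diag(d ** 2)
T = np.linalg.inv(C)

# for AR(1), each response depends only on its immediate predecessor
print(round(float(-T[2, 1]), 3))
```

The autoregressive coefficients and log-innovation variances obtained this way are unconstrained, which is what makes them convenient regression targets in generalized estimating equations.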
NASA Astrophysics Data System (ADS)
Malekan, Mohammad; Barros, Felicio Bruzzi
2016-11-01
Using a locally enriched strategy to enrich a small/local part of the problem by the generalized/extended finite element method (G/XFEM) leads to a non-optimal convergence rate and an ill-conditioned system of equations due to the presence of blending elements. The local enrichment can be chosen from polynomial, singular, branch or numerical types. The so-called stable version of the G/XFEM method provides a well-conditioned approach when only singular functions are used in the blending elements. This paper combines numerical enrichment functions obtained from the global-local G/XFEM method with polynomial enrichment, along with a well-conditioned approach, stable G/XFEM, in order to show the robustness and effectiveness of the approach. In global-local G/XFEM, the enrichment functions are constructed numerically from the solution of a local problem. Furthermore, several enrichment strategies are adopted along with the global-local enrichment. The results obtained with these enrichment strategies are discussed in detail, considering the convergence rate in strain energy, the growth rate of the condition number, and computational processing. Numerical experiments show that using geometrical enrichment along with stable G/XFEM for the global-local strategy improves the convergence rate and the conditioning of the problem. In addition, the results show that using polynomial enrichment for the global problem simultaneously with global-local enrichments leads to ill-conditioned system matrices and a poor convergence rate.
Van Den Berg, W; Rossing, W A H
2005-03-01
In 1-year experiments, the final population density of nematodes is usually modeled as a function of initial density. Often, estimation of the parameters is precarious because nematode measurements, although laborious and expensive, are imprecise and the range in initial densities may be small. The estimation procedure can be improved by using orthogonal regression with a parameter for initial density on each experimental unit. In multi-year experiments parameters of a dynamic model can be estimated with optimization techniques like simulated annealing or Bayesian methods such as Markov chain Monte Carlo (MCMC). With these algorithms information from different experiments can be combined. In multi-year dynamic models, the stability of the steady states is an important issue. With chaotic dynamics, prediction of densities and associated economic loss will be possible only on a short timescale. In this study, a generic model was developed that describes population dynamics in crop rotations. Mathematical analysis showed stable steady states do exist for this dynamic model. Using the Metropolis algorithm, the model was fitted to data from a multi-year experiment on Pratylenchus penetrans dynamics with treatments that varied between years. For three crops, parameters for a yield loss assessment model were available and gross margin of the six possible rotations comprising these three crops and a fallow year were compared at the steady state of nematode density. Sensitivity of mean gross margin to changes in the parameter estimates was investigated. We discuss the general applicability of the dynamic rotation model and the opportunities arising from combination of the model with Bayesian calibration techniques for more efficient utilization and collection of data relevant for economic evaluation of crop rotations.
19 CFR 201.205 - Salary adjustments.
Code of Federal Regulations, 2010 CFR
2010-04-01
... 19 Customs Duties 3 2010-04-01 2010-04-01 false Salary adjustments. 201.205 Section 201.205 Customs Duties UNITED STATES INTERNATIONAL TRADE COMMISSION GENERAL RULES OF GENERAL APPLICATION Debt Collection § 201.205 Salary adjustments. Any negative adjustment to pay arising out of an employee's...
Caçola, Priscila M; Pant, Mohan D
2014-10-01
The purpose was to use a multi-level statistical technique to analyze how children's age, motor proficiency, and cognitive styles interact to affect accuracy on reach estimation tasks via Motor Imagery and Visual Imagery. Results from the Generalized Linear Mixed Model analysis (GLMM) indicated that only the 7-year-old age group had significant random intercepts for both tasks. Motor proficiency predicted accuracy in reach tasks, and cognitive styles (object scale) predicted accuracy in the motor imagery task. GLMM analysis is suitable to explore age and other parameters of development. In this case, it allowed an assessment of motor proficiency interacting with age to shape how children represent, plan, and act on the environment.
NASA Astrophysics Data System (ADS)
Vossos, Spyridon; Vossos, Elias
2016-08-01
closed LSTT is reduced, if one RIO has small velocity wrt another RIO. Thus, we have an infinite number of closed LSTTs, each one with the corresponding SR theory. In the case that we relate accelerated observers with a variable metric of spacetime, we have the case of General Relativity (GR). To make this clear, we produce a generalized Schwarzschild metric, which is in accordance with any SR based on this closed complex LSTT and the Einstein equations. The application of this kind of transformation to SR and GR is obvious. But the results may be applied to any linear space of dimension four endowed with a steady or variable metric, whose elements (four-vectors) have a spatial part (vector) with Euclidean metric.
Piccardo, Matteo; Bloino, Julien; Barone, Vincenzo
2015-01-01
Models going beyond the rigid-rotor and the harmonic oscillator levels are mandatory for providing accurate theoretical predictions for several spectroscopic properties. Different strategies have been devised for this purpose. Among them, the treatment by perturbation theory of the molecular Hamiltonian after its expansion in power series of products of vibrational and rotational operators, also referred to as vibrational perturbation theory (VPT), is particularly appealing for its computational efficiency to treat medium-to-large systems. Moreover, generalized (GVPT) strategies combining the use of perturbative and variational formalisms can be adopted to further improve the accuracy of the results, with the first approach used for weakly coupled terms, and the second one to handle tightly coupled ones. In this context, the GVPT formulation for asymmetric, symmetric, and linear tops is revisited and fully generalized to both minima and first-order saddle points of the molecular potential energy surface. The computational strategies and approximations that can be adopted in dealing with GVPT computations are pointed out, with a particular attention devoted to the treatment of symmetry and degeneracies. A number of tests and applications are discussed, to show the possibilities of the developments, as regards both the variety of treatable systems and eligible methods. © 2015 Wiley Periodicals, Inc.
Larson, Michael J; Clayson, Peter E; Keith, Cierra M; Hunt, Isaac J; Hedges, Dawson W; Nielsen, Brent L; Call, Vaughn R A
2016-03-01
Older adults display alterations in neural reflections of conflict-related processing. We examined response times (RTs), error rates, and event-related potential (ERP; N2 and P3 components) indices of conflict adaptation (i.e., congruency sequence effects), a cognitive control process wherein previous-trial congruency influences current-trial performance, along with post-error slowing, correct-related negativity (CRN), error-related negativity (ERN), and error positivity (Pe) amplitudes in 65 healthy older adults and 94 healthy younger adults. Older adults showed generalized slowing, had decreased post-error slowing, and committed more errors than younger adults. Both older and younger adults showed conflict adaptation effects; the magnitude of conflict adaptation did not differ by age. N2 amplitudes were similar between groups; younger, but not older, adults showed conflict adaptation effects for P3 component amplitudes. CRN and Pe, but not ERN, amplitudes differed between groups. The data support generalized declines in cognitive control processes in older adults without specific deficits in conflict adaptation.
Odille, Fabrice G J; Jónsson, Stefán; Stjernqvist, Susann; Rydén, Tobias; Wärnmark, Kenneth
2007-01-01
A general mathematical model for the characterization of the dynamic (kinetically labile) association of supramolecular assemblies in solution is presented. It is an extension of the equal K (EK) model by the stringent use of linear algebra to allow for the simultaneous presence of an unlimited number of different units in the resulting assemblies. It allows for the analysis of highly complex dynamic equilibrium systems in solution, including both supramolecular homo- and copolymers, without recourse to extensive approximations, in a field in which other analytical methods are difficult. The derived mathematical methodology makes it possible to analyze dynamic systems such as supramolecular copolymers regarding, for instance, the degree of polymerization, the distribution of a given monomer in different copolymers, as well as its position in an aggregate. It is to date the only general means to characterize weak supramolecular systems. The model was fitted to NMR dilution titration data by using the program Matlab, and a detailed algorithm for the optimization of the different parameters has been developed. The methodology is applied to a case study, a hydrogen-bonded supramolecular system, salen 4+porphyrin 5. The system is formally a two-component system but in reality a three-component system. This results in a complex dynamic system in which all monomers are associated with each other by hydrogen bonding with different association constants, resulting in homo- and copolymers 4n5m as well as cyclic structures 6 and 7, in addition to free 4 and 5. The system was analyzed by extensive NMR dilution titrations at variable temperatures. All chemical shifts observed at different temperatures were used in the fitting to obtain the ΔH° and ΔS° values producing the best global fit. From the derived general mathematical expressions, system 4+5 could be characterized with respect to the above-mentioned parameters.
Shevenell, L.A.; Beauchamp, J.J.
1994-11-01
Several waste disposal sites are located on or adjacent to the karstic Maynardville Limestone (Cmn) and the Copper Ridge Dolomite (Ccr) at the Oak Ridge Y-12 Plant. These formations receive contaminants in groundwaters from nearby disposal sites, which can be transported quite rapidly through the karst flow system. In order to evaluate transport processes through the karst aquifer, the solutional aspects of the formations must be characterized. As one component of this characterization effort, statistical analyses were conducted on the data related to cavities in order to determine if a suitable model could be identified that is capable of predicting the probability of cavity size or distribution in locations for which drilling data are not available. Existing data on the locations (East, North coordinates), depths (and elevations), and sizes of known conduits and other water zones were used in the analyses. Two different models were constructed in an attempt to predict the distribution of cavities in the vicinity of the Y-12 Plant: General Linear Models (GLM) and Logistic Regression Models (LOG). Each of the models attempted was very sensitive to the data set used, and models based on subsets of the full data set did an inadequate job of predicting the behavior of the full data set. The fact that the Ccr and Cmn data sets differ significantly is not surprising, considering that the hydrogeology of the two formations differs. Flow in the Cmn is generally at elevations between 600 and 950 ft and is dominantly strike-parallel through submerged, partially mud-filled cavities with sizes up to 40 ft, but more typically less than 5 ft. Recognized flow in the Ccr is generally above 950 ft elevation, with flow both parallel and perpendicular to geologic strike through conduits which tend to be larger than those in the Cmn and are often not fully saturated at the shallower depths.
Nonlinear Hydrostatic Adjustment.
NASA Astrophysics Data System (ADS)
Bannon, Peter R.
1996-12-01
The final equilibrium state of Lamb's hydrostatic adjustment problem is found for finite amplitude heating. Lamb's problem consists of the response of a compressible atmosphere to an instantaneous, horizontally homogeneous heating. Results are presented for both isothermal and nonisothermal atmospheres. As in the linear problem, the fluid displacements are confined to the heated layer and to the region aloft, with no displacement of the fluid below the heating. The region above the heating is displaced uniformly upward for heating and downward for cooling. The amplitudes of the displacements are larger for cooling than for warming. Examination of the energetics reveals that the fraction of the heat deposited into the acoustic modes increases linearly with the amplitude of the heating. This fraction is typically small (e.g., 0.06% for a uniform warming of 1 K) and is essentially independent of the lapse rate of the base-state atmosphere. In contrast, a fixed fraction of the available energy generated by the heating goes into the acoustic modes. This fraction (e.g., 12% for a standard tropospheric lapse rate) agrees with the linear result and increases with increasing stability of the base-state atmosphere. The compressible results are compared to solutions using various forms of the soundproof equations. None of the soundproof equations predict the finite amplitude solutions accurately. However, in the small amplitude limit, only the equations for deep convection advanced by Dutton and Fichtl predict the thermodynamic state variables accurately for a nonisothermal base-state atmosphere.
Bacheler, N.M.; Hightower, J.E.; Burdick, S.M.; Paramore, L.M.; Buckel, J.A.; Pollock, K.H.
2010-01-01
Estimating the selectivity patterns of various fishing gears is a critical component of fisheries stock assessment due to the difficulty in obtaining representative samples from most gears. We used short-term recoveries (n = 3587) of tagged red drum Sciaenops ocellatus to directly estimate age- and length-based selectivity patterns using generalized linear models. The most parsimonious models were selected using AIC, and standard deviations were estimated using simulations. Selectivity of red drum was dependent upon the regulation period in which the fish was caught, the gear used to catch the fish (i.e., hook-and-line, gill nets, pound nets), and the fate of the fish upon recovery (i.e., harvested or released); models including all first-order interactions between main effects outperformed models without interactions. Selectivity of harvested fish was generally dome-shaped and shifted toward larger, older fish in response to regulation changes. Selectivity of caught-and-released red drum was highest on the youngest and smallest fish in the early and middle regulation periods, but increased on larger, legal-sized fish in the late regulation period. These results suggest that catch-and-release mortality has consistently been high for small, young red drum, but has recently become more common in larger, older fish. This method of estimating selectivity from short-term tag recoveries is valuable because it is simpler than full tag-return models, and may be more robust because yearly fishing and natural mortality rates do not need to be modeled and estimated. © 2009 Elsevier B.V.
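Selectivity estimation with a generalized linear model can be sketched as follows: a binomial GLM fit by iteratively reweighted least squares, with AIC used, as in the abstract, to choose between a monotone and a dome-shaped selectivity curve. The simulated data and model forms are illustrative only, not the study's tag-recovery data.

```python
import numpy as np

rng = np.random.default_rng(4)
length = rng.uniform(200, 900, 2000)              # fish length, mm
x = (length - 550.0) / 150.0                      # centered and scaled length
p = 1.0 / (1.0 + np.exp(-(1.0 - x ** 2)))         # true dome-shaped selectivity
caught = rng.binomial(1, p)

def fit_logistic(X, y, iters=25):
    # binomial GLM with logit link, fit by iteratively reweighted least squares
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        mu = 1.0 / (1.0 + np.exp(-(X @ beta)))
        W = mu * (1.0 - mu)
        z = X @ beta + (y - mu) / W
        beta = np.linalg.solve(X.T @ (W[:, None] * X), X.T @ (W * z))
    mu = 1.0 / (1.0 + np.exp(-(X @ beta)))
    ll = np.sum(y * np.log(mu) + (1 - y) * np.log(1 - mu))
    return beta, ll

X_lin = np.column_stack([np.ones_like(x), x])           # monotone selectivity
X_dome = np.column_stack([np.ones_like(x), x, x ** 2])  # dome-shaped selectivity
_, ll_lin = fit_logistic(X_lin, caught)
_, ll_dome = fit_logistic(X_dome, caught)
aic_lin = 2 * 2 - 2 * ll_lin
aic_dome = 2 * 3 - 2 * ll_dome
print(aic_dome < aic_lin)   # AIC selects the dome-shaped model
```

In a fuller version, gear, regulation period, and fate would enter as additional columns of the design matrix, with interaction terms compared by AIC in the same way.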
NASA Astrophysics Data System (ADS)
Pulquério, Mário; Garrett, Pedro; Santos, Filipe Duarte; Cruz, Maria João
2015-04-01
Portugal is in a climate change hot spot region, where precipitation is expected to decrease, with important impacts on future water availability. As one of the European countries most affected by droughts in recent decades, it is important to assess how future precipitation regimes will change in order to study the impacts on water resources. Due to the coarse scale of global circulation models, it is often necessary to downscale climate variables to the regional or local scale using statistical and/or dynamical techniques. In this study, we tested the use of a generalized linear model, as implemented in the program GLIMCLIM, to downscale precipitation for the center of Portugal, where the Tagus basin is located. An analysis of the method's performance is presented, as well as an evaluation of future precipitation trends and extremes for the twenty-first century. Additionally, we perform the first analysis of the evolution of droughts under climate change scenarios using the Standardized Precipitation Index in the study area. Results show that GLIMCLIM is able to capture the interannual variation and seasonality of precipitation correctly. However, summer precipitation is considerably overestimated. Additionally, precipitation extremes are in general well recovered, but high daily rainfall may be overestimated, and dry spell lengths are not correctly recovered by the model. Downscaled projections show a reduction in precipitation of between 19 and 28 % at the end of the century. Results indicate that precipitation extremes will decrease and the magnitude of droughts can increase up to three times relative to the 1961-1990 period, which can have strong ecological, social, and economic impacts.
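The Standardized Precipitation Index used above can be sketched in a few lines: fit a distribution (commonly a gamma) to precipitation totals and map the fitted CDF through the standard normal quantile function. This simplified sketch ignores the zero-precipitation adjustment of the full SPI and uses simulated data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
# monthly precipitation totals for one calendar month over 30 years (mm)
precip = rng.gamma(shape=2.0, scale=40.0, size=30)

def spi(values):
    # fit a gamma distribution, then transform its CDF to standard
    # normal quantiles; negative SPI = drier than typical, positive = wetter
    shape, loc, scale = stats.gamma.fit(values, floc=0)
    cdf = stats.gamma.cdf(values, shape, loc=loc, scale=scale)
    return stats.norm.ppf(cdf)

z = spi(precip)
print(round(float(z.mean()), 2))
```

Drought magnitude statements like those in the abstract then reduce to counting and accumulating months with SPI below a chosen threshold (e.g., -1).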
NASA Astrophysics Data System (ADS)
Fujii, Y.; Nakano, T.; Usui, N.; Matsumoto, S.; Tsujino, H.; Kamachi, M.
2014-12-01
This study develops a strategy for tracing a target water mass, and applies it to analyzing the pathway of the North Pacific Intermediate Water (NPIW) from the subarctic gyre to the northwestern part of the subtropical gyre south of Japan in a simulation of an ocean general circulation model. This strategy estimates the pathway of the water mass that travels from an origin to a destination area during a specific period using a conservation property concerning tangent linear and adjoint models. In our analysis, a large fraction of the low salinity origin water mass of NPIW initially comes from the Okhotsk or Bering Sea, flows through the southeastern side of the Kuril Islands, and is advected to the Mixed Water Region (MWR) by the Oyashio current. It then enters the Kuroshio Extension (KE) at the first KE ridge, and is advected eastward by the KE current. However, it deviates southward from the KE axis around 158°E over the Shatsky Rise, or around 170°E on the western side of the Emperor Seamount Chain, and enters the subtropical gyre. It is finally transported westward by the recirculation flow. This pathway corresponds well to the shortcut route of NPIW from MWR to the region south of Japan inferred from analysis of the long-term freshening trend of NPIW observation.
Rio, Daniel E; Rawlings, Robert R; Woltz, Lawrence A; Gilman, Jodi; Hommer, Daniel W
2013-01-01
A linear time-invariant model based on statistical time series analysis in the Fourier domain for single subjects is further developed and applied to functional MRI (fMRI) blood-oxygen level-dependent (BOLD) multivariate data. This methodology was originally developed to analyze multiple stimulus input evoked response BOLD data. However, to analyze clinical data generated using a repeated measures experimental design, the model has been extended to handle multivariate time series data and demonstrated on control and alcoholic subjects taken from data previously analyzed in the temporal domain. Analysis of BOLD data is typically carried out in the time domain where the data has a high temporal correlation. These analyses generally employ parametric models of the hemodynamic response function (HRF) where prewhitening of the data is attempted using autoregressive (AR) models for the noise. However, this data can be analyzed in the Fourier domain. Here, assumptions made on the noise structure are less restrictive, and hypothesis tests can be constructed based on voxel-specific nonparametric estimates of the hemodynamic transfer function (HRF in the Fourier domain). This is especially important for experimental designs involving multiple states (either stimulus or drug induced) that may alter the form of the response function.
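A toy version of the Fourier-domain idea, estimating a nonparametric transfer function as the ratio of the cross-spectrum to the input power spectrum rather than assuming a parametric HRF, can be sketched as follows. The event train, response shape, and noise level are invented for the example; real BOLD analysis would also handle trends and colored noise.

```python
import numpy as np

rng = np.random.default_rng(8)
n = 1024
stim = (rng.uniform(size=n) < 0.1).astype(float)    # event (stimulus) train
t = np.arange(32)
hrf = (t / 6.0) ** 2 * np.exp(-t / 3.0)             # toy hemodynamic response
hrf /= hrf.sum()
hrf_pad = np.zeros(n)
hrf_pad[:32] = hrf
# BOLD signal: circular convolution of events with the response, plus noise
bold = np.fft.irfft(np.fft.rfft(stim) * np.fft.rfft(hrf_pad), n)
bold += rng.normal(0, 0.001, n)

# nonparametric transfer-function estimate in the Fourier domain:
# H(w) = cross-spectrum(stim, bold) / power-spectrum(stim)
S = np.fft.rfft(stim)
B = np.fft.rfft(bold)
H = np.conj(S) * B / (np.abs(S) ** 2 + 1e-12)       # regularized spectral ratio
hrf_est = np.fft.irfft(H, n)[:32]                   # back to the time domain
print(np.max(np.abs(hrf_est - hrf)) < 0.01)
```

Because no parametric HRF shape or autoregressive noise model is imposed, hypothesis tests can be built directly on the voxel-specific estimate of H, which is the appeal of the frequency-domain formulation described above.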
Spence, Jeffrey S; Brier, Matthew R; Hart, John; Ferree, Thomas C
2013-03-01
Linear statistical models are used very effectively to assess task-related differences in EEG power spectral analyses. Mixed models, in particular, accommodate more than one variance component in a multisubject study, where many trials of each condition of interest are measured on each subject. Generally, intra- and intersubject variances are both important to determine correct standard errors for inference on functions of model parameters, but it is often assumed that intersubject variance is the most important consideration in a group study. In this article, we show that, under common assumptions, estimates of some functions of model parameters, including estimates of task-related differences, are properly tested relative to the intrasubject variance component only. A substantial gain in statistical power can arise from the proper separation of variance components when there is more than one source of variability. We first develop this result analytically, then show how it benefits a multiway factoring of spectral, spatial, and temporal components from EEG data acquired in a group of healthy subjects performing a well-studied response inhibition task.
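The central point, that within-subject contrasts are properly tested against the intrasubject variance component only, which can substantially increase power, can be demonstrated with a small simulation (all quantities invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(3)
n_subj, n_trials = 20, 50
subj = rng.normal(0, 5, n_subj)[:, None]        # intersubject variability (sd 5)
delta = 2.0                                      # true task-related difference
cond_a = subj + rng.normal(0, 1, (n_subj, n_trials))
cond_b = subj + delta + rng.normal(0, 1, (n_subj, n_trials))

# naive SE treats all trials as independent and mixes both variance components
pooled = np.concatenate([cond_a.ravel(), cond_b.ravel()])
se_naive = np.sqrt(2 * pooled.var(ddof=1) / (n_subj * n_trials))

# within-subject contrast: the subject effect cancels exactly, so the
# proper standard error involves only the intrasubject variance component
per_subj = cond_b.mean(axis=1) - cond_a.mean(axis=1)
se_within = per_subj.std(ddof=1) / np.sqrt(n_subj)
t_stat = per_subj.mean() / se_within
print(se_within < se_naive, t_stat > 10)
```

The same logic is what a correctly specified mixed model delivers automatically: task contrasts are judged against intrasubject noise, while subject-level effects absorb the intersubject variance.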
Wen, Xuesong; Li, Jing; Dickson, James S
2014-10-01
Translocation of foodborne pathogens into the interior tissues of pork through moisture enhancement may be of concern if the meat is undercooked. In the present study, a five-strain mixture of Campylobacter jejuni or Salmonella enterica Typhimurium was evenly spread on the surface of fresh pork loins. Pork loins were injected, sliced, vacuum packaged, and stored. After storage, sliced pork was cooked by traditional grilling. Survival of Salmonella Typhimurium and C. jejuni in the interior tissues of the samples was analyzed by enumeration. The populations of these pathogens dropped below the detection limit (10 colony-forming units/g) in most samples that were cooked to 71.1°C or above. The general linear mixed model procedure was used to model the association between risk factors and the presence/absence of these pathogens after cooking. Estimated regression coefficients associated with the fixed effects indicated that the recovery probability of Salmonella Typhimurium was negatively associated with an increasing level of enhancement. The effects of moisture enhancement and cooking on the recovery probability of C. jejuni were moderated by storage temperature. Our findings will assist food processors and regulatory agencies with science-based evaluation of current processing, storage, and cooking guidelines for moisture-enhanced pork.
Shirazi, Mohammadali; Lord, Dominique; Dhavala, Soma Sekhar; Geedipally, Srinivas Reddy
2016-06-01
Crash data can often be characterized by over-dispersion, a heavy (long) tail and many observations with the value zero. Over the last few years, a small number of researchers have started developing and applying novel and innovative multi-parameter models to analyze such data. These multi-parameter models have been proposed for overcoming the limitations of the traditional negative binomial (NB) model, which cannot handle this kind of data efficiently. The research documented in this paper continues the work related to multi-parameter models. The objective of this paper is to document the development and application of a flexible NB generalized linear model with randomly distributed mixed effects characterized by the Dirichlet process (NB-DP) to model crash data. The objective of the study was accomplished using two datasets. The new model was compared to the NB and the recently introduced model based on the mixture of the NB and Lindley (NB-L) distributions. Overall, the research study shows that the NB-DP model offers a better performance than the NB model when data are over-dispersed and have a heavy tail. The NB-DP performed better than the NB-L when the dataset has a heavy tail, but a smaller percentage of zeros. However, both models performed similarly when the dataset contained a large number of zeros. In addition to its greater flexibility, the NB-DP provides a clustering by-product that allows the safety analyst to better understand the characteristics of the data, such as the identification of outliers and sources of dispersion.
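The over-dispersion that motivates these multi-parameter models can be quantified directly. Under the NB2 parameterization, Var(Y) = mu + alpha * mu^2, so a method-of-moments estimate of alpha from the sample mean and variance indicates how far the data depart from a Poisson model (alpha = 0). A minimal sketch with an illustrative, zero-heavy count sample (not data from the study):

```python
import numpy as np

def moment_dispersion(counts):
    """Method-of-moments estimate of the NB2 dispersion parameter alpha,
    where Var(Y) = mu + alpha * mu**2.  alpha > 0 signals over-dispersion
    that a plain Poisson model cannot accommodate."""
    counts = np.asarray(counts, dtype=float)
    mu = counts.mean()
    var = counts.var(ddof=1)
    return (var - mu) / mu**2

# A heavy-tailed sample with many zeros, crash-count-like but illustrative only
sample = np.array([0, 0, 0, 0, 1, 0, 2, 0, 0, 5, 0, 1, 0, 0, 12])
alpha = moment_dispersion(sample)  # well above 0: strongly over-dispersed
```

For this sample the variance is several times the mean, so alpha comes out large; that is precisely the regime in which the abstract reports the NB-DP outperforming the plain NB model.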
Vilar, Lara; Gómez, Israel; Martínez-Vega, Javier; Echavarría, Pilar; Riaño, David; Martín, M. Pilar
2016-01-01
Socio-economic factors are of key importance during all phases of wildfire management, including prevention, suppression and restoration. However, modeling these factors at the proper spatial and temporal scale to understand fire regimes is still challenging. This study analyses socio-economic drivers of wildfire occurrence in central Spain. This site represents a good example of how human activities play a key role over wildfires in the European Mediterranean basin. Generalized Linear Models (GLM) and machine learning Maximum Entropy models (Maxent) predicted wildfire occurrence in the 1980s and also in the 2000s to identify changes between each period in the socio-economic drivers affecting wildfire occurrence. GLM bases its estimation on wildfire presence-absence observations, whereas Maxent uses wildfire presence-only data. According to indicators such as sensitivity and commission error, Maxent outperformed GLM in both periods. It achieved a sensitivity of 38.9% and a commission error of 43.9% for the 1980s, and 67.3% and 17.9% for the 2000s. GLM, by contrast, obtained 23.33%, 64.97%, 9.41% and 18.34%, respectively. However, GLM performed more consistently than Maxent in terms of the overall fit. Both models explained wildfires from predictors such as population density and Wildland Urban Interface (WUI), but differed in their relative contribution. As a result of urban sprawl and the abandonment of rural areas, predictors like WUI and distance to roads increased their contribution to both models in the 2000s, whereas Forest-Grassland Interface (FGI) influence decreased. This study demonstrates that the human component can be modelled with a spatio-temporal dimension to integrate it into wildfire risk assessment. PMID:27557113
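The two evaluation indicators used above come straight from a presence-absence confusion matrix: sensitivity is the fraction of observed fires the model predicts, and commission error is the fraction of predicted fires that were actually absences. A minimal sketch with hypothetical counts (not the study's data, though chosen to land near the Maxent 2000s figures):

```python
def sensitivity(tp, fn):
    """Fraction of observed wildfire presences the model predicts correctly."""
    return tp / (tp + fn)

def commission_error(tp, fp):
    """Fraction of predicted presences that were in fact absences."""
    return fp / (tp + fp)

# Hypothetical confusion-matrix counts, for illustration only:
# tp = true positives, fp = false positives, fn = false negatives
tp, fp, fn = 67, 15, 33
sens = sensitivity(tp, fn)        # 67 / 100 = 0.67
comm = commission_error(tp, fp)   # 15 / 82  ~ 0.18
```

Note the trade-off visible in the abstract's numbers: a model can raise sensitivity by predicting presence more liberally, but this typically inflates the commission error, which is why both indicators are reported together.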
Lin, Zi-Jing; Li, Lin; Cazzell, Mary; Liu, Hanli
2014-01-01
Diffuse optical tomography (DOT) is a variant of functional near infrared spectroscopy and has the capability of mapping or reconstructing three-dimensional (3D) hemodynamic changes due to brain activity. Common methods used in DOT image analysis to define brain activation have limitations because the selection of the activation period is relatively subjective. General linear model (GLM)-based analysis can overcome this limitation. In this study, we combine atlas-guided 3D DOT image reconstruction with GLM-based analysis (i.e., voxel-wise GLM analysis) to investigate the brain activity that is associated with risk decision-making processes. Risk decision-making is an important cognitive process and thus is an essential topic in the field of neuroscience. The Balloon Analog Risk Task (BART) is a valid experimental model and has been commonly used to assess human risk-taking actions and tendencies while facing risks. We have used the BART paradigm with a blocked design to investigate brain activations in the prefrontal and frontal cortical areas during decision-making from 37 human participants (22 males and 15 females). Voxel-wise GLM analysis was performed after a human brain atlas template and a depth compensation algorithm were combined to form atlas-guided DOT images. This work demonstrates the utility of voxel-wise GLM analysis with DOT for imaging and studying cognitive functions in response to risk decision-making. Results have shown significant hemodynamic changes in the dorsal lateral prefrontal cortex (DLPFC) during the active-choice mode and a different activation pattern between genders; these findings correlate well with published literature in functional magnetic resonance imaging (fMRI) and fNIRS studies. PMID:24619964
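At its core, a voxel-wise GLM fits the same design matrix (intercept plus task regressors) independently to every voxel's time series by least squares. A minimal sketch on synthetic data, with a blocked (boxcar) task regressor; the data, block lengths, and noise level are assumptions for illustration, not parameters of the study:

```python
import numpy as np

rng = np.random.default_rng(0)
n_scans, n_voxels = 50, 4

# Design matrix: intercept plus a boxcar task regressor (blocked design)
task = np.tile(np.r_[np.zeros(5), np.ones(5)], 5)   # 5 off/on blocks of 5 scans
X = np.column_stack([np.ones(n_scans), task])

# Synthetic voxel time series: only voxel 0 responds to the task
Y = rng.normal(0.0, 0.1, (n_scans, n_voxels))
Y[:, 0] += 2.0 * task

# One least-squares GLM fit per voxel, solved for all voxels at once
beta, *_ = np.linalg.lstsq(X, Y, rcond=None)   # beta has shape (2, n_voxels)
task_effect = beta[1]                           # task coefficient per voxel
```

Thresholding `task_effect` (after appropriate statistics) is what produces an activation map; here the responsive voxel's coefficient is near 2 while the silent voxels sit near 0.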
Forstmeier, Wolfgang
2011-11-01
General linear models (GLM) have become such universal tools of statistical inference that their applicability to a particular data set is rarely questioned. These models are designed to minimize residuals along the y-axis, while assuming that the predictor (x-axis) is free of statistical noise (ordinary least squares regression, OLS). However, in practice, this assumption is often violated, which can lead to erroneous conclusions, particularly when two predictors are correlated with each other. This is best illustrated by two examples from the study of allometry, which have received great interest: (1) the question of whether men or women have relatively larger brains after accounting for body size differences, and (2) whether men indeed have shorter index fingers relative to ring fingers (digit ratio) than women. In-depth analysis of these examples clearly shows that GLMs produce spurious sexual dimorphism in body shape where there is none (e.g. relative brain size). Likewise, they may fail to detect existing sexual dimorphisms in which the larger sex has the lower trait values (e.g. digit ratio) and, conversely, tend to exaggerate sexual dimorphisms in which the larger sex has the relatively larger trait value (e.g. most sexually selected traits). These artifacts can be avoided with reduced major axis regression (RMA), which simultaneously minimizes residuals along both the x- and the y-axis. Alternatively, in cases where isometry can be established, there are no objections to, and good reasons for, the continued use of ratios as a simple means of correcting for size differences.
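The attenuation that the abstract warns about is easy to reproduce: when the predictor carries measurement noise, the OLS slope is biased toward zero, while the RMA slope, sign(r) * sd(y)/sd(x), treats noise in x and y symmetrically. A minimal simulation sketch (the noise levels and sample size are illustrative assumptions):

```python
import numpy as np

def ols_slope(x, y):
    """Ordinary least squares: minimizes residuals along the y-axis only."""
    return np.cov(x, y, ddof=1)[0, 1] / np.var(x, ddof=1)

def rma_slope(x, y):
    """Reduced major axis: sign(r) * sd(y)/sd(x); treats statistical noise
    in x and y symmetrically, as is appropriate for allometric scaling."""
    r = np.corrcoef(x, y)[0, 1]
    return np.sign(r) * np.std(y, ddof=1) / np.std(x, ddof=1)

rng = np.random.default_rng(1)
true_x = rng.normal(0, 1, 500)
x = true_x + rng.normal(0, 0.5, 500)   # predictor measured with noise
y = true_x + rng.normal(0, 0.5, 500)   # underlying slope w.r.t. true_x is 1

b_ols = ols_slope(x, y)   # attenuated below 1 by the noise in x
b_rma = rma_slope(x, y)   # recovers a slope close to 1
```

With equal noise on both axes, OLS reports a slope near 0.8 for a true relationship of 1, exactly the kind of systematic underestimate that generates spurious "relative size" differences between groups; RMA does not share this bias.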
Ishii, Hideaki; Shimada, Miki; Yamaguchi, Hiroaki; Mano, Nariyasu
2016-11-01
We applied a new technique for quantitative linear range shift using in-source collision-induced dissociation (CID) to complex biological fluids to demonstrate its utility. The technique was used in a simultaneous quantitative determination method of 5-fluorouracil (5-FU), an anticancer drug for various solid tumors, and its metabolites in human plasma by liquid chromatography-electrospray ionization-tandem mass spectrometry (LC/ESI-MS/MS). To control adverse effects after administration of 5-FU, it is important to monitor the plasma concentrations of 5-FU and its metabolites; however, no simultaneous determination method has yet been reported because of the vastly different physical and chemical properties of these compounds. We developed a new analytical method for simultaneously determining 5-FU and its metabolites in human plasma by LC/ESI-MS/MS coupled with the technique for quantitative linear range shift using in-source CID. Hydrophilic interaction liquid chromatography using a stationary phase with zwitterionic functional groups, phosphorylcholine, was suitable for separation of 5-FU from its nucleoside and interfering endogenous materials. The addition of glycerin into the acetonitrile-rich eluent after LC separation improved the ESI-MS response of highly polar analytes. Based on the validation results, the linear range shift by in-source CID is a reliable technique even with complex biological samples such as plasma. Copyright © 2016 John Wiley & Sons Ltd.
Leite-Martins, Liliana R; Mahú, Maria I M; Costa, Ana L; Mendes, Angelo; Lopes, Elisabete; Mendonça, Denisa M V; Niza-Ribeiro, João J R; de Matos, Augusto J F; da Costa, Paulo Martins
2014-11-01
Antimicrobial resistance (AMR) is a growing global public health problem, which is caused by the use of antimicrobials in both human and animal medical practice. The objectives of the present cross-sectional study were as follows: (1) to determine the prevalence of resistance in Escherichia coli isolated from the feces of pets from the Porto region of Portugal against 19 antimicrobial agents and (2) to assess the individual, clinical and environmental characteristics associated with each pet as risk markers for the AMR of the E. coli isolates. From September 2009 to May 2012, rectal swabs were collected from pets selected using a systematic random procedure from the ordinary population of animals attending the Veterinary Hospital of Porto University. A total of 78 dogs and 22 cats were sampled with the objective of isolating E. coli. The animals' owners, who allowed the collection of fecal samples from their pets, answered a questionnaire to collect information about the markers that could influence the AMR of the enteric E. coli. Chromocult tryptone bile X-glucuronide agar was used for E. coli isolation, and the disk diffusion method was used to determine the antimicrobial susceptibility. The data were analyzed using a multilevel, univariable and multivariable generalized linear mixed model (GLMM). Almost half (49.7%) of the 396 isolates obtained in this study were multidrug-resistant. The E. coli isolates exhibited resistance to the antimicrobial agents ampicillin (51.3%), cephalothin (46.7%), tetracycline (45.2%) and streptomycin (43.4%). Previous quinolone treatment was the main risk marker for the presence of AMR for 12 (ampicillin, cephalothin, ceftazidime, cefotaxime, nalidixic acid, ciprofloxacin, gentamicin, tetracycline, streptomycin, chloramphenicol, trimethoprim-sulfamethoxazole and aztreonam) of the 15 antimicrobials assessed. Coprophagic habits were also positively associated with an increased risk of AMR for six drugs, ampicillin, amoxicillin
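A risk-marker association like "previous quinolone treatment vs ampicillin resistance" is conventionally summarized by an odds ratio with a Wald confidence interval from the 2x2 table. A minimal sketch, using hypothetical counts rather than the study's data:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Wald 95% CI from a 2x2 table:
         exposed:   a resistant, b susceptible
         unexposed: c resistant, d susceptible
    The log-OR standard error is sqrt(1/a + 1/b + 1/c + 1/d).
    """
    or_ = (a * d) / (b * c)
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Hypothetical counts, for illustration only
or_, lo, hi = odds_ratio_ci(a=30, b=10, c=40, d=60)  # OR = 4.5
```

The GLMM used in the study generalizes this univariable calculation by adjusting for the other markers simultaneously and by accounting for the clustering of isolates within animals.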
CALMAR: A New Versatile Code Library for Adjustment from Measurements
NASA Astrophysics Data System (ADS)
Grégoire, G.; Fausser, C.; Destouches, C.; Thiollay, N.
2016-02-01
CALMAR, a new library for adjustment, has been developed. This code performs simultaneous shape and level adjustment of an initial prior spectrum from measured reaction rates of activation foils. It is written in C++ using the ROOT data analysis framework, with all linear algebra classes. The STAYSL code has also been reimplemented in this library. Use of the code is very flexible: stand-alone, inside a C++ code, or driven by scripts. Validation and test cases are in progress. These cases will be included in the code package that will be available to the community. Future developments are discussed. The code should support the new Generalized Nuclear Data (GND) format. This new format has many advantages compared to ENDF.
The fully nonlinear stratified geostrophic adjustment problem
NASA Astrophysics Data System (ADS)
Coutino, Aaron; Stastna, Marek
2017-01-01
The study of the adjustment to equilibrium by a stratified fluid in a rotating reference frame is a classical problem in geophysical fluid dynamics. We consider the fully nonlinear, stratified adjustment problem from a numerical point of view. We present results of smoothed dam break simulations based on experiments in the published literature, with a focus on both the wave trains that propagate away from the nascent geostrophic state and the geostrophic state itself. We demonstrate that for Rossby numbers in excess of roughly 2 the wave train cannot be interpreted in terms of linear theory. This wave train consists of a leading solitary-like packet and a trailing tail of dispersive waves. However, it is found that the leading wave packet never completely separates from the trailing tail. Somewhat surprisingly, the inertial oscillations associated with the geostrophic state exhibit evidence of nonlinearity even when the Rossby number falls below 1. We vary the width of the initial disturbance and the rotation rate so as to keep the Rossby number fixed, and find that while the qualitative response remains consistent, the Froude number varies, and these variations are manifested in the form of the emanating wave train. For wider initial disturbances we find clear evidence of a wave train that initially propagates toward the near wall, reflects, and propagates away from the geostrophic state behind the leading wave train. We compare kinetic energy inside and outside of the geostrophic state, finding that for long times a Rossby number of around one-quarter yields an equal split between the two, with lower (higher) Rossby numbers yielding more energy in the geostrophic state (wave train). Finally we compare the energetics of the geostrophic state as the Rossby number varies, finding long-lived inertial oscillations in the majority of the cases and a general agreement with the past literature that employed either hydrostatic, shallow-water equation-based theory or
7 CFR 3.91 - Adjusted civil monetary penalties.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 7 Agriculture 1 2010-01-01 2010-01-01 false Adjusted civil monetary penalties. 3.91 Section 3.91 Agriculture Office of the Secretary of Agriculture DEBT MANAGEMENT Adjusted Civil Monetary Penalties § 3.91 Adjusted civil monetary penalties. (a) In general. (1) The Secretary will adjust the civil...
Alfonso, R; Belinchon, I
2001-01-01
Linear eruptions are sometimes associated with systemic diseases and they may also be induced by various drugs. Paradoxically, such acquired inflammatory skin diseases tend to follow the system of Blaschko's lines. We describe a case of unilateral linear drug eruption caused by ibuprofen, which later became bilateral and generalized.
Jiménez Blanco, José L; Bootello, Purificación; Ortiz Mellet, Carmen; Gutiérrez Gallego, Ricardo; García Fernández, José M
2004-01-07
A blockwise iterative synthetic strategy for the preparation of linear, dendritic and branched full-carbohydrate architectures has been developed by using sugar azido(carbamate) isothiocyanates as key templates; the presence of intersaccharide thiourea bridges provides anchoring points for hydrogen bond-directed molecular recognition of phosphate esters in water.
ERIC Educational Resources Information Center
Carlson, James E.
2014-01-01
Many aspects of the geometry of linear statistical models and least squares estimation are well known. Discussions of the geometry may be found in many sources. Some aspects of the geometry relating to the partitioning of variation that can be explained using a little-known theorem of Pappus and have not been discussed previously are the topic of…
NASA Astrophysics Data System (ADS)
Sidorin, Anatoly
2010-01-01
In linear accelerators the particles are accelerated by either electrostatic fields or oscillating Radio Frequency (RF) fields. Accordingly, linear accelerators are divided into three large groups: electrostatic, induction and RF accelerators. An overview of the different types of accelerators is given. Stability of longitudinal and transverse motion in RF linear accelerators is briefly discussed. The methods of beam focusing in linacs are described.
Radiation phantom with humanoid shape and adjustable thickness
Lehmann, Joerg; Levy, Joshua; Stern, Robin L.; Siantar, Christine Hartmann; Goldberg, Zelanna
2006-12-19
A radiation phantom comprising a body with a general humanoid shape and at least a portion having an adjustable thickness. In one embodiment, the portion with an adjustable thickness comprises at least one tissue-equivalent slice.
40 CFR 1066.220 - Linearity verification.
Code of Federal Regulations, 2012 CFR
2012-07-01
...-squares linear regression and the linearity criteria specified in Table 1 of this section. (b) Performance requirements. If a measurement system does not meet the applicable linearity criteria in Table 1 of this... system at the specified temperatures and pressures. This may include any specified adjustment or...
16 CFR 1.98 - Adjustment of civil monetary penalty amounts.
Code of Federal Regulations, 2010 CFR
2010-01-01
... OF PRACTICE GENERAL PROCEDURES Civil Penalty Adjustments Under the Federal Civil Penalties Inflation... monetary penalty amounts. This section makes inflation adjustments in the dollar amounts of civil...
Passler, Peter P; Hofer, Thomas S
2017-02-15
Stochastic dynamics is a widely employed strategy to achieve local thermostatization in molecular dynamics simulation studies; however, it suffers from an inherent violation of momentum conservation. Although this shortcoming has little impact on structural and short-time dynamic properties, it can be shown that dynamics in the long-time limit such as diffusion is strongly dependent on the respective thermostat setting. Application of the methodically similar dissipative particle dynamics (DPD) provides a simple, effective strategy to ensure the advantages of local, stochastic thermostatization while at the same time the linear momentum of the system remains conserved. In this work, the key parameters for employing the DPD thermostat in the framework of periodic boundary conditions are investigated, in particular the dependence of the system properties on the size of the DPD region as well as the treatment of forces near the cutoff. Structural and dynamical data for light and heavy water as well as a Lennard-Jones fluid have been compared to simulations executed via stochastic dynamics as well as via use of the widely employed Nosé-Hoover chain and Berendsen thermostats. It is demonstrated that a small size of the DPD region is sufficient to achieve local thermalization, while at the same time artifacts in the self-diffusion characteristic for stochastic dynamics are eliminated. © 2016 Wiley Periodicals, Inc.
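The momentum-conservation argument is structural: DPD applies its dissipative and random forces to particle pairs, equal and opposite along the pair axis, so the thermostat's net force on the system is identically zero. A toy sketch of one such force evaluation (random unit vectors stand in for actual pair geometry, and the parameter values are illustrative, not the paper's settings):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 8
v = rng.normal(0, 1, (n, 3))     # particle velocities
m = np.ones(n)                   # unit masses
gamma, sigma, dt = 4.5, 3.0, 0.01

# DPD-style thermostat: dissipative + random forces act on PAIRS and are
# equal and opposite, so total linear momentum is conserved exactly.
f = np.zeros((n, 3))
for i in range(n):
    for j in range(i + 1, n):
        e = rng.normal(0, 1, 3)
        e /= np.linalg.norm(e)                        # pair axis (toy geometry)
        v_rel = v[i] - v[j]
        f_d = -gamma * np.dot(v_rel, e) * e           # dissipative pair force
        f_r = sigma * rng.normal() * e / np.sqrt(dt)  # random pair force
        f[i] += f_d + f_r
        f[j] -= f_d + f_r                             # Newton's third law

p_before = (m[:, None] * v).sum(axis=0)
v_new = v + dt * f / m[:, None]
p_after = (m[:, None] * v_new).sum(axis=0)            # unchanged
```

A purely stochastic thermostat applies the random kick to each particle individually, with no compensating partner, which is exactly why its total momentum drifts while the pairwise DPD construction above cannot.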
Christofilos, N.C.; Polk, I.J.
1959-02-17
Improvements in linear particle accelerators are described. A drift tube system for a linear ion accelerator reduces gap capacity between adjacent drift tube ends. This is accomplished by reducing the ratio of the diameter of the drift tube to the diameter of the resonant cavity. Concentration of magnetic field intensity at the longitudinal midpoint of the external surface of each drift tube is reduced by increasing the external drift tube diameter at the longitudinal center region.
Article mounting and position adjustment stage
Cutburth, R.W.; Silva, L.L.
1988-05-10
An improved adjustment and mounting stage of the type used for the detection of laser beams is disclosed. A ring sensor holder has locating pins on a first side thereof which are positioned within a linear keyway in a surrounding housing for permitting reciprocal movement of the ring along the keyway. A rotatable ring gear is positioned within the housing on the other side of the ring from the linear keyway and includes an oval keyway which drives the ring along the linear keyway upon rotation of the gear. Motor-driven single-stage and dual (x, y) stage adjustment systems are disclosed which are of compact construction and include a large laser transmission hole. 6 figs.
School-related adjustment in children and adolescents with CHD.
Im, Yu-Mi; Lee, Sunhee; Yun, Tae-Jin; Choi, Jae Young
2017-03-20
Advancements in medical and surgical treatment have increased the life expectancy of patients with CHD. Many patients with CHD, however, struggle with the medical, psychosocial, and behavioural challenges as they transition from childhood to adulthood. Specifically, the environmental and lifestyle challenges in school are very important factors that affect children and adolescents with CHD. This study aimed to evaluate school-related adjustments depending on school level and disclosure of disease in children and adolescents with CHD. This was a descriptive and exploratory study with 205 children and adolescents, aged 7-18 years, who were recruited from two congenital heart clinics from 5 January to 27 February, 2015. Data were analysed using the Student's t-test, analysis of variance, and a univariate general linear model. School-related adjustment scores were significantly different according to school level and disclosure of disease (p<0.001) when age, religion, experience being bullied, and parents' educational levels were assigned as covariates. The school-related adjustment score of patients who did not disclose their disease dropped significantly in high school. This indicated that it is important for healthcare providers to plan developmentally appropriate educational transition programmes for middle-school students with CHD in order for students to prepare themselves before entering high school.
Resonance Parameter Adjustment Based on Integral Experiments
Sobes, Vladimir; Leal, Luiz; Arbanas, Goran; Forget, Benoit
2016-06-02
Our project seeks to allow coupling of differential and integral data evaluation in a continuous-energy framework and to use the generalized linear least-squares (GLLS) methodology in the TSURFER module of the SCALE code package to update the parameters of a resolved resonance region evaluation. Recognizing that the GLLS methodology in TSURFER is identical to the mathematical description of a Bayesian update in SAMMY, the SAMINT code was created to use the mathematical machinery of SAMMY to update resolved resonance parameters based on integral data. Traditionally, SAMMY used differential experimental data to adjust nuclear data parameters. Integral experimental data, such as in the International Criticality Safety Benchmark Experiments Project, remain a tool for validation of completed nuclear data evaluations. SAMINT extracts information from integral benchmarks to aid the nuclear data evaluation process. Later, integral data can be used to resolve any remaining ambiguity between differential data sets, highlight troublesome energy regions, determine key nuclear data parameters for integral benchmark calculations, and improve the nuclear data covariance matrix evaluation. Moreover, SAMINT is not intended to bias nuclear data toward specific integral experiments but should be used to supplement the evaluation of differential experimental data. Using GLLS ensures proper weight is given to the differential data.
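The equivalence noted above between GLLS and a Bayesian update can be written in one line: given prior parameters x0 with covariance C0 and a measurement y = A x + noise(V), the updated parameters follow the standard gain formula. A minimal numerical sketch on a toy two-parameter, one-measurement problem (the matrices are invented for illustration, not nuclear data):

```python
import numpy as np

def glls_update(x0, C0, A, y, V):
    """Generalized linear least-squares (Bayesian) update of parameters x0
    with prior covariance C0, given measurements y = A x + noise with
    covariance V.  A is the sensitivity matrix of the responses."""
    S = A @ C0 @ A.T + V             # innovation covariance
    K = C0 @ A.T @ np.linalg.inv(S)  # gain
    x1 = x0 + K @ (y - A @ x0)       # updated parameters
    C1 = C0 - K @ A @ C0             # updated (reduced) covariance
    return x1, C1

# Toy problem: two parameters, one integral response with known sensitivities
x0 = np.array([1.0, 2.0])
C0 = np.diag([0.04, 0.09])
A = np.array([[1.0, 1.0]])   # sensitivity of the integral response
V = np.array([[0.01]])       # measurement variance
y = np.array([3.5])          # measured integral response (prior predicts 3.0)

x1, C1 = glls_update(x0, C0, A, y, V)
```

The update moves the predicted response toward the measurement in proportion to the prior uncertainties, and shrinks the parameter covariance; this weighting by covariances is what the abstract means by "proper weight is given to the differential data."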
Remotely Adjustable Hydraulic Pump
NASA Technical Reports Server (NTRS)
Kouns, H. H.; Gardner, L. D.
1987-01-01
Outlet pressure adjusted to match varying loads. Electrohydraulic servo has positioned sleeve in leftmost position, adjusting outlet pressure to maximum value. Sleeve in equilibrium position, with control land covering control port. For lowest pressure setting, sleeve shifted toward right by increased pressure on sleeve shoulder from servovalve. Pump used in aircraft and robots, where hydraulic actuators repeatedly turned on and off, changing pump load frequently and over wide range.
NASA Technical Reports Server (NTRS)
Ashby, George C., Jr.; Robbins, W. Eugene; Horsley, Lewis A.
1991-01-01
Probe readily positionable in core of uniform flow in hypersonic wind tunnel. Formed of pair of mating cylindrical housings: transducer housing and pitot-tube housing. Pitot tube supported by adjustable wedge fairing attached to top of pitot-tube housing with semicircular foot. Probe adjusted both radially and circumferentially. In addition, pressure-sensing transducer cooled internally by water or other cooling fluid passing through annulus of cooling system.
Parametric Identification of Systems Via Linear Operators.
1978-09-01
A general parametric identification/approximation model is developed for the black box identification of linear time invariant systems in terms of... parametric identification techniques derive from the general model as special cases associated with a particular linear operator. Some possible
González-Díaz, Humberto; Arrasate, Sonia; Gómez-SanJuan, Asier; Sotomayor, Nuria; Lete, Esther; Besada-Porto, Lina; Ruso, Juan M
2013-01-01
In general, perturbation methods start with a known exact solution of a problem and add "small" variation terms in order to approach a solution for a related problem without a known exact solution. Perturbation theory has been widely used in almost all areas of science. Bohr's quantum model, Heisenberg's matrix mechanics, Feynman diagrams, and Poincaré's chaos model or "butterfly effect" in complex systems are examples of perturbation theories. On the other hand, the study of Quantitative Structure-Property Relationships (QSPR) in molecular complex systems is an ideal area for the application of perturbation theory. There are several problems with exact experimental solutions (new chemical reactions, physicochemical properties, drug activity and distribution, metabolic networks, etc.) in public databases like CHEMBL. However, in all these cases, we have an even larger list of related problems without known solutions. We need to know the change in all these properties after a perturbation of initial boundary conditions. That is, when we test large sets of similar, but different, compounds and/or chemical reactions under slightly different conditions (temperature, time, solvents, enzymes, assays, protein targets, tissues, partition systems, organisms, etc.). However, to the best of our knowledge, there is no QSPR general-purpose perturbation theory to solve this problem. In this work, we first review general aspects and applications of both perturbation theory and QSPR models. Secondly, we formulate a general-purpose perturbation theory for multiple-boundary QSPR problems. Last, we develop three new QSPR-Perturbation theory models. The first model correctly classifies >100,000 pairs of intra-molecular carbolithiations with 75-95% Accuracy (Ac), Sensitivity (Sn), and Specificity (Sp). The model predicts probabilities of variations in the yield and enantiomeric excess of reactions due to at least one perturbation in boundary conditions (solvent, temperature
Weighted triangulation adjustment
Anderson, Walter L.
1969-01-01
The variation of coordinates method is employed to perform a weighted least squares adjustment of horizontal survey networks. Geodetic coordinates are required for each fixed and adjustable station. A preliminary inverse geodetic position computation is made for each observed line. Weights associated with each observation equation for direction, azimuth, and distance are applied in the formation of the normal equations in the least squares adjustment. The number of normal equations that may be solved is twice the number of new stations and less than 150. When the normal equations are solved, shifts are produced at adjustable stations. Previously computed correction factors are applied to the shifts and a most probable geodetic position is found for each adjustable station. Final azimuths and distances are computed. These may be written onto magnetic tape for subsequent computation of state plane or grid coordinates. Input consists of punch cards containing project identification, program options, and position and observation information. Results listed include preliminary and final positions, residuals, observation equations, solution of the normal equations showing magnitudes of shifts, and a plot of each adjusted and fixed station. During processing, data sets containing irrecoverable errors are rejected and the type of error is listed. The computer resumes processing of additional data sets. Other conditions cause warning errors to be issued, and processing continues with the current data set.
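The computational core of such an adjustment is the weighted normal-equation solve: observation equations A x = b with weights W yield shifts x from (A^T W A) x = A^T W b. A minimal sketch on a toy network with two unknown coordinate shifts and three weighted observations (all values invented for illustration):

```python
import numpy as np

def weighted_least_squares(A, b, w):
    """Solve the weighted normal equations (A^T W A) x = A^T W b,
    the core step of a variation-of-coordinates network adjustment.
    A: observation-equation coefficients, b: observed minus computed
    values, w: observation weights."""
    W = np.diag(w)
    N = A.T @ W @ A              # normal-equation matrix
    t = A.T @ W @ b
    x = np.linalg.solve(N, t)    # shifts at the adjustable stations
    residuals = b - A @ x
    return x, residuals

# Toy adjustment: three observations of two unknown coordinate shifts
A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])
b = np.array([1.02, 1.98, 3.05])
w = np.array([1.0, 1.0, 4.0])    # the combined observation is trusted more

x, res = weighted_least_squares(A, b, w)
```

The weights steer the solution: because the third observation carries four times the weight, the solved shifts honor it more closely than the two direct observations, exactly as direction, azimuth, and distance observations of differing precision are balanced in the full program.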
34 CFR 668.191 - New data adjustments.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 34 Education 3 2010-07-01 2010-07-01 false New data adjustments. 668.191 Section 668.191 Education..., DEPARTMENT OF EDUCATION STUDENT ASSISTANCE GENERAL PROVISIONS Two Year Cohort Default Rates § 668.191 New data adjustments. (a) Eligibility. You may request a new data adjustment for your most recent cohort...
34 CFR 668.210 - New data adjustments.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 34 Education 3 2010-07-01 2010-07-01 false New data adjustments. 668.210 Section 668.210 Education..., DEPARTMENT OF EDUCATION STUDENT ASSISTANCE GENERAL PROVISIONS Cohort Default Rates § 668.210 New data adjustments. (a) Eligibility. You may request a new data adjustment for your most recent cohort of...
12 CFR 1263.22 - Adjustments in stock holdings.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 12 Banks and Banking 7 2011-01-01 2011-01-01 false Adjustments in stock holdings. 1263.22 Section... Stock Requirements § 1263.22 Adjustments in stock holdings. (a) Adjustment in general. A Bank may from time to time increase or decrease the amount of stock any member is required to hold. (b)(1)...
12 CFR 1263.22 - Adjustments in stock holdings.
Code of Federal Regulations, 2012 CFR
2012-01-01
... 12 Banks and Banking 9 2012-01-01 2012-01-01 false Adjustments in stock holdings. 1263.22 Section... Stock Requirements § 1263.22 Adjustments in stock holdings. (a) Adjustment in general. A Bank may from time to time increase or decrease the amount of stock any member is required to hold. (b)(1)...
12 CFR 925.22 - Adjustments in stock holdings.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 12 Banks and Banking 7 2010-01-01 2010-01-01 false Adjustments in stock holdings. 925.22 Section... ASSOCIATES MEMBERS OF THE BANKS Stock Requirements § 925.22 Adjustments in stock holdings. (a) Adjustment in general. A Bank may from time to time increase or decrease the amount of stock any member is required...
12 CFR 1263.22 - Adjustments in stock holdings.
Code of Federal Regulations, 2014 CFR
2014-01-01
... 12 Banks and Banking 10 2014-01-01 2014-01-01 false Adjustments in stock holdings. 1263.22 Section... Stock Requirements § 1263.22 Adjustments in stock holdings. (a) Adjustment in general. A Bank may from time to time increase or decrease the amount of stock any member is required to hold. (b)(1)...
12 CFR 1263.22 - Adjustments in stock holdings.
Code of Federal Regulations, 2013 CFR
2013-01-01
... 12 Banks and Banking 9 2013-01-01 2013-01-01 false Adjustments in stock holdings. 1263.22 Section... Stock Requirements § 1263.22 Adjustments in stock holdings. (a) Adjustment in general. A Bank may from time to time increase or decrease the amount of stock any member is required to hold. (b)(1)...
Burton, P; Gurrin, L; Sly, P
1998-06-15
Much of the research in epidemiology and clinical science is based upon longitudinal designs which involve repeated measurements of a variable of interest in each of a series of individuals. Such designs can be very powerful, both statistically and scientifically, because they enable one to study changes within individual subjects over time or under varied conditions. However, this power arises because the repeated measurements tend to be correlated with one another, and this must be taken into proper account at the time of analysis or misleading conclusions may result. Recent advances in statistical theory and in software development mean that studies based upon such designs can now be analysed more easily, in a valid yet flexible manner, using a variety of approaches which include the use of generalized estimating equations, and mixed models which incorporate random effects. This paper provides a particularly simple illustration of the use of these two approaches, taking as a practical example the analysis of a study which examined the response of portable peak expiratory flow meters to changes in true peak expiratory flow in 12 children with asthma. The paper takes the reader through the relevant practicalities of model fitting, interpretation and criticism and demonstrates that, in a simple case such as this, analyses based upon these model-based approaches produce reassuringly similar inferences to standard analyses based upon more conventional methods.
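As a minimal illustration of why the correlation among repeated measurements matters, the sketch below simulates random-intercept data (each subject carries its own baseline, so its repeated measurements are correlated) and estimates the intraclass correlation with a one-way ANOVA estimator. This is not the paper's peak-flow analysis; the sample sizes and variance components are invented.

```python
# Hypothetical repeated-measures simulation: subjects share a random
# baseline b_i, so within-subject observations are correlated with
# theoretical ICC = sigma_b^2 / (sigma_b^2 + sigma_e^2) = 0.8 here.
import random
import statistics

random.seed(1)
sigma_b, sigma_e = 2.0, 1.0    # between-subject SD, within-subject SD
subjects, reps = 300, 5

data = []
for _ in range(subjects):
    b = random.gauss(0.0, sigma_b)           # subject-specific intercept
    data.append([10.0 + b + random.gauss(0.0, sigma_e) for _ in range(reps)])

# One-way ANOVA estimate of the intraclass correlation (ICC).
grand = statistics.mean(v for row in data for v in row)
msb = reps * sum((statistics.mean(row) - grand) ** 2 for row in data) / (subjects - 1)
msw = sum((v - statistics.mean(row)) ** 2 for row in data for v in row) / (subjects * (reps - 1))
icc = (msb - msw) / (msb + (reps - 1) * msw)
```

An ICC this large means naive analyses that pool all observations as independent will badly understate standard errors, which is the motivation for the GEE and mixed-model approaches the paper illustrates.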
Recirculating valve lash adjuster
Stoody, R.R.
1987-02-24
This patent describes an internal combustion engine with a valve assembly of the type including overhead valves supported by a cylinder head for opening and closing movements in a substantially vertical direction and a rotatable overhead camshaft thereabove lubricated by engine oil pumped by an engine oil pump. A hydraulic lash adjuster with an internal reservoir therein is solely supplied with run-off lubricating oil from the camshaft which oil is pumped into the internal reservoir of the lash adjuster by self-pumping operation of the lash adjuster produced by lateral forces thereon by the rotative operation of the camshaft comprising: a housing of the lash adjuster including an axially extending bore therethrough with a lower wall means of the housing closing the lower end thereof; a first plunger member being closely slidably received in the bore of the housing and having wall means defining a fluid filled power chamber with the lower wall means of the housing; and a second plunger member of the lash adjuster having a portion being loosely slidably received and extending into the bore of the housing for reciprocation therein. Another portion extends upwardly from the housing to operatively receive alternating side-to-side force inputs from operation of the camshaft.
NASA Astrophysics Data System (ADS)
Wang, Shen; Yao, Xue Feng; Su, Yun Quan; Liu, Wei
2017-02-01
In this paper, the basic principle and application of the linear gray scale adjustment method are investigated in high temperature digital image correlation (DIC) technology. First, a simple linear gray scale adjustment method is proposed, which adjusts the gray scale values of saturated pixels and diminishes the correlation error they cause. Then, both the simulated high temperature images and the DIC correlation results before and after the gray scale adjustment are provided and analyzed to verify its effectiveness; the displacement error decreased from 0.1 pixels to 0.04 pixels after the linear gray scale adjustment of the high temperature images. Finally, the linear gray scale adjustment method is used to extract displacement with high accuracy in a high temperature experiment on a SiC specimen, where the displacement error decreased from 0.5 pixels to 0.1 pixels after the linear gray scale adjustment.
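The paper's exact mapping is not reproduced here, but one plausible linear remap can be sketched: pixels at or near saturation are rescaled so the brightest value drops back below the sensor ceiling, restoring the intensity gradient that correlation matching needs. The target level and data below are illustrative assumptions.

```python
# Hypothetical linear gray-scale adjustment for an 8-bit image: the
# whole image is rescaled so its maximum maps to target_max (< 255),
# pulling saturated pixels back into the usable range. The published
# method's exact mapping may differ.

def linear_gray_adjust(image, target_max=230):
    """Linearly rescale an 8-bit image so max(image) -> target_max."""
    peak = max(max(row) for row in image)
    scale = target_max / peak
    return [[round(px * scale) for px in row] for row in image]

img = [[120, 255, 200],
       [255, 180,  90]]       # two pixels are saturated
adjusted = linear_gray_adjust(img)
```

Because the map is linear, relative intensity ordering is preserved, which is what keeps the subset correlation meaningful after adjustment.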
Coverage-adjusted entropy estimation.
Vu, Vincent Q; Yu, Bin; Kass, Robert E
2007-09-20
Data on 'neural coding' have frequently been analyzed using information-theoretic measures. These formulations involve the fundamental and generally difficult statistical problem of estimating entropy. We review briefly several methods that have been advanced to estimate entropy and highlight a method, the coverage-adjusted entropy estimator (CAE), due to Chao and Shen, that appeared recently in the environmental statistics literature. This method begins with the elementary Horvitz-Thompson estimator, developed for sampling from a finite population, and adjusts for the potential new species that have not yet been observed in the sample; these become the new patterns or 'words' in a spike train that have not yet been observed. The adjustment is due to I. J. Good, and is called the Good-Turing coverage estimate. We provide a new empirical regularization derivation of the coverage-adjusted probability estimator, which shrinks the maximum likelihood estimate. We prove that the CAE is consistent and first-order optimal, with rate O_P(1/log n), in the class of distributions with finite entropy variance and that, within the class of distributions with finite qth moment of the log-likelihood, the Good-Turing coverage estimate and the total probability of unobserved words converge at rate O_P(1/(log n)^q). We then provide a simulation study of the estimator with standard distributions and examples from neuronal data, where observations are dependent. The results show that, with a minor modification, the CAE performs much better than the MLE and is better than the best upper bound estimator, due to Paninski, when the number of possible words m is unknown or infinite.
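The estimator summarized above can be written in a few lines, following the construction in the abstract: Good-Turing coverage from the singleton count, shrunken probabilities, and a Horvitz-Thompson correction for unobserved words. The toy sample is invented.

```python
# Sketch of the coverage-adjusted entropy estimator (CAE) of Chao and
# Shen as described above. Assumes the sample contains at least one
# repeated word (otherwise the Good-Turing coverage is zero).

import math
from collections import Counter

def coverage_adjusted_entropy(sample):
    n = len(sample)
    counts = Counter(sample)
    f1 = sum(1 for c in counts.values() if c == 1)   # singletons
    C = 1.0 - f1 / n                                 # Good-Turing coverage
    H = 0.0
    for c in counts.values():
        p = C * c / n                                # coverage-adjusted prob.
        H -= p * math.log(p) / (1.0 - (1.0 - p) ** n)  # Horvitz-Thompson term
    return H

sample = list("aaabbbccd") * 3   # toy spike-train 'words'
H = coverage_adjusted_entropy(sample)
```

With no singletons the coverage is 1 and the estimator reduces to the Horvitz-Thompson-corrected plug-in; the shrinkage only bites when rare words suggest unseen ones.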
Eugster, Patrick; Sennhauser, Michèle; Zweifel, Peter
2010-07-01
When premiums are community-rated, risk adjustment (RA) serves to mitigate competitive insurers' incentive to select favorable risks. However, unless fully prospective, it also undermines their incentives for efficiency. By capping its volume, one may try to counteract this tendency, exposing insurers to some financial risk. This in turn runs counter to the quest to refine the RA formula, which would increase RA volume. Specifically, the adjuster "Hospitalization or living in a nursing home during the previous year" will be added in Switzerland starting in 2012. This paper investigates how to minimize the opportunity cost of capping RA in terms of increased incentives for risk selection.
Psychological Adjustment and Homosexuality.
ERIC Educational Resources Information Center
Gonsiorek, John C.
In this paper, the diverse literature bearing on the topic of homosexuality and psychological adjustment is critically reviewed and synthesized. The first chapter discusses the most crucial methodological issue in this area, the problem of sampling. The kinds of samples used to date are critically examined, and some suggestions for improved…
NASA Technical Reports Server (NTRS)
1986-01-01
Corning Glass Works' Serengeti Driver sunglasses are unique in that their lenses self-adjust and filter light while suppressing glare. They eliminate more than 99% of the ultraviolet rays in sunlight. The frames are based on the NASA Anthropometric Source Book.
Hunter, Steven L.
2002-01-01
An inclinometer utilizing synchronous demodulation for high resolution and electronic offset adjustment provides a wide dynamic range without any moving components. A device encompassing a tiltmeter and accompanying electronic circuitry provides quasi-leveled tilt sensors that detect highly resolved tilt change without signal saturation.
An adjustable solar concentrator
NASA Technical Reports Server (NTRS)
Collins, E. R., Jr.
1980-01-01
Fixed cylindrical converging lenses followed by movable parabolic mirror focus solar energy on conventional linear collector. System is low cost and accommodates daily and seasonal movements of the sun. Mirrors may be moved using simple, low-power electrical motors.
Linear Programming Problems for Generalized Uncertainty
ERIC Educational Resources Information Center
Thipwiwatpotjana, Phantipa
2010-01-01
Uncertainty occurs when there is more than one realization that can represent a piece of information. This dissertation concerns only discrete realizations of an uncertainty. Different interpretations of an uncertainty and their relationships are addressed when the uncertainty is not a probability of each realization. A well known model that can handle…
Linear derivative Cartan formulation of general relativity
NASA Astrophysics Data System (ADS)
Kummer, W.; Schütz, H.
2005-07-01
Besides diffeomorphism invariance, manifest SO(3,1) local Lorentz invariance is also implemented in a formulation of Einstein gravity (with or without cosmological term) in terms of initially completely independent vielbein and spin connection variables and auxiliary two-form fields. In the systematic study of all possible embeddings of Einstein gravity into that formulation with auxiliary fields, the introduction of a “bi-complex” algebra possesses crucial technical advantages. Certain components of the new two-form fields directly provide canonical momenta for spatial components of all Cartan variables, whereas the remaining ones act as Lagrange multipliers for a large number of constraints, some of which have already been proposed in different, less radical approaches. The time-like components of the Cartan variables play that role for the Lorentz constraints and for others associated with the vierbein fields. Although some ternary constraints also appear, we show that relations exist between these constraints, and how the Lagrange multipliers are to be determined to take care of second-class ones. We believe that our formulation of standard Einstein gravity as a gauge theory with consistent local Poincaré algebra is superior to earlier similar attempts.
Generalized Ultrametric Semilattices of Linear Signals
2014-01-23
Colgate, S.A.
1958-05-27
An improvement is presented in linear accelerators for charged particles with respect to the stable focusing of the particle beam. The improvement consists of providing a radial electric field transverse to the accelerating electric fields and angularly introducing the beam of particles into the field. The result of the foregoing is to achieve a beam which spirals about the axis of the acceleration path. The combination of the electric fields and angular motion of the particles cooperates to provide a stable and focused particle beam.
Romanticism and Marital Adjustment
ERIC Educational Resources Information Center
Spanier, Graham B.
1972-01-01
It is concluded that romanticism does not appear to be harmful to marriage relationships in particular or the family system in general, and is therefore not generally dysfunctional in our society. (Author)
NASA Technical Reports Server (NTRS)
Malin, Jane T.; Schrenkenghost, Debra K.
2001-01-01
The Adjustable Autonomy Testbed (AAT) is a simulation-based testbed located in the Intelligent Systems Laboratory in the Automation, Robotics and Simulation Division at NASA Johnson Space Center. The purpose of the testbed is to support evaluation and validation of prototypes of adjustable autonomous agent software for control and fault management for complex systems. The AAT project has developed prototype adjustable autonomous agent software and human interfaces for cooperative fault management. This software builds on current autonomous agent technology by altering the architecture, components and interfaces for effective teamwork between autonomous systems and human experts. Autonomous agents include a planner, flexible executive, low level control and deductive model-based fault isolation. Adjustable autonomy is intended to increase the flexibility and effectiveness of fault management with an autonomous system. The test domain for this work is control of advanced life support systems for habitats for planetary exploration. The CONFIG hybrid discrete event simulation environment provides flexible and dynamically reconfigurable models of the behavior of components and fluids in the life support systems. Both discrete event and continuous (discrete time) simulation are supported, and flows and pressures are computed globally. This provides fast dynamic simulations of interacting hardware systems in closed loops that can be reconfigured during operations scenarios, producing complex cascading effects of operations and failures. Current object-oriented model libraries support modeling of fluid systems, and models have been developed of physico-chemical and biological subsystems for processing advanced life support gases. In FY01, water recovery system models will be developed.
Cutburth, Ronald W.; Silva, Leonard L.
1988-01-01
An improved mounting stage of the type used for the detection of laser beams is disclosed. A stage center block is mounted on each of two opposite sides by a pair of spaced ball bearing tracks which provide stability as well as simplicity. The use of the spaced ball bearing pairs in conjunction with an adjustment screw which also provides support eliminates extraneous stabilization components and permits maximization of the area of the center block laser transmission hole.
NASA Technical Reports Server (NTRS)
2006-01-01
Context image for PIA03667: Linear Clouds
These clouds are located near the edge of the south polar region. The cloud tops are the puffy white features in the bottom half of the image.
Image information: VIS instrument. Latitude -80.1N, Longitude 52.1E. 17 meter/pixel resolution.
Note: this THEMIS visual image has not been radiometrically nor geometrically calibrated for this preliminary release. An empirical correction has been performed to remove instrumental effects. A linear shift has been applied in the cross-track and down-track direction to approximate spacecraft and planetary motion. Fully calibrated and geometrically projected images will be released through the Planetary Data System in accordance with Project policies at a later time.
NASA's Jet Propulsion Laboratory manages the 2001 Mars Odyssey mission for NASA's Office of Space Science, Washington, D.C. The Thermal Emission Imaging System (THEMIS) was developed by Arizona State University, Tempe, in collaboration with Raytheon Santa Barbara Remote Sensing. The THEMIS investigation is led by Dr. Philip Christensen at Arizona State University. Lockheed Martin Astronautics, Denver, is the prime contractor for the Odyssey project, and developed and built the orbiter. Mission operations are conducted jointly from Lockheed Martin and from JPL, a division of the California Institute of Technology in Pasadena.
45 CFR 158.230 - Credibility adjustment.
Code of Federal Regulations, 2012 CFR
2012-10-01
... Public Welfare DEPARTMENT OF HEALTH AND HUMAN SERVICES REQUIREMENTS RELATING TO HEALTH CARE ACCESS ISSUER....230 Credibility adjustment. (a) General rule. An issuer may add to the MLR calculated under § 158.221... based on partially credible experience as defined in paragraph (c)(2) of this section. An issuer may...
45 CFR 158.230 - Credibility adjustment.
Code of Federal Regulations, 2011 CFR
2011-10-01
... Public Welfare DEPARTMENT OF HEALTH AND HUMAN SERVICES REQUIREMENTS RELATING TO HEALTH CARE ACCESS ISSUER....230 Credibility adjustment. (a) General rule. An issuer may add to the MLR calculated under § 158.221... based on partially credible experience as defined in paragraph (c)(2) of this section. An issuer may...
Parenting Practices, Child Adjustment, and Family Diversity.
ERIC Educational Resources Information Center
Amato, Paul R.; Fowler, Frieda
2002-01-01
Uses data from the National Survey of Families and Households to test the generality of the links between parenting practices and child outcomes. Parents' reports of support, monitoring, and harsh punishment were associated in the expected direction with parents' reports of children's adjustment, school grades, and behavior problems, and with…
14 CFR Appendix - Example of SIFL Adjustment
Code of Federal Regulations, 2010 CFR
2010-01-01
... 14 Aeronautics and Space 4 2010-01-01 2010-01-01 false Example of SIFL Adjustment Aeronautics and Space OFFICE OF THE SECRETARY, DEPARTMENT OF TRANSPORTATION (AVIATION PROCEEDINGS) POLICY STATEMENTS STATEMENTS OF GENERAL POLICY Policies Relating to Rates and Tariffs Treatment of deferred Federal income taxes for rate purposes. Pt. 399, Subpt....
Drift tube suspension for high intensity linear accelerators
Liska, D.J.; Schamaun, R.G.; Clark, D.C.; Potter, R.C.; Frank, J.A.
1980-03-11
The disclosure relates to a drift tube suspension for high intensity linear accelerators. The system comprises a series of box-section girders independently adjustably mounted on a linear accelerator. A plurality of drift tube holding stems are individually adjustably mounted on each girder.
Drift tube suspension for high intensity linear accelerators
Liska, Donald J.; Schamaun, Roger G.; Clark, Donald C.; Potter, R. Christopher; Frank, Joseph A.
1982-01-01
The disclosure relates to a drift tube suspension for high intensity linear accelerators. The system comprises a series of box-section girders independently adjustably mounted on a linear accelerator. A plurality of drift tube holding stems are individually adjustably mounted on each girder.
Role of Osmotic Adjustment in Plant Productivity
Gebre, G.M.
2001-01-11
clones (P. trichocarpa Torr. & Gray x P. deltoides Bartr., TD, and P. deltoides x P. nigra L., DN), we determined that the TD clone, which was more productive during the first three years, had slightly lower osmotic potential than the DN clone and also showed a small osmotic adjustment compared with the DN hybrid. However, the productivity differences were negligible by the fifth growing season. In a separate study with several P. deltoides clones, we did not observe a consistent relationship between growth and osmotic adjustment. Some clones that had low osmotic potential and osmotic adjustment were as productive as another clone that had high osmotic potential. The least productive clone also had low osmotic potential and osmotic adjustment. The absence of a correlation may have been partly due to the fact that all clones were capable of osmotic adjustment and had low osmotic potential. In a study involving an inbred three-generation TD F2 pedigree (family 331), we did not observe a correlation between relative growth rate and osmotic potential or osmotic adjustment. However, when clones that exhibited osmotic adjustment were analyzed, there was a negative correlation between growth and osmotic potential, indicating that clones with lower osmotic potential were more productive. This was observed only in clones that were exposed to drought. Although the absolute osmotic potential varied by growing environment, the relative ranking among progenies remained generally the same, suggesting that osmotic potential is genetically controlled. We have identified a quantitative trait locus for osmotic potential in another three-generation TD F2 pedigree (family 822). Unlike the many studies in agricultural crops, most of the forest tree studies were not based on plants exposed to severe stress to determine the role of osmotic adjustment.
Future studies should consider using clones that are known to be productive but have contrasting osmotic adjustment capability as well as
New Parents' Psychological Adjustment and Trajectories of Early Parental Involvement.
Jia, Rongfang; Kotila, Letitia E; Schoppe-Sullivan, Sarah J; Kamp Dush, Claire M
2016-02-01
Trajectories of parental involvement time (engagement and child care) across 3, 6, and 9 months postpartum and associations with parents' own and their partners' psychological adjustment (dysphoria, anxiety, and empathic personal distress) were examined using a sample of dual-earner couples experiencing first-time parenthood (N = 182 couples). Using time diary measures that captured intensive parenting moments, hierarchical linear modeling analyses revealed that patterns of associations between psychological adjustment and parental involvement time depended on the parenting domain, aspect of psychological adjustment, and parent gender. Psychological adjustment difficulties tended to bias the 2-parent system toward a gendered pattern of "mother step in" and "father step out," as father involvement tended to decrease, and mother involvement either remained unchanged or increased, in response to their own and their partners' psychological adjustment difficulties. In contrast, few significant effects were found in models using parental involvement to predict psychological adjustment.
New Parents’ Psychological Adjustment and Trajectories of Early Parental Involvement
Jia, Rongfang; Kotila, Letitia E.; Schoppe-Sullivan, Sarah J.; Kamp Dush, Claire M.
2016-01-01
Trajectories of parental involvement time (engagement and child care) across 3, 6, and 9 months postpartum and associations with parents’ own and their partners’ psychological adjustment (dysphoria, anxiety, and empathic personal distress) were examined using a sample of dual-earner couples experiencing first-time parenthood (N = 182 couples). Using time diary measures that captured intensive parenting moments, hierarchical linear modeling analyses revealed that patterns of associations between psychological adjustment and parental involvement time depended on the parenting domain, aspect of psychological adjustment, and parent gender. Psychological adjustment difficulties tended to bias the 2-parent system toward a gendered pattern of “mother step in” and “father step out,” as father involvement tended to decrease, and mother involvement either remained unchanged or increased, in response to their own and their partners’ psychological adjustment difficulties. In contrast, few significant effects were found in models using parental involvement to predict psychological adjustment. PMID:27397935
NASA Technical Reports Server (NTRS)
Farley, Gary L.
1994-01-01
Local characteristics of fabrics varied to suit special applications. Adjustable reed machinery proposed for use in weaving fabrics in various net shapes, widths, yarn spacings, and yarn angles. Locations of edges of fabric and configuration of warp and filling yarns varied along fabric to obtain specified properties. In machinery, reed wires mounted in groups on sliders, mounted on lengthwise rails in reed frame. Mechanisms incorporated to move sliders lengthwise, parallel to warp yarns, by sliding them along rails; move sliders crosswise by translating reed frame rails perpendicular to warp yarns; and crosswise by spreading reed rails within group. Profile of reed wires in group on each slider changed.
Continuously adjustable Pulfrich spectacles
NASA Astrophysics Data System (ADS)
Jacobs, Ken; Karpf, Ron
2011-03-01
A number of Pulfrich 3-D movies and TV shows have been produced, but the standard implementation has inherent drawbacks. The movie and TV industries have correctly concluded that the standard Pulfrich 3-D implementation is not a useful 3-D technique. Continuously Adjustable Pulfrich Spectacles (CAPS) is a new implementation of the Pulfrich effect that allows any scene containing movement in a standard 2-D movie (which is most scenes) to be optionally viewed in 3-D using inexpensive viewing specs. Recent scientific results in the fields of human perception, optoelectronics, video compression and video format conversion are translated into a new implementation of Pulfrich 3-D. CAPS uses these results to continuously adjust to the movie so that the viewing spectacles always conform to the optical density that optimizes the Pulfrich stereoscopic illusion. CAPS instantly provides 3-D immersion for any moving scene in any 2-D movie. Without the glasses, the movie will appear as a normal 2-D image. CAPS works on any viewing device and with any distribution medium. CAPS is appropriate for viewing Internet-streamed movies in 3-D.
5 CFR 9901.333 - Setting and adjusting local market supplements.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 5 Administrative Personnel 3 2010-01-01 2010-01-01 false Setting and adjusting local market... § 9901.333 Setting and adjusting local market supplements. (a) Standard local market supplements are set and adjusted consistent with the setting and adjusting of corresponding General Schedule...
5 CFR 9901.333 - Setting and adjusting local market supplements.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 5 Administrative Personnel 3 2011-01-01 2011-01-01 false Setting and adjusting local market... § 9901.333 Setting and adjusting local market supplements. (a) Standard local market supplements are set and adjusted consistent with the setting and adjusting of corresponding General Schedule...
38 CFR 10.2 - Evidence required of loss, destruction or mutilation of adjusted service certificate.
Code of Federal Regulations, 2010 CFR
2010-07-01
... an adjusted service certificate issued pursuant to the provisions of section 501 of the World War..., destruction or mutilation of adjusted service certificate. 10.2 Section 10.2 Pensions, Bonuses, and Veterans' Relief DEPARTMENT OF VETERANS AFFAIRS ADJUSTED COMPENSATION Adjusted Compensation; General §...
Measurements of Glacial Isostatic Adjustment in Greenland
NASA Astrophysics Data System (ADS)
Khan, Shfaqat Abbas; Bamber, Jonathan; Bevis, Michael; Wahr, John; van dam, Tonie; Wouters, Bert; Willis, Michael
2015-04-01
The Greenland GPS network (GNET) was constructed to provide a new means to assess viscoelastic and elastic adjustments driven by past and present-day changes in ice mass. Here we assess existing glacial isostatic adjustment (GIA) models by analysing 1995-present data from 61 continuous GPS receivers located along the edge of the Greenland ice sheet. Since GPS receivers measure both the GIA and elastic signals, we isolate the GIA signal by removing the elastic adjustment of the crust due to present-day mass loss, using high-resolution ice surface elevation change grids derived from satellite and airborne altimetry measurements (ERS1/2, ICESat, ATM, ENVISAT, and CryoSat-2). In general, our observed GIA rates contradict the models, suggesting that GIA models, and hence their ice load history for Greenland, are not well constrained.
Kinematic synthesis of adjustable robotic mechanisms
NASA Astrophysics Data System (ADS)
Chuenchom, Thatchai
1993-01-01
Conventional hard automation, such as a linkage-based or a cam-driven system, provides high speed capability and repeatability but not the flexibility required in many industrial applications. The conventional mechanisms, which are typically single-degree-of-freedom systems, are being increasingly replaced by multi-degree-of-freedom multi-actuators driven by logic controllers. Although this new trend in sophistication provides greatly enhanced flexibility, there are many instances where the flexibility needs are exaggerated and the associated complexity is unnecessary. Traditional mechanism-based hard automation, on the other hand, can neither fulfill multi-task requirements nor be cost-effective, mainly due to a lack of methods and tools to design-in flexibility. This dissertation attempts to bridge this technological gap by developing Adjustable Robotic Mechanisms (ARM's), or 'programmable mechanisms', as a middle ground between high speed hard automation and expensive serial jointed-arm robots. This research introduces the concept of adjustable robotic mechanisms towards cost-effective manufacturing automation. A generalized analytical synthesis technique has been developed to support the computational design of ARM's and lays the theoretical foundation for synthesis of adjustable mechanisms. The synthesis method developed in this dissertation, called generalized adjustable dyad and triad synthesis, advances the well-known Burmester theory in kinematics to a new level. While this method provides planar solutions, a novel patented scheme is utilized for converting prescribed three-dimensional motion specifications into sets of planar projections. This provides an analytical and a computational tool for designing adjustable mechanisms that satisfy multiple sets of three-dimensional motion specifications. Several design issues were addressed, including adjustable parameter identification, branching defect, and mechanical errors. An efficient mathematical scheme for
Labrada-Martagón, Vanessa; Méndez-Rodríguez, Lia C; Mangel, Marc; Zenteno-Savín, Tania
2013-09-01
Generalized linear models were fitted to evaluate the relationship between 17β-estradiol (E2), testosterone (T) and thyroxine (T4) levels in immature East Pacific green sea turtles (Chelonia mydas) and their body condition, size, mass, blood biochemistry parameters, handling time, year, season and site of capture. According to external (tail size) and morphological (<77.3 cm straight carapace length) characteristics, 95% of the individuals were juveniles. Hormone levels, assessed on sea turtles subjected to a capture stress protocol, were <34.7 nmol T L(-1), <532.3 pmol E2 L(-1) and <43.8 nmol T4 L(-1). The statistical model explained biologically plausible metabolic relationships between hormone concentrations and blood biochemistry parameters (e.g. glucose, cholesterol) and the potential effect of environmental variables (season and study site). The variables handling time and year did not contribute significantly to explaining hormone levels. Differences in sex steroids between seasons and study sites found by the models coincided with specific nutritional, physiological and body condition differences related to the specific habitat conditions. The models correctly predicted the median levels of the measured hormones in green sea turtles, which confirms the fitted models' utility. It is suggested that quantitative predictions could be possible when the model is tested with additional data.
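The abstract does not specify the GLM family or link used for the hormone data, so as a generic illustration of how any generalized linear model is fit, the sketch below runs iteratively reweighted least squares (IRLS) for a Poisson GLM with log link on invented data; the family, data, and two-parameter form are assumptions, not the paper's model.

```python
# Hypothetical IRLS fit of a Poisson GLM, log(mu) = b0 + b1*x.
# At each step: linearize via the working response z, weight by mu,
# and solve the 2x2 weighted normal equations.
import math

def poisson_irls(x, y, iters=50):
    b0, b1 = math.log(sum(y) / len(y)), 0.0    # start at the mean model
    for _ in range(iters):
        eta = [b0 + b1 * xi for xi in x]
        mu = [math.exp(e) for e in eta]
        # working response and weights for the Poisson/log-link case
        z = [e + (yi - m) / m for e, yi, m in zip(eta, y, mu)]
        w = mu
        S = sum(w)
        Sx = sum(wi * xi for wi, xi in zip(w, x))
        Sz = sum(wi * zi for wi, zi in zip(w, z))
        Sxx = sum(wi * xi * xi for wi, xi in zip(w, x))
        Sxz = sum(wi * xi * zi for wi, xi, zi in zip(w, x, z))
        det = S * Sxx - Sx * Sx
        b0 = (Sz * Sxx - Sx * Sxz) / det
        b1 = (S * Sxz - Sx * Sz) / det
    return b0, b1

x = list(range(10))
y = [2, 2, 3, 4, 5, 7, 10, 13, 18, 25]   # roughly exp(0.5 + 0.3*x)
b0, b1 = poisson_irls(x, y)
```

Swapping the variance function and link in the working-response step gives the other GLM families (Gaussian, Gamma, binomial) by the same loop.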
5 CFR 9901.322 - Setting and adjusting rate ranges.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 5 Administrative Personnel 3 2011-01-01 2011-01-01 false Setting and adjusting rate ranges. 9901... NATIONAL SECURITY PERSONNEL SYSTEM (NSPS) Pay and Pay Administration Rate Ranges and General Salary Increases § 9901.322 Setting and adjusting rate ranges. (a) Subject to § 9901.105, the Secretary may set...
Gender Identity and Adjustment in Black, Hispanic, and White Preadolescents
ERIC Educational Resources Information Center
Corby, Brooke C.; Hodges, Ernest V. E.; Perry, David G.
2007-01-01
The generality of S. K. Egan and D. G. Perry's (2001) model of gender identity and adjustment was evaluated by examining associations between gender identity (felt gender typicality, felt gender contentedness, and felt pressure for gender conformity) and social adjustment in 863 White, Black, and Hispanic 5th graders (mean age = 11.1 years).…
78 FR 56868 - Adjustment of Indemnification for Inflation
Federal Register 2010, 2011, 2012, 2013, 2014
2013-09-16
... From the Federal Register Online via the Government Publishing Office DEPARTMENT OF ENERGY Adjustment of Indemnification for Inflation AGENCY: Office of General Counsel, U.S. Department of Energy. ACTION: Notice of adjusted indemnification amount. SUMMARY: The Department of Energy (DOE) is...
Preadolescent Friendship and Peer Rejection as Predictors of Adult Adjustment.
ERIC Educational Resources Information Center
Bagwell, Catherine L.; Newcomb, Andrew F.; Bukowski, William M.
1998-01-01
Compared adjustment of 30 young adults who had a stable, reciprocal best friend in fifth grade and 30 who did not. Found that lower peer rejection uniquely predicted overall life status adjustment. Friended preadolescents had higher general self-worth in adulthood, even after controlling for perceived competence in preadolescence. Peer rejection and…
75 FR 71069 - Adopted Adjustments to Alternative Site Framework
Federal Register 2010, 2011, 2012, 2013, 2014
2010-11-22
... Foreign-Trade Zones Board Adopted Adjustments to Alternative Site Framework SUMMARY: The Foreign-Trade Zones (FTZ) Board has adopted minor adjustments to its practice pertaining to the alternative site... 3987, 01/22/09) as an option for grantees to designate and manage their general-purpose FTZ sites....
New schemes in the adjustment of bendable, elliptical mirrors using a long trace profiler
Rah, S.
1997-08-01
The Long Trace Profiler (LTP), an instrument for measuring the slope profile of long X-ray mirrors, has been used for adjusting bendable mirrors. Often an elliptical profile is desired for the mirror surface, since many synchrotron applications involve imaging a point source to a point image. Several techniques have been used in the past for adjusting the measured height or slope profile of a bendable mirror. Underwood et al. have used collimated X-rays to achieve the desired surface shape for bent glass optics. Nonlinear curve fitting using the simplex algorithm was later used to determine the best fit ellipse to the surface under test. A more recent method uses a combination of least squares polynomial fitting to the measured slope function in order to enable rapid adjustment to the desired shape. The mirror has mechanical adjustments corresponding to the first and second order terms of the desired slope polynomial, which correspond to defocus and coma, respectively. The higher order terms are realized by shaping the width of the mirror to produce the optimal elliptical surface when bent. The difference between desired and measured surface slope profiles allows us to make methodical adjustments to the bendable mirror based on changes in the signs and magnitudes of the polynomial coefficients. This technique gives rapid convergence to the desired shape of the measured surface, even when we have no information about the bender, other than the desired shape of the optical surface. Nonlinear curve fitting can be used at the end of the process for fine adjustments, and to determine the overall best-fit parameters of the surface. This technique could be generalized to other shapes such as toroids.
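The polynomial-fit step described above can be sketched as follows; the slope profiles are synthetic, and the mapping of the degree-1 and degree-2 coefficients to defocus and coma adjustments follows the abstract:

```python
import numpy as np

# Hypothetical measured and desired slope profiles along the mirror (m).
x = np.linspace(-0.5, 0.5, 201)
slope_meas = 2e-4 + 3e-4 * x - 5e-4 * x**2   # synthetic "measured" slope
slope_des = 2e-4 + 1e-4 * x - 5e-4 * x**2    # synthetic "desired" slope

# Least-squares polynomial fit of each profile (coefficients in
# increasing order: constant, linear, quadratic).
c_meas = np.polynomial.polynomial.polyfit(x, slope_meas, 2)
c_des = np.polynomial.polynomial.polyfit(x, slope_des, 2)

# Per the abstract, the degree-1 and degree-2 coefficient differences
# drive the defocus and coma mechanical adjustments, respectively.
defocus_corr = (c_des - c_meas)[1]
coma_corr = (c_des - c_meas)[2]
```

Here the profiles differ only in their linear term, so only a defocus correction is indicated.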
Combining biomarkers for classification with covariate adjustment.
Kim, Soyoung; Huang, Ying
2017-03-09
Combining multiple markers can improve classification accuracy compared with using a single marker. In practice, covariates associated with markers or disease outcome can affect the performance of a biomarker or biomarker combination in the population. The covariate-adjusted receiver operating characteristic (ROC) curve has been proposed as a tool to tease out the covariate effect in the evaluation of a single marker; this curve characterizes the classification accuracy solely because of the marker of interest. However, research on the effect of covariates on the performance of marker combinations and on how to adjust for the covariate effect when combining markers is still lacking. In this article, we examine the effect of covariates on classification performance of linear marker combinations and propose to adjust for covariates in combining markers by maximizing the nonparametric estimate of the area under the covariate-adjusted ROC curve. The proposed method provides a way to estimate the best linear biomarker combination that is robust to risk model assumptions underlying alternative regression-model-based methods. The proposed estimator is shown to be consistent and asymptotically normally distributed. We conduct simulations to evaluate the performance of our estimator in cohort and case/control designs and compare several different weighting strategies during estimation with respect to efficiency. Our estimator is also compared with alternative regression-model-based estimators or estimators that maximize the empirical area under the ROC curve, with respect to bias and efficiency. We apply the proposed method to a biomarker study from a human immunodeficiency virus vaccine trial. Copyright © 2017 John Wiley & Sons, Ltd.
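A minimal sketch of the underlying idea, maximizing an empirical AUC over linear combinations of two markers, can be written as below; this omits the covariate adjustment and asymptotic theory of the actual method, and all data are simulated:

```python
import numpy as np

rng = np.random.default_rng(1)

def auc(scores_case, scores_ctrl):
    # Mann-Whitney estimate of the area under the ROC curve.
    diff = scores_case[:, None] - scores_ctrl[None, :]
    return np.mean(diff > 0) + 0.5 * np.mean(diff == 0)

# Two markers; cases are shifted along a direction the search must find.
n = 400
ctrl = rng.normal(0.0, 1.0, (n, 2))
case = rng.normal([1.0, 0.5], 1.0, (n, 2))

# The AUC of w @ x is invariant to the scale of w, so a single angle
# parameter suffices for a two-marker linear combination.
best_t, best_auc = max(
    ((t, auc(case @ np.array([np.cos(t), np.sin(t)]),
             ctrl @ np.array([np.cos(t), np.sin(t)])))
     for t in np.linspace(0.0, np.pi, 181)),
    key=lambda p: p[1],
)
```

Since the grid contains the single-marker direction itself, the best combination can never do worse than either marker alone on the same sample.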
This Infographic shows the National Cancer Institute SEER Incidence Trends. The graphs show the Average Annual Percent Change (AAPC) 2002-2011. For Men, Thyroid: 5.3*, Liver & IBD: 3.6*, Melanoma: 2.3*, Kidney: 2.0*, Myeloma: 1.9*, Pancreas: 1.2*, Leukemia: 0.9*, Oral Cavity: 0.5, Non-Hodgkin Lymphoma: 0.3*, Esophagus: -0.1, Brain & ONS: -0.2*, Bladder: -0.6*, All Sites: -1.1*, Stomach: -1.7*, Larynx: -1.9*, Prostate: -2.1*, Lung & Bronchus: -2.4*, and Colon & Rectum: -3.0*. For Women, Thyroid: 5.8*, Liver & IBD: 2.9*, Myeloma: 1.8*, Kidney: 1.6*, Melanoma: 1.5, Corpus & Uterus: 1.3*, Pancreas: 1.1*, Leukemia: 0.6*, Brain & ONS: 0, Non-Hodgkin Lymphoma: -0.1, All Sites: -0.1, Breast: -0.3, Stomach: -0.7*, Oral Cavity: -0.7*, Bladder: -0.9*, Ovary: -0.9*, Lung & Bronchus: -1.0*, Cervix: -2.4*, and Colon & Rectum: -2.7*. * AAPC is significantly different from zero (p<.05). Rates were adjusted for reporting delay in the registry. www.cancer.gov Source: Special section of the Annual Report to the Nation on the Status of Cancer, 1975-2011.
Elliptically polarizing adjustable phase insertion device
Carr, Roger
1995-01-01
An insertion device for extracting polarized electromagnetic energy from a beam of particles is disclosed. The insertion device includes four linear arrays of magnets which are aligned with the particle beam. The magnetic field strength to which the particles are subjected is adjusted by altering the relative alignment of the arrays in a direction parallel to that of the particle beam. Both the energy and polarization of the extracted energy may be varied by moving the relevant arrays parallel to the beam direction. The present invention requires a substantially simpler and more economical superstructure than insertion devices in which the magnetic field strength is altered by changing the gap between arrays of magnets.
44 CFR 13.51 - Later disallowances and adjustments.
Code of Federal Regulations, 2010 CFR
2010-10-01
..., DEPARTMENT OF HOMELAND SECURITY GENERAL UNIFORM ADMINISTRATIVE REQUIREMENTS FOR GRANTS AND COOPERATIVE... adjustments. The closeout of a grant does not affect: (a) The Federal agency's right to disallow costs...
37 CFR 1.705 - Patent term adjustment determination.
Code of Federal Regulations, 2014 CFR
2014-07-01
..., DEPARTMENT OF COMMERCE GENERAL RULES OF PRACTICE IN PATENT CASES Adjustment and Extension of Patent Term... by: (1) The fee set forth in § 1.18(f); and (2) A showing to the satisfaction of the Director...
37 CFR 1.705 - Patent term adjustment determination.
Code of Federal Regulations, 2013 CFR
2013-07-01
..., DEPARTMENT OF COMMERCE GENERAL RULES OF PRACTICE IN PATENT CASES Adjustment and Extension of Patent Term... by: (1) The fee set forth in § 1.18(f); and (2) A showing to the satisfaction of the Director...
37 CFR 1.705 - Patent term adjustment determination.
Code of Federal Regulations, 2012 CFR
2012-07-01
..., DEPARTMENT OF COMMERCE GENERAL RULES OF PRACTICE IN PATENT CASES Adjustment and Extension of Patent Term... showing to the satisfaction of the Director that, in spite of all due care, the applicant was unable...
37 CFR 1.705 - Patent term adjustment determination.
Code of Federal Regulations, 2010 CFR
2010-07-01
..., DEPARTMENT OF COMMERCE GENERAL RULES OF PRACTICE IN PATENT CASES Adjustment and Extension of Patent Term... showing to the satisfaction of the Director that, in spite of all due care, the applicant was unable...
Hernández Suárez, Marcos; Astray Dopazo, Gonzalo; Larios López, Dina; Espinosa, Francisco
2015-01-01
There are a large number of tomato cultivars with a wide range of morphological, chemical, nutritional and sensorial characteristics. Many factors are known to affect the nutrient content of tomato cultivars. A complete understanding of the effect of these factors would require an exhaustive experimental design, a multidisciplinary scientific approach and a suitable statistical method. Some multivariate analytical techniques, such as Principal Component Analysis (PCA) or Factor Analysis (FA), have been widely applied in order to search for patterns in the behaviour and to reduce the dimensionality of a data set by means of a new set of uncorrelated latent variables. However, in some cases it is not useful to replace the original variables with these latent variables. In this study, the Automatic Interaction Detection (AID) algorithm and Artificial Neural Network (ANN) models were applied as an alternative to PCA, FA and other multivariate analytical techniques in order to identify the relevant phytochemical constituents for the characterization and authentication of tomatoes. To prove the feasibility of the AID algorithm and ANN models for this purpose, both methods were applied to a data set with twenty-five chemical parameters analysed in 167 tomato samples from Tenerife (Spain). Each tomato sample was defined by three factors: cultivar, agricultural practice and harvest date. A tree-structured General Linear Model linked to AID (GLM-AID) was organized into three levels according to the number of factors. p-Coumaric acid was the compound that allowed the tomato samples to be distinguished according to the day of harvest. More than one chemical parameter was necessary to distinguish among different agricultural practices and among the tomato cultivars. Several ANN models, with 25 and 10 input variables, were developed for the prediction of cultivar, agricultural practice and harvest date. Finally, the models with 10 input variables were chosen, with goodness of fit between 44 and 100
Rothman, Steven I; Rothman, Michael J; Solinger, Alan B
2013-01-01
Objective To explore the hypothesis that placing clinical variables of differing metrics on a common linear scale of all-cause postdischarge mortality provides risk functions that are directly correlated with in-hospital mortality risk. Design Modelling study. Setting An 805-bed community hospital in the southeastern USA. Participants 42,302 inpatients admitted for any reason, excluding obstetrics, paediatric and psychiatric patients. Outcome measures All-cause in-hospital and postdischarge mortalities, and associated correlations. Results Pearson correlation coefficients comparing in-hospital risks with postdischarge risks for creatinine, heart rate and a set of 12 nursing assessments are 0.920, 0.922 and 0.892, respectively. Correlation between postdischarge risk heart rate and the Modified Early Warning System (MEWS) component for heart rate is 0.855. The minimal excess risk values for creatinine and heart rate roughly correspond to the normal reference ranges. We also provide the risks for values outside that range, independent of expert opinion or a regression model. By summing risk functions, a first-approximation patient risk score is created, which correctly ranks 6 discharge categories by average mortality with p<0.001 for differences in category means, and Tukey's Honestly Significant Difference Test confirmed that the means were all different at the 95% confidence level. Conclusions Quantitative or categorical clinical variables can be transformed into risk functions that correlate well with in-hospital risk. This methodology provides an empirical way to assess inpatient risk from data available in the Electronic Health Record. With just the variables in this paper, we achieve a risk score that correlates with discharge disposition. This is the first step towards creation of a universal measure of patient condition that reflects a generally applicable set of health-related risks. More importantly, we believe that our approach opens the door to a way of
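The summing-of-risk-functions idea can be sketched with entirely hypothetical risk tables; the paper derives its curves empirically from postdischarge mortality data, which are not reproduced here:

```python
import numpy as np

def excess_risk(value, breakpoints, risks):
    # Piecewise-constant risk lookup: risks has one more entry than
    # breakpoints, with the minimal risk over the normal range.
    return risks[np.searchsorted(breakpoints, value)]

# Hypothetical risk tables (illustrative only, not the paper's curves):
# minimal risk inside a normal range, excess risk outside it.
creat_bp, creat_risk = [0.6, 1.3, 2.0], [0.02, 0.01, 0.05, 0.12]
hr_bp, hr_risk = [60, 100, 130], [0.04, 0.01, 0.03, 0.10]

def patient_score(creatinine, heart_rate):
    # A first-approximation score is the sum of per-variable risks,
    # since every variable sits on the same mortality-risk scale.
    return (excess_risk(creatinine, creat_bp, creat_risk)
            + excess_risk(heart_rate, hr_bp, hr_risk))
```

Putting each variable on a common risk scale is what makes the simple sum meaningful across otherwise incommensurable metrics.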
39 CFR 3010.14 - Contents of notice of rate adjustment.
Code of Federal Regulations, 2010 CFR
2010-07-01
... DOMINANT PRODUCTS Rules for Rate Adjustments for Rates of General Applicability (Type 1-A and 1-B Rate... costs; (7) A discussion that demonstrates how the planned rate adjustments are designed to help...
Sparse linear programming subprogram
Hanson, R.J.; Hiebert, K.L.
1981-12-01
This report describes a subprogram, SPLP(), for solving linear programming problems. The package of subprogram units comprising SPLP() is written in Fortran 77. The subprogram SPLP() is intended for problems involving at most a few thousand constraints and variables. The subprograms are written to take advantage of sparsity in the constraint matrix. A very general problem statement is accepted by SPLP(). It allows upper, lower, or no bounds on the variables. Both the primal and dual solutions are returned as output parameters. The package has many optional features. Among them is the ability to save partial results and then use them to continue the computation at a later time.
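SPLP() itself is Fortran 77; as a rough modern analogy only (not the SPLP() interface), the same kind of sparse, bounded linear program can be posed with SciPy, assuming SciPy's `linprog` with its HiGHS backend is available:

```python
import numpy as np
from scipy.optimize import linprog
from scipy.sparse import csr_matrix

# Tiny stand-in for the class of problems SPLP() targets: a sparse
# inequality-constraint matrix and lower bounds on the variables.
#   maximize x + 2y  subject to  x + y <= 4,  x <= 3,  x, y >= 0
c = np.array([-1.0, -2.0])                  # linprog minimizes, so negate
A_ub = csr_matrix([[1.0, 1.0], [1.0, 0.0]])  # sparse constraint matrix
b_ub = np.array([4.0, 3.0])
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
# Optimum puts the whole budget on y: x = 0, y = 4, objective value 8.
```

Like SPLP(), `linprog` returns both the primal solution (`res.x`) and dual information (`res.ineqlin.marginals`), and accepts sparse constraint matrices directly.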
Psychological distress, personality, and adjustment among nursing students.
Warbah, L; Sathiyaseelan, M; Vijayakumar, C; Vasantharaj, B; Russell, S; Jacob, K S
2007-08-01
Psychological distress and poor adjustment among a significant number of nursing students is an important issue facing nursing education. The concerns need to be studied in detail and solutions need to be built into the nursing course in order to help students with such difficulty. This study used a cross-sectional survey design to study psychological distress, personality and adjustment among nursing students attending the College of Nursing, Christian Medical College, Vellore, India. One hundred and forty five nursing students were assessed using the General Health Questionnaire 12, the Eysenck Personality Questionnaire, and the Bell's Adjustment Inventory to investigate psychological distress, personality profile and adjustment, respectively. Thirty participants (20.7%) of the 145 students assessed reported high scores on the General Health Questionnaire. Psychological distress was significantly associated with having neurotic personality and adjustment difficulties in different areas of functioning.
NASA Technical Reports Server (NTRS)
Ferencz, Donald C.; Viterna, Larry A.
1991-01-01
ALPS is a computer program which can be used to solve general linear program (optimization) problems. ALPS was designed for those who have minimal linear programming (LP) knowledge and features a menu-driven scheme to guide the user through the process of creating and solving LP formulations. Once created, the problems can be edited and stored in standard DOS ASCII files to provide portability to various word processors or even other linear programming packages. Unlike many math-oriented LP solvers, ALPS contains an LP parser that reads through the LP formulation and reports several types of errors to the user. ALPS provides a large amount of solution data which is often useful in problem solving. In addition to pure linear programs, ALPS can solve integer, mixed integer, and binary type problems. Pure linear programs are solved with the revised simplex method. Integer or mixed integer programs are solved initially with the revised simplex, and then completed using the branch-and-bound technique. Binary programs are solved with the method of implicit enumeration. This manual describes how to use ALPS to create, edit, and solve linear programming problems. Instructions for installing ALPS on a PC compatible computer are included in the appendices along with a general introduction to linear programming. A programmer's guide is also included for assistance in modifying and maintaining the program.
Remote control for anode-cathode adjustment
Roose, Lars D.
1991-01-01
An apparatus for remotely adjusting the anode-cathode gap in a pulse power machine has an electric motor located within a hollow cathode inside the vacuum chamber of the pulse power machine. Input information for controlling the motor for adjusting the anode-cathode gap is fed into the apparatus using optical waveguides. The motor, controlled by the input information, drives a worm gear that moves a cathode tip. When the motor drives in one rotational direction, the cathode is moved toward the anode and the size of the anode-cathode gap is diminished. When the motor drives in the other direction, the cathode is moved away from the anode and the size of the anode-cathode gap is increased. The motor is powered by batteries housed in the hollow cathode. The batteries may be rechargeable, and they may be recharged by a photovoltaic cell in combination with an optical waveguide that receives recharging energy from outside the hollow cathode. Alternatively, the anode-cathode gap can be remotely adjusted by a manually-turned handle connected to mechanical linkage which is connected to a jack assembly. The jack assembly converts rotational motion of the handle and mechanical linkage to linear motion of the cathode moving toward or away from the anode.
Generalized Fibonacci photon sieves.
Ke, Jie; Zhang, Junyong
2015-08-20
We successfully extend the standard Fibonacci zone plates with two on-axis foci to the generalized Fibonacci photon sieves (GFiPS) with multiple on-axis foci. We also propose direct and inverse design methods based on the characteristic roots of the recursion relation of the generalized Fibonacci sequences. By switching the transparent and opaque zones according to the generalized Fibonacci sequences, we not only realize adjustable multifocal distances but also achieve an adjustable compression ratio of the focal spots in different directions.
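One way to generate such a generalized Fibonacci zone pattern is by a substitution rule. The rule below is a standard generalization (m = 1 recovers the ordinary Fibonacci word) and is offered as a sketch, not the paper's exact construction:

```python
# Generalized Fibonacci word via the substitution A -> A^m B, B -> A.
# Interpreting A/B as transparent/opaque zones gives a binary zone
# pattern of the kind the abstract switches between.
def fibonacci_word(m, iterations):
    word = "A"
    for _ in range(iterations):
        word = "".join("A" * m + "B" if c == "A" else "A" for c in word)
    return word

# The word lengths obey L(k+1) = m*L(k) + L(k-1), so the ratio of
# successive lengths converges to the characteristic root of
# x**2 = m*x + 1, i.e. (m + (m**2 + 4) ** 0.5) / 2 (the golden ratio
# for m = 1), which is what the design methods key off.
```

For m = 1 the length ratio tends to the golden ratio, and for m = 2 to the silver ratio 1 + sqrt(2), illustrating how the characteristic root is tuned by m.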
Spousal Adjustment to Myocardial Infarction.
ERIC Educational Resources Information Center
Ziglar, Elisa J.
This paper reviews the literature on the stresses and coping strategies of spouses of patients with myocardial infarction (MI). It attempts to identify specific problem areas of adjustment for the spouse and to explore the effects of spousal adjustment on patient recovery. Chapter one provides an overview of the importance in examining the…
Adjusting to change: linking family structure transitions with parenting and boys' adjustment.
Martinez, Charles R; Forgatch, Marion S
2002-06-01
This study examined links between family structure transitions and children's academic, behavioral, and emotional outcomes in a sample of 238 divorcing mothers and their sons in Grades 1-3. Multiple methods and agents were used in assessing family process variables and child outcomes. Findings suggest that greater accumulations of family transitions were associated with poorer academic functioning, greater acting-out behavior, and worse emotional adjustment for boys. However, in all three cases, these relationships were mediated by parenting practices: Parental academic skill encouragement mediated the relationship between transitions and academic functioning, and a factor of more general effective parenting practices mediated the relationships between transitions and acting out and emotional adjustment.
NASA Astrophysics Data System (ADS)
Yamasaki, Tadashi; Houseman, Gregory; Hamling, Ian; Postek, Elek
2010-05-01
We have developed a new parallelized 3-D numerical code, OREGANO_VE, for the solution of the general visco-elastic problem in a rectangular block domain. The mechanical equilibrium equation is solved using the finite element method for a (non-)linear Maxwell visco-elastic rheology. Time-dependent displacement and/or traction boundary conditions can be applied. Matrix assembly is based on a tetrahedral element defined by 4 vertex nodes and 6 nodes located at the midpoints of the edges, and within which displacement is described by a quadratic interpolation function. For evaluating viscoelastic relaxation, an explicit time-stepping algorithm (Zienkiewicz and Cormeau, Int. J. Num. Meth. Eng., 8, 821-845, 1974) is employed. We test the accurate implementation of OREGANO_VE by comparing numerical and analytic (or semi-analytic half-space) solutions to different problems in a range of applications: (1) equilibration of stress in a constant density layer after gravity is switched on at t = 0 tests the implementation of spatially variable viscosity and non-Newtonian viscosity; (2) displacement of the welded interface between two blocks of differing viscosity tests the implementation of viscosity discontinuities; (3) displacement of the upper surface of a layer under applied normal load tests the implementation of time-dependent surface tractions; and (4) visco-elastic response to dyke intrusion (compared with the solution in a half-space) tests the implementation of all aspects. In each case, the accuracy of the code is validated subject to use of a sufficiently small time step, providing assurance that the OREGANO_VE code can be applied to a range of visco-elastic relaxation processes in three dimensions, including post-seismic deformation and post-glacial uplift. The OREGANO_VE code includes a capability for representation of prescribed fault slip on an internal fault. The surface displacement associated with large earthquakes can be detected by some geodetic observations.
Lamb's Hydrostatic Adjustment for Heating of Finite Duration.
NASA Astrophysics Data System (ADS)
Sotack, Timothy; Bannon, Peter R.
1999-01-01
Lamb's hydrostatic adjustment problem for the linear response of an infinite, isothermal atmosphere to an instantaneous heating of infinite horizontal extent is generalized to include the effects of heating of finite duration. Three different time sequences of the heating are considered: a top hat, a sine, and a sine-squared heating. The transient solution indicates that heating of finite duration generates broader but weaker acoustic wave fronts. However, it is shown that the final equilibrium is the same regardless of the heating sequence provided the net heating is the same. A Lagrangian formulation provides a simple interpretation of the adjustment. The heating generates an entropy anomaly that is initially realized completely as a pressure excess with no density perturbation. In the final state the entropy anomaly is realized as a density deficit with no pressure perturbation. Energetically the heating generates both available potential energy and available elastic energy. The former remains in the heated layer while the latter is carried off by the acoustic waves. The wave energy generation is compared for the various heating sequences. In the instantaneous case, 28.6% of the total energy generation is carried off by waves. This fraction is the ratio of the ideal gas constant R to the specific heat at constant pressure cp. For the heatings of finite duration considered, the amount of wave energy decreases monotonically as the heating duration increases and as the heating thickness decreases. The wave energy generation approaches zero when (i) the duration of the heating is comparable to or larger than the acoustic cutoff period, 2π/NA ≈ 300 s, and (ii) the thickness of the heated layer approaches zero. The maximum wave energy occurs for a thick layer of heating of small duration and is the same as that for the instantaneous case. The effect of a lower boundary is also considered.
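The quoted 28.6% wave-energy fraction can be checked directly from the stated identity with R/cp, using standard dry-air constants:

```python
# Verify that R/cp for dry air reproduces the 28.6% acoustic-wave
# energy fraction quoted for the instantaneous-heating case.
R = 287.05   # J kg^-1 K^-1, specific gas constant for dry air
cp = 1004.0  # J kg^-1 K^-1, specific heat at constant pressure
fraction = R / cp
```

This also confirms the equivalent statement R/cp = (gamma - 1)/gamma with gamma = cp/cv approximately 1.4.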
Precision Adjustable Liquid Regulator (ALR)
NASA Astrophysics Data System (ADS)
Meinhold, R.; Parker, M.
2004-10-01
A passive mechanical regulator has been developed for the control of fuel or oxidizer flow to a 450 N class bipropellant engine for use on commercial and interplanetary spacecraft. There are several potential benefits to the propulsion system, depending on mission requirements and spacecraft design. This system design enables more precise control of main engine mixture ratio and inlet pressure, and simplifies the pressurization system by transferring the function of main engine flow rate control from the pressurization/propellant tank assemblies, to a single component, the ALR. This design can also reduce the thermal control requirements on the propellant tanks, avoid costly Qualification testing of biprop engines for missions with more stringent requirements, and reduce the overall propulsion system mass and power usage. In order to realize these benefits, the ALR must meet stringent design requirements. The main advantage of this regulator over other units available in the market is that it can regulate about its nominal set point to within +/-0.85%, and change its regulation set point in flight +/-4% about that nominal point. The set point change is handled actively via a stepper motor driven actuator, which converts rotary into linear motion to affect the spring preload acting on the regulator. Once adjusted to a particular set point, the actuator remains in its final position unpowered, and the regulator passively maintains outlet pressure. The very precise outlet regulation pressure is possible due to new technology developed by Moog, Inc. which reduces typical regulator mechanical hysteresis to near zero. The ALR requirements specified an outlet pressure set point range from 225 to 255 psi, and equivalent water flow rates required were in the 0.17 lb/sec range. The regulation output pressure is maintained at +/-2 psi about the set point from a ΔP (differential pressure) of 20 to over 100 psid. Maximum upstream system pressure was specified at 320 psi.
26 CFR 1.481-1 - Adjustments in general.
Code of Federal Regulations, 2010 CFR
2010-04-01
... receivable, accounts payable, and any other item determined to be necessary in order to prevent amounts from... computing taxable income for the taxable year of the change, there shall be taken into account those... be based on amounts which were taken into account in computing income (or which should have...
Family Environments, Specific Relationships, and General Perceptions of Adjustment.
ERIC Educational Resources Information Center
Gurung, Regan A. R.; And Others
Current family relationships not only form an important part of most people's social networks but also influence global perceptions of social support. Using multiple regression techniques, this study investigated the roles of students' perceptions of their family environment and the quality of specific student-parent relationships in predicting…
Romera, Eva M.; Gómez-Ortiz, Olga; Ortega-Ruiz, Rosario
2016-01-01
There is extensive scientific evidence of the serious psychological and social effects that peer victimization may have on students, among them internalizing problems such as anxiety or negative self-esteem, difficulties related to low self-efficacy and lower levels of social adjustment. Although a direct relationship has been observed between victimization and these effects, it has not yet been analyzed whether there is a relationship of interdependence between all these measures of psychosocial adjustment. The aim of this study was to examine the relationship between victimization and difficulties related to social adjustment among high school students. To do so, various explanatory models were tested to determine whether psychological adjustment (negative self-esteem, social anxiety and social self-efficacy) could play a mediating role in this relationship, as suggested by other studies on academic adjustment. The sample comprised 2060 Spanish high school students (47.9% girls; mean age = 14.34). The instruments used were the scale of victimization from European Bullying Intervention Project Questionnaire, the negative scale from Rosenberg Self-Esteem Scale, Social Anxiety Scale for Adolescents and a general item about social self-efficacy, all of them self-reports. Structural equation modeling was used to analyze the data. The results confirmed the partial mediating role of negative self-esteem, social anxiety and social self-efficacy between peer victimization and social adjustment and highlight the importance of empowering victimized students to improve their self-esteem and self-efficacy and prevent social anxiety. Such problems lead to the avoidance of social interactions and social reinforcement, thus making it difficult for these students to achieve adequate social adjustment. PMID:27891108
Patterns, Quantities, and Linear Functions
ERIC Educational Resources Information Center
Ellis, Amy B.
2009-01-01
Pattern generalization and a focus on quantities are important aspects of algebraic reasoning. This article describes two different approaches to teaching and learning linear functions for middle school students. One group focused on patterns in number tables, and the other group worked primarily with real-world quantities. This article highlights…
Adjustable Induction-Heating Coil
NASA Technical Reports Server (NTRS)
Ellis, Rod; Bartolotta, Paul
1990-01-01
Improved design for induction-heating work coil facilitates optimization of heating in different metal specimens. Three segments adjusted independently to obtain desired distribution of temperature. Reduces time needed to achieve required temperature profiles.
Li, Li; Brumback, Babette A; Weppelmann, Thomas A; Morris, J Glenn; Ali, Afsar
2016-08-15
Motivated by an investigation of the effect of surface water temperature on the presence of Vibrio cholerae in water samples collected from different fixed surface water monitoring sites in Haiti in different months, we investigated methods to adjust for unmeasured confounding due to either of the two crossed factors site and month. In the process, we extended previous methods that adjust for unmeasured confounding due to one nesting factor (such as site, which nests the water samples from different months) to the case of two crossed factors. First, we developed a conditional pseudolikelihood estimator that eliminates fixed effects for the levels of each of the crossed factors from the estimating equation. Using the theory of U-Statistics for independent but non-identically distributed vectors, we show that our estimator is consistent and asymptotically normal, but that its variance depends on the nuisance parameters and thus cannot be easily estimated. Consequently, we apply our estimator in conjunction with a permutation test, and we investigate use of the pigeonhole bootstrap and the jackknife for constructing confidence intervals. We also incorporate our estimator into a diagnostic test for a logistic mixed model with crossed random effects and no unmeasured confounding. For comparison, we investigate between-within models extended to two crossed factors. These generalized linear mixed models include covariate means for each level of each factor in order to adjust for the unmeasured confounding. We conduct simulation studies, and we apply the methods to the Haitian data. Copyright © 2016 John Wiley & Sons, Ltd.
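A minimal numerical sketch of the between-within idea that the abstract describes, on synthetic data rather than the Haitian measurements: include the covariate's mean for each level of each crossed factor (site and month) as extra regressors, so the remaining slope for the covariate is purged of level-specific unmeasured confounding. All names and effect sizes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
n_sites, n_months = 10, 50
site = np.repeat(np.arange(n_sites), n_months)
month = np.tile(np.arange(n_months), n_sites)

site_comp = rng.normal(size=n_sites)      # site-level temperature shift
month_comp = rng.normal(size=n_months)    # month-level temperature shift
temp = site_comp[site] + month_comp[month] + rng.normal(size=site.size)

# unmeasured site confounder, correlated with the site's mean temperature
y = 0.8 * temp + 2.0 * site_comp[site] + rng.normal(size=site.size)

site_mean = np.array([temp[site == s].mean() for s in range(n_sites)])[site]
month_mean = np.array([temp[month == m].mean() for m in range(n_months)])[month]

def slope(cols):
    # coefficient on temp from OLS with an intercept and the given columns
    X = np.column_stack([np.ones(site.size)] + cols)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta[1]

naive = slope([temp])                           # biased by the site confounder
adjusted = slope([temp, site_mean, month_mean]) # between-within adjustment
print(round(naive, 2), round(adjusted, 2))
```

The adjusted slope recovers something close to the true within-level effect (0.8 here), while the naive slope absorbs the site-level confounding; the paper's contribution is handling this with two *crossed* factors and binary outcomes.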
21 CFR 880.5120 - Manual adjustable hospital bed.
Code of Federal Regulations, 2011 CFR
2011-04-01
... 21 Food and Drugs 8 2011-04-01 2011-04-01 false Manual adjustable hospital bed. 880.5120 Section 880.5120 Food and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN SERVICES (CONTINUED) MEDICAL DEVICES GENERAL HOSPITAL AND PERSONAL USE DEVICES General Hospital and Personal...
7 CFR 400.405 - Agent and loss adjuster responsibilities.
Code of Federal Regulations, 2013 CFR
2013-01-01
... 7 Agriculture 6 2013-01-01 2013-01-01 false Agent and loss adjuster responsibilities. 400.405 Section 400.405 Agriculture Regulations of the Department of Agriculture (Continued) FEDERAL CROP INSURANCE CORPORATION, DEPARTMENT OF AGRICULTURE GENERAL ADMINISTRATIVE REGULATIONS General...
7 CFR 400.405 - Agent and loss adjuster responsibilities.
Code of Federal Regulations, 2012 CFR
2012-01-01
... 7 Agriculture 6 2012-01-01 2012-01-01 false Agent and loss adjuster responsibilities. 400.405 Section 400.405 Agriculture Regulations of the Department of Agriculture (Continued) FEDERAL CROP INSURANCE CORPORATION, DEPARTMENT OF AGRICULTURE GENERAL ADMINISTRATIVE REGULATIONS General...
7 CFR 400.405 - Agent and loss adjuster responsibilities.
Code of Federal Regulations, 2014 CFR
2014-01-01
... 7 Agriculture 6 2014-01-01 2014-01-01 false Agent and loss adjuster responsibilities. 400.405 Section 400.405 Agriculture Regulations of the Department of Agriculture (Continued) FEDERAL CROP INSURANCE CORPORATION, DEPARTMENT OF AGRICULTURE GENERAL ADMINISTRATIVE REGULATIONS General...
7 CFR 400.405 - Agent and loss adjuster responsibilities.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 7 Agriculture 6 2011-01-01 2011-01-01 false Agent and loss adjuster responsibilities. 400.405 Section 400.405 Agriculture Regulations of the Department of Agriculture (Continued) FEDERAL CROP INSURANCE CORPORATION, DEPARTMENT OF AGRICULTURE GENERAL ADMINISTRATIVE REGULATIONS General...
7 CFR 400.405 - Agent and loss adjuster responsibilities.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 7 Agriculture 6 2010-01-01 2010-01-01 false Agent and loss adjuster responsibilities. 400.405 Section 400.405 Agriculture Regulations of the Department of Agriculture (Continued) FEDERAL CROP INSURANCE CORPORATION, DEPARTMENT OF AGRICULTURE GENERAL ADMINISTRATIVE REGULATIONS General...
Dynamic adjustment of hidden node parameters for extreme learning machine.
Feng, Guorui; Lan, Yuan; Zhang, Xinpeng; Qian, Zhenxing
2015-02-01
Extreme learning machine (ELM), proposed by Huang et al., was developed for generalized single-hidden-layer feedforward networks with a wide variety of hidden nodes. ELMs have proven very fast and effective, especially for solving function approximation problems with a predetermined network structure. However, the network may contain insignificant hidden nodes. In this paper, we propose dynamic adjustment ELM (DA-ELM), which can further tune the input parameters of insignificant hidden nodes in order to reduce the residual error. It is proved in this paper that the energy error can be effectively reduced by applying the recursive expectation-minimization theorem. In DA-ELM, the input parameters of insignificant hidden nodes are updated in the direction of decreasing energy error at each step. The detailed theoretical foundation of DA-ELM is presented in this paper. Experimental results show that the proposed DA-ELM is more efficient than state-of-the-art algorithms such as Bayesian ELM, optimally pruned ELM, two-stage ELM, Levenberg-Marquardt, and the sensitivity-based linear learning method, as well as the preliminary ELM.
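The ELM baseline that DA-ELM refines can be sketched in a few lines: random hidden-layer parameters, then a single least-squares solve for the output weights. This is the generic ELM, not the authors' DA-ELM update, which would additionally re-tune the input weights of insignificant hidden nodes.

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.uniform(-1, 1, size=(200, 1))
y = np.sin(3 * X[:, 0])                       # target function

L = 40                                         # number of hidden nodes
W = rng.normal(size=(1, L))                    # random input weights (never trained)
b = rng.normal(size=L)                         # random biases
H = np.tanh(X @ W + b)                         # hidden-layer output matrix
beta, *_ = np.linalg.lstsq(H, y, rcond=None)   # output weights: one LS solve
rmse = np.sqrt(np.mean((H @ beta - y) ** 2))
print(rmse)
```

Because only `beta` is fitted, training is a single linear solve, which is the source of ELM's speed; DA-ELM's point is that some columns of `H` contribute little and their input parameters `W`, `b` are worth adjusting.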
Generalized Multilevel Structural Equation Modeling
ERIC Educational Resources Information Center
Rabe-Hesketh, Sophia; Skrondal, Anders; Pickles, Andrew
2004-01-01
A unifying framework for generalized multilevel structural equation modeling is introduced. The models in the framework, called generalized linear latent and mixed models (GLLAMM), combine features of generalized linear mixed models (GLMM) and structural equation models (SEM) and consist of a response model and a structural model for the latent…
Linearly Forced Isotropic Turbulence
NASA Technical Reports Server (NTRS)
Lundgren, T. S.
2003-01-01
Stationary isotropic turbulence is often studied numerically by adding a forcing term to the Navier-Stokes equation. This is usually done for the purpose of achieving higher Reynolds numbers and longer statistics than are possible for isotropic decaying turbulence. It is generally accepted that forcing the Navier-Stokes equation at low wave number does not influence the small-scale statistics of the flow, provided that there is wide separation between the largest and smallest scales. It will be shown, however, that the spectral width of the forcing has a noticeable effect on inertial-range statistics. A case will be made here for using a broader form of forcing in order to compare computed isotropic stationary turbulence with (decaying) grid turbulence. It is shown that using a forcing function which is directly proportional to the velocity has physical meaning and gives results which are closer to both homogeneous and non-homogeneous turbulence. Section 1 presents a four-part series of motivations for linear forcing. Section 2 puts linear forcing to a numerical test with a pseudospectral computation.
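The appeal of forcing proportional to velocity can be illustrated with a toy energy-balance model, not the paper's pseudospectral computation: with f = Q·u, kinetic-energy production is 2QK, and a crude inertial-range dissipation estimate ε ~ K^(3/2)/ℓ gives a stable stationary state K* = (2Qℓ)². The constants below are arbitrary illustrative choices.

```python
# Toy energy balance for linear forcing f = Q*u:
#   dK/dt = 2*Q*K - K**1.5 / ell
# production 2*Q*K balances dissipation at K* = (2*Q*ell)**2.
Q, ell, dt = 0.5, 1.0, 1e-3
K = 0.1                      # start far from the fixed point
for _ in range(200000):
    K += dt * (2 * Q * K - K ** 1.5 / ell)
print(K)  # approaches (2*Q*ell)**2 = 1.0
```

The fixed point is attracting (the linearized rate at K* is 2Q − 3Qℓ/ℓ·... < 0 for these constants), which is the sense in which linear forcing self-regulates the stationary energy level.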
Newman, Gregory A.; Commer, Michael
2006-11-17
Software that simulates and inverts electromagnetic field data for subsurface electrical properties (electrical conductivity) of geological media. The software treats data produced by a time-harmonic source field excitation arising from the following antenna geometries: loops and grounded bipoles, as well as point electric and magnetic dipoles. The inversion process is carried out using a non-linear conjugate gradient optimization scheme, which minimizes the misfit between field data and model data using a least-squares criterion. The software is an upgrade of the code NLCGCS_MP ver 1.0. The upgrade includes the following components: incorporation of new 1D field sourcing routines to more accurately simulate the 3D electromagnetic field for arbitrary geological media, and treatment of generalized finite-length transmitting antenna geometry (antennas with vertical and horizontal component directions). In addition, the software has been upgraded to treat transverse anisotropy in electrical conductivity.
ERIC Educational Resources Information Center
DuBois, David L.; Burk-Braxton, Carol; Swenson, Lance P.; Tevendale, Heather D.; Hardesty, Jennifer L.
2002-01-01
Investigated the influence of racial and gender discrimination and difficulties on adolescent adjustment. Found that discrimination and hassles contribute to a general stress context which in turn influences emotional and behavioral problems in adjustment, while racial and gender identity positively affect self-esteem and thus adjustment. Revealed…
17 CFR 143.8 - Inflation-adjusted civil monetary penalties.
Code of Federal Regulations, 2011 CFR
2011-04-01
... 17 Commodity and Securities Exchanges 1 2011-04-01 2011-04-01 false Inflation-adjusted civil... JURISDICTION General Provisions § 143.8 Inflation-adjusted civil monetary penalties. (a) Unless otherwise amended by an act of Congress, the inflation-adjusted maximum civil monetary penalty for each violation...
17 CFR 143.8 - Inflation-adjusted civil monetary penalties.
Code of Federal Regulations, 2010 CFR
2010-04-01
... 17 Commodity and Securities Exchanges 1 2010-04-01 2010-04-01 false Inflation-adjusted civil... JURISDICTION General Provisions § 143.8 Inflation-adjusted civil monetary penalties. (a) Unless otherwise amended by an act of Congress, the inflation-adjusted maximum civil monetary penalty for each violation...
26 CFR 1.9001-4 - Adjustments required in computing excess-profits credit.
Code of Federal Regulations, 2010 CFR
2010-04-01
... 26 Internal Revenue 13 2010-04-01 2010-04-01 false Adjustments required in computing excess... Adjustments required in computing excess-profits credit. (a) In general. Subsection (f) of the Act provides adjustments required to be made in computing the excess-profits credit for any taxable year under the...
Permanent multipole magnets with adjustable strength
Halbach, K.
1983-03-01
Preceded by a short discussion of the motives for using permanent magnets in accelerators, a new type of permanent magnet for use in accelerators is presented. The basic design and most important properties of a quadrupole will be described that uses both steel and permanent magnet material. The field gradient produced by this magnet can be adjusted without changing any other aspect of the field produced by this quadrupole. The generalization of this concept to produce other multipole fields, or combination of multipole fields, will also be presented.
Elliptically polarizing adjustable phase insertion device
Carr, R.
1995-01-17
An insertion device for extracting polarized electromagnetic energy from a beam of particles is disclosed. The insertion device includes four linear arrays of magnets which are aligned with the particle beam. The magnetic field strength to which the particles are subjected is adjusted by altering the relative alignment of the arrays in a direction parallel to that of the particle beam. Both the energy and polarization of the extracted energy may be varied by moving the relevant arrays parallel to the beam direction. The present invention requires a substantially simpler and more economical superstructure than insertion devices in which the magnetic field strength is altered by changing the gap between arrays of magnets. 3 figures.
Why quantum dynamics is linear
NASA Astrophysics Data System (ADS)
Jordan, Thomas F.
2009-11-01
A seed George planted 45 years ago is still producing fruit now. In 1961, George set out the fundamental proposition that quantum dynamics is described most generally by linear maps of density matrices. Since the first sprout from George's seed appeared in 1962, we have known that George's fundamental proposition can be used to derive the linear Schrödinger equation in cases where it can be expected to apply. Now we have a proof of George's proposition that density matrices are mapped linearly to density matrices, that there can be no nonlinear generalization of this. That completes the derivation of the linear Schrödinger equation. The proof of George's proposition replaces Wigner's theorem that a symmetry transformation is represented by a linear or antilinear operator. The assumption needed to prove George's proposition is just that the dynamics does not depend on anything outside the system but must allow the system to be described as part of a larger system. This replaces the physically less compelling assumption of Wigner's theorem that absolute values of inner products are preserved. The history of this question is reviewed. Nonlinear generalizations of quantum mechanics have been proposed. They predict small but clear nonlinear effects, which very accurate experiments have not seen. This raises the question: Is there a reason in principle why nonlinearity is not found? Is it impossible? Does quantum dynamics have to be linear? Attempts to prove this have not been decisive, because either their assumptions are not compelling or their arguments are not conclusive. The question has been left unsettled. The simple answer, based on a simple assumption, was found in two steps separated by 44 years.
Adjusting to Chronic Health Conditions.
Helgeson, Vicki S; Zajdel, Melissa
2017-01-03
Research on adjustment to chronic disease is critical in today's world, in which people are living longer lives, but lives are increasingly likely to be characterized by one or more chronic illnesses. Chronic illnesses may deteriorate, enter remission, or fluctuate, but their defining characteristic is that they persist. In this review, we first examine the effects of chronic disease on one's sense of self. Then we review categories of factors that influence how one adjusts to chronic illness, with particular emphasis on the impact of these factors on functional status and psychosocial adjustment. We begin with contextual factors, including demographic variables such as sex and race, as well as illness dimensions such as stigma and illness identity. We then examine a set of dispositional factors that influence chronic illness adjustment, organizing these into resilience and vulnerability factors. Resilience factors include cognitive adaptation indicators, personality variables, and benefit-finding. Vulnerability factors include a pessimistic attributional style, negative gender-related traits, and rumination. We then turn to social environmental variables, including both supportive and unsupportive interactions. Finally, we review chronic illness adjustment within the context of dyadic coping. We conclude by examining potential interactions among these classes of variables and outlining a set of directions for future research.
ADJUSTED FIELD PROFILE FOR THE CHROMATICITY CANCELLATION IN FFAG ACCELERATORS.
RUGGIERO, A.G.
2004-10-13
In an earlier report, the author reviewed four major rules for designing the lattice of Fixed-Field Alternating-Gradient (FFAG) accelerators. One of these rules deals with the search for the Adjusted Field Profile, that is, the non-linear field distribution along the length and the width of the accelerator magnets, which compensates for the chromatic behavior and thus considerably reduces the variation of the betatron tunes during acceleration over a large momentum range. The present report defines the method for the search for the Adjusted Field Profile.
Preconditioned quantum linear system algorithm.
Clader, B D; Jacobs, B C; Sprouse, C R
2013-06-21
We describe a quantum algorithm that generalizes the quantum linear system algorithm [Harrow et al., Phys. Rev. Lett. 103, 150502 (2009)] to arbitrary problem specifications. We develop a state preparation routine that can initialize generic states, show how simple ancilla measurements can be used to calculate many quantities of interest, and integrate a quantum-compatible preconditioner that greatly expands the number of problems that can achieve exponential speedup over classical linear systems solvers. To demonstrate the algorithm's applicability, we show how it can be used to compute the electromagnetic scattering cross section of an arbitrary target exponentially faster than the best classical algorithm.
MCCB warm adjustment testing concept
NASA Astrophysics Data System (ADS)
Erdei, Z.; Horgos, M.; Grib, A.; Preradović, D. M.; Rodic, V.
2016-08-01
This paper presents an experimental investigation into the behavior of the thermal protection device of an MCCB (Molded Case Circuit Breaker). One of the main functions of the circuit breaker is to protect the circuits in which it is mounted against possible overloads. The tripping mechanism for the overload protection is based on the movement of a bimetal strip during a specific time frame. This movement needs to be controlled, and as a solution for controlling it we chose the warm adjustment concept, which is meant to improve process capability control and the final output. The warm adjustment device creates a unique adjustment of the bimetal position for each individual breaker, determined by passing the test current through a phase that must trip in a certain amount of time. This time is predetermined by calculation for all standard amperage ratings and complies with the requirements of the IEC 60947 standard.
Comparable-Worth Adjustments: Yes--Comparable-Worth Adjustments: No.
ERIC Educational Resources Information Center
Galloway, Sue; O'Neill, June
1985-01-01
Two essays address the issue of pay equity and present opinions favoring and opposing comparable-worth adjustments. Movement of women out of traditionally female jobs, the limits of "equal pay," fairness of comparable worth and market-based wages, implementation and efficiency of comparable worth system, and alternatives to comparable…
Chen, Lihua; Su, Shaobing; Li, Xiaoming; Tam, Cheuk Chi; Lin, Danhua
2014-01-01
Objectives: The global literature has revealed potential negative impacts of migration and discrimination on individual's psychological adjustments. However, the psychological adjustments among internal migrant children in developing countries are rarely assessed. This study simultaneously examines perceived discrimination and schooling arrangements in relation to psychological adjustments among rural-to-urban migrant children in China. Methods: A sample of 657 migrant children was recruited in Beijing, China. Cross-sectional associations of self-reported perceived discrimination and schooling arrangements (i.e. public school and migrant children school (MCS)) with psychological adjustment outcomes (i.e. social anxiety, depression and loneliness) were examined by general linear model. Results: (1) Compared with migrant children in public school, migrant children in MCS had lower family incomes, and their parents had received less education. (2) Migrant children in MCS reported higher levels of social anxiety, depression and loneliness than did their counterparts. Children who reported high level of perceived discrimination also reported the highest level of social anxiety, depression and loneliness. (3) Perceived discrimination had main effects on social anxiety and depression after controlling for the covariates. A significant interaction between perceived discrimination and schooling arrangements on loneliness was found. Specifically, the migrant children in MCS reported higher loneliness scores than did migrant children in public school only at low level of perceived discrimination; however, schooling arrangements was unrelated to loneliness at medium and high levels of discrimination. Conclusions: These results indicate that migration-related perceived discrimination is negatively associated with migrant children's psychological adjustments. These findings suggest that effective interventions should be developed to improve migrant children's capacities to cope
Electrical Characterization of Special Purpose Linear Microcircuits.
1980-05-01
[Fragmentary OCR text: an abbreviation list (GE = General Electric Company; GEOS = General Electric Company, Ordnance Systems; GND = ground; Iadj = adjustment pin) and report references, including RADC-TR-78-22, final technical report, J. S. Kulpinski et al., General Electric Company, 1978, and work by Sevastopoulos et al., National Semiconductor Corporation, 1978.]
Federal Register 2010, 2011, 2012, 2013, 2014
2010-06-11
... Surface Transportation Board Railroad Cost Recovery Procedures--Productivity Adjustment; Quarterly Rail... Railroads that the Board restate the previously published productivity adjustment for the 2003-2007 averaging period (2007 productivity adjustment) so that it tracks the 2007 productivity adjustment...
Minimal Solution of Singular LR Fuzzy Linear Systems
Nikuie, M.; Ahmad, M. Z.
2014-01-01
In this paper, the singular LR fuzzy linear system is introduced. Such systems are divided into two parts: singular consistent LR fuzzy linear systems and singular inconsistent LR fuzzy linear systems. The capability of the generalized inverses such as Drazin inverse, pseudoinverse, and {1}-inverse in finding minimal solution of singular consistent LR fuzzy linear systems is investigated. PMID:24737977
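For the crisp core of such systems, the minimal-norm role of the pseudoinverse can be checked directly. This is a generic numerical example of a singular consistent system, not the LR fuzzy machinery itself.

```python
import numpy as np

# For a singular but consistent system A x = b, the Moore-Penrose
# pseudoinverse yields the minimal-norm solution -- the role the paper
# assigns to generalized inverses (Drazin, pseudoinverse, {1}-inverse).
A = np.array([[1.0, 2.0],
              [2.0, 4.0]])          # rank 1, singular
b = np.array([3.0, 6.0])            # consistent right-hand side
x_min = np.linalg.pinv(A) @ b       # minimal-norm solution (0.6, 1.2)
print(x_min, np.allclose(A @ x_min, b))
```

Any solution has the form `x_min + z` with `z` in the null space of `A`; the pseudoinverse picks the unique solution orthogonal to that null space, hence of minimal Euclidean norm.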
Adjustable mount for electro-optic transducers in an evacuated cryogenic system
NASA Technical Reports Server (NTRS)
Crossley, Edward A., Jr. (Inventor); Haynes, David P. (Inventor); Jones, Howard C. (Inventor); Jones, Irby W. (Inventor)
1987-01-01
The invention is an adjustable mount for positioning an electro-optic transducer in an evacuated cryogenic environment. Electro-optic transducers are used in this manner as high-sensitivity detectors of gas emission lines in spectroscopic analysis. The mount consists of an adjusting mechanism and a transducer mount. The adjusting mechanism provides five degrees of freedom through linear and angular adjustments. The mount allows the use of an internal lens to focus energy on the transducer element, thereby improving the efficiency of the detection device. Further, the transducer mount, although attached to the adjusting mechanism, is thermally isolated so that a cryogenic environment can be maintained at the transducer while the adjusting mechanism remains at room temperature. Radiation shields are also incorporated to further reduce heat flow to the transducer location.
Adjustable Optical-Fiber Attenuator
NASA Technical Reports Server (NTRS)
Buzzetti, Mike F.
1994-01-01
Adjustable fiber-optic attenuator utilizes bending loss to reduce strength of light transmitted along it. Attenuator functions without introducing measurable back-reflection or insertion loss. Relatively insensitive to vibration and changes in temperature. Potential applications include cable television, telephone networks, other signal-distribution networks, and laboratory instrumentation.
Dyadic Adjustment: An Ecosystemic Examination.
ERIC Educational Resources Information Center
Wilson, Stephan M.; Larson, Jeffry H.; McCulloch, B. Jan; Stone, Katherine L.
1997-01-01
Examines the relationship of background, individual, and family influences on dyadic adjustment, using an ecological perspective. Data from 102 married couples were used. Age at marriage for husbands, emotional health for wives, and number of marriage and family problems as well as family life satisfaction for both were related to dyadic…
Problems of Adjustment to School.
ERIC Educational Resources Information Center
Bartolini, Leandro A.
This paper, one of several written for a comprehensive policy study of early childhood education in Illinois, examines and summarizes the literature on the problems of young children in adjusting to starting school full-time and describes the nature and extent of their difficulties in relation to statewide educational policy. The review of studies…
Economic Pressures and Family Adjustment.
ERIC Educational Resources Information Center
Haccoun, Dorothy Markiewicz; Ledingham, Jane E.
The relationships between economic stress on the family and child and parental adjustment were examined for a sample of 199 girls and boys in grades one, four, and seven. These associations were examined separately for families in which both parents were present and in which mothers only were at home. Economic stress was associated with boys'…
Linear Logistic Test Modeling with R
ERIC Educational Resources Information Center
Baghaei, Purya; Kubinger, Klaus D.
2015-01-01
The present paper gives a general introduction to the linear logistic test model (Fischer, 1973), an extension of the Rasch model with linear constraints on item parameters, along with eRm (an R package to estimate different types of Rasch models; Mair, Hatzinger, & Mair, 2014) functions to estimate the model and interpret its parameters. The…
Linear determining equations for differential constraints
Kaptsov, O V
1998-12-31
A construction of differential constraints compatible with partial differential equations is considered. Certain linear determining equations with parameters are used to find such differential constraints. They generalize the classical determining equations used in the search for admissible Lie operators. As applications of this approach equations of an ideal incompressible fluid and non-linear heat equations are discussed.
A Constrained Linear Estimator for Multiple Regression
ERIC Educational Resources Information Center
Davis-Stober, Clintin P.; Dana, Jason; Budescu, David V.
2010-01-01
"Improper linear models" (see Dawes, Am. Psychol. 34:571-582, "1979"), such as equal weighting, have garnered interest as alternatives to standard regression models. We analyze the general circumstances under which these models perform well by recasting a class of "improper" linear models as "proper" statistical models with a single predictor. We…
A neural network for bounded linear programming
Culioli, J.C.; Protopopescu, V.; Britton, C.; Ericson, N. )
1989-01-01
The purpose of this paper is to describe a neural network implementation of an algorithm recently designed at ORNL to solve the Transportation and the Assignment Problems, and, more generally, any explicitly bounded linear program. 9 refs.
Adjustment Issues Affecting Employment for Immigrants from the Former Soviet Union.
ERIC Educational Resources Information Center
Yost, Anastasia Dimun; Lucas, Margaretha S.
2002-01-01
Describes major issues, including culture shock and loss of status, that affect general adjustment of immigrants and refugees from the former Soviet Union who are resettling in the United States. Issues that affect career and employment adjustment are described and the interrelatedness of general and career issues is explored. (Contains 39…
Stanish, W M; Chi, G Y; Johnson, W D; Koch, G G; Landis, J R; Liu-Chi, S
1978-09-01
CRISCAT is a computer program for the analysis of grouped survival data with competing risks via weighted least squares methods. Competing risks adjustments are obtained from general matrix operations using many of the strategies employed in a previously developed program (GENCAT) for multivariate categorical data. CRISCAT computes survival rates at several time points for multiple causes of failure, where each rate is adjusted for other causes in the sense that failure due to these other causes has been eliminated as a risk. The program can generate functions of the adjusted survival rates, to which asymptotic regression models may be fit. CRISCAT yields test statistics for hypotheses involving either these functions or estimated model parameters. Thus, this computational algorithm links competing risks theory to linear models methods for contingency table analysis and provides a unified approach to estimation and hypothesis testing of functions involving competing risks adjusted rates.
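The notion of a competing-risks-adjusted rate can be illustrated with constant cause-specific hazards and made-up numbers; CRISCAT itself works with grouped data and weighted least squares, so this is only the underlying idea.

```python
import math

# With constant cause-specific hazards h1 and h2, overall survival at time t
# is exp(-(h1 + h2) * t), while the rate "adjusted" to eliminate cause 2 as
# a risk is exp(-h1 * t): survival as if only cause 1 operated.
h1, h2, t = 0.02, 0.05, 10.0
overall = math.exp(-(h1 + h2) * t)
adjusted_for_cause2 = math.exp(-h1 * t)    # cause 2 eliminated as a risk
print(round(overall, 3), round(adjusted_for_cause2, 3))
```

The adjusted rate is necessarily at least as large as the overall rate, since removing a cause of failure can only improve survival; CRISCAT estimates such adjusted rates from grouped data and lets regression models be fit to functions of them.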
NASA Astrophysics Data System (ADS)
Young, T.
This book is intended to be used as a textbook in a one-semester course at a variety of levels. Because of self-study features incorporated, it may also be used by practicing electronic engineers as a formal and thorough introduction to the subject. The distinction between linear and digital integrated circuits is discussed, taking into account digital and linear signal characteristics, linear and digital integrated circuit characteristics, the definitions for linear and digital circuits, applications of digital and linear integrated circuits, and aspects of fabrication, packaging, and classification and numbering. Operational amplifiers are considered along with linear integrated circuit (LIC) power requirements and power supplies, voltage and current regulators, linear amplifiers, linear integrated circuit oscillators, wave-shaping circuits, active filters, D/A and A/D converters, demodulators, comparators, instrument amplifiers, current difference amplifiers, analog circuits and devices, and aspects of troubleshooting.
NASA Technical Reports Server (NTRS)
Tuey, R. C.
1972-01-01
Computer solutions of linear programming problems are outlined. Information covers vector spaces, convex sets, and matrix algebra elements for solving simultaneous linear equations. Dual problems, reduced cost analysis, ranges, and error analysis are illustrated.
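For a small instance, the geometry behind such computer solutions can be sketched by vertex enumeration: the optimum of a linear program over a bounded polygon lies at a vertex. This is a textbook two-variable example, not the report's code.

```python
from itertools import combinations

# maximize c.x subject to A x <= b and x >= 0 (a classic small LP)
c = (3.0, 5.0)
A = [(1.0, 0.0), (0.0, 2.0), (3.0, 2.0)]
b = [4.0, 12.0, 18.0]
rows = A + [(-1.0, 0.0), (0.0, -1.0)]   # encode x1 >= 0, x2 >= 0 as -x_i <= 0
rhs = b + [0.0, 0.0]

def intersect(i, j):
    # solve the 2x2 system where constraints i and j hold with equality
    (a1, a2), (b1, b2) = rows[i], rows[j]
    det = a1 * b2 - a2 * b1
    if abs(det) < 1e-12:
        return None                      # parallel constraints
    x1 = (rhs[i] * b2 - a2 * rhs[j]) / det
    x2 = (a1 * rhs[j] - rhs[i] * b1) / det
    return (x1, x2)

def feasible(p):
    return all(r[0] * p[0] + r[1] * p[1] <= v + 1e-9 for r, v in zip(rows, rhs))

verts = [p for i, j in combinations(range(len(rows)), 2)
         if (p := intersect(i, j)) and feasible(p)]
best = max(verts, key=lambda p: c[0] * p[0] + c[1] * p[1])
print(best, c[0] * best[0] + c[1] * best[1])
```

Vertex enumeration is exponential in general, which is why practical codes use the simplex method with the dual-problem and reduced-cost machinery the outline mentions; but on a polygon it makes the optimal-vertex fact concrete.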
... equipment? How is safety ensured? What is this equipment used for? A linear accelerator (LINAC) is the ... Therapy (SBRT). How does the equipment work? The linear accelerator uses microwave technology (similar ...
Order-constrained linear optimization.
Tidwell, Joe W; Dougherty, Michael R; Chrabaszcz, Jeffrey S; Thomas, Rick P
2017-02-27
Despite the fact that data and theories in the social, behavioural, and health sciences are often represented on an ordinal scale, there has been relatively little emphasis on modelling ordinal properties. The most common analytic framework used in psychological science is the general linear model, whose variants include ANOVA, MANOVA, and ordinary linear regression. While these methods are designed to provide the best fit to the metric properties of the data, they are not designed to maximally model ordinal properties. In this paper, we develop an order-constrained linear least-squares (OCLO) optimization algorithm that maximizes the linear least-squares fit to the data conditional on maximizing the ordinal fit based on Kendall's τ. The algorithm builds on the maximum rank correlation estimator (Han, 1987, Journal of Econometrics, 35, 303) and the general monotone model (Dougherty & Thomas, 2012, Psychological Review, 119, 321). Analyses of simulated data indicate that when modelling data that adhere to the assumptions of ordinary least squares, OCLO shows minimal bias, little increase in variance, and almost no loss in out-of-sample predictive accuracy. In contrast, under conditions in which data include a small number of extreme scores (fat-tailed distributions), OCLO shows less bias and variance, and substantially better out-of-sample predictive accuracy, even when the outliers are removed. We show that the advantages of OCLO over ordinary least squares in predicting new observations hold across a variety of scenarios in which researchers must decide to retain or eliminate extreme scores when fitting data.
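The two-stage logic (maximize ordinal fit, then break ties by least squares) can be sketched with a coarse grid search over weight directions. The data are synthetic, and the paper's actual algorithm builds on the maximum rank correlation estimator rather than a grid.

```python
import math
from itertools import combinations

def kendall_tau(u, v):
    # pairwise concordance between the orderings of u and v
    n = len(u)
    s = sum(1 if (u[i] - u[j]) * (v[i] - v[j]) > 0 else
            -1 if (u[i] - u[j]) * (v[i] - v[j]) < 0 else 0
            for i, j in combinations(range(n), 2))
    return 2 * s / (n * (n - 1))

X = [(0, 0), (1, 0), (0, 1), (1, 1), (2, 1), (1, 2), (2, 2), (3, 2)]
y = [0.0, 1.0, 0.6, 1.6, 2.5, 2.1, 3.0, 9.0]   # last score is extreme

best = None
for k in range(1, 90):                          # coarse grid of weight directions
    w = (math.cos(math.radians(k)), math.sin(math.radians(k)))
    pred = [w[0] * a + w[1] * b for a, b in X]
    scale = sum(p * t for p, t in zip(pred, y)) / sum(p * p for p in pred)
    sse = sum((scale * p - t) ** 2 for p, t in zip(pred, y))
    key = (kendall_tau(pred, y), -sse)          # ordinal fit first, then LS fit
    if best is None or key > best[0]:
        best = (key, w)
print(best[0][0])   # the maximized Kendall's tau
```

Because the ordinal criterion is invariant to monotone rescaling of the predictions, the least-squares stage only chooses among the tau-maximizing directions, which is what blunts the influence of the extreme score.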
NASA Astrophysics Data System (ADS)
Hilbert, Bryan
2012-10-01
These observations will be used to monitor the signal non-linearity of the IR channel, as well as to update the IR channel non-linearity calibration reference file. The non-linearity behavior of each pixel in the detector will be investigated through the use of full frame and subarray flat fields, while the photometric behavior of point sources will be studied using observations of 47 Tuc. This is a continuation of the Cycle 19 non-linearity monitor, program 12696.
NASA Astrophysics Data System (ADS)
Hilbert, Bryan
2013-10-01
These observations will be used to monitor the signal non-linearity of the IR channel, as well as to update the IR channel non-linearity calibration reference file. The non-linearity behavior of each pixel in the detector will be investigated through the use of full frame and subarray flat fields, while the photometric behavior of point sources will be studied using observations of 47 Tuc. This is a continuation of the Cycle 20 non-linearity monitor, program 13079.
Performance of An Adjustable Strength Permanent Magnet Quadrupole
Gottschalk, S.C.; DeHart, T.E.; Kangas, K.W.; Spencer, C.M.; Volk, J.T. (Fermilab)
2006-03-01
An adjustable strength permanent magnet quadrupole suitable for use in the Next Linear Collider has been built and tested. The pole length is 42 cm, the aperture diameter 13 mm, the peak pole tip strength 1.03 Tesla, and the peak integrated gradient x length (GL) is 68.7 Tesla. This paper describes measurements of strength, magnetic centerline (CL), and field quality made using an air bearing rotating coil system. The magnetic CL stability during the -20% strength adjustment proposed for beam-based alignment was < 0.2 microns. Strength hysteresis was negligible. Thermal expansion of quadrupole and measurement parts caused a repeatable and easily compensated change in the vertical magnetic CL. Calibration procedures, as well as CL measurements made over a wider tuning range of 100% to 20% in strength, useful for a wide range of applications, will be described. The impact of eddy currents in the steel poles on the magnetic field during strength adjustments will also be reported.
Hall, Anne E
2016-08-11
The Bureau of Economic Analysis recently created new price indexes for health care in its health care satellite account and now faces the problem of how to adjust them for quality. I review the literature on this topic and divide the articles that created quality-adjusted price indexes for individual medical conditions into those that use primarily outcomes-based adjustments and those that use only process-based adjustments. Outcomes-based adjustments adjust the indexes based on observed aggregate health outcomes, usually mortality. Process-based adjustments adjust the indexes based on the treatments provided and medical knowledge of their effectiveness. Outcomes-based adjustments are easier to implement, while process-based adjustments are more demanding in terms of data and medical knowledge. In general, the research literature shows adjusting for quality in the measurement of output in the medical sector to be quantitatively important.
Simulation of a medical linear accelerator for teaching purposes.
Anderson, Rhys; Lamey, Michael; MacPherson, Miller; Carlone, Marco
2015-05-08
Simulation software for medical linear accelerators that can be used in a teaching environment was developed. The components of linear accelerators were modeled to first order accuracy using analytical expressions taken from the literature. The expressions used constants that were empirically set such that realistic response could be expected. These expressions were programmed in a MATLAB environment with a graphical user interface in order to produce an environment similar to that of linear accelerator service mode. The program was evaluated in a systematic fashion, where parameters affecting the clinical properties of medical linear accelerator beams were adjusted independently, and the effects on beam energy and dose rate recorded. These results confirmed that beam tuning adjustments could be simulated in a simple environment. Further, adjustment of service parameters over a large range was possible, and this allows the demonstration of linear accelerator physics in an environment accessible to both medical physicists and linear accelerator service engineers. In conclusion, a software tool, named SIMAC, was developed to improve the teaching of linear accelerator physics in a simulated environment. SIMAC performed in a similar manner to medical linear accelerators. The authors hope that this tool will be valuable as a teaching tool for medical physicists and linear accelerator service engineers.
Synchrotron Tune Adjustment by Longitudinal Motion of Quadrupoles
NASA Astrophysics Data System (ADS)
Bertsche, K. J.
1996-05-01
Adjustment of the tune of a synchrotron is generally accomplished by globally varying the strength of quadrupoles, either in the main quadrupole bus or in a set of dedicated trim quadrupoles distributed around the ring. An alternate scheme for tune control involves varying the strengths of quadrupoles only within a local insert, thereby adjusting the phase advance across this insert to create a "phase trombone." In a synchrotron built of permanent magnets, such as the proposed Fermilab Recycler Ring, tune adjustment may also be accomplished by constructing a phase trombone in which the longitudinal position rather than the strength of a number of quadrupoles is adjusted. Design philosophies and performance for such phase trombones will be presented. *Operated by Universities Research Association, Inc., under contract with the U.S. Department of Energy.
NASA Technical Reports Server (NTRS)
Lawson, C. L.; Krogh, F. T.; Gold, S. S.; Kincaid, D. R.; Sullivan, J.; Williams, E.; Hanson, R. J.; Haskell, K.; Dongarra, J.; Moler, C. B.
1982-01-01
The Basic Linear Algebra Subprograms (BLAS) library is a collection of 38 FORTRAN-callable routines for performing basic operations of numerical linear algebra. The BLAS library is a portable and efficient source of basic operations for designers of programs involving linear algebraic computations. The BLAS library is supplied in portable FORTRAN and Assembler code versions for IBM 370, UNIVAC 1100, and CDC 6000 series computers.
Invertible linear ordinary differential operators
NASA Astrophysics Data System (ADS)
Chetverikov, Vladimir N.
2017-03-01
We consider invertible linear ordinary differential operators whose inversions are also differential operators. To each such operator we assign a numerical table. These tables are described in the elementary geometrical language. The table does not uniquely determine the operator. To define this operator uniquely some additional information should be added, as it is described in detail in this paper. The possibility of generalization of these results to partial differential operators is also discussed.
12 CFR 19.240 - Inflation adjustments.
Code of Federal Regulations, 2012 CFR
2012-01-01
... 12 Banks and Banking 1 2012-01-01 2012-01-01 false Inflation adjustments. 19.240 Section 19.240... PROCEDURE Civil Money Penalty Inflation Adjustments § 19.240 Inflation adjustments. (a) The maximum amount of each civil money penalty within the OCC's jurisdiction is adjusted in accordance with the...
12 CFR 19.240 - Inflation adjustments.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 12 Banks and Banking 1 2010-01-01 2010-01-01 false Inflation adjustments. 19.240 Section 19.240... PROCEDURE Civil Money Penalty Inflation Adjustments § 19.240 Inflation adjustments. (a) The maximum amount... Civil Penalties Inflation Adjustment Act of 1990 (28 U.S.C. 2461 note) as follows: ER10NO08.001 (b)...
12 CFR 19.240 - Inflation adjustments.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 12 Banks and Banking 1 2011-01-01 2011-01-01 false Inflation adjustments. 19.240 Section 19.240... PROCEDURE Civil Money Penalty Inflation Adjustments § 19.240 Inflation adjustments. (a) The maximum amount... Civil Penalties Inflation Adjustment Act of 1990 (28 U.S.C. 2461 note) as follows: ER10NO08.001 (b)...
Adjusting to University: The Hong Kong Experience
ERIC Educational Resources Information Center
Yau, Hon Keung; Sun, Hongyi; Cheng, Alison Lai Fong
2012-01-01
Students' adjustment to the university environment is an important factor in predicting university outcomes and is crucial to their future achievements. University support to students' transition to university life can be divided into three dimensions: academic adjustment, social adjustment and psychological adjustment. However, these…
Learning from observation, feedback, and intervention in linear and non-linear task environments.
Henriksson, Maria P; Enkvist, Tommy
2016-12-12
This multiple-cue judgment study investigates whether we can manipulate the judgment strategy and increase accuracy in linear and non-linear cue-criterion environments just by changing the training mode. Three experiments show that accuracy in simple linear additive task environments is improved with feedback training and intervention training, while accuracy in complex multiplicative tasks is improved with observational training. The observed interaction effect suggests that the training mode invites different strategies that are adjusted, as a function of experience, to the demands of the underlying cue-criterion structure. Thus, the feedback and intervention training modes invite cue abstraction, an effortful but successful strategy in combination with simple linear task structures, whereas observational training invites exemplar memory processes, a simple but successful strategy in combination with complex non-linear task structures. The study discusses adaptive cognition and the implications of the different training modes across the life span and for clinical populations.
Communications circuit including a linear quadratic estimator
Ferguson, Dennis D.
2015-07-07
A circuit includes a linear quadratic estimator (LQE) configured to receive a plurality of measurements of a signal. The LQE is configured to weight the measurements based on their respective uncertainties to produce weighted averages. The circuit further includes a controller coupled to the LQE and configured to selectively adjust at least one data link parameter associated with a communication channel in response to receiving the weighted averages.
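The uncertainty weighting described in this entry reduces, in the static case, to an inverse-variance weighted average: each measurement is weighted by the reciprocal of its variance, and the fused variance is the reciprocal of the summed weights. The sketch below illustrates only that generic principle, not the patented circuit.

```python
def fuse(measurements, variances):
    """Inverse-variance weighted average of scalar measurements.

    Less-certain measurements (larger variance) receive smaller weights;
    the variance of the fused estimate is 1 / sum(1/var_i), which is
    never larger than the smallest input variance.
    """
    inv = [1.0 / v for v in variances]
    total = sum(inv)
    estimate = sum(m * w for m, w in zip(measurements, inv)) / total
    return estimate, 1.0 / total
```

For example, fusing a measurement of 10.0 with variance 1.0 and a measurement of 14.0 with variance 3.0 yields an estimate of 11.0 (pulled toward the more certain value) with fused variance 0.75.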
Rotorcraft Smoothing Via Linear Time Periodic Methods
2007-07-01
[Front-matter and table-of-contents residue; recoverable section titles: Optimal Control Methodology for Rotor Vibration Smoothing; Mathematical Foundations of Linear Time Periodic Systems; The Maximum Likelihood Estimator; The Cramer-Rao Inequality. A surviving abstract fragment notes that rotor vibration reduction methods during the 1980s to late 1990s began to adopt a mathematical…]
Focus adjustment effects on visual acuity and oculomotor balance with aviator night vision displays.
Kotulak, J C; Morse, S E
1994-04-01
Sixteen U.S. Army aviators, who were given training on focus adjustment technique with aviator night vision goggles (NVGs), showed an improvement in visual acuity with focus adjustment compared to a fixed infinity-focus control. The long-term effect of focus adjustment on vision was not measured; however, adjustment accuracy was found to be generally within acceptable limits based on computer modeling and available physiologic data. Fixed-focus eyepieces that are set to a low minus power may partially compensate for instrument myopia, but they may not optimize visual acuity to the extent that adjustable-focus eyepieces do. Eyepiece adjustment proficiency with present night vision devices can be improved through training that emphasizes focusing to the least possible minus dioptric power. Future night vision displays can minimize focus misadjustment by providing a tactile zero marking, a limited dioptric adjustment range, and a focusing knob capable of finer adjustment than is available with current NVGs.
20 CFR 229.51 - Adjustment of age reduction.
Code of Federal Regulations, 2010 CFR
2010-04-01
... 20 Employees' Benefits 1 2010-04-01 2010-04-01 false Adjustment of age reduction. 229.51 Section... age reduction. (a) General. If an age reduced employee or spouse overall minimum benefit is not paid for certain months before the employee or spouse attains retirement age, or the employee...
20 CFR 229.51 - Adjustment of age reduction.
Code of Federal Regulations, 2014 CFR
2014-04-01
... 20 Employees' Benefits 1 2014-04-01 2012-04-01 true Adjustment of age reduction. 229.51 Section... age reduction. (a) General. If an age reduced employee or spouse overall minimum benefit is not paid for certain months before the employee or spouse attains retirement age, or the employee...
20 CFR 229.51 - Adjustment of age reduction.
Code of Federal Regulations, 2011 CFR
2011-04-01
... 20 Employees' Benefits 1 2011-04-01 2011-04-01 false Adjustment of age reduction. 229.51 Section... age reduction. (a) General. If an age reduced employee or spouse overall minimum benefit is not paid for certain months before the employee or spouse attains retirement age, or the employee...
20 CFR 229.51 - Adjustment of age reduction.
Code of Federal Regulations, 2012 CFR
2012-04-01
... 20 Employees' Benefits 1 2012-04-01 2012-04-01 false Adjustment of age reduction. 229.51 Section... age reduction. (a) General. If an age reduced employee or spouse overall minimum benefit is not paid for certain months before the employee or spouse attains retirement age, or the employee...
20 CFR 229.51 - Adjustment of age reduction.
Code of Federal Regulations, 2013 CFR
2013-04-01
... 20 Employees' Benefits 1 2013-04-01 2012-04-01 true Adjustment of age reduction. 229.51 Section... age reduction. (a) General. If an age reduced employee or spouse overall minimum benefit is not paid for certain months before the employee or spouse attains retirement age, or the employee...
Lower Esophageal Thickening Due to a Laparoscopic Adjustable Gastric Band.
Makker, Jitin; Conklin, Jeffrey; Muthusamy, V Raman
2015-10-01
Laparoscopic adjustable gastric band (LAGB) is a surgical device to treat obesity that is widely used and generally considered to be safe. We report an adverse event related to the physiological and mechanical changes that occur after LAGB placement, namely chronic obstruction resulting in marked lower esophageal thickening.
Lower Esophageal Thickening Due to a Laparoscopic Adjustable Gastric Band
Makker, Jitin; Conklin, Jeffrey
2015-01-01
Laparoscopic adjustable gastric band (LAGB) is a surgical device to treat obesity that is widely used and generally considered to be safe. We report an adverse event related to the physiological and mechanical changes that occur after LAGB placement, namely chronic obstruction resulting in marked lower esophageal thickening. PMID:26504870
24 CFR 200.16 - Project mortgage adjustments and reductions.
Code of Federal Regulations, 2011 CFR
2011-04-01
... 24 Housing and Urban Development 2 2011-04-01 2011-04-01 false Project mortgage adjustments and... HOUSING AND URBAN DEVELOPMENT GENERAL INTRODUCTION TO FHA PROGRAMS Requirements for Application... Programs; and Continuing Eligibility Requirements for Existing Projects Eligible Mortgage § 200.16...
24 CFR 200.16 - Project mortgage adjustments and reductions.
Code of Federal Regulations, 2010 CFR
2010-04-01
... 24 Housing and Urban Development 2 2010-04-01 2010-04-01 false Project mortgage adjustments and... HOUSING AND URBAN DEVELOPMENT GENERAL INTRODUCTION TO FHA PROGRAMS Requirements for Application... Programs; and Continuing Eligibility Requirements for Existing Projects Eligible Mortgage § 200.16...
The Impact of Structural Adjustment on Training Needs.
ERIC Educational Resources Information Center
Lucas, Robert E. B.
1994-01-01
During structural adjustment, training/retraining for unemployed persons is often poorly conceived. One reason is the lack of reliable ways to predict future skill requirements. Retraining should be kept fairly general to enable a wide range of potential jobs. (SK)
Exploring the Adjustment Problems among International Graduate Students in Hawaii
ERIC Educational Resources Information Center
Yang, Stephanie; Salzman, Michael; Yang, Cheng-Hong
2015-01-01
Due to the advance of technology, American society has become more diverse. A large population of international students in the U.S. faces unique issues. According to the existing literature, the top-rated anxieties an international student faces are generally caused by language anxiety, cultural adjustments, and learning differences and barriers.…
42 CFR 403.750 - Estimate of expenditures and adjustments.
Code of Federal Regulations, 2014 CFR
2014-10-01
... 42 Public Health 2 2014-10-01 2014-10-01 false Estimate of expenditures and adjustments. 403.750 Section 403.750 Public Health CENTERS FOR MEDICARE & MEDICAID SERVICES, DEPARTMENT OF HEALTH AND HUMAN SERVICES GENERAL PROVISIONS SPECIAL PROGRAMS AND PROJECTS Religious Nonmedical Health Care...
LINPACK. Simultaneous Linear Algebraic Equations
Miller, M.A.
1990-05-01
LINPACK is a collection of FORTRAN subroutines which analyze and solve various classes of systems of simultaneous linear algebraic equations. The collection deals with general, banded, symmetric indefinite, symmetric positive definite, triangular, and tridiagonal square matrices, as well as with least squares problems and the QR and singular value decompositions of rectangular matrices. A subroutine-naming convention is employed in which each subroutine name consists of five letters which represent a coded specification (TXXYY) of the computation done by that subroutine. The first letter, T, indicates the matrix data type. Standard FORTRAN allows the use of three such types: S REAL, D DOUBLE PRECISION, and C COMPLEX. In addition, some FORTRAN systems allow a double-precision complex type: Z COMPLEX*16. The second and third letters of the subroutine name, XX, indicate the form of the matrix or its decomposition: GE General, GB General band, PO Positive definite, PP Positive definite packed, PB Positive definite band, SI Symmetric indefinite, SP Symmetric indefinite packed, HI Hermitian indefinite, HP Hermitian indefinite packed, TR Triangular, GT General tridiagonal, PT Positive definite tridiagonal, CH Cholesky decomposition, QR Orthogonal-triangular decomposition, SV Singular value decomposition. The final two letters, YY, indicate the computation done by the particular subroutine: FA Factor, CO Factor and estimate condition, SL Solve, DI Determinant and/or inverse and/or inertia, DC Decompose, UD Update, DD Downdate, EX Exchange. The LINPACK package also includes a set of routines to perform basic vector operations, called the Basic Linear Algebra Subprograms (BLAS).
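The TXXYY naming convention spelled out above can be made concrete with a small decoder. The lookup tables below are transcribed directly from the abstract's own lists; the `decode` function itself is merely an illustration.

```python
# Letter codes transcribed from the LINPACK naming convention (TXXYY).
TYPES = {"S": "REAL", "D": "DOUBLE PRECISION", "C": "COMPLEX", "Z": "COMPLEX*16"}
FORMS = {
    "GE": "General", "GB": "General band",
    "PO": "Positive definite", "PP": "Positive definite packed",
    "PB": "Positive definite band",
    "SI": "Symmetric indefinite", "SP": "Symmetric indefinite packed",
    "HI": "Hermitian indefinite", "HP": "Hermitian indefinite packed",
    "TR": "Triangular", "GT": "General tridiagonal",
    "PT": "Positive definite tridiagonal",
    "CH": "Cholesky decomposition",
    "QR": "Orthogonal-triangular decomposition",
    "SV": "Singular value decomposition",
}
OPS = {
    "FA": "Factor", "CO": "Factor and estimate condition", "SL": "Solve",
    "DI": "Determinant and/or inverse and/or inertia", "DC": "Decompose",
    "UD": "Update", "DD": "Downdate", "EX": "Exchange",
}

def decode(name):
    """Split a five-letter LINPACK routine name into (type, form, operation)."""
    t, xx, yy = name[0], name[1:3], name[3:5]
    return TYPES[t], FORMS[xx], OPS[yy]
```

For example, `decode("SGEFA")` yields ("REAL", "General", "Factor"), i.e. the single-precision LU factorization of a general matrix.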
LINPACK. Simultaneous Linear Algebraic Equations
Dongarra, J.J.
1982-05-02
LINPACK is a collection of FORTRAN subroutines which analyze and solve various classes of systems of simultaneous linear algebraic equations. The collection deals with general, banded, symmetric indefinite, symmetric positive definite, triangular, and tridiagonal square matrices, as well as with least squares problems and the QR and singular value decompositions of rectangular matrices. A subroutine-naming convention is employed in which each subroutine name consists of five letters which represent a coded specification (TXXYY) of the computation done by that subroutine. The first letter, T, indicates the matrix data type. Standard FORTRAN allows the use of three such types: S REAL, D DOUBLE PRECISION, and C COMPLEX. In addition, some FORTRAN systems allow a double-precision complex type: Z COMPLEX*16. The second and third letters of the subroutine name, XX, indicate the form of the matrix or its decomposition: GE General, GB General band, PO Positive definite, PP Positive definite packed, PB Positive definite band, SI Symmetric indefinite, SP Symmetric indefinite packed, HI Hermitian indefinite, HP Hermitian indefinite packed, TR Triangular, GT General tridiagonal, PT Positive definite tridiagonal, CH Cholesky decomposition, QR Orthogonal-triangular decomposition, SV Singular value decomposition. The final two letters, YY, indicate the computation done by the particular subroutine: FA Factor, CO Factor and estimate condition, SL Solve, DI Determinant and/or inverse and/or inertia, DC Decompose, UD Update, DD Downdate, EX Exchange. The LINPACK package also includes a set of routines to perform basic vector operations called the Basic Linear Algebra Subprograms (BLAS).
Kocalevent, Rüya-Daniela; Mierke, Annett; Danzer, Gerhard; Klapp, Burghard F.
2014-01-01
Objective Adjustment disorders are re-conceptualized in the DSM-5 as a stress-related disorder; however, beyond the impact of an identifiable stressor, the specification of a stress concept remains unclear. This study is the first to examine an existing stress model from the general population in patients diagnosed with adjustment disorders, using a longitudinal design. Methods The study sample consisted of 108 patients consecutively admitted for adjustment disorders. Associations of stress perception, emotional distress, resources, and mental health were measured at three time points: the outpatients' presentation, admission for inpatient treatment, and discharge from the hospital. To evaluate a longitudinal stress model of ADs, we examined whether stress at admission predicted mental health at each of the three time points using multiple linear regressions and structural equation modeling. A series of repeated-measures one-way analyses of variance (rANOVAs) was performed to assess change over time. Results Significant within-participant changes from baseline were observed between hospital admission and discharge with regard to mental health, stress perception, and emotional distress (p<0.001). Stress perception explained nearly half of the total variance (44%) of mental health at baseline; the adjusted R² increased (0.48) when emotional distress (i.e., depressive symptoms) was taken into account. The best predictor of mental health at discharge was the level of emotional distress (i.e., anxiety level) at baseline (β = −0.23, R²corr = 0.56, p<0.001). With a CFI of 0.86 and an NFI of 0.86, the fit indices did not allow for acceptance of the stress model (CMIN/df = 15.26; RMSEA = 0.21). Conclusions Stress perception is an important predictor in adjustment disorders, and mental health-related treatment goals are dependent on and significantly impacted by stress perception and emotional distress. PMID:24825165
Wiedemann, H.
1981-11-01
Since no linear colliders have been built yet, it is difficult to know at what energy the linear cost scaling of linear colliders drops below the quadratic scaling of storage rings. There is, however, no doubt that a linear collider facility for a center-of-mass energy above, say, 500 GeV is significantly cheaper than an equivalent storage ring. In order to make the linear collider principle feasible at very high energies, a number of problems have to be solved. There are two kinds of problems: one related to the feasibility of the principle, and the other associated with minimizing the cost of constructing and operating such a facility. This lecture series describes the problems and possible solutions. Since the real test of a principle requires the construction of a prototype, I will in the last chapter describe the SLC project at the Stanford Linear Accelerator Center.
NASA Technical Reports Server (NTRS)
Holloway, Sidney E., III (Inventor); Crossley, Edward A., Jr. (Inventor); Jones, Irby W. (Inventor); Miller, James B. (Inventor); Davis, C. Calvin (Inventor); Behun, Vaughn D. (Inventor); Goodrich, Lewis R., Sr. (Inventor)
1992-01-01
A linear mass actuator includes an upper housing and a lower housing connectable to each other and having a central passageway passing axially through a mass that is linearly movable in the central passageway. Rollers mounted in the upper and lower housings in frictional engagement with the mass translate the mass linearly in the central passageway, and drive motors, operatively coupled to the rollers, rotate the rollers and drive the mass axially in the central passageway.
Fault tolerant linear actuator
Tesar, Delbert
2004-09-14
In varying embodiments, the fault tolerant linear actuator of the present invention is a new and improved linear actuator with fault tolerance and positional control that may incorporate velocity summing, force summing, or a combination of the two. In one embodiment, the invention offers a velocity summing arrangement with a differential gear between two prime movers driving a cage, which then drives a linear spindle screw transmission. Other embodiments feature two prime movers driving separate linear spindle screw transmissions, one internal and one external, in a totally concentric and compact integrated module.
Linear phase compressive filter
McEwan, Thomas E.
1995-01-01
A phase linear filter for soliton suppression is in the form of a laddered series of stages of non-commensurate low pass filters with each low pass filter having a series coupled inductance (L) and a reverse biased, voltage dependent varactor diode, to ground which acts as a variable capacitance (C). L and C values are set to levels which correspond to a linear or conventional phase linear filter. Inductance is mapped directly from that of an equivalent nonlinear transmission line and capacitance is mapped from the linear case using a large signal equivalent of a nonlinear transmission line.
Linear phase compressive filter
McEwan, T.E.
1995-06-06
A phase linear filter for soliton suppression is in the form of a laddered series of stages of non-commensurate low pass filters with each low pass filter having a series coupled inductance (L) and a reverse biased, voltage dependent varactor diode, to ground which acts as a variable capacitance (C). L and C values are set to levels which correspond to a linear or conventional phase linear filter. Inductance is mapped directly from that of an equivalent nonlinear transmission line and capacitance is mapped from the linear case using a large signal equivalent of a nonlinear transmission line. 2 figs.
Chen, Qingwen; Narayanan, Kumaran
2015-01-01
Recombineering is a powerful genetic engineering technique based on homologous recombination that can be used to accurately modify DNA independent of its sequence or size. One novel application of recombineering is the assembly of linear BACs in E. coli that can replicate autonomously as linear plasmids. A circular BAC is inserted with a short telomeric sequence from phage N15, which is subsequently cut and rejoined by the phage protelomerase enzyme to generate a linear BAC with terminal hairpin telomeres. Telomere-capped linear BACs are protected against exonuclease attack both in vitro and in vivo in E. coli cells and can replicate stably. Here we describe step-by-step protocols to linearize any BAC clone by recombineering, including inserting and screening for presence of the N15 telomeric sequence, linearizing BACs in vivo in E. coli, extracting linear BACs, and verifying the presence of hairpin telomere structures. Linear BACs may be useful for functional expression of genomic loci in cells, maintenance of linear viral genomes in their natural conformation, and for constructing innovative artificial chromosome structures for applications in mammalian and plant cells.
Inflation Adjustments for Defense Acquisition
2014-10-01
Harmon, Bruce R.; Levine, Daniel B.; Horowitz, Stanley A. (Project Leader). Institute for Defense Analyses, Alexandria, Virginia; IDA Document D-5112. The focus of the study is on aircraft procurement. By way of terminology, "cost index," "price index," and "deflator" are used
Adjustable extender for instrument module
Sevec, J.B.; Stein, A.D.
1975-11-01
A blank extender module used to mount an instrument module in front of its console for repair or test purposes has been equipped with a rotatable mount and means for locking the mount at various angles of rotation for easy accessibility. The rotatable mount includes a horizontal conduit supported by bearings within the blank module. The conduit is spring-biased in a retracted position within the blank module and in this position a small gear mounted on the conduit periphery is locked by a fixed pawl. The conduit and instrument mount can be pulled into an extended position with the gear clearing the pawl to permit rotation and adjustment of the instrument.
Systems of Inhomogeneous Linear Equations
NASA Astrophysics Data System (ADS)
Scherer, Philipp O. J.
Many problems in physics, and especially computational physics, involve systems of linear equations which arise, e.g., from linearization of a general nonlinear problem or from discretization of differential equations. If the dimension of the system is not too large, standard methods like Gaussian elimination or QR decomposition are sufficient. Systems with a tridiagonal matrix are important for cubic spline interpolation and numerical second derivatives. They can be solved very efficiently with a specialized Gaussian elimination method. Practical applications often involve very large dimensions and require iterative methods. Convergence of the Jacobi and Gauss-Seidel methods is slow and can be improved by relaxation or over-relaxation. An alternative for large systems is the method of conjugate gradients.
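The specialized Gaussian elimination for tridiagonal systems mentioned in this chapter summary is commonly known as the Thomas algorithm; it runs in O(n) time and storage. The sketch below is the standard textbook version, not code taken from the chapter.

```python
def thomas(a, b, c, d):
    """Solve a tridiagonal system A x = d by specialized Gaussian elimination.

    a: sub-diagonal (length n-1), b: main diagonal (length n),
    c: super-diagonal (length n-1), d: right-hand side (length n).
    Forward sweep eliminates the sub-diagonal; back-substitution recovers x.
    """
    n = len(b)
    cp = [0.0] * n  # modified super-diagonal
    dp = [0.0] * n  # modified right-hand side
    cp[0] = c[0] / b[0]
    dp[0] = d[0] / b[0]
    for i in range(1, n):
        m = b[i] - a[i - 1] * cp[i - 1]
        cp[i] = c[i] / m if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i - 1] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x
```

For the system with diagonal [2, 2, 2], off-diagonals [1, 1], and right-hand side [4, 8, 8], the solver returns [1, 2, 3], which can be checked by direct multiplication.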
Richter, B.
1985-12-01
A report is given on the goals and progress of the SLAC Linear Collider. The status of the machine and the detectors are discussed and an overview is given of the physics which can be done at this new facility. Some ideas on how (and why) large linear colliders of the future should be built are given.
Linear Equations: Equivalence = Success
ERIC Educational Resources Information Center
Baratta, Wendy
2011-01-01
The ability to solve linear equations sets students up for success in many areas of mathematics and other disciplines requiring formula manipulations. There are many reasons why solving linear equations is a challenging skill for students to master. One major barrier for students is the inability to interpret the equals sign as anything other than…
Linearization of Robot Manipulators
NASA Technical Reports Server (NTRS)
Kreutz, Kenneth
1987-01-01
Four nonlinear control schemes are shown to be equivalent. The report discusses the theory of nonlinear feedback control of a robot manipulator, with emphasis on control schemes that make the manipulator input and output behave like a decoupled linear system. The approach, called "exact external linearization," contributes to efforts to control end-effector trajectories, positions, and orientations.
Equity flotation cost adjustments in utilities' cost of service
Bierman, H. Jr.; Hass, J.E.
1984-03-01
Recovery of the unavoidable costs of issuing new shares of stock is generally agreed to be appropriate in determining utility revenue requirements. This article suggests that the methods by which that is usually accomplished are of questionable accuracy. The conventional practice of adjusting the allowed rate of return on common equity is examined, and an improved adjustment formulation is presented. Acknowledging that application of the formula remains subject to considerable error, however, the authors propose yet another solution. Capitalization of flotation costs as intangible assets is suggested as a way of more accurately factoring such expenses into tariff determinations. 6 references.
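The conventional rate-of-return adjustment this article questions is, in textbook form, the discounted-cash-flow cost of equity with the stock price reduced by the flotation-cost fraction. The sketch below shows only that standard formula, as an assumption about the "conventional practice" referenced; it does not implement the authors' proposed capitalization of flotation costs as intangible assets.

```python
def dcf_cost_of_equity(dividend, price, growth, flotation=0.0):
    """Textbook DCF cost of equity with a flotation-cost adjustment:

        k = D1 / (P0 * (1 - f)) + g

    where D1 is next-period dividend, P0 the stock price, g the dividend
    growth rate, and f the flotation-cost fraction of the issue price.
    Setting f = 0 recovers the unadjusted Gordon-growth cost of equity.
    """
    return dividend / (price * (1.0 - flotation)) + growth
```

With a $2 dividend, a $40 price, and 5% growth, the unadjusted cost of equity is 10%; a 5% flotation fraction raises it to about 10.26%, illustrating how the adjustment inflates the allowed return.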
Adjustable Josephson Coupler for Transmon Qubit Measurement
NASA Astrophysics Data System (ADS)
Jeffrey, Evan
2015-03-01
Transmon qubits are measured via a dispersive interaction with a linear resonator. In order to be scalable, this measurement must be fast, accurate, and must not disrupt the state of the qubit. Speed is of particular importance in a scalable architecture with error correction, as the measurement accounts for a substantial portion of the cycle time, and waiting time associated with measurement is a major source of decoherence. We have found that measurement speed and accuracy can be improved by driving the qubit beyond the critical photon number n_crit = Δ²/4g² by a factor of 2-3 without compromising the QND nature of the measurement. While it is expected that such a strong drive will cause qubit state transitions, we find that as long as the readout is sufficiently fast, those transitions are negligible; however, they grow rapidly with time and are not described by a simple rate. Measuring in this regime requires parametric amplifiers with very high saturation power, on the order of -105 dBm, in order to avoid losing SNR when increasing the power. It also requires a Purcell filter to allow fast ring-up and ring-down. Adjustable couplers can be used to further increase the measurement performance by switching the dispersive interaction on and off much faster than the cavity ring-down time. This technique can also be used to investigate the dynamics of the qubit-cavity interaction beyond the weak dispersive limit (n_cavity ≥ n_crit), which is not easily accessible to standard dispersive measurement due to the cavity time constant.
Linear models: permutation methods
Cade, B.S.; Everitt, B.S.; Howell, D.C.
2005-01-01
Permutation tests (see Permutation Based Inference) for the linear model have applications in behavioral studies when traditional parametric assumptions about the error term in a linear model are not tenable. Improved validity of Type I error rates can be achieved with properly constructed permutation tests. Perhaps more importantly, increased statistical power, improved robustness to effects of outliers, and detection of alternative distributional differences can be achieved by coupling permutation inference with alternative linear model estimators. For example, it is well known that estimates of the mean in a linear model are extremely sensitive to even a single outlying value of the dependent variable compared to estimates of the median [7, 19]. Traditionally, linear modeling focused on estimating changes in the center of distributions (means or medians). However, quantile regression allows distributional changes to be estimated in all or any selected part of a distribution of responses, providing a more complete statistical picture that has relevance to many biological questions [6]...
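A permutation test of a regression slope can be sketched in a few lines of standard-library Python. This is a generic illustration of the approach described above (permuting the response relative to the predictor under the null of no association), not the authors' implementation:

```python
import random
import statistics

def slope(x, y):
    """Ordinary least-squares slope of y on x."""
    mx, my = statistics.fmean(x), statistics.fmean(y)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    sxx = sum((xi - mx) ** 2 for xi in x)
    return sxy / sxx

def permutation_p_value(x, y, n_perm=2000, seed=0):
    """Two-sided permutation p-value for the null of no linear
    association: shuffle y relative to x and count how often the
    permuted |slope| is at least as extreme as the observed |slope|."""
    rng = random.Random(seed)
    observed = abs(slope(x, y))
    y = list(y)  # work on a copy so the caller's data is untouched
    extreme = 0
    for _ in range(n_perm):
        rng.shuffle(y)
        if abs(slope(x, y)) >= observed:
            extreme += 1
    # Add-one correction keeps the p-value strictly positive.
    return (extreme + 1) / (n_perm + 1)
```

Swapping `slope` for a median-based or quantile-regression estimator is exactly the coupling of permutation inference with alternative estimators that the abstract advocates.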
NASA Technical Reports Server (NTRS)
Clancy, John P.
1988-01-01
The object of the invention is to provide a mechanical force actuator which is lightweight and easily manipulated and which uses linear motion for push or pull forces while maintaining a constant overall length. The mechanical force-producing mechanism comprises a linear actuator mechanism and a linear motion shaft mounted parallel to one another. The linear motion shaft is connected to a stationary (fixed) housing and to a movable housing, where the movable housing is mechanically actuated through the actuator mechanism by either manual or motor means. The housings are adapted to releasably receive a variety of jaw or pulling elements adapted for clamping or prying action. The stationary housing is adapted to be pivotally mounted so that it can be angled to allow the tool to adapt to skewed interfaces. The actuator mechanism is operated by a gear train to obtain its linear motion.
Adjusting the Contour of Reflector Panels
NASA Technical Reports Server (NTRS)
Palmer, W. B.; Giebler, M. M.
1984-01-01
Postfabrication adjustment of contour of panels for reflector, such as parabolic reflector for radio antennas, possible with simple mechanism consisting of threaded stud, two nuts, and flexure. Contours adjusted manually.
48 CFR 1450.103 - Contract adjustments.
Code of Federal Regulations, 2010 CFR
2010-10-01
... 48 Federal Acquisition Regulations System 5 2010-10-01 2010-10-01 false Contract adjustments. 1450.103 Section 1450.103 Federal Acquisition Regulations System DEPARTMENT OF THE INTERIOR CONTRACT... Contract adjustments....
First Year Adjustment in the Secondary School.
ERIC Educational Resources Information Center
Loosemore, Jean Ann
1978-01-01
This study investigated the relationship between adjustment to secondary school and 17 cognitive and noncognitive variables, including intelligence (verbal and nonverbal reasoning), academic achievement, extraversion-introversion, stable/unstable, social adjustment, endeavor, age, sex, and school form. (CP)
49 CFR 393.53 - Automatic brake adjusters and brake adjustment indicators.
Code of Federal Regulations, 2013 CFR
2013-10-01
... brake adjustment indicators. (a) Automatic brake adjusters (hydraulic brake systems). Each commercial motor vehicle manufactured on or after October 20, 1993, and equipped with a hydraulic brake...
7 CFR 1744.64 - Budget adjustment.
Code of Federal Regulations, 2014 CFR
2014-01-01
... 7 Agriculture 11 2014-01-01 2014-01-01 false Budget adjustment. 1744.64 Section 1744.64... Disbursement of Funds § 1744.64 Budget adjustment. (a) If more funds are required than are available in a budget account, the borrower may request RUS's approval of a budget adjustment to use funds from...
24 CFR 5.611 - Adjusted income.
Code of Federal Regulations, 2011 CFR
2011-04-01
... 24 Housing and Urban Development 1 2011-04-01 2011-04-01 false Adjusted income. 5.611 Section 5... Serving Persons with Disabilities: Family Income and Family Payment; Occupancy Requirements for Section 8 Project-Based Assistance Family Income § 5.611 Adjusted income. Adjusted income means annual income...
12 CFR 313.55 - Salary adjustments.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 12 Banks and Banking 4 2011-01-01 2011-01-01 false Salary adjustments. 313.55 Section 313.55 Banks and Banking FEDERAL DEPOSIT INSURANCE CORPORATION PROCEDURE AND RULES OF PRACTICE PROCEDURES FOR CORPORATE DEBT COLLECTION Salary Offset § 313.55 Salary adjustments. Any negative adjustment to pay...
12 CFR 1780.80 - Inflation adjustments.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 12 Banks and Banking 7 2011-01-01 2011-01-01 false Inflation adjustments. 1780.80 Section 1780.80... DEVELOPMENT RULES OF PRACTICE AND PROCEDURE RULES OF PRACTICE AND PROCEDURE Civil Money Penalty Inflation Adjustments § 1780.80 Inflation adjustments. The maximum amount of each civil money penalty within...
34 CFR 36.2 - Penalty adjustment.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 34 Education 1 2010-07-01 2010-07-01 false Penalty adjustment. 36.2 Section 36.2 Education Office of the Secretary, Department of Education ADJUSTMENT OF CIVIL MONETARY PENALTIES FOR INFLATION § 36.2..., Section 36.2—Civil Monetary Penalty Inflation Adjustments Statute Description New maximum (and minimum,...
26 CFR 1.56-0 - Table of contents to § 1.56-1, adjustment for book income of corporations.
Code of Federal Regulations, 2014 CFR
2014-04-01
... book income of corporations. 1.56-0 Section 1.56-0 Internal Revenue INTERNAL REVENUE SERVICE..., adjustment for book income of corporations. (a) Computation of the book income adjustment. (1) In general. (2) Taxpayers subject to the book income adjustment. (3) Consolidated returns. (4) Examples. (b) Adjusted...
Linear ubiquitination in immunity.
Shimizu, Yutaka; Taraborrelli, Lucia; Walczak, Henning
2015-07-01
Linear ubiquitination is a post-translational protein modification recently discovered to be crucial for innate and adaptive immune signaling. The function of linear ubiquitin chains is regulated at multiple levels: generation, recognition, and removal. These chains are generated by the linear ubiquitin chain assembly complex (LUBAC), the only known ubiquitin E3 capable of forming the linear ubiquitin linkage de novo. LUBAC is not only relevant for activation of nuclear factor-κB (NF-κB) and mitogen-activated protein kinases (MAPKs) in various signaling pathways, but importantly, it also regulates cell death downstream of immune receptors capable of inducing this response. Recognition of the linear ubiquitin linkage is specifically mediated by certain ubiquitin receptors, which is crucial for translating the modification into the intended signaling outputs. LUBAC deficiency results in attenuated gene activation and increased cell death, causing pathologic conditions in both mice and humans. Removal of ubiquitin chains is mediated by deubiquitinases (DUBs). Two of them, OTULIN and CYLD, are constitutively associated with LUBAC. Here, we review the current knowledge on linear ubiquitination in immune signaling pathways and the biochemical mechanisms by which linear polyubiquitin exerts its functions distinctly from those of other ubiquitin linkage types.
On the unnecessary ubiquity of hierarchical linear modeling.
McNeish, Daniel; Stapleton, Laura M; Silverman, Rebecca D
2017-03-01
In psychology and the behavioral sciences generally, the hierarchical linear model (HLM) and its extensions for discrete outcomes are popular methods for modeling clustered data. HLM and its discrete-outcome extensions, however, are certainly not the only methods available to model clustered data. Although other methods exist and are widely implemented in other disciplines, psychologists have yet to consider these methods in substantive studies. This article compares and contrasts HLM with alternative methods, including generalized estimating equations and cluster-robust standard errors. These alternative methods do not model random effects and thus make fewer assumptions; they are interpreted identically to single-level methods, with the benefit that estimates are adjusted to reflect the clustering of observations. Situations where these alternative methods may be advantageous are discussed, including research questions where random effects are and are not required, cases when random effects can change the interpretation of regression coefficients, challenges of modeling random effects with discrete outcomes, and examples of published psychology articles using HLM that might have benefited from the alternative methods. Illustrative examples demonstrate the advantages of the alternative methods and also when HLM would be the preferred method.
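The cluster-robust (sandwich) idea the article contrasts with HLM can be illustrated in the simplest possible case: an intercept-only regression, where the point estimate is just the sample mean and the sandwich variance sums residuals within each cluster before squaring. This is a hedged standard-library sketch of the idea, not the article's code; real analyses would use a full regression implementation:

```python
import statistics

def cluster_robust_se_of_mean(values, clusters):
    """Cluster-robust (sandwich) standard error of the sample mean,
    i.e. of the intercept in an intercept-only regression. Residuals
    are summed within each cluster, then those cluster sums are squared
    and accumulated, so correlated errors within a cluster are not
    treated as independent information."""
    n = len(values)
    mean = statistics.fmean(values)
    resid_by_cluster = {}
    for v, g in zip(values, clusters):
        resid_by_cluster[g] = resid_by_cluster.get(g, 0.0) + (v - mean)
    meat = sum(s * s for s in resid_by_cluster.values())
    return (meat / (n * n)) ** 0.5
```

When every observation is its own cluster this reduces to the ordinary heteroskedasticity-robust standard error of the mean; as observations are grouped into correlated clusters, the standard error adjusts accordingly without any random-effects assumption.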
The dyadic adjustment of female-to-male transsexuals.
Fleming, M; MacGowan, B; Costos, D
1985-02-01
Dyadic adjustment, sexual activities, and marital stability in the relationships of female-to-male transsexuals and their spouses were examined. Participants were 22 female-to-male transsexuals who had undergone some form of surgery to alter their anatomical sex, their spouses, and a control group of married or cohabitating nontranssexual men and women. Participants were administered the Dyadic Adjustment Scale and additional items to assess quantitatively their marital relationships. The transsexuals and their spouses were also asked open-ended interview questions concerning marital and life adjustments. Generally, the transsexuals and their spouses reported good and mutually satisfying interpersonal relationships that are in many ways comparable to those of the matched control group. These findings lend support to the previous clinical interview studies that have reported that female-to-male transsexuals form stable and enduring intimate relationships.
Optimal Linear Control
Harvey, C. A.; Safonov, M. G.; Stein, G.; Doyle, J. C.
1979-12-01
Honeywell Systems & Research Center, 2600 Ridgway Parkway, Minneapolis. Characterizations of optimal linear controls have been derived, from which guides for selecting the structure of the control system and the weights in...
NASA Technical Reports Server (NTRS)
Studer, P. A. (Inventor)
1983-01-01
A linear magnetic bearing system having electromagnetic vernier flux paths in shunt relation with permanent magnets, so that the vernier flux does not traverse the permanent magnet, is described. Novelty is believed to reside in providing a linear magnetic bearing having electromagnetic flux paths that bypass high reluctance permanent magnets. Particular novelty is believed to reside in providing a linear magnetic bearing with a pair of axially spaced elements having electromagnets for establishing vernier x and y axis control. The magnetic bearing system has possible use in connection with a long life reciprocating cryogenic refrigerator that may be used on the space shuttle.
Reversibility of a Symmetric Linear Cellular Automata
NASA Astrophysics Data System (ADS)
Del Rey, A. Martín; Sánchez, G. Rodríguez
The characterization of the size of the cellular space of a particular type of reversible symmetric linear cellular automaton is introduced in this paper. Specifically, it is shown that those symmetric linear cellular automata with 2k + 1 cells whose transition matrix is a k-diagonal square band matrix with nonzero entries equal to 1 are reversible. Furthermore, in this case the inverse cellular automata are explicitly computed. Moreover, the reversibility condition is also studied for a general number of cells.
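Reading "k-diagonal square band matrix" as a symmetric band of ones of half-width k (an assumption for illustration; the paper's exact convention may differ), the reversibility claim amounts to invertibility of the transition matrix over GF(2), which can be checked with a short Gaussian-elimination sketch:

```python
def band_matrix(n, halfwidth):
    """n x n symmetric band matrix over GF(2) with ones on the main
    diagonal and on the `halfwidth` diagonals to each side."""
    return [[1 if abs(i - j) <= halfwidth else 0 for j in range(n)]
            for i in range(n)]

def invertible_gf2(m):
    """Gauss-Jordan elimination over GF(2); True iff m has full rank."""
    m = [row[:] for row in m]          # work on a copy
    n = len(m)
    for col in range(n):
        pivot = next((r for r in range(col, n) if m[r][col]), None)
        if pivot is None:
            return False               # no pivot: rank-deficient
        m[col], m[pivot] = m[pivot], m[col]
        for r in range(n):
            if r != col and m[r][col]:
                m[r] = [a ^ b for a, b in zip(m[r], m[col])]
    return True

# Under the band-of-ones reading, a (2k+1)-cell automaton has an
# invertible (hence reversible) transition matrix over GF(2):
for k in (1, 2, 3):
    assert invertible_gf2(band_matrix(2 * k + 1, k))
```

Invertibility over GF(2) means every configuration has exactly one predecessor, and the inverse matrix gives the inverse automaton explicitly, as the paper computes.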
Gain optimization with non-linear controls
NASA Technical Reports Server (NTRS)
Slater, G. L.; Kandadai, R. D.
1984-01-01
An algorithm has been developed for the analysis and design of controls for non-linear systems. The technical approach is to use statistical linearization to model the non-linear dynamics of a system by a quasi-Gaussian model. A covariance analysis is performed to determine the behavior of the dynamical system and a quadratic cost function. Expressions for the cost function and its derivatives are determined so that numerical optimization techniques can be applied to find optimal feedback laws. The primary application in this paper is the design of controls for nominally linear systems in which the controls are saturated or limited by fixed constraints. The analysis is general, however; numerical computation requires only that the specific non-linearity be considered in the analysis.
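For the saturation case highlighted above, the statistical-linearization step has a classical closed form: under a zero-mean Gaussian input with standard deviation σ, a unit-slope saturation at ±δ is replaced by the equivalent gain K = erf(δ/(σ√2)). A standard-library sketch checking this against Monte Carlo (illustrative, not the paper's algorithm):

```python
import math
import random

def sat(u, delta):
    """Unit-slope saturation limited to +/- delta."""
    return max(-delta, min(delta, u))

def equivalent_gain(sigma, delta):
    """Statistical-linearization (random-input describing function)
    gain of the saturation for zero-mean Gaussian input with std sigma:
    K = E[u * sat(u)] / E[u^2] = erf(delta / (sigma * sqrt(2)))."""
    return math.erf(delta / (sigma * math.sqrt(2.0)))

# Monte Carlo check of the closed form.
rng = random.Random(1)
sigma, delta = 2.0, 1.5
samples = [rng.gauss(0.0, sigma) for _ in range(200_000)]
k_mc = sum(u * sat(u, delta) for u in samples) / sum(u * u for u in samples)
k_exact = equivalent_gain(sigma, delta)
```

Replacing the saturation by this σ-dependent gain is what turns the non-linear loop into the quasi-Gaussian model on which the covariance analysis and cost-function optimization operate.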
Adjustment Disorder: epidemiology, diagnosis and treatment
2009-01-01
Background: Adjustment Disorder is a condition strongly tied to acute and chronic stress. Despite clinical suggestions of a large prevalence in the general population and the high frequency of its diagnosis in clinical settings, relatively little research has been reported and, consequently, there are very few hints about its treatment. Methods: The authors gathered old and current information on the epidemiology, clinical features, comorbidity, treatment, and outcome of Adjustment Disorder through a systematic review of papers published on PubMed. Results: After a first glance at its historical definition and its definition in the DSM and ICD systems, the problem of distinguishing AD from other mood and anxiety disorders, the difficulty of defining stress, and the implied concept of 'vulnerability' are considered. Comorbidity of AD with other conditions and the outcome of AD are then analyzed. The review also highlights recent data about trends in the use of antidepressant drugs, evidence on their efficacy, and the use of psychotherapies. Conclusion: AD is a very common diagnosis in clinical practice, but we still lack data establishing it as a distinct clinical entity. This may be caused by the difficulty of facing, with purely descriptive methods, a "pathogenic label" based on a stressful event whose subjective impact has to be considered. We lack efficacy surveys concerning treatment. The use of psychotropic drugs such as antidepressants in AD with anxious or depressed mood is not properly supported and should be avoided, while the usefulness of psychotherapies is more solidly supported by clinical evidence. To better determine the correct course of therapy, randomized controlled trials, including of the combined use of drugs and psychotherapies, are vitally needed, especially for resistant forms of AD. PMID:19558652
49 CFR 393.47 - Brake actuators, slack adjusters, linings/pads and drums/rotors.
Code of Federal Regulations, 2010 CFR
2010-10-01
... 49 Transportation 5 2010-10-01 2010-10-01 false Brake actuators, slack adjusters, linings/pads and..., slack adjusters, linings/pads and drums/rotors. (a) General requirements. Brake components must be... be the same. (d) Linings and pads. The thickness of the brake linings or pads shall meet...
The Impact of Custodial Arrangement on the Adjustment of Recently Divorced Fathers.
ERIC Educational Resources Information Center
Stewart, James R.; And Others
1986-01-01
Explored the impact of custodial arrangement on the adjustment of recently divorced fathers. Indicated that divorced fathers with custody exhibited less depression and anxiety and fewer problems in general adjustment than those without custody. Results highlight the importance of the children's presence as a facilitative, stabilizing factor in…
13 CFR 307.2 - Criteria for Economic Adjustment Assistance Investments.
Code of Federal Regulations, 2014 CFR
2014-01-01
... 13 Business Credit and Assistance 1 2014-01-01 2014-01-01 false Criteria for Economic Adjustment Assistance Investments. 307.2 Section 307.2 Business Credit and Assistance ECONOMIC DEVELOPMENT ADMINISTRATION, DEPARTMENT OF COMMERCE ECONOMIC ADJUSTMENT ASSISTANCE INVESTMENTS General § 307.2 Criteria...
27 CFR 24.304 - Chaptalization (Brix adjustment) and amelioration record.
Code of Federal Regulations, 2013 CFR
2013-04-01
... 27 Alcohol, Tobacco Products and Firearms 1 2013-04-01 2013-04-01 false Chaptalization (Brix adjustment) and amelioration record. 24.304 Section 24.304 Alcohol, Tobacco Products and Firearms ALCOHOL AND... Chaptalization (Brix adjustment) and amelioration record. (a) General. A proprietor who chaptalizes juice...
24 CFR 902.44 - Adjustment for physical condition and neighborhood environment.
Code of Federal Regulations, 2014 CFR
2014-04-01
... and neighborhood environment. 902.44 Section 902.44 Housing and Urban Development REGULATIONS RELATING... Operations Indicator § 902.44 Adjustment for physical condition and neighborhood environment. (a) General. In... environment factors are: (1) Physical condition adjustment applies to projects at least 28 years old, based...
ADHD Symptomatology and Adjustment to College in China and the United States
ERIC Educational Resources Information Center
Norvilitis, Jill M.; Sun, Ling; Zhang, Jie
2010-01-01
This study examined ADHD symptomatology and college adjustment in 420 participants--147 from the United States and 273 from China. It was hypothesized that higher levels of ADHD symptoms in general and the inattentive symptom group in particular would be related to decreased academic and social adjustment, career decision-making self-efficacy, and…