Sample records for adjusted generalized linear

  1. Estimation of group means when adjusting for covariates in generalized linear models.

    PubMed

    Qu, Yongming; Luo, Junxiang

    2015-01-01

    Generalized linear models are commonly used to analyze categorical data such as binary, count, and ordinal outcomes. Adjusting for important prognostic factors or baseline covariates in generalized linear models may improve estimation efficiency. The model-based mean for a treatment group produced by most software packages estimates the response at the mean covariate, not the mean response for this treatment group in the studied population. Although this is not an issue for linear models, the model-based group mean estimates in generalized linear models can be seriously biased for the true group means. We propose a new method to estimate the group mean consistently, with the corresponding variance estimation. Simulations showed that the proposed method produces an unbiased estimator for the group means and provides the correct coverage probability. The proposed method was applied to analyze hypoglycemia data from clinical trials in diabetes. Copyright © 2014 John Wiley & Sons, Ltd.
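
    A minimal, hedged sketch of the contrast just described (simulated data; NumPy and statsmodels assumed; an illustration, not the authors' code): the response evaluated at the mean covariate generally differs from the mean of the predicted responses over the covariate distribution whenever the link is not the identity.

    ```python
    # Illustrative sketch only: "response at the mean covariate" vs. the
    # population-averaged group mean in a logistic GLM (hypothetical data).
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    n = 2000
    treat = rng.integers(0, 2, n)                  # treatment group indicator
    x = rng.normal(0.0, 1.0, n)                    # baseline covariate
    prob = 1.0 / (1.0 + np.exp(-(-1.0 + 1.2 * treat + 0.8 * x)))
    y = rng.binomial(1, prob)                      # binary outcome

    X = sm.add_constant(np.column_stack([treat, x]))
    fit = sm.GLM(y, X, family=sm.families.Binomial()).fit()

    # Model-based mean for the treated group: response at the mean covariate.
    at_mean_cov = fit.predict(np.array([[1.0, 1.0, x.mean()]]))[0]

    # Population-averaged mean: average the predicted responses over the observed
    # covariate distribution, with every subject set to the treated group.
    X_all_treated = np.column_stack([np.ones(n), np.ones(n), x])
    pop_average = fit.predict(X_all_treated).mean()

    print(at_mean_cov, pop_average)   # differ in general for non-identity links
    ```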

  2. Bayes linear covariance matrix adjustment

    NASA Astrophysics Data System (ADS)

    Wilkinson, Darren J.

    1995-12-01

    In this thesis, a Bayes linear methodology for the adjustment of covariance matrices is presented and discussed. A geometric framework for quantifying uncertainties about covariance matrices is set up, and an inner-product for spaces of random matrices is motivated and constructed. The inner-product on this space captures aspects of our beliefs about the relationship between covariance matrices of interest to us, providing a structure rich enough for us to adjust beliefs about unknown matrices in the light of data such as sample covariance matrices, exploiting second-order exchangeability and related specifications to obtain representations allowing analysis. Adjustment is associated with orthogonal projection, and illustrated with examples of adjustments for some common problems. The problem of adjusting the covariance matrices underlying exchangeable random vectors is tackled and discussed. Learning about the covariance matrices associated with multivariate time series dynamic linear models is shown to be amenable to a similar approach. Diagnostics for matrix adjustments are also discussed.

  3. A General Linear Model Approach to Adjusting the Cumulative GPA.

    ERIC Educational Resources Information Center

    Young, John W.

    A general linear model (GLM), using least-squares techniques, was used to develop a criterion measure to replace freshman year grade point average (GPA) in college admission predictive validity studies. Problems with the use of GPA include those associated with the combination of grades from different courses and disciplines into a single measure,…

  4. Generalized adjustment by least squares (GALS).

    USGS Publications Warehouse

    Elassal, A.A.

    1983-01-01

    The least-squares principle is universally accepted as the basis for adjustment procedures in the allied fields of geodesy, photogrammetry and surveying. A prototype software package for Generalized Adjustment by Least Squares (GALS) is described. The package is designed to perform all least-squares-related functions in a typical adjustment program. GALS is capable of supporting development of adjustment programs of any size or degree of complexity. -Author

  5. An evaluation of bias in propensity score-adjusted non-linear regression models.

    PubMed

    Wan, Fei; Mitra, Nandita

    2018-03-01

    Propensity score methods are commonly used to adjust for observed confounding when estimating the conditional treatment effect in observational studies. One popular method, covariate adjustment of the propensity score in a regression model, has been empirically shown to be biased in non-linear models. However, no compelling underlying theoretical reason has been presented. We propose a new framework to investigate bias and consistency of propensity score-adjusted treatment effects in non-linear models that uses a simple geometric approach to forge a link between the consistency of the propensity score estimator and the collapsibility of non-linear models. Under this framework, we demonstrate that adjustment of the propensity score in an outcome model results in the decomposition of observed covariates into the propensity score and a remainder term. Omission of this remainder term from a non-collapsible regression model leads to biased estimates of the conditional odds ratio and conditional hazard ratio, but not for the conditional rate ratio. We further show, via simulation studies, that the bias in these propensity score-adjusted estimators increases with larger treatment effect size, larger covariate effects, and increasing dissimilarity between the coefficients of the covariates in the treatment model versus the outcome model.
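
    A toy, hedged illustration of the non-collapsibility point above (simulated data; statsmodels assumed; the paper's framework is more general than this sketch): adjusting a logistic outcome model for the estimated propensity score alone typically does not recover the data-generating conditional odds ratio.

    ```python
    # Hypothetical simulation: covariate adjustment of the propensity score in a
    # logistic (non-collapsible) outcome model.
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(1)
    n = 20000
    x = rng.normal(size=n)
    a = rng.binomial(1, 1.0 / (1.0 + np.exp(-0.5 * x)))       # treatment model
    true_log_or = 1.0
    p_y = 1.0 / (1.0 + np.exp(-(-0.5 + true_log_or * a + 1.5 * x)))
    y = rng.binomial(1, p_y)

    # Step 1: estimate the propensity score.
    ps = sm.Logit(a, sm.add_constant(x)).fit(disp=0).predict(sm.add_constant(x))

    # Step 2: logistic outcome model adjusting for treatment and the propensity score.
    out = sm.Logit(y, sm.add_constant(np.column_stack([a, ps]))).fit(disp=0)
    print("estimated log-OR:", out.params[1], "data-generating log-OR:", true_log_or)
    # The gap between the two reflects the omitted "remainder" term discussed above.
    ```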

  6. 39 CFR 3010.6 - Type 3 adjustment-in general.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... extraordinary circumstances. (b) An exigency-based rate adjustment is not subject to the inflation-based... § 3010.6 Postal Service POSTAL REGULATORY COMMISSION PERSONNEL REGULATION OF RATES FOR MARKET DOMINANT PRODUCTS General Provisions § 3010.6 Type 3 adjustment—in general. (a) A Type 3 rate adjustment is a...

  7. Superstrong Adjustable Permanent Magnet for a Linear Collider Final Focus

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mihara, T.

    A superstrong permanent magnet quadrupole (PMQ) is one of the candidates for the final focus lens for the linear collider because of its compactness and low power consumption. The first fabricated prototype of our PMQ achieved a 300 T/m superstrong field gradient with a 100 mm overall magnet radius and a 7 mm bore radius, but a drawback is its fixed strength. Therefore, a second prototype of the PMQ, whose strength is adjustable, was fabricated. Its strength adjustability is based on the "double ring structure", rotating subdivided magnet slices separately. This second prototype is being tested. Some of the early results are presented.

  8. Analysis of separation test for automatic brake adjuster based on linear radon transformation

    NASA Astrophysics Data System (ADS)

    Luo, Zai; Jiang, Wensong; Guo, Bin; Fan, Weijun; Lu, Yi

    2015-01-01

    The linear Radon transformation is applied to extract inflection points for an online test system under noisy conditions. The linear Radon transformation has strong anti-noise and anti-interference ability because it fits the online test curve in several parts, which makes it easy to handle consecutive inflection points. We applied the linear Radon transformation to the separation test system to determine the separating clearance of an automatic brake adjuster. The experimental results show that the feature point extraction error of the gradient maximum optimal method is approximately ±0.100, while that of the linear Radon transformation method can reach ±0.010, a lower error than the former. In addition, the linear Radon transformation is robust.

  9. Linearity optimizations of analog ring resonator modulators through bias voltage adjustments

    NASA Astrophysics Data System (ADS)

    Hosseinzadeh, Arash; Middlebrook, Christopher T.

    2018-03-01

    The linearity of a ring resonator modulator (RRM) in microwave photonic links is studied in terms of instantaneous bandwidth, fabrication tolerances, and operational bandwidth. A proposed bias voltage adjustment method is shown to maximize the spur-free dynamic range (SFDR) at the instantaneous bandwidths required by microwave photonic link (MPL) applications while also mitigating the effects of RRM fabrication tolerances. The proposed bias voltage adjustment method shows an RRM SFDR improvement of ∼5.8 dB versus common Mach-Zehnder modulators at 500 MHz instantaneous bandwidth. Analysis of operational bandwidth effects on SFDR shows that RRMs can be promising electro-optic modulators for MPL applications that require high operational frequencies within a limited bandwidth, such as radio-over-fiber 60 GHz wireless network access.

  10. Linearly Adjustable International Portfolios

    NASA Astrophysics Data System (ADS)

    Fonseca, R. J.; Kuhn, D.; Rustem, B.

    2010-09-01

    We present an approach to multi-stage international portfolio optimization based on the imposition of a linear structure on the recourse decisions. Multiperiod decision problems are traditionally formulated as stochastic programs. Scenario tree based solutions however can become intractable as the number of stages increases. By restricting the space of decision policies to linear rules, we obtain a conservative tractable approximation to the original problem. Local asset prices and foreign exchange rates are modelled separately, which allows for a direct measure of their impact on the final portfolio value.

  11. Adjustment of Adaptive Gain with Bounded Linear Stability Analysis to Improve Time-Delay Margin for Metrics-Driven Adaptive Control

    NASA Technical Reports Server (NTRS)

    Bakhtiari-Nejad, Maryam; Nguyen, Nhan T.; Krishnakumar, Kalmanje Srinvas

    2009-01-01

    This paper presents the application of the Bounded Linear Stability Analysis (BLSA) method for metrics-driven adaptive control. The bounded linear stability analysis method is used for analyzing the stability of adaptive control models without linearizing the adaptive laws. Metrics-driven adaptive control introduces the notion that adaptation should be driven by stability metrics to achieve robustness. By applying the bounded linear stability analysis method, the adaptive gain is adjusted during adaptation in order to meet certain phase margin requirements. The metrics-driven adaptive control approach is evaluated for a linear model of a damaged twin-engine generic transport aircraft. The analysis shows that the system with the adjusted adaptive gain becomes more robust to unmodeled dynamics or time delay.

  12. Genetic parameters for racing records in trotters using linear and generalized linear models.

    PubMed

    Suontama, M; van der Werf, J H J; Juga, J; Ojala, M

    2012-09-01

    Heritability and repeatability and genetic and phenotypic correlations were estimated for trotting race records with linear and generalized linear models using 510,519 records on 17,792 Finnhorses and 513,161 records on 25,536 Standardbred trotters. Heritability and repeatability were estimated for single racing time and earnings traits with linear models, and logarithmic scale was used for racing time and fourth-root scale for earnings to correct for nonnormality. Generalized linear models with a gamma distribution were applied for single racing time and with a multinomial distribution for single earnings traits. In addition, genetic parameters for annual earnings were estimated with linear models on the observed and fourth-root scales. Racing success traits of single placings, winnings, breaking stride, and disqualifications were analyzed using generalized linear models with a binomial distribution. Estimates of heritability were greatest for racing time, which ranged from 0.32 to 0.34. Estimates of heritability were low for single earnings with all distributions, ranging from 0.01 to 0.09. Annual earnings were closer to normal distribution than single earnings. Heritability estimates were moderate for annual earnings on the fourth-root scale, 0.19 for Finnhorses and 0.27 for Standardbred trotters. Heritability estimates for binomial racing success variables ranged from 0.04 to 0.12, being greatest for winnings and least for breaking stride. Genetic correlations among racing traits were high, whereas phenotypic correlations were mainly low to moderate, except correlations between racing time and earnings were high. On the basis of a moderate heritability and moderate to high repeatability for racing time and annual earnings, selection of horses for these traits is effective when based on a few repeated records. Because of high genetic correlations, direct selection for racing time and annual earnings would also result in good genetic response in racing success.

  13. Smoothed Residual Plots for Generalized Linear Models. Technical Report #450.

    ERIC Educational Resources Information Center

    Brant, Rollin

    Methods for examining the viability of assumptions underlying generalized linear models are considered. By appealing to the likelihood, a natural generalization of the raw residual plot for normal theory models is derived and is applied to investigating potential misspecification of the linear predictor. A smooth version of the plot is also…

  14. Methods, systems and apparatus for adjusting modulation index to improve linearity of phase voltage commands

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gallegos-Lopez, Gabriel; Perisic, Milun; Kinoshita, Michael H.

    2017-03-14

    Embodiments of the present invention relate to methods, systems and apparatus for controlling operation of a multi-phase machine in a motor drive system. The disclosed embodiments provide a mechanism for adjusting modulation index of voltage commands to improve linearity of the voltage commands.

  15. Using Linear and Non-Linear Temporal Adjustments to Align Multiple Phenology Curves, Making Vegetation Status and Health Directly Comparable

    NASA Astrophysics Data System (ADS)

    Hargrove, W. W.; Norman, S. P.; Kumar, J.; Hoffman, F. M.

    2017-12-01

    National-scale polar analysis of MODIS NDVI allows quantification of the degree of seasonality expressed by local vegetation, and also selects the optimum start/end of a local "phenological year" that is empirically customized for the vegetation growing at each location. Interannual differences in the timing of phenology make direct comparisons of vegetation health and performance between years difficult, whether at the same or different locations. By "sliding" the two phenologies in time using a Procrustean linear time shift, any particular phenological event or "completion milestone" can be synchronized, allowing direct comparison of differences in the timing of other remaining milestones. Going beyond a simple linear translation, time can be "rubber-sheeted," compressed or dilated. Considering one phenology curve to be a reference, the second phenology can be "rubber-sheeted" to fit that baseline as well as possible by stretching or shrinking time to match multiple control points, which can be any recognizable phenological events. Similar to "rubber sheeting" to georectify a map inside a GIS, rubber sheeting a phenology curve also yields a warping signature that shows, at every time and every location, how many days the adjusted phenology is ahead of or behind the phenological development of the reference vegetation. Using such temporal methods to "adjust" phenologies may help to quantify vegetation impacts from frost, drought, wildfire, insects and diseases by permitting the most commensurate quantitative comparisons with unaffected vegetation.
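
    A minimal numerical sketch of the two adjustments described above (synthetic curves rather than MODIS NDVI; NumPy assumed):

    ```python
    # Align a synthetic phenology curve to a reference by (1) a uniform linear
    # shift and (2) piecewise-linear "rubber sheeting" through control points.
    import numpy as np

    days = np.arange(365)
    reference = np.exp(-0.5 * ((days - 180) / 40.0) ** 2)   # reference greenness curve
    observed = np.exp(-0.5 * ((days - 195) / 35.0) ** 2)    # later, narrower season

    # (1) Procrustean linear shift: slide the whole curve by a constant lag.
    lag = 15
    shifted = np.interp(days, days - lag, observed)          # observed(day + lag)

    # (2) Rubber sheeting: pin chosen milestones of the observed season to the
    # corresponding milestones on the reference, stretching time in between.
    ctrl_ref = np.array([140, 180, 240])   # e.g. green-up, peak, senescence (reference)
    ctrl_obs = np.array([150, 195, 250])   # the same milestones on the observed curve
    warped_time = np.interp(days, ctrl_ref, ctrl_obs)
    rubber = np.interp(warped_time, days, observed)

    # warped_time - days is the warping signature: per day, how far ahead or
    # behind the adjusted phenology runs relative to the reference.
    ```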

  16. Item Purification in Differential Item Functioning Using Generalized Linear Mixed Models

    ERIC Educational Resources Information Center

    Liu, Qian

    2011-01-01

    For this dissertation, four item purification procedures were implemented onto the generalized linear mixed model for differential item functioning (DIF) analysis, and the performance of these item purification procedures was investigated through a series of simulations. Among the four procedures, forward and generalized linear mixed model (GLMM)…

  17. Linear spin-2 fields in most general backgrounds

    NASA Astrophysics Data System (ADS)

    Bernard, Laura; Deffayet, Cédric; Schmidt-May, Angnis; von Strauss, Mikael

    2016-04-01

    We derive the full perturbative equations of motion for the most general background solutions in ghost-free bimetric theory in its metric formulation. Clever field redefinitions at the level of fluctuations enable us to circumvent the problem of varying a square-root matrix appearing in the theory. This greatly simplifies the expressions for the linear variation of the bimetric interaction terms. We show that these field redefinitions exist and are uniquely invertible if and only if the variation of the square-root matrix itself has a unique solution, which is a requirement for the linearized theory to be well defined. As an application of our results we examine the constraint structure of ghost-free bimetric theory at the level of linear equations of motion for the first time. We identify a scalar combination of equations which is responsible for the absence of the Boulware-Deser ghost mode in the theory. The bimetric scalar constraint is in general not manifestly covariant in its nature. However, in the massive gravity limit the constraint assumes a covariant form when one of the interaction parameters is set to zero. For that case our analysis provides an alternative and almost trivial proof of the absence of the Boulware-Deser ghost. Our findings generalize previous results in the metric formulation of massive gravity and also agree with studies of its vielbein version.

  18. Adaptive adjustment of the generalization-discrimination balance in larval Drosophila.

    PubMed

    Mishra, Dushyant; Louis, Matthieu; Gerber, Bertram

    2010-09-01

    Learnt predictive behavior faces a dilemma: predictive stimuli will never 'replay' exactly as during the learning event, requiring generalization. In turn, minute differences can become meaningful, prompting discrimination. To provide a study case for an adaptive adjustment of this generalization-discrimination balance, the authors ask whether Drosophila melanogaster larvae are able to either generalize or discriminate between two odors (1-octen-3-ol and 3-octanol), depending on the task. The authors find that after discriminatively rewarding one but not the other odor, larvae show conditioned preference for the rewarded odor. On the other hand, no odor specificity is observed after nondiscriminative training, even if the test involves a choice between both odors. Thus, for this odor pair at least, discrimination training is required to confer an odor-specific memory trace. This requires that there is at least some difference in processing between the two odors already at the beginning of the training. Therefore, as a default, there is a small yet salient difference in processing between 1-octen-3-ol and 3-octanol; this difference is ignored after nondiscriminative training (generalization), whereas it is accentuated by odor-specific reinforcement (discrimination). Given that, as the authors show, both faculties are lost in anosmic Or83b(1) mutants, this indicates an adaptive adjustment of the generalization-discrimination balance in larval Drosophila, taking place downstream of Or83b-expressing sensory neurons.

  19. Adaptive Error Estimation in Linearized Ocean General Circulation Models

    NASA Technical Reports Server (NTRS)

    Chechelnitsky, Michael Y.

    1999-01-01

    representation error, i.e. the dominance of the mesoscale eddies in the T/P signal, which are not part of the 21 by 1" GCM. Therefore, the impact of the observations on the assimilation is very small even after the adjustment of the error statistics. This work demonstrates that simultaneous estimation of the model and measurement error statistics for data assimilation with global ocean data sets and linearized GCMs is possible. However, the error covariance estimation problem is in general highly underdetermined, much more so than the state estimation problem. In other words there exist a very large number of statistical models that can be made consistent with the available data. Therefore, methods for obtaining quantitative error estimates, powerful though they may be, cannot replace physical insight. Used in the right context, as a tool for guiding the choice of a small number of model error parameters, covariance matching can be a useful addition to the repertory of tools available to oceanographers.

  20. Method of Individual Adjustment for 3D CT Analysis: Linear Measurement.

    PubMed

    Kim, Dong Kyu; Choi, Dong Hun; Lee, Jeong Woo; Yang, Jung Dug; Chung, Ho Yun; Cho, Byung Chae; Choi, Kang Young

    2016-01-01

    Introduction. We aim to regularize measurement values in three-dimensional (3D) computed tomography (CT) reconstructed images for higher-precision 3D analysis, focusing on length-based 3D cephalometric examinations. Methods. We measure the linear distances between points on different skull models using Vernier calipers (real values). We use 10 differently tilted CT scans for 3D CT reconstruction of the models and measure the same linear distances from the picture archiving and communication system (PACS). In both cases, each measurement is performed three times by three doctors, yielding nine measurements. The real values are compared with the PACS values. Each PACS measurement is revised based on the display field of view (DFOV) values and compared with the real values. Results. The real values and the PACS measurement changes according to tilt value have no significant correlations (p > 0.05). However, significant correlations appear between the real values and DFOV-adjusted PACS measurements (p < 0.001). Hence, we obtain a correlation expression that can yield real physical values from PACS measurements. The DFOV value intervals for various age groups are also verified. Conclusion. Precise confirmation of individual preoperative length and precise analysis of postoperative improvements through 3D analysis is possible, which is helpful for facial-bone-surgery symmetry correction.

  1. A linear cavity multiwavelength fiber laser with adjustable lasing line number for fixed spectral regions

    NASA Astrophysics Data System (ADS)

    Tian, J. J.; Yao, Y.

    2011-03-01

    We report an experimental demonstration of a multiwavelength erbium-doped fiber laser with an adjustable wavelength number based on a power-symmetric nonlinear optical loop mirror (NOLM) in a linear cavity. The intensity-dependent loss (IDL) induced by the NOLM is used to suppress mode competition and realize stable multiwavelength oscillation. Control of the wavelength number is achieved by adjusting the strength of the IDL, which depends on the pump power. As the pump power increases from 40 to 408 mW, 1-7 lasing lines at fixed wavelengths around 1601 nm are obtained. The output power stability is also investigated. The maximum power fluctuation of a single wavelength is less than 0.9 dB as the wavelength number increases from 1 to 7.

  2. Doubly robust estimation of generalized partial linear models for longitudinal data with dropouts.

    PubMed

    Lin, Huiming; Fu, Bo; Qin, Guoyou; Zhu, Zhongyi

    2017-12-01

    We develop a doubly robust estimation of generalized partial linear models for longitudinal data with dropouts. Our method extends the highly efficient aggregate unbiased estimating function approach proposed in Qu et al. (2010) to a doubly robust one in the sense that under missing at random (MAR), our estimator is consistent when either the linear conditional mean condition is satisfied or a model for the dropout process is correctly specified. We begin with a generalized linear model for the marginal mean, and then move forward to a generalized partial linear model, allowing for nonparametric covariate effect by using the regression spline smoothing approximation. We establish the asymptotic theory for the proposed method and use simulation studies to compare its finite sample performance with that of Qu's method, the complete-case generalized estimating equation (GEE) and the inverse-probability weighted GEE. The proposed method is finally illustrated using data from a longitudinal cohort study. © 2017, The International Biometric Society.

  3. An approximate generalized linear model with random effects for informative missing data.

    PubMed

    Follmann, D; Wu, M

    1995-03-01

    This paper develops a class of models to deal with missing data from longitudinal studies. We assume that separate models for the primary response and missingness (e.g., number of missed visits) are linked by a common random parameter. Such models have been developed in the econometrics (Heckman, 1979, Econometrica 47, 153-161) and biostatistics (Wu and Carroll, 1988, Biometrics 44, 175-188) literature for a Gaussian primary response. We allow the primary response, conditional on the random parameter, to follow a generalized linear model and approximate the generalized linear model by conditioning on the data that describes missingness. The resultant approximation is a mixed generalized linear model with possibly heterogeneous random effects. An example is given to illustrate the approximate approach, and simulations are performed to critique the adequacy of the approximation for repeated binary data.

  4. The rotational feedback on linear-momentum balance in glacial isostatic adjustment

    NASA Astrophysics Data System (ADS)

    Martinec, Zdenek; Hagedoorn, Jan

    2015-04-01

    The influence of changes in surface ice-mass redistribution and associated viscoelastic response of the Earth, known as glacial-isostatic adjustment (GIA), on the Earth's rotational dynamics has long been known. Equally important is the effect of the changes in the rotational dynamics on the viscoelastic deformation of the Earth. This signal, known as the rotational feedback, or more precisely, the rotational feedback on the sea-level equation, has been mathematically described by the sea-level equation extended for the term that is proportional to perturbation in the centrifugal potential and the second-degree tidal Love number. The perturbation in the centrifugal force due to changes in the Earth's rotational dynamics enters not only into the sea-level equation, but also into the conservation law of linear momentum such that the internal viscoelastic force, the perturbation in the gravitational force and the perturbation in the centrifugal force are in balance. Adding the centrifugal-force perturbation to the linear-momentum balance creates an additional rotational feedback on the viscoelastic deformations of the Earth. We term this feedback mechanism as the rotational feedback on the linear-momentum balance. We extend both the time-domain method for modelling the GIA response of laterally heterogeneous earth models and the traditional Laplace-domain method for modelling the GIA-induced rotational response to surface loading by considering the rotational feedback on linear-momentum balance. The correctness of the mathematical extensions of the methods is validated numerically by comparing the polar motion response to the GIA process and the rotationally-induced degree 2 and order 1 spherical harmonic component of the surface vertical displacement and gravity field. We present the difference between the case where the rotational feedback on linear-momentum balance is considered against that where it is not. Numerical simulations show that the resulting difference

  5. Analysis of Nonlinear Dynamics in Linear Compressors Driven by Linear Motors

    NASA Astrophysics Data System (ADS)

    Chen, Liangyuan

    2018-03-01

    The analysis of the dynamic characteristics of this mechatronic system is of great significance for linear motor design and control. Steady-state nonlinear response characteristics of a linear compressor are investigated theoretically based on linearized and nonlinear models. First, the influencing factors were analyzed, taking the nonlinear gas force load into account. Then, a simple linearized model was set up to analyze the influence on the stroke and resonance frequency. Finally, the nonlinear model was set up to analyze the effects of piston mass, spring stiffness, and driving force as examples of design parameter variation. The simulation results show that the stroke can be set by adjusting the excitation amplitude, frequency, and other parameters; the equilibrium position can be adjusted through the DC input; and, for the most efficient operation, the operating frequency must always equal the resonance frequency.

  6. Should measures of patient experience in primary care be adjusted for case mix? Evidence from the English General Practice Patient Survey.

    PubMed

    Paddison, Charlotte; Elliott, Marc; Parker, Richard; Staetsky, Laura; Lyratzopoulos, Georgios; Campbell, John L; Roland, Martin

    2012-08-01

    Uncertainties exist about when and how best to adjust performance measures for case mix. Our aims are to quantify the impact of case-mix adjustment on practice-level scores in a national survey of patient experience, to identify why and when it may be useful to adjust for case mix, and to discuss unresolved policy issues regarding the use of case-mix adjustment in performance measurement in health care. Secondary analysis of the 2009 English General Practice Patient Survey. Responses from 2 163 456 patients registered with 8267 primary care practices. Linear mixed effects models were used with practice included as a random effect and five case-mix variables (gender, age, race/ethnicity, deprivation, and self-reported health) as fixed effects. Primary outcome was the impact of case-mix adjustment on practice-level means (adjusted minus unadjusted) and changes in practice percentile ranks for questions measuring patient experience in three domains of primary care: access; interpersonal care; anticipatory care planning, and overall satisfaction with primary care services. Depending on the survey measure selected, case-mix adjustment changed the rank of between 0.4% and 29.8% of practices by more than 10 percentile points. Adjusting for case-mix resulted in large increases in score for a small number of practices and small decreases in score for a larger number of practices. Practices with younger patients, more ethnic minority patients and patients living in more socio-economically deprived areas were more likely to gain from case-mix adjustment. Age and race/ethnicity were the most influential adjustors. While its effect is modest for most practices, case-mix adjustment corrects significant underestimation of scores for a small proportion of practices serving vulnerable patients and may reduce the risk that providers would 'cream-skim' by not enrolling patients from vulnerable socio-demographic groups.
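
    A compressed, hedged sketch of the kind of model described above (synthetic data; statsmodels assumed; the actual survey analysis uses five case-mix variables and millions of responses):

    ```python
    # Linear mixed model with practice as a random intercept and case-mix
    # variables as fixed effects; adjusted practice effects come from the
    # estimated random intercepts (toy data, illustrative only).
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(2)
    n_practice, n_per = 50, 40
    practice = np.repeat(np.arange(n_practice), n_per)
    age = rng.normal(50, 15, n_practice * n_per)
    deprivation = rng.normal(0, 1, n_practice * n_per)
    practice_effect = rng.normal(0, 2, n_practice)[practice]
    score = (70 + 0.1 * age - 1.5 * deprivation + practice_effect
             + rng.normal(0, 5, n_practice * n_per))
    df = pd.DataFrame({"score": score, "age": age,
                       "deprivation": deprivation, "practice": practice})

    m = smf.mixedlm("score ~ age + deprivation", df, groups=df["practice"]).fit()
    adjusted = pd.Series({g: re.iloc[0] for g, re in m.random_effects.items()})
    print(adjusted.rank(pct=True).head())  # percentile ranks of adjusted practice effects
    ```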

  7. Linear discrete systems with memory: a generalization of the Langmuir model

    NASA Astrophysics Data System (ADS)

    Băleanu, Dumitru; Nigmatullin, Raoul R.

    2013-10-01

    In this manuscript we analyze a general solution of the linear nonlocal Langmuir model within time scale calculus. Several generalizations of the Langmuir model are presented together with their exact corresponding solutions. The physical meanings of the proposed models are investigated and their corresponding geometries are reported.

  8. Log-normal frailty models fitted as Poisson generalized linear mixed models.

    PubMed

    Hirsch, Katharina; Wienke, Andreas; Kuss, Oliver

    2016-12-01

    The equivalence of a survival model with a piecewise constant baseline hazard function and a Poisson regression model has been known since decades. As shown in recent studies, this equivalence carries over to clustered survival data: A frailty model with a log-normal frailty term can be interpreted and estimated as a generalized linear mixed model with a binary response, a Poisson likelihood, and a specific offset. Proceeding this way, statistical theory and software for generalized linear mixed models are readily available for fitting frailty models. This gain in flexibility comes at the small price of (1) having to fix the number of pieces for the baseline hazard in advance and (2) having to "explode" the data set by the number of pieces. In this paper we extend the simulations of former studies by using a more realistic baseline hazard (Gompertz) and by comparing the model under consideration with competing models. Furthermore, the SAS macro %PCFrailty is introduced to apply the Poisson generalized linear mixed approach to frailty models. The simulations show good results for the shared frailty model. Our new %PCFrailty macro provides proper estimates, especially in case of 4 events per piece. The suggested Poisson generalized linear mixed approach for log-normal frailty models based on the %PCFrailty macro provides several advantages in the analysis of clustered survival data with respect to more flexible modelling of fixed and random effects, exact (in the sense of non-approximate) maximum likelihood estimation, and standard errors and different types of confidence intervals for all variance parameters. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
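
    A toy, hedged version of the data "explosion" and Poisson fit sketched above (pandas and statsmodels assumed; this is not the %PCFrailty macro, and the log-normal frailty itself would enter as a cluster-level random intercept):

    ```python
    # Split each survival time over the pieces of a piecewise-constant baseline
    # hazard, then fit the exploded data as a Poisson GLM with a log-exposure offset.
    import numpy as np
    import pandas as pd
    import statsmodels.api as sm

    cuts = np.array([0.0, 1.0, 2.0, 4.0])      # assumed piece boundaries
    subjects = pd.DataFrame({"time":  [0.7, 2.5, 3.1, 1.4, 0.9, 3.8],
                             "event": [1,   0,   1,   1,   0,   1],
                             "x":     [0.2, -1.0, 0.5, 0.1, -0.3, 0.8]})

    rows = []
    for _, s in subjects.iterrows():
        for j in range(len(cuts) - 1):
            start, stop = cuts[j], cuts[j + 1]
            if s.time <= start:
                break
            rows.append({"piece": j, "x": s.x,
                         "exposure": min(s.time, stop) - start,
                         "d": int(s.event == 1 and start < s.time <= stop)})
    long = pd.DataFrame(rows)

    X = pd.get_dummies(long["piece"], prefix="piece").join(long["x"]).astype(float)
    fit = sm.GLM(long["d"], X, family=sm.families.Poisson(),
                 offset=np.log(long["exposure"])).fit()
    print(fit.params)   # piecewise log-hazards plus the covariate effect
    ```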

  9. Generalized Linear Covariance Analysis

    NASA Technical Reports Server (NTRS)

    Carpenter, James R.; Markley, F. Landis

    2014-01-01

    This talk presents a comprehensive approach to filter modeling for generalized covariance analysis of both batch least-squares and sequential estimators. We review and extend in two directions the results of prior work that allowed for partitioning of the state space into "solve-for" and "consider" parameters, accounted for differences between the formal values and the true values of the measurement noise, process noise, and a priori solve-for and consider covariances, and explicitly partitioned the errors into subspaces containing only the influence of the measurement noise, process noise, and solve-for and consider covariances. In this work, we explicitly add sensitivity analysis to this prior work, and relax an implicit assumption that the batch estimator's epoch time occurs prior to the definitive span. We also apply the method to an integrated orbit and attitude problem, in which gyro and accelerometer errors, though not estimated, influence the orbit determination performance. We illustrate our results using two graphical presentations, which we call the "variance sandpile" and the "sensitivity mosaic," and we compare the linear covariance results to confidence intervals associated with ensemble statistics from a Monte Carlo analysis.

  10. Gravitational Wave in Linear General Relativity

    NASA Astrophysics Data System (ADS)

    Cubillos, D. J.

    2017-07-01

    General relativity is the best theory currently available to describe the gravitational interaction. Within Albert Einstein's field equations this interaction is described by means of the spacetime curvature generated by the matter-energy content of the universe. Weyl worked on the existence of perturbations of the curvature of space-time that propagate at the speed of light, which are known as gravitational waves, obtained to a first approximation through the linearization of Einstein's field equations. Weyl's solution consists of taking the field equations in a vacuum and perturbing the metric, using the Minkowski metric slightly perturbed by a factor ɛ greater than zero but much smaller than one. If the feedback effect of the field is neglected, it can be considered a weak-field solution. After introducing the perturbed metric and ignoring terms of order higher than one in ɛ, we can find the linearized field equations in terms of the perturbation, which can then be expressed as the d'Alembertian operator of the perturbation set equal to zero. This is analogous to the linear wave equation in classical mechanics, and can be interpreted as saying that gravitational effects propagate as waves at the speed of light. In addition, studying the motion of a particle affected by this perturbation through the geodesic equation shows the transverse character of the gravitational wave and its two possible polarization states. It can be shown that the energy carried by the wave is of the order of 1/c^5, where c is the speed of light, which explains why its effects on matter are very small and very difficult to detect.
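
    For reference, the weak-field steps summarized above are conventionally written as follows (standard textbook notation with the trace-reversed perturbation and the Lorenz gauge assumed; a restatement, not material quoted from the record):

    ```latex
    % Linearized gravity: perturbed metric, gauge choice, and the resulting wave equation.
    \begin{align}
      g_{\mu\nu} &= \eta_{\mu\nu} + \epsilon\, h_{\mu\nu}, \qquad 0 < \epsilon \ll 1, \\
      \bar{h}_{\mu\nu} &\equiv h_{\mu\nu} - \tfrac{1}{2}\,\eta_{\mu\nu}\, h,
        \qquad \partial^{\mu}\bar{h}_{\mu\nu} = 0 \quad \text{(Lorenz gauge)}, \\
      \Box\,\bar{h}_{\mu\nu} &= 0
        \;\;\Longrightarrow\;\;
        \bar{h}_{\mu\nu} = A_{\mu\nu}\, e^{i k_{\alpha} x^{\alpha}},
        \qquad k_{\alpha}k^{\alpha} = 0 .
    \end{align}
    ```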

  11. Implementing general quantum measurements on linear optical and solid-state qubits

    NASA Astrophysics Data System (ADS)

    Ota, Yukihiro; Ashhab, Sahel; Nori, Franco

    2013-03-01

    We show a systematic construction for implementing general measurements on a single qubit, including both strong (or projection) and weak measurements. We mainly focus on linear optical qubits. The present approach is composed of simple and feasible elements, i.e., beam splitters, wave plates, and polarizing beam splitters. We show how the parameters characterizing the measurement operators are controlled by the linear optical elements. We also propose a method for the implementation of general measurements in solid-state qubits. Furthermore, we show an interesting application of the general measurements, i.e., entanglement amplification. YO is partially supported by the SPDR Program, RIKEN. SA and FN acknowledge ARO, NSF grant No. 0726909, JSPS-RFBR contract No. 12-02-92100, Grant-in-Aid for Scientific Research (S), MEXT Kakenhi on Quantum Cybernetics, and the JSPS via its FIRST program.

  12. Parametrizing linear generalized Langevin dynamics from explicit molecular dynamics simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gottwald, Fabian; Karsten, Sven; Ivanov, Sergei D., E-mail: sergei.ivanov@uni-rostock.de

    2015-06-28

    Fundamental understanding of complex dynamics in many-particle systems on the atomistic level is of utmost importance. Often the systems of interest are of macroscopic size but can be partitioned into a few important degrees of freedom which are treated most accurately and others which constitute a thermal bath. Particular attention in this respect attracts the linear generalized Langevin equation, which can be rigorously derived by means of a linear projection technique. Within this framework, a complicated interaction with the bath can be reduced to a single memory kernel. This memory kernel in turn is parametrized for a particular system studied, usually by means of time-domain methods based on explicit molecular dynamics data. Here, we discuss that this task is more naturally achieved in frequency domain and develop a Fourier-based parametrization method that outperforms its time-domain analogues. Very surprisingly, the widely used rigid bond method turns out to be inappropriate in general. Importantly, we show that the rigid bond approach leads to a systematic overestimation of relaxation times, unless the system under study consists of a harmonic bath bi-linearly coupled to the relevant degrees of freedom.

  13. Profile local linear estimation of generalized semiparametric regression model for longitudinal data.

    PubMed

    Sun, Yanqing; Sun, Liuquan; Zhou, Jie

    2013-07-01

    This paper studies the generalized semiparametric regression model for longitudinal data where the covariate effects are constant for some and time-varying for others. Different link functions can be used to allow more flexible modelling of longitudinal data. The nonparametric components of the model are estimated using a local linear estimating equation and the parametric components are estimated through a profile estimating function. The method automatically adjusts for heterogeneity of sampling times, allowing the sampling strategy to depend on the past sampling history as well as possibly time-dependent covariates without specifically modelling such dependence. A [Formula: see text]-fold cross-validation bandwidth selection is proposed as a working tool for locating an appropriate bandwidth. A criterion for selecting the link function is proposed to provide a better fit of the data. Large sample properties of the proposed estimators are investigated. Large sample pointwise and simultaneous confidence intervals for the regression coefficients are constructed. Formal hypothesis testing procedures are proposed to check for the covariate effects and whether the effects are time-varying. A simulation study is conducted to examine the finite sample performances of the proposed estimation and hypothesis testing procedures. The methods are illustrated with a data example.

  14. The rotational feedback on linear-momentum balance in glacial isostatic adjustment

    NASA Astrophysics Data System (ADS)

    Martinec, Zdeněk; Hagedoorn, Jan

    2014-12-01

    The influence of changes in surface ice-mass redistribution and associated viscoelastic response of the Earth, known as glacial isostatic adjustment (GIA), on the Earth's rotational dynamics has long been known. Equally important is the effect of the changes in the rotational dynamics on the viscoelastic deformation of the Earth. This signal, known as the rotational feedback, or more precisely, the rotational feedback on the sea level equation, has been mathematically described by the sea level equation extended for the term that is proportional to perturbation in the centrifugal potential and the second-degree tidal Love number. The perturbation in the centrifugal force due to changes in the Earth's rotational dynamics enters not only into the sea level equation, but also into the conservation law of linear momentum such that the internal viscoelastic force, the perturbation in the gravitational force and the perturbation in the centrifugal force are in balance. Adding the centrifugal-force perturbation to the linear-momentum balance creates an additional rotational feedback on the viscoelastic deformations of the Earth. We term this feedback mechanism, which is studied in this paper, as the rotational feedback on the linear-momentum balance. We extend both the time-domain method for modelling the GIA response of laterally heterogeneous earth models developed by Martinec and the traditional Laplace-domain method for modelling the GIA-induced rotational response to surface loading by considering the rotational feedback on linear-momentum balance. The correctness of the mathematical extensions of the methods is validated numerically by comparing the polar-motion response to the GIA process and the rotationally induced degree 2 and order 1 spherical harmonic component of the surface vertical displacement and gravity field. We present the difference between the case where the rotational feedback on linear-momentum balance is considered against that where it is not

  15. Modelling female fertility traits in beef cattle using linear and non-linear models.

    PubMed

    Naya, H; Peñagaricano, F; Urioste, J I

    2017-06-01

    Female fertility traits are key components of the profitability of beef cattle production. However, these traits are difficult and expensive to measure, particularly under extensive pastoral conditions, and consequently, fertility records are in general scarce and somewhat incomplete. Moreover, fertility traits are usually dominated by the effects of herd-year environment, and it is generally assumed that relatively small margins are left for genetic improvement. New ways of modelling genetic variation in these traits are needed. Inspired by the methodological developments made by Prof. Daniel Gianola and co-workers, we fitted linear (Gaussian), Poisson, probit (threshold), censored Poisson and censored Gaussian models to three different kinds of endpoints, namely calving success (CS), number of days from first calving (CD) and number of failed oestrus (FE). For models involving FE and CS, non-linear models outperformed their linear counterparts. For models derived from CD, linear versions displayed better fit than the non-linear counterparts. Non-linear models showed consistently higher estimates of heritability and repeatability in all cases (h² < 0.08 and r < 0.13 for linear models; h² > 0.23 and r > 0.24 for non-linear models). While additive and permanent environment effects showed highly favourable correlations between all models (>0.789), consistency in selecting the 10% best sires showed important differences, mainly amongst the considered endpoints (FE, CS and CD). In consequence, endpoints should be considered as modelling different underlying genetic effects, with linear models more appropriate to describe CD and non-linear models better for FE and CS. © 2017 Blackwell Verlag GmbH.

  16. Brittle failure of rock: A review and general linear criterion

    NASA Astrophysics Data System (ADS)

    Labuz, Joseph F.; Zeng, Feitao; Makhnenko, Roman; Li, Yuan

    2018-07-01

    A failure criterion typically is phenomenological since few models exist to theoretically derive the mathematical function. Indeed, a successful failure criterion is a generalization of experimental data obtained from strength tests on specimens subjected to known stress states. For isotropic rock that exhibits a pressure dependence on strength, a popular failure criterion is a linear equation in major and minor principal stresses, independent of the intermediate principal stress. A general linear failure criterion called Paul-Mohr-Coulomb (PMC) contains all three principal stresses with three material constants: friction angles for axisymmetric compression ϕc and extension ϕe and isotropic tensile strength V0. PMC provides a framework to describe a nonlinear failure surface by a set of planes "hugging" the curved surface. Brittle failure of rock is reviewed and multiaxial test methods are summarized. Equations are presented to implement PMC for fitting strength data and determining the three material parameters. A piecewise linear approximation to a nonlinear failure surface is illustrated by fitting two planes with six material parameters to form either a 6- to 12-sided pyramid or a 6- to 12- to 6-sided pyramid. The particular nature of the failure surface is dictated by the experimental data.

  17. A Generalized Simple Formulation of Convective Adjustment ...

    EPA Pesticide Factsheets

    Convective adjustment timescale (τ) for cumulus clouds is one of the most influential parameters controlling parameterized convective precipitation in climate and weather simulation models at global and regional scales. Due to the complex nature of deep convection, a prescribed value or ad hoc representation of τ is used in most global and regional climate/weather models, making it a tunable parameter and still resulting in uncertainties in convective precipitation simulations. In this work, a generalized simple formulation of τ for use in any convection parameterization for shallow and deep clouds is developed to reduce convective precipitation biases at different grid spacings. Unlike other existing methods, our new formulation can be used with field campaign measurements to estimate τ, as demonstrated by using data from two different special field campaigns. We then implemented our formulation into a regional model (WRF) for testing and evaluation. Results indicate that our simple τ formulation can give realistic temporal and spatial variations of τ across the continental U.S. as well as grid-scale and subgrid-scale precipitation. We also found that as the grid spacing decreases (e.g., from 36- to 4-km grid spacing), grid-scale precipitation dominates over subgrid-scale precipitation. The generalized τ formulation works for various types of atmospheric conditions (e.g., continental clouds due to heating and large-scale forcing over la

  18. Should measures of patient experience in primary care be adjusted for case mix? Evidence from the English General Practice Patient Survey

    PubMed Central

    Paddison, Charlotte; Elliott, Marc; Parker, Richard; Staetsky, Laura; Lyratzopoulos, Georgios; Campbell, John L

    2012-01-01

    Objectives Uncertainties exist about when and how best to adjust performance measures for case mix. Our aims are to quantify the impact of case-mix adjustment on practice-level scores in a national survey of patient experience, to identify why and when it may be useful to adjust for case mix, and to discuss unresolved policy issues regarding the use of case-mix adjustment in performance measurement in health care. Design/setting Secondary analysis of the 2009 English General Practice Patient Survey. Responses from 2 163 456 patients registered with 8267 primary care practices. Linear mixed effects models were used with practice included as a random effect and five case-mix variables (gender, age, race/ethnicity, deprivation, and self-reported health) as fixed effects. Main outcome measures Primary outcome was the impact of case-mix adjustment on practice-level means (adjusted minus unadjusted) and changes in practice percentile ranks for questions measuring patient experience in three domains of primary care: access; interpersonal care; anticipatory care planning, and overall satisfaction with primary care services. Results Depending on the survey measure selected, case-mix adjustment changed the rank of between 0.4% and 29.8% of practices by more than 10 percentile points. Adjusting for case-mix resulted in large increases in score for a small number of practices and small decreases in score for a larger number of practices. Practices with younger patients, more ethnic minority patients and patients living in more socio-economically deprived areas were more likely to gain from case-mix adjustment. Age and race/ethnicity were the most influential adjustors. Conclusions While its effect is modest for most practices, case-mix adjustment corrects significant underestimation of scores for a small proportion of practices serving vulnerable patients and may reduce the risk that providers would ‘cream-skim’ by not enrolling patients from vulnerable socio

  19. An efficient method for generalized linear multiplicative programming problem with multiplicative constraints.

    PubMed

    Zhao, Yingfeng; Liu, Sanyang

    2016-01-01

    We present a practical branch and bound algorithm for globally solving the generalized linear multiplicative programming problem with multiplicative constraints. To solve the problem, a relaxation programming problem, which is equivalent to a linear program, is proposed by utilizing a new two-phase relaxation technique. In the algorithm, lower and upper bounds are simultaneously obtained by solving some linear relaxation programming problems. Global convergence has been proved, and results for some sample examples and a small random experiment show that the proposed algorithm is feasible and efficient.

  20. The microcomputer scientific software series 2: general linear model--regression.

    Treesearch

    Harold M. Rauscher

    1983-01-01

    The general linear model regression (GLMR) program provides the microcomputer user with a sophisticated regression analysis capability. The output provides a regression ANOVA table, estimators of the regression model coefficients, their confidence intervals, confidence intervals around the predicted Y-values, residuals for plotting, a check for multicollinearity, a...

  1. Confidence Intervals for Assessing Heterogeneity in Generalized Linear Mixed Models

    ERIC Educational Resources Information Center

    Wagler, Amy E.

    2014-01-01

    Generalized linear mixed models are frequently applied to data with clustered categorical outcomes. The effect of clustering on the response is often difficult to practically assess partly because it is reported on a scale on which comparisons with regression parameters are difficult to make. This article proposes confidence intervals for…

  2. Generalized self-adjustment method for statistical mechanics of composite materials

    NASA Astrophysics Data System (ADS)

    Pan'kov, A. A.

    1997-03-01

    A new method is developed for the statistical mechanics of composite materials — the generalized self-adjustment method — which makes it possible to reduce the problem of predicting effective elastic properties of composites with random structures to the solution of two simpler "averaged" problems of an inclusion with transitional layers in a medium with the desired effective elastic properties. The inhomogeneous elastic properties and dimensions of the transitional layers take into account both the "approximate" order of mutual positioning and the variation in the dimensions and elastic properties of inclusions through appropriate special averaged indicator functions of the random structure of the composite. A numerical calculation of averaged indicator functions and effective elastic characteristics is performed by the generalized self-adjustment method for a unidirectional fiberglass on the basis of various models of actual random structures in the plane of isotropy.

  3. Study on sampling of continuous linear system based on generalized Fourier transform

    NASA Astrophysics Data System (ADS)

    Li, Huiguang

    2003-09-01

    In the research of signals and systems, a signal's spectrum and a system's frequency characteristics can be discussed through the Fourier Transform (FT) and the Laplace Transform (LT). However, some singular signals such as the impulse function and the signum signal do not satisfy Riemann integration or Lebesgue integration. They are called generalized functions in mathematics. This paper introduces a new definition, the Generalized Fourier Transform (GFT), and discusses generalized functions, the Fourier Transform and the Laplace Transform under a unified frame. When a continuous linear system is sampled, this paper proposes a new method to judge whether the spectrum will overlap after the generalized Fourier transform (GFT). Causal and non-causal systems are studied, and a sampling method that maintains the system's dynamic performance is presented. The results can be used for ordinary sampling and non-Nyquist sampling. The results also have practical implications for research on "discretization of continuous linear systems" and "non-Nyquist sampling of signals and systems." In particular, the condition for ensuring controllability and observability of MIMO continuous systems in references 13 and 14 is just an applicable example of this paper.

  4. Electromagnetic axial anomaly in a generalized linear sigma model

    NASA Astrophysics Data System (ADS)

    Fariborz, Amir H.; Jora, Renata

    2017-06-01

    We construct the electromagnetic anomaly effective term for a generalized linear sigma model with two chiral nonets, one with a quark-antiquark structure, the other one with a four-quark content. We compute in the leading order of this framework the decays into two photons of six pseudoscalars: π0(137), π0(1300), η(547), η(958), η(1295) and η(1760). Our results agree well with the available experimental data.

  5. General linear methods and friends: Toward efficient solutions of multiphysics problems

    NASA Astrophysics Data System (ADS)

    Sandu, Adrian

    2017-07-01

    Time dependent multiphysics partial differential equations are of great practical importance as they model diverse phenomena that appear in mechanical and chemical engineering, aeronautics, astrophysics, meteorology and oceanography, financial modeling, environmental sciences, etc. There is no single best time discretization for the complex multiphysics systems of practical interest. We discuss "multimethod" approaches that combine different time steps and discretizations using the rigorous frameworks provided by Partitioned General Linear Methods and Generalized-Structure Additive Runge-Kutta Methods.

  6. Credibility analysis of risk classes by generalized linear model

    NASA Astrophysics Data System (ADS)

    Erdemir, Ovgucan Karadag; Sucu, Meral

    2016-06-01

    In this paper, the generalized linear model (GLM) and credibility theory, which are frequently used in non-life insurance pricing, are combined for reliability analysis. Using the full credibility standard, the GLM is associated with the limited fluctuation credibility approach. Comparison criteria such as asymptotic variance and credibility probability are used to analyze the credibility of risk classes. An application is performed by using one-year claim frequency data from a Turkish insurance company, and the results for credible risk classes are interpreted.
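
    For orientation, the limited-fluctuation quantities referred to above are commonly written as follows (standard textbook formulas for claim counts with tolerance k and confidence level 1 - α; an assumed restatement, not quoted from the paper):

    ```latex
    % Full-credibility standard for claim counts and the partial-credibility weight
    % blending class experience with a GLM-based estimate.
    \begin{align}
      n_F &= \left(\frac{z_{1-\alpha/2}}{k}\right)^{2}, \qquad
      Z = \min\!\left(1,\ \sqrt{\tfrac{n}{n_F}}\right), \qquad
      \hat{\mu}_{\mathrm{cred}} = Z\,\bar{X} + (1-Z)\,\hat{\mu}_{\mathrm{GLM}} .
    \end{align}
    ```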

  7. A general theory of linear cosmological perturbations: bimetric theories

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lagos, Macarena; Ferreira, Pedro G., E-mail: m.lagos13@imperial.ac.uk, E-mail: p.ferreira1@physics.ox.ac.uk

    2017-01-01

    We implement the method developed in [1] to construct the most general parametrised action for linear cosmological perturbations of bimetric theories of gravity. Specifically, we consider perturbations around a homogeneous and isotropic background, and identify the complete form of the action invariant under diffeomorphism transformations, as well as the number of free parameters characterising this cosmological class of theories. We discuss, in detail, the case without derivative interactions, and compare our results with those found in massive bigravity.

  8. Extending local canonical correlation analysis to handle general linear contrasts for FMRI data.

    PubMed

    Jin, Mingwu; Nandy, Rajesh; Curran, Tim; Cordes, Dietmar

    2012-01-01

    Local canonical correlation analysis (CCA) is a multivariate method that has been proposed to more accurately determine activation patterns in fMRI data. In its conventional formulation, CCA has several drawbacks that limit its usefulness in fMRI. A major drawback is that, unlike the general linear model (GLM), a test of general linear contrasts of the temporal regressors has not been incorporated into the CCA formalism. To overcome this drawback, a novel directional test statistic was derived using the equivalence of multivariate multiple regression (MVMR) and CCA. This extension will allow CCA to be used for inference of general linear contrasts in more complicated fMRI designs without reparameterization of the design matrix and without reestimating the CCA solutions for each particular contrast of interest. With the proper constraints on the spatial coefficients of CCA, this test statistic can yield a more powerful test on the inference of evoked brain regional activations from noisy fMRI data than the conventional t-test in the GLM. The quantitative results from simulated and pseudoreal data and activation maps from fMRI data were used to demonstrate the advantage of this novel test statistic.

  9. Generalized Parameter-Adjusted Stochastic Resonance of Duffing Oscillator and Its Application to Weak-Signal Detection.

    PubMed

    Lai, Zhi-Hui; Leng, Yong-Gang

    2015-08-28

    A two-dimensional Duffing oscillator which can produce stochastic resonance (SR) is studied in this paper. We introduce its SR mechanism and present a generalized parameter-adjusted SR (GPASR) model of this oscillator to address the necessity of parameter adjustments. The Kramers rate is chosen as the theoretical basis to establish a function for judging the occurrence of SR in this model, and to analyze and summarize the parameter-adjustment rules under unmatched signal amplitude, frequency, and/or noise intensity. Furthermore, we propose a weak-signal detection approach based on this GPASR model. Finally, we employ two practical examples to demonstrate the feasibility of the proposed approach in practical engineering applications.
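    As a rough illustration of the kind of system being discussed (not the GPASR model or the paper's parameter-adjustment rules), the following sketch integrates a noisy, periodically forced two-dimensional Duffing oscillator with an Euler-Maruyama scheme and reads off the output power at the driving frequency; all parameter values are assumptions chosen for illustration.

      import numpy as np

      def duffing_sr(a=1.0, b=1.0, delta=0.5, A=0.1, omega=0.05,
                     D=0.2, dt=0.01, n_steps=200_000, seed=0):
          """Integrate dx/dt = y, dy/dt = -delta*y + a*x - b*x**3
          + A*cos(omega*t) + sqrt(2*D)*xi(t) with Euler-Maruyama."""
          rng = np.random.default_rng(seed)
          x, y = 1.0, 0.0
          xs = np.empty(n_steps)
          for i in range(n_steps):
              t = i * dt
              dy = ((-delta * y + a * x - b * x**3 + A * np.cos(omega * t)) * dt
                    + np.sqrt(2 * D * dt) * rng.standard_normal())
              x, y = x + y * dt, y + dy
              xs[i] = x
          return xs

      x = duffing_sr()
      freqs = np.fft.rfftfreq(x.size, d=0.01)
      power = np.abs(np.fft.rfft(x - x.mean())) ** 2
      drive_bin = np.argmin(np.abs(freqs - 0.05 / (2 * np.pi)))
      print("output power at the driving frequency:", power[drive_bin])

    Sweeping the noise intensity D, or the system parameters in the GPASR spirit, and watching this spectral peak rise and fall is the usual way such a resonance is located numerically.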

  10. Generalized Clifford Algebras as Algebras in Suitable Symmetric Linear Gr-Categories

    NASA Astrophysics Data System (ADS)

    Cheng, Tao; Huang, Hua-Lin; Yang, Yuping

    2016-01-01

    By viewing Clifford algebras as algebras in some suitable symmetric Gr-categories, Albuquerque and Majid were able to give a new derivation of some well known results about Clifford algebras and to generalize them. Along the same line, Bulacu observed that Clifford algebras are weak Hopf algebras in the aforementioned categories and obtained other interesting properties. The aim of this paper is to study generalized Clifford algebras in a similar manner and extend the results of Albuquerque, Majid and Bulacu to the generalized setting. In particular, by taking full advantage of the gauge transformations in symmetric linear Gr-categories, we derive the decomposition theorem and provide categorical weak Hopf structures for generalized Clifford algebras in a conceptual and simpler manner.

  11. Generalized linear mixed models with varying coefficients for longitudinal data.

    PubMed

    Zhang, Daowen

    2004-03-01

    The routinely assumed parametric functional form in the linear predictor of a generalized linear mixed model for longitudinal data may be too restrictive to represent true underlying covariate effects. We relax this assumption by representing these covariate effects by smooth but otherwise arbitrary functions of time, with random effects used to model the correlation induced by among-subject and within-subject variation. Due to the usually intractable integration involved in evaluating the quasi-likelihood function, the double penalized quasi-likelihood (DPQL) approach of Lin and Zhang (1999, Journal of the Royal Statistical Society, Series B 61, 381-400) is used to estimate the varying coefficients and the variance components simultaneously by representing a nonparametric function by a linear combination of fixed effects and random effects. A scaled chi-squared test based on the mixed model representation of the proposed model is developed to test whether an underlying varying coefficient is a polynomial of certain degree. We evaluate the performance of the procedures through simulation studies and illustrate their application with Indonesian children infectious disease data.

  12. Estimation of Complex Generalized Linear Mixed Models for Measurement and Growth

    ERIC Educational Resources Information Center

    Jeon, Minjeong

    2012-01-01

    Maximum likelihood (ML) estimation of generalized linear mixed models (GLMMs) is technically challenging because of the intractable likelihoods that involve high dimensional integrations over random effects. The problem is magnified when the random effects have a crossed design and thus the data cannot be reduced to small independent clusters. A…

  13. Generalized Parameter-Adjusted Stochastic Resonance of Duffing Oscillator and Its Application to Weak-Signal Detection

    PubMed Central

    Lai, Zhi-Hui; Leng, Yong-Gang

    2015-01-01

    A two-dimensional Duffing oscillator which can produce stochastic resonance (SR) is studied in this paper. We introduce its SR mechanism and present a generalized parameter-adjusted SR (GPASR) model of this oscillator to address the necessity of parameter adjustments. The Kramers rate is chosen as the theoretical basis to establish a function for judging the occurrence of SR in this model, and to analyze and summarize the parameter-adjustment rules under unmatched signal amplitude, frequency, and/or noise intensity. Furthermore, we propose a weak-signal detection approach based on this GPASR model. Finally, we employ two practical examples to demonstrate the feasibility of the proposed approach in practical engineering applications. PMID:26343671

  14. Designing a generalized soil-adjusted vegetation index (GESAVI)

    NASA Astrophysics Data System (ADS)

    Gilabert, M. A.; Gonzalez Piqueras, Jose; Garcia-Haro, Joan; Melia, J.

    1998-12-01

    Operational monitoring of vegetative cover by remote sensing currently involves the utilization of vegetation indices (VIs), most of them being functions of the reflectance in red (R) and near-infrared (NIR) spectral bands. A generalized soil-adjusted vegetation index (GESAVI), theoretically based on a simple vegetation canopy model, is introduced. It is defined in terms of the soil line parameters (A and B) as GESAVI = (NIR - B·R - A)/(R + Z), where Z is related to the red reflectance at the cross point between the soil line and the vegetation isolines. Z can be considered a soil adjustment coefficient, which lets this new index be considered as belonging to the SAVI family. In order to analyze the GESAVI sensitivity to soil brightness and soil color, both high resolution reflectance data from two laboratory experiments and data obtained by applying a radiosity model to simulate heterogeneous vegetation canopy scenes were used. VIs (including GESAVI, NDVI, PVI and SAVI family VIs) were computed and their correlation with LAI for the different soil backgrounds was analyzed. Results confirmed the lower sensitivity of GESAVI to soil background in most of the cases, making it the most efficient index. This good index performance results from the fact that the isolines in the NIR-R plane are neither parallel to the soil line (as required by the PVI) nor convergent at the origin (as required by the NDVI) but converge somewhere between the origin and infinity in the region of negative values of both NIR and R. This convergence point is not necessarily situated on the bisectrix, as required by other SAVI family indices.
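    The index definition above translates directly into code. In the sketch below, the soil-line parameters A and B and the adjustment coefficient Z are placeholders (in practice they must be estimated from bare-soil pixels and vegetation isolines for a given scene), so the numbers are illustrative rather than the authors' values.

      import numpy as np

      def gesavi(nir, red, A=0.03, B=1.1, Z=0.35):
          """GESAVI = (NIR - B*R - A) / (R + Z); A, B from the soil line."""
          nir, red = np.asarray(nir, float), np.asarray(red, float)
          return (nir - B * red - A) / (red + Z)

      def ndvi(nir, red):
          nir, red = np.asarray(nir, float), np.asarray(red, float)
          return (nir - red) / (nir + red)

      # Illustrative red/NIR reflectances for a dense and a sparse canopy.
      nir, red = [0.45, 0.30], [0.08, 0.15]
      print("GESAVI:", gesavi(nir, red))
      print("NDVI  :", ndvi(nir, red))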

  15. Modeling containment of large wildfires using generalized linear mixed-model analysis

    Treesearch

    Mark Finney; Isaac C. Grenfell; Charles W. McHugh

    2009-01-01

    Billions of dollars are spent annually in the United States to contain large wildland fires, but the factors contributing to suppression success remain poorly understood. We used a regression model (generalized linear mixed-model) to model containment probability of individual fires, assuming that containment was a repeated-measures problem (fixed effect) and...

  16. Predicting oropharyngeal tumor volume throughout the course of radiation therapy from pretreatment computed tomography data using general linear models.

    PubMed

    Yock, Adam D; Rao, Arvind; Dong, Lei; Beadle, Beth M; Garden, Adam S; Kudchadker, Rajat J; Court, Laurence E

    2014-05-01

    The purpose of this work was to develop and evaluate the accuracy of several predictive models of variation in tumor volume throughout the course of radiation therapy. Nineteen patients with oropharyngeal cancers were imaged daily with CT-on-rails for image-guided alignment per an institutional protocol. The daily volumes of 35 tumors in these 19 patients were determined and used to generate (1) a linear model in which tumor volume changed at a constant rate, (2) a general linear model that utilized the power fit relationship between the daily and initial tumor volumes, and (3) a functional general linear model that identified and exploited the primary modes of variation between time series describing the changing tumor volumes. Primary and nodal tumor volumes were examined separately. The accuracy of these models in predicting daily tumor volumes was compared with that of static and linear reference models using leave-one-out cross-validation. In predicting the daily volume of primary tumors, the general linear model and the functional general linear model were more accurate than the static reference model by 9.9% (range: -11.6%-23.8%) and 14.6% (range: -7.3%-27.5%), respectively, and were more accurate than the linear reference model by 14.2% (range: -6.8%-40.3%) and 13.1% (range: -1.5%-52.5%), respectively. In predicting the daily volume of nodal tumors, only the 14.4% (range: -11.1%-20.5%) improvement in accuracy of the functional general linear model compared to the static reference model was statistically significant. A general linear model and a functional general linear model trained on data from a small population of patients can predict the primary tumor volume throughout the course of radiation therapy with greater accuracy than standard reference models. These more accurate models may increase the prognostic value of information about the tumor garnered from pretreatment computed tomography images and facilitate improved treatment management.
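    One plausible reading of the power-fit model described above is a per-day log-log regression of tumor volume on initial volume, V_day ≈ a_day · V0^b_day, fitted across tumors; the sketch below implements that reading on synthetic data and is not the authors' code.

      import numpy as np

      def fit_power_model(V):
          """V: (n_tumors, n_days) array of volumes; V[:, 0] is the initial volume.
          Returns one (a_day, b_day) pair per treatment day from a log-log fit."""
          X = np.column_stack([np.ones(V.shape[0]), np.log(V[:, 0])])
          coefs = []
          for day in range(V.shape[1]):
              beta, *_ = np.linalg.lstsq(X, np.log(V[:, day]), rcond=None)
              coefs.append((np.exp(beta[0]), beta[1]))
          return coefs

      def predict_volumes(V0, coefs):
          return np.array([a * V0 ** b for a, b in coefs])

      # Synthetic check: 35 tumors shrinking over 30 fractions.
      rng = np.random.default_rng(0)
      V0 = rng.lognormal(mean=2.0, sigma=0.5, size=35)
      days = np.arange(30)
      V = V0[:, None] * np.exp(-0.03 * days) * rng.lognormal(0.0, 0.05, (35, 30))
      coefs = fit_power_model(V)
      print(predict_volumes(V0[0], coefs)[:5])   # predicted volumes, first 5 days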

  17. Local influence for generalized linear models with missing covariates.

    PubMed

    Shi, Xiaoyan; Zhu, Hongtu; Ibrahim, Joseph G

    2009-12-01

    In the analysis of missing data, sensitivity analyses are commonly used to check the sensitivity of the parameters of interest with respect to the missing data mechanism and other distributional and modeling assumptions. In this article, we formally develop a general local influence method to carry out sensitivity analyses of minor perturbations to generalized linear models in the presence of missing covariate data. We examine two types of perturbation schemes (the single-case and global perturbation schemes) for perturbing various assumptions in this setting. We show that the metric tensor of a perturbation manifold provides useful information for selecting an appropriate perturbation. We also develop several local influence measures to identify influential points and test model misspecification. Simulation studies are conducted to evaluate our methods, and real datasets are analyzed to illustrate the use of our local influence measures.

  18. Extending Local Canonical Correlation Analysis to Handle General Linear Contrasts for fMRI Data

    PubMed Central

    Jin, Mingwu; Nandy, Rajesh; Curran, Tim; Cordes, Dietmar

    2012-01-01

    Local canonical correlation analysis (CCA) is a multivariate method that has been proposed to more accurately determine activation patterns in fMRI data. In its conventional formulation, CCA has several drawbacks that limit its usefulness in fMRI. A major drawback is that, unlike the general linear model (GLM), a test of general linear contrasts of the temporal regressors has not been incorporated into the CCA formalism. To overcome this drawback, a novel directional test statistic was derived using the equivalence of multivariate multiple regression (MVMR) and CCA. This extension will allow CCA to be used for inference of general linear contrasts in more complicated fMRI designs without reparameterization of the design matrix and without reestimating the CCA solutions for each particular contrast of interest. With the proper constraints on the spatial coefficients of CCA, this test statistic can yield a more powerful test on the inference of evoked brain regional activations from noisy fMRI data than the conventional t-test in the GLM. The quantitative results from simulated and pseudoreal data and activation maps from fMRI data were used to demonstrate the advantage of this novel test statistic. PMID:22461786

  19. The use of generalized linear models and generalized estimating equations in bioarchaeological studies.

    PubMed

    Nikita, Efthymia

    2014-03-01

    The current article explores whether generalized linear models (GLM) and generalized estimating equations (GEE) can be used in place of conventional statistical analyses in the study of ordinal data that code an underlying continuous variable, like entheseal changes. The analysis of artificial data and of ordinal data expressing entheseal changes in archaeological North African populations gave the following results. Parametric and nonparametric tests give convergent results, particularly for P values <0.1, irrespective of whether the underlying variable is normally distributed or not, under the condition that the samples involved in the tests are of approximately equal size. If this prerequisite holds and the samples have equal variances, analysis of covariance may be adopted. GLM are not subject to these constraints and give results that converge to those obtained from all nonparametric tests. Therefore, they can be used instead of traditional tests, as they provide the same information while additionally allowing the study of the simultaneous impact of multiple predictors and their interactions, as well as the modeling of the experimental data. However, GLM should be replaced by GEE for the study of bilateral asymmetry and, in general, when paired samples are tested, because GEE are appropriate for correlated data. Copyright © 2013 Wiley Periodicals, Inc.

  20. A Bivariate Generalized Linear Item Response Theory Modeling Framework to the Analysis of Responses and Response Times.

    PubMed

    Molenaar, Dylan; Tuerlinckx, Francis; van der Maas, Han L J

    2015-01-01

    A generalized linear modeling framework to the analysis of responses and response times is outlined. In this framework, referred to as bivariate generalized linear item response theory (B-GLIRT), separate generalized linear measurement models are specified for the responses and the response times that are subsequently linked by cross-relations. The cross-relations can take various forms. Here, we focus on cross-relations with a linear or interaction term for ability tests, and cross-relations with a curvilinear term for personality tests. In addition, we discuss how popular existing models from the psychometric literature are special cases in the B-GLIRT framework depending on restrictions in the cross-relation. This allows us to compare existing models conceptually and empirically. We discuss various extensions of the traditional models motivated by practical problems. We also illustrate the applicability of our approach using various real data examples, including data on personality and cognitive ability.

  1. Generalized prolate spheroidal wave functions for optical finite fractional Fourier and linear canonical transforms.

    PubMed

    Pei, Soo-Chang; Ding, Jian-Jiun

    2005-03-01

    Prolate spheroidal wave functions (PSWFs) are known to be useful for analyzing the properties of the finite-extension Fourier transform (fi-FT). We extend the theory of PSWFs for the finite-extension fractional Fourier transform, the finite-extension linear canonical transform, and the finite-extension offset linear canonical transform. These finite transforms are more flexible than the fi-FT and can model much more generalized optical systems. We also illustrate how to use the generalized prolate spheroidal functions we derive to analyze the energy-preservation ratio, the self-imaging phenomenon, and the resonance phenomenon of the finite-sized one-stage or multiple-stage optical systems.

  2. General job stress: a unidimensional measure and its non-linear relations with outcome variables.

    PubMed

    Yankelevich, Maya; Broadfoot, Alison; Gillespie, Jennifer Z; Gillespie, Michael A; Guidroz, Ashley

    2012-04-01

    This article aims to examine the non-linear relations between a general measure of job stress [Stress in General (SIG)] and two outcome variables: intentions to quit and job satisfaction. In so doing, we also re-examine the factor structure of the SIG and determine that, as a two-factor scale, it obscures non-linear relations with outcomes. Thus, in this research, we not only test for non-linear relations between stress and outcome variables but also present an updated version of the SIG scale. Using two distinct samples of working adults (sample 1, N = 589; sample 2, N = 4322), results indicate that a more parsimonious eight-item SIG has better model-data fit than the 15-item two-factor SIG and that the eight-item SIG has non-linear relations with job satisfaction and intentions to quit. Specifically, the revised SIG has an inverted curvilinear J-shaped relation with job satisfaction such that job satisfaction drops precipitously after a certain level of stress; the SIG has a J-shaped curvilinear relation with intentions to quit such that turnover intentions increase exponentially after a certain level of stress. Copyright © 2011 John Wiley & Sons, Ltd.
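    A minimal way to test for the kind of curvilinear relation reported above is to compare a linear model with one that adds a squared stress term; the sketch below does this on simulated data with hypothetical variable names (sig, job_sat) and is not a re-analysis of the SIG samples.

      import numpy as np
      import pandas as pd
      import statsmodels.formula.api as smf

      rng = np.random.default_rng(0)
      sig = rng.uniform(0.0, 10.0, 600)
      survey = pd.DataFrame({
          "sig": sig,
          # inverted-J shape: satisfaction falls off sharply at high stress
          "job_sat": 7.0 - 0.01 * sig - 0.02 * sig**2 + rng.normal(0.0, 0.8, 600),
      })

      linear = smf.ols("job_sat ~ sig", data=survey).fit()
      curvilinear = smf.ols("job_sat ~ sig + I(sig**2)", data=survey).fit()
      print("R^2, linear term only :", round(linear.rsquared, 3))
      print("R^2, with squared term:", round(curvilinear.rsquared, 3))
      print(curvilinear.summary())

    A clearly significant squared term together with a meaningful gain in R^2 is the basic evidence for a non-linear stress-outcome relation of the sort described in the abstract.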

  3. Approximated adjusted fractional Bayes factors: A general method for testing informative hypotheses.

    PubMed

    Gu, Xin; Mulder, Joris; Hoijtink, Herbert

    2018-05-01

    Informative hypotheses are increasingly being used in psychological sciences because they adequately capture researchers' theories and expectations. In the Bayesian framework, the evaluation of informative hypotheses often makes use of default Bayes factors such as the fractional Bayes factor. This paper approximates and adjusts the fractional Bayes factor such that it can be used to evaluate informative hypotheses in general statistical models. In the fractional Bayes factor a fraction parameter must be specified which controls the amount of information in the data used for specifying an implicit prior. The remaining fraction is used for testing the informative hypotheses. We discuss different choices of this parameter and present a scheme for setting it. Furthermore, a software package is described which computes the approximated adjusted fractional Bayes factor. Using this software package, psychological researchers can evaluate informative hypotheses by means of Bayes factors in an easy manner. Two empirical examples are used to illustrate the procedure. © 2017 The British Psychological Society.

  4. The general linear inverse problem - Implication of surface waves and free oscillations for earth structure.

    NASA Technical Reports Server (NTRS)

    Wiggins, R. A.

    1972-01-01

    The discrete general linear inverse problem reduces to a set of m equations in n unknowns. There is generally no unique solution, but we can find k linear combinations of parameters for which restraints are determined. The parameter combinations are given by the eigenvectors of the coefficient matrix. The number k is determined by the ratio of the standard deviations of the observations to the allowable standard deviations in the resulting solution. Various linear combinations of the eigenvectors can be used to determine parameter resolution and information distribution among the observations. Thus we can determine where information comes from among the observations and exactly how it constrains the set of possible models. The application of such analyses to surface-wave and free-oscillation observations indicates that (1) phase, group, and amplitude observations for any particular mode provide basically the same type of information about the model; (2) observations of overtones can enhance the resolution considerably; and (3) the degree of resolution has generally been overestimated for many model determinations made from surface waves.
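    The eigenvector and resolution machinery described above is conveniently expressed through the singular value decomposition of the coefficient matrix; the sketch below keeps only the parameter combinations whose singular values exceed the ratio of observational to allowable model standard deviation, and returns the resolution and information-density matrices. It is a generic illustration, not code from the paper.

      import numpy as np

      def truncated_svd_inverse(G, d, data_sigma, model_sigma):
          """Solve G m = d keeping only well-constrained parameter combinations."""
          U, s, Vt = np.linalg.svd(G, full_matrices=False)
          keep = s > (data_sigma / model_sigma)   # sets k, as described above
          Uk, sk, Vk = U[:, keep], s[keep], Vt[keep].T
          m_est = Vk @ ((Uk.T @ d) / sk)
          resolution = Vk @ Vk.T        # how well each parameter is resolved
          information = Uk @ Uk.T       # how information is spread over the data
          return m_est, resolution, information

      # Toy underdetermined system: 2 observations, 3 unknowns.
      G = np.array([[1.0, 1.0, 0.0],
                    [0.0, 1.0, 1.0]])
      d = np.array([1.0, 2.0])
      m, R, N = truncated_svd_inverse(G, d, data_sigma=0.01, model_sigma=1.0)
      print(m, np.diag(R))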

  5. Generalized linear and generalized additive models in studies of species distributions: Setting the scene

    USGS Publications Warehouse

    Guisan, Antoine; Edwards, T.C.; Hastie, T.

    2002-01-01

    An important statistical development of the last 30 years has been the advance in regression analysis provided by generalized linear models (GLMs) and generalized additive models (GAMs). Here we introduce a series of papers prepared within the framework of an international workshop entitled: Advances in GLMs/GAMs modeling: from species distribution to environmental management, held in Riederalp, Switzerland, 6-11 August 2001. We first discuss some general uses of statistical models in ecology, as well as provide a short review of several key examples of the use of GLMs and GAMs in ecological modeling efforts. We next present an overview of GLMs and GAMs, and discuss some of their related statistics used for predictor selection, model diagnostics, and evaluation. Included is a discussion of several new approaches applicable to GLMs and GAMs, such as ridge regression, an alternative to stepwise selection of predictors, and methods for the identification of interactions by a combined use of regression trees and several other approaches. We close with an overview of the papers and how we feel they advance our understanding of their application to ecological modeling. © 2002 Elsevier Science B.V. All rights reserved.

  6. A general theory of linear cosmological perturbations: scalar-tensor and vector-tensor theories

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lagos, Macarena; Baker, Tessa; Ferreira, Pedro G.

    We present a method for parametrizing linear cosmological perturbations of theories of gravity, around homogeneous and isotropic backgrounds. The method is sufficiently general and systematic that it can be applied to theories with any degrees of freedom (DoFs) and arbitrary gauge symmetries. In this paper, we focus on scalar-tensor and vector-tensor theories, invariant under linear coordinate transformations. In the case of scalar-tensor theories, we use our framework to recover the simple parametrizations of linearized Horndeski and ''Beyond Horndeski'' theories, and also find higher-derivative corrections. In the case of vector-tensor theories, we first construct the most general quadratic action for perturbations that leads to second-order equations of motion, which propagates two scalar DoFs. Then we specialize to the case in which the vector field is time-like (à la Einstein-Aether gravity), where the theory only propagates one scalar DoF. As a result, we identify the complete forms of the quadratic actions for perturbations, and the number of free parameters that need to be defined, to cosmologically characterize these two broad classes of theories.

  7. Application of the Hyper-Poisson Generalized Linear Model for Analyzing Motor Vehicle Crashes.

    PubMed

    Khazraee, S Hadi; Sáez-Castillo, Antonio Jose; Geedipally, Srinivas Reddy; Lord, Dominique

    2015-05-01

    The hyper-Poisson distribution can handle both over- and underdispersion, and its generalized linear model formulation allows the dispersion of the distribution to be observation-specific and dependent on model covariates. This study's objective is to examine the potential applicability of a newly proposed generalized linear model framework for the hyper-Poisson distribution in analyzing motor vehicle crash count data. The hyper-Poisson generalized linear model was first fitted to intersection crash data from Toronto, characterized by overdispersion, and then to crash data from railway-highway crossings in Korea, characterized by underdispersion. The results of this study are promising. When fitted to the Toronto data set, the goodness-of-fit measures indicated that the hyper-Poisson model with a variable dispersion parameter provided a statistical fit as good as the traditional negative binomial model. The hyper-Poisson model was also successful in handling the underdispersed data from Korea; the model performed as well as the gamma probability model and the Conway-Maxwell-Poisson model previously developed for the same data set. The advantages of the hyper-Poisson model studied in this article are noteworthy. Unlike the negative binomial model, which has difficulties in handling underdispersed data, the hyper-Poisson model can handle both over- and underdispersed crash data. Although not a major issue for the Conway-Maxwell-Poisson model, the effect of each variable on the expected mean of crashes is easily interpretable in the case of this new model. © 2014 Society for Risk Analysis.
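    For reference, the hyper-Poisson probability mass function can be written as P(Y = y) = λ^y / [(γ)_y · 1F1(1; γ; λ)], where (γ)_y is the rising factorial; γ = 1 recovers the Poisson, γ > 1 gives overdispersion and γ < 1 underdispersion. The sketch below evaluates this generic textbook form directly and is not the authors' GLM code; in the GLM formulation, the location parameter is linked to covariates (and the dispersion parameter may be as well), which is what makes the dispersion observation-specific.

      import numpy as np
      from scipy.special import hyp1f1, poch

      def hyper_poisson_pmf(y, lam, gamma):
          """P(Y=y) = lam**y / ( (gamma)_y * 1F1(1; gamma; lam) )."""
          y = np.asarray(y)
          return lam ** y / (poch(gamma, y) * hyp1f1(1.0, gamma, lam))

      ys = np.arange(8)
      print(hyper_poisson_pmf(ys, lam=2.0, gamma=1.0))   # equals Poisson(2)
      print(hyper_poisson_pmf(ys, lam=2.0, gamma=2.5))   # overdispersed case
      print(hyper_poisson_pmf(ys, lam=2.0, gamma=0.6))   # underdispersed case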

  8. Consistent linearization of the element-independent corotational formulation for the structural analysis of general shells

    NASA Technical Reports Server (NTRS)

    Rankin, C. C.

    1988-01-01

    A consistent linearization is provided for the element-independent corotational formulation, providing the proper first and second variations of the strain energy. As a result, the warping problem that has plagued flat elements has been overcome, with beneficial effects carried over to linear solutions. True Newton quadratic convergence has been restored to the Structural Analysis of General Shells (STAGS) code for conservative loading using the full corotational implementation. Some implications for general finite element analysis are discussed, including what effect the automatic frame invariance provided by this work might have on the development of new, improved elements.

  9. Article mounting and position adjustment stage

    DOEpatents

    Cutburth, R.W.; Silva, L.L.

    1988-05-10

    An improved adjustment and mounting stage of the type used for the detection of laser beams is disclosed. A ring sensor holder has locating pins on a first side thereof which are positioned within a linear keyway in a surrounding housing for permitting reciprocal movement of the ring along the keyway. A rotatable ring gear is positioned within the housing on the other side of the ring from the linear keyway and includes an oval keyway which drives the ring along the linear keyway upon rotation of the gear. Motor-driven single-stage and dual (x, y) stage adjustment systems are disclosed which are of compact construction and include a large laser transmission hole. 6 figs.

  10. Article mounting and position adjustment stage

    DOEpatents

    Cutburth, Ronald W.; Silva, Leonard L.

    1988-01-01

    An improved adjustment and mounting stage of the type used for the detection of laser beams is disclosed. A ring sensor holder has locating pins on a first side thereof which are positioned within a linear keyway in a surrounding housing for permitting reciprocal movement of the ring along the keyway. A rotatable ring gear is positioned within the housing on the other side of the ring from the linear keyway and includes an oval keyway which drives the ring along the linear keyway upon rotation of the gear. Motor-driven single-stage and dual (x, y) stage adjustment systems are disclosed which are of compact construction and include a large laser transmission hole.

  11. A study of the linear free energy model for DNA structures using the generalized Hamiltonian formalism

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yavari, M., E-mail: yavari@iaukashan.ac.ir

    2016-06-15

    We generalize the results of Nesterenko [13, 14] and Gogilidze and Surovtsev [15] for DNA structures. Using the generalized Hamiltonian formalism, we investigate solutions of the equilibrium shape equations for the linear free energy model.

  12. Non-linear regime of the Generalized Minimal Massive Gravity in critical points

    NASA Astrophysics Data System (ADS)

    Setare, M. R.; Adami, H.

    2016-03-01

    The Generalized Minimal Massive Gravity (GMMG) theory is realized by adding the CS deformation term, the higher derivative deformation term, and an extra term to pure Einstein gravity with a negative cosmological constant. In the present paper we obtain exact solutions to the GMMG field equations in the non-linear regime of the model. The GMMG model about AdS_3 space is conjectured to be dual to a 2-dimensional CFT. We study the theory at the critical points corresponding to the central charges c_-=0 or c_+=0, in the non-linear regime. We show that AdS_3 wave solutions are present and take a logarithmic form at the critical points. We then study the AdS_3 non-linear deformation solution. Furthermore, we obtain a logarithmic deformation of the extremal BTZ black hole. Finally, using the Abbott-Deser-Tekin method, we calculate the energy and angular momentum of these types of black hole solutions.

  13. Equivalence between a generalized dendritic network and a set of one-dimensional networks as a ground of linear dynamics.

    PubMed

    Koda, Shin-ichi

    2015-05-28

    It has been shown by some existing studies that some linear dynamical systems defined on a dendritic network are equivalent to those defined on a set of one-dimensional networks in special cases and this transformation to the simple picture, which we call linear chain (LC) decomposition, has a significant advantage in understanding properties of dendrimers. In this paper, we expand the class of LC decomposable system with some generalizations. In addition, we propose two general sufficient conditions for LC decomposability with a procedure to systematically realize the LC decomposition. Some examples of LC decomposable linear dynamical systems are also presented with their graphs. The generalization of the LC decomposition is implemented in the following three aspects: (i) the type of linear operators; (ii) the shape of dendritic networks on which linear operators are defined; and (iii) the type of symmetry operations representing the symmetry of the systems. In the generalization (iii), symmetry groups that represent the symmetry of dendritic systems are defined. The LC decomposition is realized by changing the basis of a linear operator defined on a dendritic network into bases of irreducible representations of the symmetry group. The achievement of this paper makes it easier to utilize the LC decomposition in various cases. This may lead to a further understanding of the relation between structure and functions of dendrimers in future studies.

  14. Generalized functional linear models for gene-based case-control association studies.

    PubMed

    Fan, Ruzong; Wang, Yifan; Mills, James L; Carter, Tonia C; Lobach, Iryna; Wilson, Alexander F; Bailey-Wilson, Joan E; Weeks, Daniel E; Xiong, Momiao

    2014-11-01

    By using functional data analysis techniques, we developed generalized functional linear models for testing association between a dichotomous trait and multiple genetic variants in a genetic region while adjusting for covariates. Both fixed and mixed effect models are developed and compared. Extensive simulations show that Rao's efficient score tests of the fixed effect models are very conservative since they generate lower type I errors than nominal levels, and global tests of the mixed effect models generate accurate type I errors. Furthermore, we found that the Rao's efficient score test statistics of the fixed effect models have higher power than the sequence kernel association test (SKAT) and its optimal unified version (SKAT-O) in most cases when the causal variants are both rare and common. When the causal variants are all rare (i.e., minor allele frequencies less than 0.03), the Rao's efficient score test statistics and the global tests have similar or slightly lower power than SKAT and SKAT-O. In practice, it is not known whether rare variants or common variants in a gene region are disease related. All we can assume is that a combination of rare and common variants influences disease susceptibility. Thus, the improved performance of our models when the causal variants are both rare and common shows that the proposed models can be very useful in dissecting complex traits. We compare the performance of our methods with SKAT and SKAT-O on real neural tube defects and Hirschsprung's disease datasets. The Rao's efficient score test statistics and the global tests are more sensitive than SKAT and SKAT-O in the real data analysis. Our methods can be used in either gene-disease genome-wide/exome-wide association studies or candidate gene analyses. © 2014 WILEY PERIODICALS, INC.

  15. Generalized Functional Linear Models for Gene-based Case-Control Association Studies

    PubMed Central

    Mills, James L.; Carter, Tonia C.; Lobach, Iryna; Wilson, Alexander F.; Bailey-Wilson, Joan E.; Weeks, Daniel E.; Xiong, Momiao

    2014-01-01

    By using functional data analysis techniques, we developed generalized functional linear models for testing association between a dichotomous trait and multiple genetic variants in a genetic region while adjusting for covariates. Both fixed and mixed effect models are developed and compared. Extensive simulations show that Rao's efficient score tests of the fixed effect models are very conservative since they generate lower type I errors than nominal levels, and global tests of the mixed effect models generate accurate type I errors. Furthermore, we found that the Rao's efficient score test statistics of the fixed effect models have higher power than the sequence kernel association test (SKAT) and its optimal unified version (SKAT-O) in most cases when the causal variants are both rare and common. When the causal variants are all rare (i.e., minor allele frequencies less than 0.03), the Rao's efficient score test statistics and the global tests have similar or slightly lower power than SKAT and SKAT-O. In practice, it is not known whether rare variants or common variants in a gene are disease-related. All we can assume is that a combination of rare and common variants influences disease susceptibility. Thus, the improved performance of our models when the causal variants are both rare and common shows that the proposed models can be very useful in dissecting complex traits. We compare the performance of our methods with SKAT and SKAT-O on real neural tube defects and Hirschsprung's disease data sets. The Rao's efficient score test statistics and the global tests are more sensitive than SKAT and SKAT-O in the real data analysis. Our methods can be used in either gene-disease genome-wide/exome-wide association studies or candidate gene analyses. PMID:25203683

  16. On homogeneous second order linear general quantum difference equations.

    PubMed

    Faried, Nashat; Shehata, Enas M; El Zafarani, Rasha M

    2017-01-01

    In this paper, we prove the existence and uniqueness of solutions of the β -Cauchy problem of second order β -difference equations [Formula: see text] [Formula: see text], in a neighborhood of the unique fixed point [Formula: see text] of the strictly increasing continuous function β , defined on an interval [Formula: see text]. These equations are based on the general quantum difference operator [Formula: see text], which is defined by [Formula: see text], [Formula: see text]. We also construct a fundamental set of solutions for the second order linear homogeneous β -difference equations when the coefficients are constants and study the different cases of the roots of their characteristic equations. Finally, we derive the Euler-Cauchy β -difference equation.

  17. 34 CFR 668.208 - General requirements for adjusting official cohort default rates and for appealing their...

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 34 Education 3 2010-07-01 2010-07-01 false General requirements for adjusting official cohort default rates and for appealing their consequences. 668.208 Section 668.208 Education Regulations of the Offices of the Department of Education (Continued) OFFICE OF POSTSECONDARY EDUCATION, DEPARTMENT OF...

  18. 34 CFR 668.189 - General requirements for adjusting official cohort default rates and for appealing their...

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 34 Education 3 2010-07-01 2010-07-01 false General requirements for adjusting official cohort default rates and for appealing their consequences. 668.189 Section 668.189 Education Regulations of the Offices of the Department of Education (Continued) OFFICE OF POSTSECONDARY EDUCATION, DEPARTMENT OF...

  19. Drift tube suspension for high intensity linear accelerators

    DOEpatents

    Liska, D.J.; Schamaun, R.G.; Clark, D.C.; Potter, R.C.; Frank, J.A.

    1980-03-11

    The disclosure relates to a drift tube suspension for high intensity linear accelerators. The system comprises a series of box-section girders independently and adjustably mounted on a linear accelerator. A plurality of drift tube holding stems are individually and adjustably mounted on each girder.

  20. Geostrophic adjustment in a shallow-water numerical model as it relates to thermospheric dynamics

    NASA Technical Reports Server (NTRS)

    Larsen, M. F.; Mikkelsen, I. S.

    1986-01-01

    The theory of geostrophic adjustment and its application to the dynamics of the high latitude thermosphere have been discussed in previous papers based on a linearized treatment of the fluid dynamical equations. However, a linearized treatment is only valid for small Rossby numbers given by Ro = V/fL, where V is the wind speed, f is the local value of the Coriolis parameter, and L is a characteristic horizontal scale for the flow. For typical values in the auroral zone, the approximation is not reasonable for wind speeds greater than 25 m/s or so. A shallow-water (one layer) model was developed that includes the spherical geometry and full nonlinear dynamics in the momentum equations in order to isolate the effects of the nonlinearities on the adjustment process. A belt of accelerated winds between 60 deg and 70 deg latitude was used as the initial condition. The adjustment process was found to proceed as expected from the linear formulation, but an asymmetry between the responses for eastward and westward flow results from the nonlinear curvature (centrifugal) terms. In general, the amplitude of an eastward flowing wind will be less after adjustment than that of a westward wind. For instance, if the initial wind velocity is 300 m/s, the linearized theory predicts a final wind speed of 240 m/s, regardless of the flow direction. However, the nonlinear curvature terms modify the response and produce a final wind speed of only 200 m/s for an initial eastward wind and a final wind speed of almost 300 m/s for an initial westward flow direction. Also, less gravity wave energy is produced by the adjustment of the westward flow than by the adjustment of the eastward flow. The implications are that the response of the thermosphere should be significantly different on the dawn and dusk sides of the auroral oval. Larger flow velocities would be expected on the dusk side since the plasma will accelerate the flow in a westward direction in that sector.
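    The quoted threshold is easy to reproduce from the definition Ro = V/(fL). The snippet below assumes a latitude of 65 degrees and a horizontal scale of roughly 1000 km (reasonable auroral-zone numbers, but assumptions rather than values taken from the paper):

      import numpy as np

      OMEGA = 7.292e-5                               # Earth's rotation rate (rad/s)

      def rossby_number(V, lat_deg=65.0, L=1.0e6):
          f = 2.0 * OMEGA * np.sin(np.radians(lat_deg))
          return V / (f * L)

      for V in (25.0, 100.0, 300.0):
          print(f"V = {V:5.0f} m/s  ->  Ro = {rossby_number(V):.2f}")
      # Ro is already ~0.2 at 25 m/s, so the small-Rossby-number (linear)
      # approximation degrades quickly for stronger auroral-zone winds.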

  1. Commensurate Priors for Incorporating Historical Information in Clinical Trials Using General and Generalized Linear Models

    PubMed Central

    Hobbs, Brian P.; Sargent, Daniel J.; Carlin, Bradley P.

    2014-01-01

    Assessing between-study variability in the context of conventional random-effects meta-analysis is notoriously difficult when incorporating data from only a small number of historical studies. In order to borrow strength, historical and current data are often assumed to be fully homogeneous, but this can have drastic consequences for power and Type I error if the historical information is biased. In this paper, we propose empirical and fully Bayesian modifications of the commensurate prior model (Hobbs et al., 2011) extending Pocock (1976), and evaluate their frequentist and Bayesian properties for incorporating patient-level historical data using general and generalized linear mixed regression models. Our proposed commensurate prior models lead to preposterior admissible estimators that facilitate alternative bias-variance trade-offs than those offered by pre-existing methodologies for incorporating historical data from a small number of historical studies. We also provide a sample analysis of a colon cancer trial comparing time-to-disease progression using a Weibull regression model. PMID:24795786

  2. Variable Selection with Prior Information for Generalized Linear Models via the Prior LASSO Method.

    PubMed

    Jiang, Yuan; He, Yunxiao; Zhang, Heping

    LASSO is a popular statistical tool often used in conjunction with generalized linear models that can simultaneously select variables and estimate parameters. When there are many variables of interest, as in current biological and biomedical studies, the power of LASSO can be limited. Fortunately, so much biological and biomedical data have been collected and they may contain useful information about the importance of certain variables. This paper proposes an extension of LASSO, namely, prior LASSO (pLASSO), to incorporate that prior information into penalized generalized linear models. The goal is achieved by adding in the LASSO criterion function an additional measure of the discrepancy between the prior information and the model. For linear regression, the whole solution path of the pLASSO estimator can be found with a procedure similar to the Least Angle Regression (LARS). Asymptotic theories and simulation results show that pLASSO provides significant improvement over LASSO when the prior information is relatively accurate. When the prior information is less reliable, pLASSO shows great robustness to the misspecification. We illustrate the application of pLASSO using a real data set from a genome-wide association study.
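    As a rough illustration of the idea for the linear-regression case (a simplified reading, not the pLASSO algorithm of the paper), the prior information can be represented as pseudo-observations X·beta_prior whose weight eta controls how strongly the penalized fit is pulled toward the prior:

      import numpy as np
      from sklearn.linear_model import Lasso

      def plasso_linear(X, y, beta_prior, eta=1.0, alpha=0.1):
          """LASSO fit augmented with prior pseudo-observations weighted by eta."""
          X_aug = np.vstack([X, np.sqrt(eta) * X])
          y_aug = np.concatenate([y, np.sqrt(eta) * X @ beta_prior])
          return Lasso(alpha=alpha).fit(X_aug, y_aug).coef_

      # Synthetic example with a useful but imperfect prior.
      rng = np.random.default_rng(0)
      X = rng.normal(size=(100, 10))
      beta_true = np.array([2.0, -1.5] + [0.0] * 8)
      y = X @ beta_true + rng.normal(size=100)
      beta_prior = np.array([1.5, -1.0] + [0.0] * 8)
      print(plasso_linear(X, y, beta_prior, eta=0.5, alpha=0.05))

    When eta approaches zero the ordinary LASSO is recovered, while a large eta forces the estimate toward the prior, mirroring the accuracy/robustness trade-off discussed in the abstract.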

  3. Capelli bitableaux and Z-forms of general linear Lie superalgebras.

    PubMed Central

    Brini, A; Teolis, A G

    1990-01-01

    The combinatorics of the enveloping algebra UQ(pl(L)) of the general linear Lie superalgebra of a finite dimensional Z2-graded Q-vector space is studied. Three non-equivalent Z-forms of UQ(pl(L)) are introduced: one of these Z-forms is a version of the Kostant Z-form and the others are Lie algebra analogs of Rota and Stein's straightening formulae for the supersymmetric algebra Super[L P] and for its dual Super[L* P*]. The method is based on an extension of Capelli's technique of variabili ausiliarie (auxiliary variables) to algebras containing positively and negatively signed elements. PMID:11607048

  4. A study of school adjustment, self-concept, self-esteem, general wellbeing and parent child relationship in Juvenile Idiopathic Arthritis.

    PubMed

    Yadav, Anita; Yadav, T P

    2013-03-01

    To assess school adjustment, self-concept, self-esteem, general wellbeing and parent-child relationship in children with Juvenile Idiopathic Arthritis (JIA) and to study the correlation of these parameters with chronicity of disease, number of active joints, laboratory parameters of disease activity and JIA subtypes. A total of 64 children (32 cases and 32 controls) were recruited for analysis. Self report questionnaires which included PGI General Wellbeing Measure, Adjustment Inventory for School Students, Parent Child Relationship Scale, Self Esteem Inventory and Self Concept Questionnaires were used to assess all the enrolled subjects. Cases had significantly lower general physical well being (p < 0.001), self-esteem (p = 0.039), social self-concept (p = 0.023) and poorer social (p = 0.002), educational (p = 0.002) and overall (p = 0.006) adjustment as compared to controls. Both parents of cases were significantly more demanding (p = 0.028, p = 0.004) and mothers were over protective (p = 0.009) and pampering with object rewards (p = 0.02). PGI wellbeing score (p = 0.042, p = 0.019) and self concept (p = 0.002, for social SCQ p = 0.030) correlated well with number of active joints and ESR. As the disease duration increased, fathers tended to neglect their children (p = 0.043) and with persistent disease activity (reflected by CRP positivity) even resorted to punishment (p = 0.022) or remained indifferent (p = 0.048). JIA significantly hampers the child's self-esteem, self-concept, adjustment in school, general wellbeing and evokes disturbed parent-child relationship.

  5. Application of conditional moment tests to model checking for generalized linear models.

    PubMed

    Pan, Wei

    2002-06-01

    Generalized linear models (GLMs) are increasingly being used in daily data analysis. However, model checking for GLMs with correlated discrete response data remains difficult. In this paper, through a case study on marginal logistic regression using a real data set, we illustrate the flexibility and effectiveness of using conditional moment tests (CMTs), along with other graphical methods, to do model checking for generalized estimation equation (GEE) analyses. Although CMTs provide an array of powerful diagnostic tests for model checking, they were originally proposed in the econometrics literature and, to our knowledge, have never been applied to GEE analyses. CMTs cover many existing tests, including the (generalized) score test for an omitted covariate, as special cases. In summary, we believe that CMTs provide a class of useful model checking tools.

  6. 19 CFR 201.205 - Salary adjustments.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 19 Customs Duties 3 2010-04-01 2010-04-01 false Salary adjustments. 201.205 Section 201.205 Customs Duties UNITED STATES INTERNATIONAL TRADE COMMISSION GENERAL RULES OF GENERAL APPLICATION Debt Collection § 201.205 Salary adjustments. Any negative adjustment to pay arising out of an employee's election...

  7. 19 CFR 201.205 - Salary adjustments.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 19 Customs Duties 3 2014-04-01 2014-04-01 false Salary adjustments. 201.205 Section 201.205 Customs Duties UNITED STATES INTERNATIONAL TRADE COMMISSION GENERAL RULES OF GENERAL APPLICATION Debt Collection § 201.205 Salary adjustments. Any negative adjustment to pay arising out of an employee's election...

  8. 19 CFR 201.205 - Salary adjustments.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 19 Customs Duties 3 2011-04-01 2011-04-01 false Salary adjustments. 201.205 Section 201.205 Customs Duties UNITED STATES INTERNATIONAL TRADE COMMISSION GENERAL RULES OF GENERAL APPLICATION Debt Collection § 201.205 Salary adjustments. Any negative adjustment to pay arising out of an employee's election...

  9. 19 CFR 201.205 - Salary adjustments.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 19 Customs Duties 3 2013-04-01 2013-04-01 false Salary adjustments. 201.205 Section 201.205 Customs Duties UNITED STATES INTERNATIONAL TRADE COMMISSION GENERAL RULES OF GENERAL APPLICATION Debt Collection § 201.205 Salary adjustments. Any negative adjustment to pay arising out of an employee's election...

  10. 19 CFR 201.205 - Salary adjustments.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 19 Customs Duties 3 2012-04-01 2012-04-01 false Salary adjustments. 201.205 Section 201.205 Customs Duties UNITED STATES INTERNATIONAL TRADE COMMISSION GENERAL RULES OF GENERAL APPLICATION Debt Collection § 201.205 Salary adjustments. Any negative adjustment to pay arising out of an employee's election...

  11. Statistical Methods for Generalized Linear Models with Covariates Subject to Detection Limits.

    PubMed

    Bernhardt, Paul W; Wang, Huixia J; Zhang, Daowen

    2015-05-01

    Censored observations are a common occurrence in biomedical data sets. Although a large amount of research has been devoted to estimation and inference for data with censored responses, very little research has focused on proper statistical procedures when predictors are censored. In this paper, we consider statistical methods for dealing with multiple predictors subject to detection limits within the context of generalized linear models. We investigate and adapt several conventional methods and develop a new multiple imputation approach for analyzing data sets with predictors censored due to detection limits. We establish the consistency and asymptotic normality of the proposed multiple imputation estimator and suggest a computationally simple and consistent variance estimator. We also demonstrate that the conditional mean imputation method often leads to inconsistent estimates in generalized linear models, while several other methods are either computationally intensive or lead to parameter estimates that are biased or more variable compared to the proposed multiple imputation estimator. In an extensive simulation study, we assess the bias and variability of different approaches within the context of a logistic regression model and compare variance estimation methods for the proposed multiple imputation estimator. Lastly, we apply several methods to analyze the data set from a recently-conducted GenIMS study.

  12. Adjusting for Health Status in Non-Linear Models of Health Care Disparities

    PubMed Central

    Cook, Benjamin L.; McGuire, Thomas G.; Meara, Ellen; Zaslavsky, Alan M.

    2009-01-01

    This article compared conceptual and empirical strengths of alternative methods for estimating racial disparities using non-linear models of health care access. Three methods were presented (propensity score, rank and replace, and a combined method) that adjust for health status while allowing SES variables to mediate the relationship between race and access to care. Applying these methods to a nationally representative sample of blacks and non-Hispanic whites surveyed in the 2003 and 2004 Medical Expenditure Panel Surveys (MEPS), we assessed the concordance of each of these methods with the Institute of Medicine (IOM) definition of racial disparities, and empirically compared the methods' predicted disparity estimates, the variance of the estimates, and the sensitivity of the estimates to limitations of available data. The rank and replace and combined methods (but not the propensity score method) are concordant with the IOM definition of racial disparities in that each creates a comparison group with the appropriate marginal distributions of health status and SES variables. Predicted disparities and prediction variances were similar for the rank and replace and combined methods, but the rank and replace method was sensitive to limitations on SES information. For all methods, limiting health status information significantly reduced estimates of disparities compared to a more comprehensive dataset. We conclude that the two IOM-concordant methods were similar enough that either could be considered in disparity predictions. In datasets with limited SES information, the combined method is the better choice. PMID:20352070

  13. Robust root clustering for linear uncertain systems using generalized Lyapunov theory

    NASA Technical Reports Server (NTRS)

    Yedavalli, R. K.

    1993-01-01

    Consideration is given to the problem of matrix root clustering in subregions of a complex plane for linear state space models with real parameter uncertainty. The nominal matrix root clustering theory of Gutman & Jury (1981) using the generalized Liapunov equation is extended to the perturbed matrix case, and bounds are derived on the perturbation to maintain root clustering inside a given region. The theory makes it possible to obtain an explicit relationship between the parameters of the root clustering region and the uncertainty range of the parameter space.

  14. Linear and nonlinear dynamics of isospectral granular chains

    NASA Astrophysics Data System (ADS)

    Chaunsali, R.; Xu, H.; Yang, J.; Kevrekidis, P. G.

    2017-04-01

    We study the dynamics of isospectral granular chains that are highly tunable due to the nonlinear Hertz contact law interaction between the granular particles. The system dynamics can thus be tuned easily from being linear to strongly nonlinear by adjusting the initial compression applied to the chain. In particular, we introduce both discrete and continuous spectral transformation schemes to generate a family of granular chains that are isospectral in their linear limit. Inspired by the principle of supersymmetry in quantum systems, we also introduce a methodology to add or remove certain eigenfrequencies, and we demonstrate numerically that the corresponding physical system can be constructed in the setting of one-dimensional granular crystals. In the linear regime, we highlight the similarities in the elastic wave transmission characteristics of such isospectral systems, and emphasize that the presented mathematical framework allows one to suitably tailor the wave transmission through a general class of granular chains, both ordered and disordered. Moreover, we show how the dynamic response of these structures deviates from its linear limit as we introduce Hertzian nonlinearity in the chain and how nonlinearity breaks the notion of linear isospectrality.

  15. Permutation inference for the general linear model

    PubMed Central

    Winkler, Anderson M.; Ridgway, Gerard R.; Webster, Matthew A.; Smith, Stephen M.; Nichols, Thomas E.

    2014-01-01

    Permutation methods can provide exact control of false positives and allow the use of non-standard statistics, making only weak assumptions about the data. With the availability of fast and inexpensive computing, their main limitation would be some lack of flexibility to work with arbitrary experimental designs. In this paper we report on results on approximate permutation methods that are more flexible with respect to the experimental design and nuisance variables, and conduct detailed simulations to identify the best method for settings that are typical for imaging research scenarios. We present a generic framework for permutation inference for complex general linear models (glms) when the errors are exchangeable and/or have a symmetric distribution, and show that, even in the presence of nuisance effects, these permutation inferences are powerful while providing excellent control of false positives in a wide range of common and relevant imaging research scenarios. We also demonstrate how the inference on glm parameters, originally intended for independent data, can be used in certain special but useful cases in which independence is violated. Detailed examples of common neuroimaging applications are provided, as well as a complete algorithm – the “randomise” algorithm – for permutation inference with the glm. PMID:24530839
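    A minimal sketch of the kind of permutation scheme involved is given below: a Freedman-Lane style test for a single regressor of interest in the presence of nuisance covariates. It is a simplified illustration, not the full "randomise" algorithm described in the paper.

      import numpy as np

      def permutation_p_value(y, x, Z, n_perm=2000, seed=0):
          """y: response; x: regressor of interest; Z: nuisance design (incl. intercept)."""
          rng = np.random.default_rng(seed)
          X = np.column_stack([Z, x])

          def t_stat(y_):
              beta, *_ = np.linalg.lstsq(X, y_, rcond=None)
              resid = y_ - X @ beta
              sigma2 = resid @ resid / (X.shape[0] - X.shape[1])
              cov = sigma2 * np.linalg.inv(X.T @ X)
              return beta[-1] / np.sqrt(cov[-1, -1])       # x is the last column

          t_obs = t_stat(y)
          # Freedman-Lane: permute residuals from the nuisance-only (reduced) model.
          beta_Z, *_ = np.linalg.lstsq(Z, y, rcond=None)
          fitted_Z, resid_Z = Z @ beta_Z, y - Z @ beta_Z
          t_perm = np.array([t_stat(fitted_Z + rng.permutation(resid_Z))
                             for _ in range(n_perm)])
          return (1 + np.sum(np.abs(t_perm) >= abs(t_obs))) / (n_perm + 1)

      # Synthetic example: intercept + one nuisance covariate + one effect of interest.
      rng = np.random.default_rng(1)
      n = 80
      Z = np.column_stack([np.ones(n), rng.normal(size=n)])
      x = rng.normal(size=n)
      y = Z @ np.array([1.0, 0.8]) + 0.5 * x + rng.normal(size=n)
      print(permutation_p_value(y, x, Z))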

  16. [Adjustment disorders with anxiety. Clinical and psychometric characteristics in patients consulting a general practitioner].

    PubMed

    Servant, D; Pelissolo, A; Chancharme, L; Le Guern, M-E; Boulenger, J-P

    2013-10-01

    The DSM-IV and ICD-10 descriptions of adjustment disorders are broadly similar. Their main features are the following: the symptoms arise in response to a stressful event; the onset of symptoms is within 3 months (DSM-IV) or 1 month (ICD-10) of exposure to the stressor; the symptoms must be clinically significant, in that they are distressing and in excess of what would be expected by exposure to the stressor and/or there is significant impairment in social or occupational functioning (the latter is mandatory in ICD-10); the symptoms are not due to another axis I disorder (or bereavement in DSM-IV); the symptoms resolve within 6 months, once the stressor or its consequences are removed. Adjustment disorders are divided into subgroups based on the dominant symptoms of anxiety, depression or behaviour. Adjustment disorder with anxiety (ADA) is a very common diagnosis in primary care, liaison and general psychiatry services, but we still lack data about its specificity as a clinical entity. Current classifications fail to provide guidance on distinguishing these disorders from normal adaptive reactions to stress. Ninety-seven patients with ADA according to DSM-IV were recruited in this primary care study and compared with 30 control subjects matched for age and sex. The diagnosis was made with the MINI questionnaire, complemented by a standardized assessment of stressful events and an evaluation of anxiety symptoms using different scales: the Hamilton Anxiety rating Scale (HAM-A), the Hospital Anxiety and Depression scale (HAD), the Penn-State Worry Questionnaire (PSWQ), the 31-item Positive and Negative Emotionality scale (EPN-31) and the State-Trait Anxiety Inventory (STAI-S). Life events in relation to work were the most frequent (43%). In terms of symptomatology, results showed that ADA is associated with a level of anxiety close to those obtained in other anxiety disorders, particularly GAD, in relation to general symptoms (physical and somatic) as well

  17. Optimizing the general linear model for functional near-infrared spectroscopy: an adaptive hemodynamic response function approach

    PubMed Central

    Uga, Minako; Dan, Ippeita; Sano, Toshifumi; Dan, Haruka; Watanabe, Eiju

    2014-01-01

    Abstract. An increasing number of functional near-infrared spectroscopy (fNIRS) studies utilize a general linear model (GLM) approach, which serves as a standard statistical method for functional magnetic resonance imaging (fMRI) data analysis. While fMRI solely measures the blood oxygen level dependent (BOLD) signal, fNIRS measures the changes of oxy-hemoglobin (oxy-Hb) and deoxy-hemoglobin (deoxy-Hb) signals at a temporal resolution severalfold higher. This suggests the necessity of adjusting the temporal parameters of a GLM for fNIRS signals. Thus, we devised a GLM-based method utilizing an adaptive hemodynamic response function (HRF). We sought the optimum temporal parameters to best explain the observed time series data during verbal fluency and naming tasks. The peak delay of the HRF was systematically changed to achieve the best-fit model for the observed oxy- and deoxy-Hb time series data. The optimized peak delay showed different values for each Hb signal and task. When the optimized peak delays were adopted, the deoxy-Hb data yielded activations comparable to the oxy-Hb data in statistical power and spatial pattern. The adaptive HRF method could suitably explain the behaviors of both Hb parameters during tasks with different cognitive loads over the time course, and thus would serve as an objective method to fully utilize the temporal structures of all fNIRS data. PMID:26157973
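
    A toy sketch of the peak-delay optimization idea: convolve the task boxcar with candidate HRFs, fit a GLM for each candidate peak delay, and keep the delay that minimizes the residual sum of squares. The single-gamma HRF, block design, sampling rate and noise level are simplified assumptions, not the authors' exact adaptive HRF.

        import numpy as np
        from scipy.stats import gamma

        dt = 0.1                                       # assumed sampling period (s)
        t = np.arange(0, 300, dt)
        box = ((t % 60) < 20).astype(float)            # hypothetical 20 s task blocks every 60 s

        rng = np.random.default_rng(1)
        true_h = gamma.pdf(np.arange(0, 30, dt), a=7.0, scale=1.0)     # "true" HRF, peak near 6 s
        y = np.convolve(box, true_h)[:t.size] * dt + 0.05 * rng.normal(size=t.size)

        best = None
        for peak in np.arange(3.0, 10.5, 0.5):         # candidate HRF peak delays (s)
            h = gamma.pdf(np.arange(0, 30, dt), a=peak + 1.0, scale=1.0)
            reg = np.convolve(box, h)[:t.size] * dt
            X = np.column_stack([reg, np.ones_like(t)])
            beta = np.linalg.lstsq(X, y, rcond=None)[0]
            rss = ((y - X @ beta) ** 2).sum()          # residual sum of squares of this GLM
            if best is None or rss < best[0]:
                best = (rss, peak, beta[0])

        print("best peak delay:", best[1], "effect estimate:", best[2])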

  18. Adjusted variable plots for Cox's proportional hazards regression model.

    PubMed

    Hall, C B; Zeger, S L; Bandeen-Roche, K J

    1996-01-01

    Adjusted variable plots are useful in linear regression for outlier detection and for qualitative evaluation of the fit of a model. In this paper, we extend adjusted variable plots to Cox's proportional hazards model for possibly censored survival data. We propose three different plots: a risk level adjusted variable (RLAV) plot in which each observation in each risk set appears, a subject level adjusted variable (SLAV) plot in which each subject is represented by one point, and an event level adjusted variable (ELAV) plot in which the entire risk set at each failure event is represented by a single point. The latter two plots are derived from the RLAV by combining multiple points. In each plot, the regression coefficient and standard error from a Cox proportional hazards regression are obtained by a simple linear regression through the origin fit to the coordinates of the pictured points. The plots are illustrated with a reanalysis of a dataset of 65 patients with multiple myeloma.
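
    For reference, a sketch of the linear-regression construction that this paper extends: residualize both the response and the covariate of interest on the remaining covariates, and the through-origin slope of the resulting points recovers the multiple-regression coefficient. The data are synthetic.

        import numpy as np

        rng = np.random.default_rng(2)
        n = 200
        x1, x2 = rng.normal(size=n), rng.normal(size=n)
        y = 1.0 + 2.0 * x1 - 1.5 * x2 + rng.normal(size=n)

        X = np.column_stack([np.ones(n), x1, x2])
        beta_full = np.linalg.lstsq(X, y, rcond=None)[0]

        # Residualize y and x1 on the remaining columns (intercept and x2)
        Z = np.column_stack([np.ones(n), x2])
        ry = y - Z @ np.linalg.lstsq(Z, y, rcond=None)[0]
        rx = x1 - Z @ np.linalg.lstsq(Z, x1, rcond=None)[0]

        slope = (rx @ ry) / (rx @ rx)                  # through-origin slope of the plotted points
        print(beta_full[1], slope)                     # the two coincide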

  19. Regression Is a Univariate General Linear Model Subsuming Other Parametric Methods as Special Cases.

    ERIC Educational Resources Information Center

    Vidal, Sherry

    Although the concept of the general linear model (GLM) has existed since the 1960s, other univariate analyses such as the t-test and the analysis of variance models have remained popular. The GLM produces an equation that minimizes the squared differences between observed and predicted values of a dependent variable, modeled as a weighted combination of the independent variables. From a computer printout…

  20. Normality of raw data in general linear models: The most widespread myth in statistics

    USGS Publications Warehouse

    Kery, Marc; Hatfield, Jeff S.

    2003-01-01

    In years of statistical consulting for ecologists and wildlife biologists, by far the most common misconception we have come across has been the one about normality in general linear models. These comprise a very large part of the statistical models used in ecology and include t tests, simple and multiple linear regression, polynomial regression, and analysis of variance (ANOVA) and covariance (ANCOVA). There is a widely held belief that the normality assumption pertains to the raw data rather than to the model residuals. We suspect that this error may also occur in countless published studies, whenever the normality assumption is tested prior to analysis. This may lead to the use of nonparametric alternatives (if there are any), when parametric tests would indeed be appropriate, or to use of transformations of raw data, which may introduce hidden assumptions such as multiplicative effects on the natural scale in the case of log-transformed data. Our aim here is to dispel this myth. We very briefly describe relevant theory for two cases of general linear models to show that the residuals need to be normally distributed if tests requiring normality are to be used, such as t and F tests. We then give two examples demonstrating that the distribution of the response variable may be nonnormal, and yet the residuals are well behaved. We do not go into the issue of how to test normality; instead we display the distributions of response variables and residuals graphically.
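
    A small simulation of the point made above: the raw response can be clearly non-normal while the residuals of the general linear model are well behaved. The grouping, effect size and test choice are illustrative.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(3)
        group = np.repeat([0, 1], 200)
        y = np.where(group == 0, 0.0, 8.0) + rng.normal(size=400)   # two well-separated group means

        print(stats.normaltest(y).pvalue)              # raw response: clearly non-normal (bimodal)

        group_means = np.where(group == 0, y[group == 0].mean(), y[group == 1].mean())
        resid = y - group_means                        # residuals of the two-sample (ANOVA) model
        print(stats.normaltest(resid).pvalue)          # residuals: consistent with normality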

  1. Parental alcohol use and adolescent school adjustment in the general population: Results from the HUNT study

    PubMed Central

    2011-01-01

    Background This study investigates the relationship between parental drinking and school adjustment in a total population sample of adolescents, with independent reports from mothers, fathers, and adolescents. As a group, children of alcohol abusers have previously been found to exhibit lowered academic achievement. However, few studies address which aspects of school adjustment may be impaired. Both genetic and social-strain perspectives predict elevated problem scores in these children. Previous research has had limitations such as only recruiting cases from clinics, relying on single responders for all measures, or incomplete control for comorbid psychopathology. The specific effects of maternal and paternal alcohol use are also understudied. Methods In a Norwegian county, 88% of the population aged 13-19 years participated in a health survey (N = 8984). Among other variables, adolescents reported on four dimensions of school adjustment, while mothers and fathers reported their own drinking behaviour. Mental distress and other control variables were adjusted for. Multivariate analysis including generalized estimating equations was applied to investigate associations. Results Compared to children of light drinkers, children of alcohol abusers had moderately elevated attention and conduct problem scores. Maternal alcohol abuse was particularly predictive of such problems. Children of abstainers did significantly better than children of light drinkers. Controlling for adolescent mental distress reduced the association between maternal abuse and attention problems. The associations between parental reported drinking and school adjustment were further reduced when controlling for the children's report of seeing their parents drunk, which itself predicted school adjustment. Controlling for parental mental distress did not reduce the associations. Conclusions Parental alcohol abuse is an independent risk factor for attention and conduct problems at school. Some of

  2. Majorization Minimization by Coordinate Descent for Concave Penalized Generalized Linear Models

    PubMed Central

    Jiang, Dingfeng; Huang, Jian

    2013-01-01

    Recent studies have demonstrated the theoretical attractiveness of a class of concave penalties in variable selection, including the smoothly clipped absolute deviation and minimax concave penalties. The computation of the concave penalized solutions in high-dimensional models, however, is a difficult task. We propose a majorization minimization by coordinate descent (MMCD) algorithm for computing the concave penalized solutions in generalized linear models. In contrast to the existing algorithms that use local quadratic or local linear approximation to the penalty function, the MMCD seeks to majorize the negative log-likelihood by a quadratic loss, but does not use any approximation to the penalty. This strategy makes it possible to avoid the computation of a scaling factor in each update of the solutions, which improves the efficiency of coordinate descent. Under certain regularity conditions, we establish the theoretical convergence of the MMCD. We implement this algorithm for a penalized logistic regression model using the SCAD and MCP penalties. Simulation studies and a data example demonstrate that the MMCD works sufficiently fast for the penalized logistic regression in high-dimensional settings where the number of covariates is much larger than the sample size. PMID:25309048
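
    A condensed sketch of the idea for MCP-penalized logistic regression: the negative log-likelihood is majorized by a quadratic with curvature bounded by pi*(1-pi) <= 1/4, and each coordinate is updated with an exact univariate MCP minimizer. This follows the general MMCD strategy but is not the authors' implementation; the data, lambda and gamma values are illustrative.

        import numpy as np

        def mcp_update(z, t, lam, gam):
            # Minimizer of (t/2)*(b - z)**2 + MCP(b; lam, gam); requires t > 1/gam
            if abs(z) > gam * lam:
                return z
            return np.sign(z) * max(t * abs(z) - lam, 0.0) / (t - 1.0 / gam)

        def mmcd_logistic(X, y, lam, gam=8.0, n_iter=200):
            n, p = X.shape
            Xs = (X - X.mean(0)) / X.std(0)        # standardized penalized predictors
            t = (Xs ** 2).sum(0) / (4.0 * n)       # quadratic majorizer: pi*(1-pi) <= 1/4
            assert np.all(t > 1.0 / gam)           # keeps each univariate MCP problem convex
            beta0, beta = 0.0, np.zeros(p)
            for _ in range(n_iter):
                pi = 1.0 / (1.0 + np.exp(-(beta0 + Xs @ beta)))
                beta0 += 4.0 * np.mean(y - pi)     # unpenalized intercept, same majorization
                for j in range(p):
                    pi = 1.0 / (1.0 + np.exp(-(beta0 + Xs @ beta)))
                    g = -Xs[:, j] @ (y - pi) / n   # gradient of the average negative log-likelihood
                    beta[j] = mcp_update(beta[j] - g / t[j], t[j], lam, gam)
            return beta0, beta                     # coefficients on the standardized scale

        rng = np.random.default_rng(4)
        n, p = 200, 50
        X = rng.normal(size=(n, p))
        true = np.zeros(p); true[:3] = [1.5, -1.0, 0.8]
        y = rng.binomial(1, 1.0 / (1.0 + np.exp(-(X @ true))))
        b0, b = mmcd_logistic(X, y, lam=0.08)
        print(np.nonzero(np.abs(b) > 1e-8)[0])     # indices of covariates kept in the model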

  3. Diagnostics for generalized linear hierarchical models in network meta-analysis.

    PubMed

    Zhao, Hong; Hodges, James S; Carlin, Bradley P

    2017-09-01

    Network meta-analysis (NMA) combines direct and indirect evidence comparing more than 2 treatments. Inconsistency arises when these 2 information sources differ. Previous work focuses on inconsistency detection, but little has been done on how to proceed after identifying inconsistency. The key issue is whether inconsistency changes an NMA's substantive conclusions. In this paper, we examine such discrepancies from a diagnostic point of view. Our methods seek to detect influential and outlying observations in NMA at a trial-by-arm level. These observations may have a large effect on the parameter estimates in NMA, or they may deviate markedly from other observations. We develop formal diagnostics for a Bayesian hierarchical model to check the effect of deleting any observation. Diagnostics are specified for generalized linear hierarchical NMA models and investigated for both published and simulated datasets. Results from our example dataset using either contrast- or arm-based models and from the simulated datasets indicate that the sources of inconsistency in NMA tend not to be influential, though results from the example dataset suggest that they are likely to be outliers. This mimics a familiar result from linear model theory, in which outliers with low leverage are not influential. Future extensions include incorporating baseline covariates and individual-level patient data. Copyright © 2017 John Wiley & Sons, Ltd.

  4. Commentary on the statistical properties of noise and its implication on general linear models in functional near-infrared spectroscopy.

    PubMed

    Huppert, Theodore J

    2016-01-01

    Functional near-infrared spectroscopy (fNIRS) is a noninvasive neuroimaging technique that uses low levels of light to measure changes in cerebral blood oxygenation levels. In the majority of NIRS functional brain studies, analysis of this data is based on a statistical comparison of hemodynamic levels between a baseline and task or between multiple task conditions by means of a linear regression model: the so-called general linear model. Although these methods are similar to their implementation in other fields, particularly for functional magnetic resonance imaging, the specific application of these methods in fNIRS research differs in several key ways related to the sources of noise and artifacts unique to fNIRS. In this brief communication, we discuss the application of linear regression models in fNIRS and the modifications needed to generalize these models in order to deal with structured (colored) noise due to systemic physiology and noise heteroscedasticity due to motion artifacts. The objective of this work is to present an overview of these noise properties in the context of the linear model as it applies to fNIRS data. This work is aimed at explaining these mathematical issues to the general fNIRS experimental researcher but is not intended to be a complete mathematical treatment of these concepts.
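
    A minimal illustration of one such modification, prewhitening the GLM with an AR(1) model of the residuals (the robust reweighting for motion artifacts discussed above is omitted). The block design, autocorrelation and noise level are synthetic assumptions; the main payoff is a more honest standard error rather than a different point estimate.

        import numpy as np

        rng = np.random.default_rng(5)
        n = 1200
        t = np.arange(n)
        task = ((t // 150) % 2).astype(float)          # hypothetical block-design regressor
        X = np.column_stack([task, np.ones(n)])

        # Synthetic series: small task effect plus strongly autocorrelated AR(1) noise
        e = np.zeros(n)
        for i in range(1, n):
            e[i] = 0.8 * e[i - 1] + rng.normal(scale=0.5)
        y = 0.3 * task + e

        def fit_ols(Xd, yd):
            XtX_inv = np.linalg.inv(Xd.T @ Xd)
            b = XtX_inv @ Xd.T @ yd
            r = yd - Xd @ b
            s2 = r @ r / (len(yd) - Xd.shape[1])
            return b, np.sqrt(s2 * np.diag(XtX_inv)), r

        b_ols, se_ols, r = fit_ols(X, y)
        rho = (r[1:] @ r[:-1]) / (r[:-1] @ r[:-1])     # AR(1) coefficient of the residuals

        # Prewhiten by filtering both sides with (1 - rho * lag), then refit by OLS
        yw = y[1:] - rho * y[:-1]
        Xw = X[1:] - rho * X[:-1]
        b_pw, se_pw, _ = fit_ols(Xw, yw)

        print(b_ols[0], se_ols[0])   # naive OLS: standard error badly understated
        print(b_pw[0], se_pw[0])     # prewhitened fit: similar estimate, honest uncertainty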

  5. Least-Squares Data Adjustment with Rank-Deficient Data Covariance Matrices

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Williams, J.G.

    2011-07-01

    A derivation of the linear least-squares adjustment formulae is required that avoids the assumption that the covariance matrix of prior parameters can be inverted. Possible proofs are of several kinds, including: (i) extension of standard results for the linear regression formulae, and (ii) minimization by differentiation of a quadratic form of the deviations in parameters and responses. In this paper, the least-squares adjustment equations are derived in both these ways, while explicitly assuming that the covariance matrix of prior parameters is singular. It will be proved that the solutions are unique and that, contrary to statements that have appeared in the literature, the least-squares adjustment problem is not ill-posed. No modification is required to the adjustment formulae that have been used in the past in the case of a singular covariance matrix for the priors. In conclusion: The linear least-squares adjustment formula that has been used in the past is valid in the case of a singular covariance matrix of prior parameters. Furthermore, it provides a unique solution. Statements in the literature to the effect that the problem is ill-posed are wrong. No regularization of the problem is required. This has been proved in the present paper by two methods, while explicitly assuming that the covariance matrix of prior parameters is singular: (i) extension of standard results for the linear regression formulae, and (ii) minimization by differentiation of a quadratic form of the deviations in parameters and responses. No modification is needed to the adjustment formulae that have been used in the past. (author)
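
    The usual adjustment formula can be exercised directly with a singular prior covariance, since it only requires inverting A C A' + V, not C itself. A small numpy illustration; the notation (x0, C, A, V) and all numbers are mine, not the paper's.

        import numpy as np

        # Prior parameter estimate and a deliberately singular (rank-1) prior covariance
        x0 = np.array([1.0, 2.0])
        u = np.array([[1.0], [0.5]])
        C = 0.04 * (u @ u.T)                           # the two parameters are fully correlated

        A = np.array([[1.0, 1.0],                      # sensitivities of responses to parameters
                      [2.0, -1.0]])
        V = np.diag([0.01, 0.02])                      # response covariance (positive definite)
        y = np.array([3.3, 0.2])                       # hypothetical measured responses

        S = A @ C @ A.T + V                            # invertible because V is positive definite
        K = C @ A.T @ np.linalg.inv(S)
        x_adj = x0 + K @ (y - A @ x0)                  # adjusted parameters
        C_adj = C - K @ A @ C                          # adjusted covariance

        print(np.linalg.matrix_rank(C), x_adj, np.diag(C_adj))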

  6. Linear relations in microbial reaction systems: a general overview of their origin, form, and use.

    PubMed

    Noorman, H J; Heijnen, J J; Ch A M Luyben, K

    1991-09-01

    In microbial reaction systems, there are a number of linear relations among net conversion rates. These can be very useful in the analysis of experimental data. This article provides a general approach for the formation and application of the linear relations. Two types of system descriptions, one considering the biomass as a black box and the other based on metabolic pathways, are encountered. These are defined in a linear vector and matrix algebra framework. A correct a priori description can be obtained by three useful tests: the independency, consistency, and observability tests. The independency requirements of the two descriptions are different. The black box approach provides only conservation relations. They are derived from element, electrical charge, energy, and Gibbs energy balances. The metabolic approach provides, in addition to the conservation relations, metabolic and reaction relations. These result from component, energy, and Gibbs energy balances. Thus it is more attractive to use the metabolic description than the black box approach. A number of different types of linear relations given in the literature are reviewed. They are classified according to the different categories that result from the black box or the metabolic system description. Validation of hypotheses related to metabolic pathways can be supported by experimental validation of the linear metabolic relations. However, definite proof from biochemical evidence remains indispensable.
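
    A small illustration of one such conservation relation, an element balance used as a consistency check on measured net conversion rates. The compounds (glucose fermented to ethanol and CO2) and the rates are hypothetical; a formal consistency test would also weight the residual by the measurement covariance.

        import numpy as np

        # Elemental composition matrix (rows: C, H, O; columns: glucose, ethanol, CO2)
        E = np.array([[6.0, 2.0, 1.0],
                      [12.0, 6.0, 0.0],
                      [6.0, 1.0, 2.0]])

        # Hypothetical measured net conversion rates (mol per unit time);
        # negative = consumption, positive = production
        r = np.array([-1.00, 1.95, 2.04])

        print(E @ r)   # conservation relation E r = 0; small residuals reflect measurement error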

  7. Is involvement in school bullying associated with general health and psychosocial adjustment outcomes in adulthood?

    PubMed

    Sigurdson, J F; Wallander, J; Sund, A M

    2014-10-01

    The aim was to prospectively examine associations between bullying involvement at 14-15 years of age and self-reported general health and psychosocial adjustment in young adulthood, at 26-27 years of age. A large representative sample (N=2,464) was recruited and assessed in two counties in Mid-Norway in 1998 (T1) and 1999/2000 (T2) when the respondents had a mean age of 13.7 and 14.9, respectively, leading to classification as being bullied, bully-victim, being aggressive toward others or non-involved. Information about general health and psychosocial adjustment was gathered at a follow-up in 2012 (T4) (N=1,266) with a respondent mean age of 27.2. Logistic regression and ANOVA analyses showed that groups involved in bullying of any type in adolescence had increased risk for lower education as young adults compared to those non-involved. The group aggressive toward others also had a higher risk of being unemployed and receiving any kind of social help. Compared with the non-involved, those being bullied and bully-victims had increased risk of poor general health and high levels of pain. Bully-victims and those aggressive toward others during adolescence subsequently had increased risk of tobacco use and lower job functioning than non-involved. Further, those being bullied and aggressive toward others had increased risk of illegal drug use. Relations to live-in spouse/partner were poorer among those being bullied. Involvement in bullying, either as victim or perpetrator, has significant social costs even 12 years after the bullying experience. Accordingly, it will be important to provide early intervention for those involved in bullying in adolescence. Copyright © 2014 The Authors. Published by Elsevier Ltd. All rights reserved.

  8. Evaluating the double Poisson generalized linear model.

    PubMed

    Zou, Yaotian; Geedipally, Srinivas Reddy; Lord, Dominique

    2013-10-01

    The objectives of this study are to: (1) examine the applicability of the double Poisson (DP) generalized linear model (GLM) for analyzing motor vehicle crash data characterized by over- and under-dispersion and (2) compare the performance of the DP GLM with the Conway-Maxwell-Poisson (COM-Poisson) GLM in terms of goodness-of-fit and theoretical soundness. The DP distribution has seldom been investigated and applied since its first introduction two decades ago. The hurdle for applying the DP is related to its normalizing constant (or multiplicative constant) which is not available in closed form. This study proposed a new method to approximate the normalizing constant of the DP with high accuracy and reliability. The DP GLM and COM-Poisson GLM were developed using two observed over-dispersed datasets and one observed under-dispersed dataset. The modeling results indicate that the DP GLM with its normalizing constant approximated by the new method can handle crash data characterized by over- and under-dispersion. Its performance is comparable to the COM-Poisson GLM in terms of goodness-of-fit (GOF), although COM-Poisson GLM provides a slightly better fit. For the over-dispersed data, the DP GLM performs similarly to the negative binomial (NB) GLM. Considering the fact that the DP GLM can be easily estimated with inexpensive computation and that it is simpler to interpret coefficients, it offers a flexible and efficient alternative for researchers to model count data. Copyright © 2013 Elsevier Ltd. All rights reserved.
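
    For orientation, the normalizing constant can also be approximated by direct truncated summation of the unnormalized double Poisson pmf (a simpler device than the approximation method proposed in the paper). The sketch below uses Efron's parametrization and illustrative values of mu and theta; theta > 1 corresponds to under-dispersion.

        import numpy as np
        from scipy.special import gammaln

        def dp_log_unnorm(y, mu, theta):
            # log of the unnormalized double-Poisson pmf in Efron's parametrization
            y = np.asarray(y, dtype=float)
            ylog = np.log(np.where(y > 0, y, 1.0))     # safe log(y); the y = 0 term equals 1
            return (0.5 * np.log(theta) - theta * mu
                    + (-y + y * ylog - gammaln(y + 1))
                    + theta * y * (1.0 + np.log(mu) - ylog))

        mu, theta = 4.0, 1.6
        y = np.arange(0, 400)
        w = np.exp(dp_log_unnorm(y, mu, theta))
        c = 1.0 / w.sum()                              # normalizing constant via truncated summation
        p = c * w

        mean = (p * y).sum()
        var = (p * (y - mean) ** 2).sum()
        print(c, mean, var, mu / theta)                # mean ~ mu, variance ~ mu / theta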

  9. 26 CFR 1.9001-2 - Basis adjustments for taxable years beginning on or after 1956 adjustment date.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... on or after 1956 adjustment date. 1.9001-2 Section 1.9001-2 Internal Revenue INTERNAL REVENUE SERVICE....9001-2 Basis adjustments for taxable years beginning on or after 1956 adjustment date. (a) In general. Subsection (d) of the Act provides the basis adjustments required to be made by the taxpayer as of the 1956...

  10. General theories of linear gravitational perturbations to a Schwarzschild black hole

    NASA Astrophysics Data System (ADS)

    Tattersall, Oliver J.; Ferreira, Pedro G.; Lagos, Macarena

    2018-02-01

    We use the covariant formulation proposed by Tattersall, Lagos, and Ferreira [Phys. Rev. D 96, 064011 (2017), 10.1103/PhysRevD.96.064011] to analyze the structure of linear perturbations about a spherically symmetric background in different families of gravity theories, and hence study how quasinormal modes of perturbed black holes may be affected by modifications to general relativity. We restrict ourselves to single-tensor, scalar-tensor and vector-tensor diffeomorphism-invariant gravity models in a Schwarzschild black hole background. We show explicitly the full covariant form of the quadratic actions in such cases, which allow us to then analyze odd parity (axial) and even parity (polar) perturbations simultaneously in a straightforward manner.

  11. General linear codes for fault-tolerant matrix operations on processor arrays

    NASA Technical Reports Server (NTRS)

    Nair, V. S. S.; Abraham, J. A.

    1988-01-01

    Various checksum codes have been suggested for fault-tolerant matrix computations on processor arrays. Use of these codes is limited due to potential roundoff and overflow errors. Numerical errors may also be misconstrued as errors due to physical faults in the system. In this paper, a set of linear codes is identified which can be used for fault-tolerant matrix operations such as matrix addition, multiplication, transposition, and LU-decomposition, with minimum numerical error. Encoding schemes are given for some of the example codes which fall under the general set of codes. With the help of experiments, a rule of thumb for the selection of a particular code for a given application is derived.
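
    A sketch of the classic unit-weight row/column checksum encoding (the special case of the general linear codes discussed above) for detecting, locating and correcting a single erroneous element of a matrix product. The injected fault and the matrix sizes are illustrative.

        import numpy as np

        rng = np.random.default_rng(6)
        A = rng.integers(0, 10, size=(4, 4)).astype(float)
        B = rng.integers(0, 10, size=(4, 4)).astype(float)

        # Column-checksum A and row-checksum B (all-ones weight vector)
        Ac = np.vstack([A, A.sum(axis=0)])
        Br = np.hstack([B, B.sum(axis=1, keepdims=True)])

        C = Ac @ Br                      # full-checksum product

        C[2, 1] += 5.0                   # inject a fault in one processor's output

        row_err = C[:-1, :-1].sum(axis=1) - C[:-1, -1]   # row checksum mismatches
        col_err = C[:-1, :-1].sum(axis=0) - C[-1, :-1]   # column checksum mismatches
        i, j = np.argmax(np.abs(row_err)), np.argmax(np.abs(col_err))
        C[i, j] -= row_err[i]            # the intersection locates the fault; subtract the error
        print(i, j, np.allclose(C[:-1, :-1], A @ B))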

  12. A generalized fuzzy linear programming approach for environmental management problem under uncertainty.

    PubMed

    Fan, Yurui; Huang, Guohe; Veawab, Amornvadee

    2012-01-01

    In this study, a generalized fuzzy linear programming (GFLP) method was developed to deal with uncertainties expressed as fuzzy sets that exist in the constraints and objective function. A stepwise interactive algorithm (SIA) was advanced to solve GFLP model and generate solutions expressed as fuzzy sets. To demonstrate its application, the developed GFLP method was applied to a regional sulfur dioxide (SO2) control planning model to identify effective SO2 mitigation policies with a minimized system performance cost under uncertainty. The results were obtained to represent the amount of SO2 allocated to different control measures from different sources. Compared with the conventional interval-parameter linear programming (ILP) approach, the solutions obtained through GFLP were expressed as fuzzy sets, which can provide intervals for the decision variables and objective function, as well as related possibilities. Therefore, the decision makers can make a tradeoff between model stability and the plausibility based on solutions obtained through GFLP and then identify desired policies for SO2-emission control under uncertainty.

  13. Methodological quality and reporting of generalized linear mixed models in clinical medicine (2000-2012): a systematic review.

    PubMed

    Casals, Martí; Girabent-Farrés, Montserrat; Carrasco, Josep L

    2014-01-01

    Modeling count and binary data collected in hierarchical designs has increased the use of Generalized Linear Mixed Models (GLMMs) in medicine. This article presents a systematic review of the application and quality of results and information reported from GLMMs in the field of clinical medicine. A search using the Web of Science database was performed for published original articles in medical journals from 2000 to 2012. The search strategy included the topics "generalized linear mixed models", "hierarchical generalized linear models" and "multilevel generalized linear model", and the research domain was refined to science and technology. Papers reporting methodological considerations without application, those not involving clinical medicine, and those not written in English were excluded. A total of 443 articles were detected, with an increase over time in the number of articles. In total, 108 articles fit the inclusion criteria. Of these, 54.6% were declared to be longitudinal studies, whereas 58.3% and 26.9% were defined as repeated measurements and multilevel design, respectively. Twenty-two articles belonged to environmental and occupational public health, 10 articles to clinical neurology, 8 to oncology, and 7 to infectious diseases and pediatrics. The distribution of the response variable was reported in 88% of the articles, predominantly Binomial (n = 64) or Poisson (n = 22). Most of the useful information about GLMMs was not reported in most cases. Variance estimates of random effects were described in only 8 articles (9.2%). The model validation, the method of covariate selection and the method of goodness of fit were only reported in 8.0%, 36.8% and 14.9% of the articles, respectively. During recent years, the use of GLMMs in medical literature has increased to take into account the correlation of data when modeling qualitative data or counts. According to the current recommendations, the quality of reporting has room for improvement regarding the

  14. 13 CFR 315.16 - Adjustment proposal requirements.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... OF COMMERCE TRADE ADJUSTMENT ASSISTANCE FOR FIRMS Adjustment Proposals § 315.16 Adjustment proposal... reasonably calculated to contribute materially to the economic adjustment of the Firm (i.e., that such... generally consists of knowledge-based services such as market penetration studies, customized business...

  15. Intensity Mapping Foreground Cleaning with Generalized Needlet Internal Linear Combination

    NASA Astrophysics Data System (ADS)

    Olivari, L. C.; Remazeilles, M.; Dickinson, C.

    2018-05-01

    Intensity mapping (IM) is a new observational technique to survey the large-scale structure of matter using spectral emission lines. IM observations are contaminated by instrumental noise and astrophysical foregrounds. The foregrounds are at least three orders of magnitude larger than the searched signals. In this work, we apply the Generalized Needlet Internal Linear Combination (GNILC) method to subtract radio foregrounds and to recover the cosmological HI and CO signals within the IM context. For the HI IM case, we find that GNILC can reconstruct the HI plus noise power spectra with 7.0% accuracy for z = 0.13 - 0.48 (960 - 1260 MHz) and l <~ 400, while for the CO IM case, we find that it can reconstruct the CO plus noise power spectra with 6.7% accuracy for z = 2.4 - 3.4 (26 - 34 GHz) and l <~ 3000.

  16. Using parallel banded linear system solvers in generalized eigenvalue problems

    NASA Technical Reports Server (NTRS)

    Zhang, Hong; Moss, William F.

    1993-01-01

    Subspace iteration is a reliable and cost effective method for solving positive definite banded symmetric generalized eigenproblems, especially in the case of large scale problems. This paper discusses an algorithm that makes use of two parallel banded solvers in subspace iteration. A shift is introduced to decompose the banded linear systems into relatively independent subsystems and to accelerate the iterations. With this shift, an eigenproblem is mapped efficiently into the memories of a multiprocessor and a high speed-up is obtained for parallel implementations. An optimal shift is a shift that balances total computation and communication costs. Under certain conditions, we show how to estimate an optimal shift analytically using the decay rate for the inverse of a banded matrix, and how to improve this estimate. Computational results on iPSC/2 and iPSC/860 multiprocessors are presented.
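
    A serial sketch of shifted subspace iteration with a banded solver and a Rayleigh-Ritz step, assuming a tridiagonal stiffness matrix and an identity mass matrix; the parallel decomposition into relatively independent subsystems described above is not reproduced here, and the shift, sizes and iteration count are illustrative.

        import numpy as np
        from scipy.linalg import solve_banded, eigh, qr

        n, k, sigma = 500, 4, 0.0                      # problem size, subspace size, shift
        main, off = 2.0 * np.ones(n), -1.0 * np.ones(n - 1)
        K = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)   # tridiagonal stiffness matrix
        M = np.eye(n)                                  # identity mass matrix for simplicity

        ab = np.zeros((3, n))                          # banded storage of K - sigma*M
        ab[0, 1:], ab[1, :], ab[2, :-1] = off, main - sigma, off

        rng = np.random.default_rng(7)
        X = rng.normal(size=(n, k))
        for _ in range(30):                            # shifted subspace (inverse) iteration
            Y = solve_banded((1, 1), ab, M @ X)        # banded solve instead of a dense factorization
            Q, _ = qr(Y, mode='economic')
            w, V = eigh(Q.T @ K @ Q, Q.T @ M @ Q)      # Rayleigh-Ritz on the current subspace
            X = Q @ V

        exact = 2.0 - 2.0 * np.cos(np.pi * np.arange(1, k + 1) / (n + 1))
        print(w)                                       # approximates the k smallest eigenvalues
        print(exact)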

  17. Making reasonable and achievable adjustments: the contributions of learning disability liaison nurses in 'Getting it right' for people with learning disabilities receiving general hospitals care.

    PubMed

    MacArthur, Juliet; Brown, Michael; McKechanie, Andrew; Mack, Siobhan; Hayes, Matthew; Fletcher, Joan

    2015-07-01

    To examine the role of learning disability liaison nurses in facilitating reasonable and achievable adjustments to support access to general hospital services for people with learning disabilities. Mixed methods study involving four health boards in Scotland with established Learning Disability Liaison Nurses (LDLN) Services. Quantitative data of all liaison nursing referrals over 18 months and qualitative data collected from stakeholders with experience of using the liaison services within the previous 3-6 months. Six liaison nurses collected quantitative data of 323 referrals and activity between September 2008-March 2010. Interviews and focus groups were held with 85 participants, including adults with learning disabilities (n = 5), carers (n = 16), primary care (n = 39), general hospital (n = 19) and liaison nurses (n = 6). Facilitating reasonable and achievable adjustments was an important element of the LDLNs' role and focussed on access to information; adjustments to care; appropriate environment of care; ensuring equitable care; identifying patient need; meeting patient needs; and specialist tools/resources. Ensuring that reasonable adjustments are made in the general hospital setting promotes person-centred care and equal health outcomes for people with a learning disability. This view accords with 'Getting it right' charter produced by the UK Charity Mencap which argues that healthcare professionals need support, encouragement and guidance to make reasonable adjustments for this group. LDLNs have an important and increasing role to play in advising on and establishing adjustments that are both reasonable and achievable. © 2015 John Wiley & Sons Ltd.

  18. Arginine intake is associated with oxidative stress in a general population.

    PubMed

    Carvalho, Aline Martins de; Oliveira, Antonio Anax Falcão de; Loureiro, Ana Paula de Melo; Gattás, Gilka Jorge Figaro; Fisberg, Regina Mara; Marchioni, Dirce Maria

    2017-01-01

    The aim of this study was to assess the association between protein and arginine from meat intake and oxidative stress in a general population. Data came from the Health Survey for Sao Paulo (ISA-Capital), a cross-sectional population-based study in Brazil (N = 549 adults). Food intake was estimated by a 24-h dietary recall. Oxidative stress was estimated by malondialdehyde (MDA) concentration in plasma. Analyses were performed using general linear regression models adjusted for some genetic, lifestyle, and biochemical confounders. MDA levels were associated with meat intake (P for linear trend = 0.031), protein from meat (P for linear trend = 0.006), and arginine from meat (P for linear trend = 0.044) after adjustments for confounders: age, sex, body mass index, smoking, physical activity, intake of fruit and vegetables, energy and heterocyclic amines, C-reactive protein levels, and polymorphisms in GSTM1 (glutathione S-transferase Mu 1) and GSTT1 (glutathione S-transferase theta 1) genes. Results were not significant for total protein and protein from vegetable intake (P > 0.05). High protein and arginine from meat intake were associated with oxidative stress independently of genetic, lifestyle, and biochemical confounders in a population-based study. Our results suggested a novel link between high protein/arginine intake and oxidative stress, which is a major cause of age-related diseases. Copyright © 2016 Elsevier Inc. All rights reserved.

  19. A methodology for evaluation of parent-mutant competition using a generalized non-linear ecosystem model

    Treesearch

    Raymond L. Czaplewski

    1973-01-01

    A generalized, non-linear population dynamics model of an ecosystem is used to investigate the direction of selective pressures upon a mutant by studying the competition between parent and mutant populations. The model has the advantages of considering selection as operating on the phenotype, of retaining the interaction of the mutant population with the ecosystem as a...

  20. School-related adjustment in children and adolescents with CHD.

    PubMed

    Im, Yu-Mi; Lee, Sunhee; Yun, Tae-Jin; Choi, Jae Young

    2017-09-01

    Advancements in medical and surgical treatment have increased the life expectancy of patients with CHD. Many patients with CHD, however, struggle with the medical, psychosocial, and behavioural challenges as they transition from childhood to adulthood. Specifically, the environmental and lifestyle challenges in school are very important factors that affect children and adolescents with CHD. This study aimed to evaluate school-related adjustments depending on school level and disclosure of disease in children and adolescents with CHD. This was a descriptive and exploratory study with 205 children and adolescents, aged 7-18 years, who were recruited from two congenital heart clinics from 5 January to 27 February, 2015. Data were analysed using the Student's t-test, analysis of variance, and a univariate general linear model. School-related adjustment scores were significantly different according to school level and disclosure of disease (p<0.001) when age, religion, experience being bullied, and parents' educational levels were assigned as covariates. The school-related adjustment score of patients who did not disclose their disease dropped significantly in high school. This indicated that it is important for healthcare providers to plan developmentally appropriate educational transition programmes for middle-school students with CHD in order for students to prepare themselves before entering high school.

  1. A simple and exploratory way to determine the mean-variance relationship in generalized linear models.

    PubMed

    Tsou, Tsung-Shan

    2007-03-30

    This paper introduces an exploratory way to determine how variance relates to the mean in generalized linear models. This novel method employs the robust likelihood technique introduced by Royall and Tsou. A urinary data set collected by Ginsberg et al. and the fabric data set analysed by Lee and Nelder are considered to demonstrate the applicability and simplicity of the proposed technique. Application of the proposed method could easily reveal a mean-variance relationship that would generally be left unnoticed, or that would require more complex modelling to detect. Copyright (c) 2006 John Wiley & Sons, Ltd.

  2. A generalized linear integrate-and-fire neural model produces diverse spiking behaviors.

    PubMed

    Mihalaş, Stefan; Niebur, Ernst

    2009-03-01

    For simulations of neural networks, there is a trade-off between the size of the network that can be simulated and the complexity of the model used for individual neurons. In this study, we describe a generalization of the leaky integrate-and-fire model that produces a wide variety of spiking behaviors while still being analytically solvable between firings. For different parameter values, the model produces spiking or bursting, tonic, phasic or adapting responses, depolarizing or hyperpolarizing after potentials and so forth. The model consists of a diagonalizable set of linear differential equations describing the time evolution of membrane potential, a variable threshold, and an arbitrary number of firing-induced currents. Each of these variables is modified by an update rule when the potential reaches threshold. The variables used are intuitive and have biological significance. The model's rich behavior does not come from the differential equations, which are linear, but rather from complex update rules. This single-neuron model can be implemented using algorithms similar to the standard integrate-and-fire model. It is a natural match with event-driven algorithms for which the firing times are obtained as a solution of a polynomial equation.
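
    A minimal Euler-integration sketch of the model structure described above: linear differential equations for the membrane potential, an adaptive threshold and two spike-induced currents, plus discrete update rules applied when the potential reaches threshold. All parameter values are illustrative rather than taken from the paper.

        import numpy as np

        dt, T = 1e-4, 0.3                              # time step and duration (s)
        C, G, E_L, I_e = 1e-9, 50e-9, -0.070, 2.0e-9   # capacitance, leak, rest potential, drive
        a, b, Th_inf, Th_r = 2.0, 10.0, -0.050, -0.060 # threshold dynamics and reset floor
        V_r = -0.070                                   # voltage reset
        k = np.array([200.0, 20.0])                    # decay rates of the induced currents (1/s)
        R, A_spk = np.array([0.0, 1.0]), np.array([-0.5e-9, 0.1e-9])

        V, Th, I = E_L, Th_inf, np.zeros(2)
        spikes = []
        for step in range(int(T / dt)):
            I = I + dt * (-k * I)                                      # linear current decay
            V = V + dt * (I_e + I.sum() - G * (V - E_L)) / C           # membrane equation
            Th = Th + dt * (a * (V - E_L) - b * (Th - Th_inf))         # threshold equation
            if V >= Th:                                                # spike: apply update rules
                I = R * I + A_spk
                V = V_r
                Th = max(Th_r, Th)
                spikes.append(step * dt)

        print(len(spikes), spikes[:3])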

  3. A Generalized Linear Integrate-and-Fire Neural Model Produces Diverse Spiking Behaviors

    PubMed Central

    Mihalaş, Ştefan; Niebur, Ernst

    2010-01-01

    For simulations of neural networks, there is a trade-off between the size of the network that can be simulated and the complexity of the model used for individual neurons. In this study, we describe a generalization of the leaky integrate-and-fire model that produces a wide variety of spiking behaviors while still being analytically solvable between firings. For different parameter values, the model produces spiking or bursting, tonic, phasic or adapting responses, depolarizing or hyperpolarizing after potentials and so forth. The model consists of a diagonalizable set of linear differential equations describing the time evolution of membrane potential, a variable threshold, and an arbitrary number of firing-induced currents. Each of these variables is modified by an update rule when the potential reaches threshold. The variables used are intuitive and have biological significance. The model’s rich behavior does not come from the differential equations, which are linear, but rather from complex update rules. This single-neuron model can be implemented using algorithms similar to the standard integrate-and-fire model. It is a natural match with event-driven algorithms for which the firing times are obtained as a solution of a polynomial equation. PMID:18928368

  4. 45 CFR 153.365 - General oversight requirements for State-operated risk adjustment programs.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... risk adjustment programs. 153.365 Section 153.365 Public Welfare Department of Health and Human Services REQUIREMENTS RELATING TO HEALTH CARE ACCESS STANDARDS RELATED TO REINSURANCE, RISK CORRIDORS, AND RISK ADJUSTMENT UNDER THE AFFORDABLE CARE ACT State Standards Related to the Risk Adjustment Program...

  5. 38 CFR 10.0 - Adjusted service pay entitlements.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 38 Pensions, Bonuses, and Veterans' Relief 1 2010-07-01 2010-07-01 false Adjusted service pay... COMPENSATION Adjusted Compensation; General § 10.0 Adjusted service pay entitlements. A veteran entitled to adjusted service pay is one whose adjusted service credit does not amount to more than $50 as distinguished...

  6. EVALUATING PREDICTIVE ERRORS OF A COMPLEX ENVIRONMENTAL MODEL USING A GENERAL LINEAR MODEL AND LEAST SQUARE MEANS

    EPA Science Inventory

    A General Linear Model (GLM) was used to evaluate the deviation of predicted values from expected values for a complex environmental model. For this demonstration, we used the default level interface of the Regional Mercury Cycling Model (R-MCM) to simulate epilimnetic total mer...

  7. Generalized Lagrange Jacobi Gauss-Lobatto (GLJGL) Collocation Method for Solving Linear and Nonlinear Fokker-Planck Equations

    NASA Astrophysics Data System (ADS)

    Parand, K.; Latifi, S.; Moayeri, M. M.; Delkhosh, M.

    2018-05-01

    In this study, we have constructed a new numerical approach for solving the time-dependent linear and nonlinear Fokker-Planck equations. In fact, we have discretized the time variable with the Crank-Nicolson method and, for the space variable, applied a numerical method based on Generalized Lagrange Jacobi Gauss-Lobatto (GLJGL) collocation. This leads to solving the equation in a series of time steps; at each time step, the problem reduces to a system of algebraic equations, which greatly simplifies the computation. The proposed method is simple and accurate. One of its merits is that it is derivative-free: a formula is provided for the derivative matrices, which removes the associated computational difficulty, and the generalized Lagrange basis functions and matrices need not be computed explicitly because they possess the Kronecker property. Linear and nonlinear Fokker-Planck equations are given as examples, and the results demonstrate that the presented method is valid, effective, reliable and does not require any restrictive assumptions for nonlinear terms.
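
    To show only the time discretization used above, here is a Crank-Nicolson sketch for a linear Fokker-Planck equation (an Ornstein-Uhlenbeck process); the GLJGL spatial collocation is deliberately replaced by plain central finite differences to keep the example short, so the spatial part does not follow the paper. Grid, step sizes and the diffusion coefficient are illustrative.

        import numpy as np

        # Fokker-Planck equation of an Ornstein-Uhlenbeck process:
        #   dp/dt = d/dx (x p) + D d2p/dx2,  stationary variance = D
        L_dom, N, D = 5.0, 201, 0.5
        x = np.linspace(-L_dom, L_dom, N)
        h = x[1] - x[0]

        Lmat = np.zeros((N, N))                        # central finite differences in space
        for i in range(1, N - 1):
            Lmat[i, i - 1] = -x[i - 1] / (2 * h) + D / h ** 2
            Lmat[i, i] = -2 * D / h ** 2
            Lmat[i, i + 1] = x[i + 1] / (2 * h) + D / h ** 2

        dt, steps = 0.01, 1000
        Id = np.eye(N)
        A = Id - 0.5 * dt * Lmat                       # Crank-Nicolson: A p_new = B p_old
        B = Id + 0.5 * dt * Lmat

        p = np.exp(-(x - 2.0) ** 2)                    # arbitrary initial density
        p /= p.sum() * h
        for _ in range(steps):
            p = np.linalg.solve(A, B @ p)

        p /= p.sum() * h
        mean = (x * p).sum() * h
        var = ((x - mean) ** 2 * p).sum() * h
        print(var, D)                                  # the long-time variance approaches D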

  8. Bayesian Inference for Generalized Linear Models for Spiking Neurons

    PubMed Central

    Gerwinn, Sebastian; Macke, Jakob H.; Bethge, Matthias

    2010-01-01

    Generalized Linear Models (GLMs) are commonly used statistical methods for modelling the relationship between neural population activity and presented stimuli. When the dimension of the parameter space is large, strong regularization has to be used in order to fit GLMs to datasets of realistic size without overfitting. By imposing properly chosen priors over parameters, Bayesian inference provides an effective and principled approach for achieving regularization. Here we show how the posterior distribution over model parameters of GLMs can be approximated by a Gaussian using the Expectation Propagation algorithm. In this way, we obtain an estimate of the posterior mean and posterior covariance, allowing us to calculate Bayesian confidence intervals that characterize the uncertainty about the optimal solution. From the posterior we also obtain a different point estimate, namely the posterior mean as opposed to the commonly used maximum a posteriori estimate. We systematically compare the different inference techniques on simulated as well as on multi-electrode recordings of retinal ganglion cells, and explore the effects of the chosen prior and the performance measure used. We find that good performance can be achieved by choosing a Laplace prior together with the posterior mean estimate. PMID:20577627

  9. Detecting treatment-subgroup interactions in clustered data with generalized linear mixed-effects model trees.

    PubMed

    Fokkema, M; Smits, N; Zeileis, A; Hothorn, T; Kelderman, H

    2017-10-25

    Identification of subgroups of patients for whom treatment A is more effective than treatment B, and vice versa, is of key importance to the development of personalized medicine. Tree-based algorithms are helpful tools for the detection of such interactions, but none of the available algorithms allow for taking into account clustered or nested dataset structures, which are particularly common in psychological research. Therefore, we propose the generalized linear mixed-effects model tree (GLMM tree) algorithm, which allows for the detection of treatment-subgroup interactions, while accounting for the clustered structure of a dataset. The algorithm uses model-based recursive partitioning to detect treatment-subgroup interactions, and a GLMM to estimate the random-effects parameters. In a simulation study, GLMM trees show higher accuracy in recovering treatment-subgroup interactions, higher predictive accuracy, and lower type II error rates than linear-model-based recursive partitioning and mixed-effects regression trees. Also, GLMM trees show somewhat higher predictive accuracy than linear mixed-effects models with pre-specified interaction effects, on average. We illustrate the application of GLMM trees on an individual patient-level data meta-analysis on treatments for depression. We conclude that GLMM trees are a promising exploratory tool for the detection of treatment-subgroup interactions in clustered datasets.

  10. A heteroscedastic generalized linear model with a non-normal speed factor for responses and response times.

    PubMed

    Molenaar, Dylan; Bolsinova, Maria

    2017-05-01

    In generalized linear modelling of responses and response times, the observed response time variables are commonly transformed to make their distribution approximately normal. A normal distribution for the transformed response times is desirable as it justifies the linearity and homoscedasticity assumptions in the underlying linear model. Past research has, however, shown that the transformed response times are not always normal. Models have been developed to accommodate this violation. In the present study, we propose a modelling approach for responses and response times to test and model non-normality in the transformed response times. Most importantly, we distinguish between non-normality due to heteroscedastic residual variances, and non-normality due to a skewed speed factor. In a simulation study, we establish parameter recovery and the power to separate both effects. In addition, we apply the model to a real data set. © 2017 The Authors. British Journal of Mathematical and Statistical Psychology published by John Wiley & Sons Ltd on behalf of British Psychological Society.

  11. The Ostomy Adjustment Scale: translation into Norwegian language with validation and reliability testing.

    PubMed

    Indrebø, Kirsten Lerum; Andersen, John Roger; Natvig, Gerd Karin

    2014-01-01

    The purpose of this study was to adapt the Ostomy Adjustment Scale to a Norwegian version and to assess its construct validity and 2 components of its reliability (internal consistency and test-retest reliability). One hundred fifty-eight of 217 patients (73%) with a colostomy, ileostomy, or urostomy participated in the study. Slightly more than half (56%) were men. Their mean age was 64 years (range, 26-91 years). All respondents had undergone ostomy surgery at least 3 months before participation in the study. The Ostomy Adjustment Scale was translated into Norwegian according to standard procedures for forward and backward translation. The questionnaire was sent to the participants via regular post. The Cronbach alpha and test-retest correlation were computed to assess reliability. Construct validity was evaluated via correlations between each item and score sums; correlations were used to analyze relationships between the Ostomy Adjustment Scale and the 36-item Short Form Health Survey, the Quality of Life Scale, the Hospital Anxiety & Depression Scale, and the General Self-Efficacy Scale. The Cronbach alpha was 0.93, and the test-retest reliability r was 0.69. The average item-to-sum-score correlation was 0.49 (range, 0.31-0.73). Results showed moderate negative correlations between the Ostomy Adjustment Scale and the Hospital Anxiety and Depression Scale (-0.37 and -0.40), and moderate positive correlations between the Ostomy Adjustment Scale and the 36-item Short Form Health Survey, the Quality of Life Scale, and the General Self-Efficacy Scale (0.30-0.45) with the exception of the pain domain in the Short Form 36 (0.28). Regression analysis showed linear associations between the Ostomy Adjustment Scale and sociodemographic and clinical variables with the exception of education. The Norwegian language version of the Ostomy Adjustment Scale was found to possess construct validity, along with internal consistency and test-retest reliability. The instrument is

  12. Linear shaped charge

    DOEpatents

    Peterson, David; Stofleth, Jerome H.; Saul, Venner W.

    2017-07-11

    Linear shaped charges are described herein. In a general embodiment, the linear shaped charge has an explosive with an elongated arrowhead-shaped profile. The linear shaped charge also has an elongated v-shaped liner that is inset into a recess of the explosive. Another linear shaped charge includes an explosive that is shaped as a star-shaped prism. Liners are inset into crevices of the explosive, where the explosive acts as a tamper.

  13. Testing concordance of instrumental variable effects in generalized linear models with application to Mendelian randomization

    PubMed Central

    Dai, James Y.; Chan, Kwun Chuen Gary; Hsu, Li

    2014-01-01

    Instrumental variable regression is one way to overcome unmeasured confounding and estimate causal effect in observational studies. Built on structural mean models, considerable work has recently been developed for consistent estimation of causal relative risk and causal odds ratio. Such models can sometimes suffer from identification issues for weak instruments. This has hampered the applicability of Mendelian randomization analysis in genetic epidemiology. When there are multiple genetic variants available as instrumental variables, and causal effect is defined in a generalized linear model in the presence of unmeasured confounders, we propose to test concordance between instrumental variable effects on the intermediate exposure and instrumental variable effects on the disease outcome, as a means to test the causal effect. We show that a class of generalized least squares estimators provide valid and consistent tests of causality. For causal effect of a continuous exposure on a dichotomous outcome in logistic models, the proposed estimators are shown to be asymptotically conservative. When the disease outcome is rare, such estimators are consistent due to the log-linear approximation of the logistic function. Optimality of such estimators relative to the well-known two-stage least squares estimator and the double-logistic structural mean model is further discussed. PMID:24863158

  14. 37 CFR 1.705 - Patent term adjustment determination.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ..., DEPARTMENT OF COMMERCE GENERAL RULES OF PRACTICE IN PATENT CASES Adjustment and Extension of Patent Term....18(e); and (2) A statement of the facts involved, specifying: (i) The correct patent term adjustment....703(a) through (e) for which an adjustment is sought and the adjustment as specified in § 1.703(f) to...

  15. Kinematic synthesis of adjustable robotic mechanisms

    NASA Astrophysics Data System (ADS)

    Chuenchom, Thatchai

    1993-01-01

    Conventional hard automation, such as a linkage-based or a cam-driven system, provides high speed capability and repeatability but not the flexibility required in many industrial applications. The conventional mechanisms, which are typically single-degree-of-freedom systems, are increasingly being replaced by multi-degree-of-freedom multi-actuators driven by logic controllers. Although this new trend in sophistication provides greatly enhanced flexibility, there are many instances where the flexibility needs are exaggerated and the associated complexity is unnecessary. Traditional mechanism-based hard automation, on the other hand, can neither fulfill multi-task requirements nor remain cost-effective, mainly due to a lack of methods and tools to design-in flexibility. This dissertation attempts to bridge this technological gap by developing Adjustable Robotic Mechanisms (ARM's) or 'programmable mechanisms' as a middle ground between high speed hard automation and expensive serial jointed-arm robots. This research introduces the concept of adjustable robotic mechanisms towards cost-effective manufacturing automation. A generalized analytical synthesis technique has been developed to support the computational design of ARM's that lays the theoretical foundation for synthesis of adjustable mechanisms. The synthesis method developed in this dissertation, called generalized adjustable dyad and triad synthesis, advances the well-known Burmester theory in kinematics to a new level. While this method provides planar solutions, a novel patented scheme is utilized for converting prescribed three-dimensional motion specifications into sets of planar projections. This provides an analytical and a computational tool for designing adjustable mechanisms that satisfy multiple sets of three-dimensional motion specifications. Several design issues were addressed, including adjustable parameter identification, branching defect, and mechanical errors. An efficient mathematical scheme for

  16. Comparing Multiple-Group Multinomial Log-Linear Models for Multidimensional Skill Distributions in the General Diagnostic Model. Research Report. ETS RR-08-35

    ERIC Educational Resources Information Center

    Xu, Xueli; von Davier, Matthias

    2008-01-01

    The general diagnostic model (GDM) utilizes located latent classes for modeling a multidimensional proficiency variable. In this paper, the GDM is extended by employing a log-linear model for multiple populations that assumes constraints on parameters across multiple groups. This constrained model is compared to log-linear models that assume…

  17. Nonlinear recurrent neural networks for finite-time solution of general time-varying linear matrix equations.

    PubMed

    Xiao, Lin; Liao, Bolin; Li, Shuai; Chen, Ke

    2018-02-01

    In order to solve general time-varying linear matrix equations (LMEs) more efficiently, this paper proposes two nonlinear recurrent neural networks based on two nonlinear activation functions. According to Lyapunov theory, the two nonlinear recurrent neural networks are proved to be convergent within finite time. In addition, by solving the associated differential equations, the upper bounds of the finite convergence time are determined analytically. Compared with existing recurrent neural networks, the proposed two nonlinear recurrent neural networks have a better convergence property (i.e., the upper bound is lower), and thus the accurate solutions of general time-varying LMEs can be obtained with less time. Finally, various situations are considered by setting different coefficient matrices of general time-varying LMEs, and a great variety of computer simulations (including the application to robot manipulators) have been conducted to validate the better finite-time convergence of the proposed two nonlinear recurrent neural networks. Copyright © 2017 Elsevier Ltd. All rights reserved.
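
    A simplified sketch of this family of recurrent networks for the time-varying equation A X(t) = B(t): define the residual E = A X - B, impose the design rule dE/dt = -gamma * phi(E) with a nonlinear (sign-power plus linear) activation, and integrate the implied state equation. The coefficient matrices, activation, gain and step size are illustrative, and this is not the exact pair of networks proposed in the paper.

        import numpy as np

        A = np.array([[3.0, 1.0], [1.0, 2.0]])
        A_inv = np.linalg.inv(A)
        def B(t):  return np.array([[np.sin(t), np.cos(t)], [np.cos(t), -np.sin(t)]])
        def dB(t): return np.array([[np.cos(t), -np.sin(t)], [-np.sin(t), -np.cos(t)]])

        def phi(E, r=0.5):
            return np.sign(E) * np.abs(E) ** r + E     # sign-power plus linear activation

        gamma, dt, T = 20.0, 1e-4, 2.0
        X = np.zeros((2, 2))                           # deliberately wrong initial state
        steps = int(T / dt)
        for step in range(steps):
            t = step * dt
            E = A @ X - B(t)                           # residual of the matrix equation
            X = X + dt * (A_inv @ (dB(t) - gamma * phi(E)))   # state equation from dE/dt = -gamma*phi(E)

        print(np.abs(A @ X - B(steps * dt)).max())     # small tracking residual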

  18. 42 CFR 416.172 - Adjustments to national payment rates.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 42 Public Health 3 2014-10-01 2014-10-01 false Adjustments to national payment rates. 416.172... Adjustments to national payment rates. (a) General rule. Contractors adjust the payment rates established for...; or (2) The geographically adjusted payment rate determined under this subpart. (c) Geographic...

  19. 42 CFR 416.172 - Adjustments to national payment rates.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 42 Public Health 3 2012-10-01 2012-10-01 false Adjustments to national payment rates. 416.172... Adjustments to national payment rates. (a) General rule. Contractors adjust the payment rates established for...; or (2) The geographically adjusted payment rate determined under this subpart. (c) Geographic...

  20. A generalized Lyapunov theory for robust root clustering of linear state space models with real parameter uncertainty

    NASA Technical Reports Server (NTRS)

    Yedavalli, R. K.

    1992-01-01

    The problem of analyzing and designing controllers for linear systems subject to real parameter uncertainty is considered. An elegant, unified theory for robust eigenvalue placement is presented for a class of D-regions defined by algebraic inequalities by extending the nominal matrix root clustering theory of Gutman and Jury (1981) to linear uncertain time systems. The author presents explicit conditions for matrix root clustering for different D-regions and establishes the relationship between the eigenvalue migration range and the parameter range. The bounds are all obtained by one-shot computation in the matrix domain and do not need any frequency sweeping or parameter gridding. The method uses the generalized Lyapunov theory for getting the bounds.

  1. 21 CFR 880.5100 - AC-powered adjustable hospital bed.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 21 Food and Drugs 8 2011-04-01 2011-04-01 false AC-powered adjustable hospital bed. 880.5100... (CONTINUED) MEDICAL DEVICES GENERAL HOSPITAL AND PERSONAL USE DEVICES General Hospital and Personal Use Therapeutic Devices § 880.5100 AC-powered adjustable hospital bed. (a) Identification. An AC-powered...

  2. 21 CFR 880.5100 - AC-powered adjustable hospital bed.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 21 Food and Drugs 8 2014-04-01 2014-04-01 false AC-powered adjustable hospital bed. 880.5100... (CONTINUED) MEDICAL DEVICES GENERAL HOSPITAL AND PERSONAL USE DEVICES General Hospital and Personal Use Therapeutic Devices § 880.5100 AC-powered adjustable hospital bed. (a) Identification. An AC-powered...

  3. 21 CFR 880.5100 - AC-powered adjustable hospital bed.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 21 Food and Drugs 8 2013-04-01 2013-04-01 false AC-powered adjustable hospital bed. 880.5100... (CONTINUED) MEDICAL DEVICES GENERAL HOSPITAL AND PERSONAL USE DEVICES General Hospital and Personal Use Therapeutic Devices § 880.5100 AC-powered adjustable hospital bed. (a) Identification. An AC-powered...

  4. 21 CFR 880.5100 - AC-powered adjustable hospital bed.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 21 Food and Drugs 8 2012-04-01 2012-04-01 false AC-powered adjustable hospital bed. 880.5100... (CONTINUED) MEDICAL DEVICES GENERAL HOSPITAL AND PERSONAL USE DEVICES General Hospital and Personal Use Therapeutic Devices § 880.5100 AC-powered adjustable hospital bed. (a) Identification. An AC-powered...

  5. Unification of the general non-linear sigma model and the Virasoro master equation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Boer, J. de; Halpern, M.B.

    1997-06-01

    The Virasoro master equation describes a large set of conformal field theories known as the affine-Virasoro constructions, in the operator algebra (affine Lie algebra) of the WZW model, while the Einstein equations of the general non-linear sigma model describe another large set of conformal field theories. This talk summarizes recent work which unifies these two sets of conformal field theories, together with a presumably large class of new conformal field theories. The basic idea is to consider spin-two operators of the form L{sub ij}{partial_derivative}x{sup i}{partial_derivative}x{sup j} in the background of a general sigma model. The requirement that these operators satisfy the Virasoro algebra leads to a set of equations called the unified Einstein-Virasoro master equation, in which the spin-two spacetime field L{sub ij} couples to the usual spacetime fields of the sigma model. The one-loop form of this unified system is presented, and some of its algebraic and geometric properties are discussed.

  6. A generalized fuzzy credibility-constrained linear fractional programming approach for optimal irrigation water allocation under uncertainty

    NASA Astrophysics Data System (ADS)

    Zhang, Chenglong; Guo, Ping

    2017-10-01

    Vague and fuzzy parametric information is a challenging issue in irrigation water management problems. In response to this problem, a generalized fuzzy credibility-constrained linear fractional programming (GFCCFP) model is developed for optimal irrigation water allocation under uncertainty. The model is derived by integrating generalized fuzzy credibility-constrained programming (GFCCP) into a linear fractional programming (LFP) optimization framework. It can therefore solve ratio optimization problems with fuzzy parameters and examine how the results vary under different credibility levels and weight coefficients of possibility and necessity. It has advantages in: (1) balancing the economic and resources objectives directly; (2) analyzing system efficiency; (3) generating more flexible decision solutions by varying the credibility levels and weight coefficients of possibility and necessity; and (4) supporting in-depth analysis of the interrelationships among system efficiency, credibility level and weight coefficient. The model is applied to a case study of irrigation water allocation in the middle reaches of the Heihe River Basin, northwest China, from which optimal irrigation water allocation solutions are obtained. Moreover, factorial analysis of the two parameters (i.e. λ and γ) indicates that the weight coefficient is the dominant factor, compared with the credibility level, for system efficiency. These results can effectively support reasonable irrigation water resources management and agricultural production.
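
    As background on the LFP component only (the generic form, not the GFCCFP model's specific objective or credibility constraints), a linear fractional program and its standard Charnes-Cooper linearization read:

      \[
      \max_{x}\ \frac{c^{\mathsf T}x + \alpha}{d^{\mathsf T}x + \beta}
      \quad \text{s.t.}\quad Ax \le b,\ x \ge 0,\ d^{\mathsf T}x + \beta > 0,
      \]
      which, with \(t = 1/(d^{\mathsf T}x + \beta)\) and \(y = t\,x\), becomes the linear program
      \[
      \max_{y,\,t}\ c^{\mathsf T}y + \alpha t
      \quad \text{s.t.}\quad Ay \le b\,t,\ d^{\mathsf T}y + \beta t = 1,\ y \ge 0,\ t \ge 0 .
      \]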

  7. On the Feasibility of a Generalized Linear Program

    DTIC Science & Technology

    1989-03-01

    generalized linear program by applying the same algorithm to a "phase-one" problem without requiring that the initial basic feasible solution to the latter be non-degenerate.

  8. Modeling the frequency of opposing left-turn conflicts at signalized intersections using generalized linear regression models.

    PubMed

    Zhang, Xin; Liu, Pan; Chen, Yuguang; Bai, Lu; Wang, Wei

    2014-01-01

    The primary objective of this study was to identify whether the frequency of traffic conflicts at signalized intersections can be modeled. The opposing left-turn conflicts were selected for the development of conflict predictive models. Using data collected at 30 approaches at 20 signalized intersections, the underlying distributions of the conflicts under different traffic conditions were examined. Different conflict-predictive models were developed to relate the frequency of opposing left-turn conflicts to various explanatory variables. The models considered include a linear regression model, a negative binomial model, and separate models developed for four traffic scenarios. The prediction performance of different models was compared. The frequency of traffic conflicts follows a negative binomial distribution. The linear regression model is not appropriate for the conflict frequency data. In addition, drivers behaved differently under different traffic conditions. Accordingly, the effects of conflicting traffic volumes on conflict frequency vary across different traffic conditions. The occurrences of traffic conflicts at signalized intersections can be modeled using generalized linear regression models. The use of conflict predictive models has the potential to expand the uses of surrogate safety measures in safety estimation and evaluation.
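
    As a generic illustration of this type of model (not the authors' fitted model or data), a negative binomial GLM for conflict counts can be fit with statsmodels in Python; the variable names and the dispersion value below are hypothetical placeholders.

      import numpy as np
      import pandas as pd
      import statsmodels.api as sm

      # Hypothetical data: opposing left-turn conflict counts per observation period,
      # with left-turn and opposing through volumes as predictors.
      rng = np.random.default_rng(0)
      df = pd.DataFrame({
          "conflicts": rng.poisson(5, size=120),
          "left_turn_vol": rng.uniform(50, 400, size=120),
          "opposing_vol": rng.uniform(200, 1200, size=120),
      })

      # Log-transformed volumes are a common choice for count models of this kind.
      X = sm.add_constant(np.log(df[["left_turn_vol", "opposing_vol"]]))
      y = df["conflicts"]

      # Negative binomial GLM (alpha is the dispersion parameter; fixed here for simplicity).
      model = sm.GLM(y, X, family=sm.families.NegativeBinomial(alpha=1.0))
      result = model.fit()
      print(result.summary())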

  9. General Linearized Theory of Quantum Fluctuations around Arbitrary Limit Cycles

    NASA Astrophysics Data System (ADS)

    Navarrete-Benlloch, Carlos; Weiss, Talitha; Walter, Stefan; de Valcárcel, Germán J.

    2017-09-01

    The theory of Gaussian quantum fluctuations around classical steady states in nonlinear quantum-optical systems (also known as standard linearization) is a cornerstone for the analysis of such systems. Its simplicity, together with its accuracy far from critical points or situations where the nonlinearity reaches the strong coupling regime, has turned it into a widespread technique, being the first method of choice in most works on the subject. However, such a technique finds strong practical and conceptual complications when one tries to apply it to situations in which the classical long-time solution is time dependent, a most prominent example being spontaneous limit-cycle formation. Here, we introduce a linearization scheme adapted to such situations, using the driven Van der Pol oscillator as a test bed for the method, which allows us to compare it with full numerical simulations. On a conceptual level, the scheme relies on the connection between the emergence of limit cycles and the spontaneous breaking of the symmetry under temporal translations. On the practical side, the method keeps the simplicity and linear scaling with the size of the problem (number of modes) characteristic of standard linearization, making it applicable to large (many-body) systems.
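
    For concreteness, the driven Van der Pol oscillator used as the test bed has the standard classical form, and linearizing about a time-dependent (limit-cycle) solution x_cl(t) yields a fluctuation equation with periodic coefficients (a classical sketch of the idea, not the paper's quantum treatment):

      \[
      \ddot{x} - \mu\,(1 - x^{2})\,\dot{x} + x = F\cos(\omega_{d} t),
      \]
      and, writing \(x(t) = x_{\mathrm{cl}}(t) + \delta x(t)\),
      \[
      \delta\ddot{x} - \mu\,\bigl(1 - x_{\mathrm{cl}}^{2}\bigr)\,\delta\dot{x} + \bigl(1 + 2\mu\, x_{\mathrm{cl}}\,\dot{x}_{\mathrm{cl}}\bigr)\,\delta x = 0 .
      \]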

  10. Next Linear Collider Home Page

    Science.gov Websites

    Welcome to the Next Linear Collider (NLC) Home Page, where you can learn about linear colliders in general and about this next-generation linear collider project's mission and design ideas.

  11. Mediation analysis when a continuous mediator is measured with error and the outcome follows a generalized linear model

    PubMed Central

    Valeri, Linda; Lin, Xihong; VanderWeele, Tyler J.

    2014-01-01

    Mediation analysis is a popular approach to examine the extent to which the effect of an exposure on an outcome is through an intermediate variable (mediator) and the extent to which the effect is direct. When the mediator is mis-measured the validity of mediation analysis can be severely undermined. In this paper we first study the bias of classical, non-differential measurement error on a continuous mediator in the estimation of direct and indirect causal effects in generalized linear models when the outcome is either continuous or discrete and exposure-mediator interaction may be present. Our theoretical results as well as a numerical study demonstrate that in the presence of non-linearities the bias of naive estimators for direct and indirect effects that ignore measurement error can take unintuitive directions. We then develop methods to correct for measurement error. Three correction approaches using method of moments, regression calibration and SIMEX are compared. We apply the proposed method to the Massachusetts General Hospital lung cancer study to evaluate the effect of genetic variants mediated through smoking on lung cancer risk. PMID:25220625
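
    For reference, in the error-free linear case without exposure-mediator interaction (a standard decomposition, not the paper's corrected estimators), the mediator and outcome regressions

      \[
      E[M \mid a, c] = \beta_{0} + \beta_{1} a + \beta_{2}^{\mathsf T} c, \qquad
      E[Y \mid a, m, c] = \theta_{0} + \theta_{1} a + \theta_{2} m + \theta_{3}^{\mathsf T} c
      \]

    give a direct effect of \(\theta_{1}(a - a^{*})\) and an indirect effect of \(\theta_{2}\beta_{1}(a - a^{*})\); classical measurement error in M attenuates the estimate of \(\theta_{2}\), shrinking the indirect effect and inflating the direct effect in this linear setting.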

  12. A Comparison between Linear IRT Observed-Score Equating and Levine Observed-Score Equating under the Generalized Kernel Equating Framework

    ERIC Educational Resources Information Center

    Chen, Haiwen

    2012-01-01

    In this article, linear item response theory (IRT) observed-score equating is compared under a generalized kernel equating framework with Levine observed-score equating for nonequivalent groups with anchor test design. Interestingly, these two equating methods are closely related despite being based on different methodologies. Specifically, when…

  13. Radiation phantom with humanoid shape and adjustable thickness

    DOEpatents

    Lehmann, Joerg [Pleasanton, CA; Levy, Joshua [Salem, NY; Stern, Robin L [Lodi, CA; Siantar, Christine Hartmann [Livermore, CA; Goldberg, Zelanna [Carmichael, CA

    2006-12-19

    A radiation phantom comprising a body with a general humanoid shape and at least a portion having an adjustable thickness. In one embodiment, the portion with an adjustable thickness comprises at least one tissue-equivalent slice.

  14. Non-Linear Approach in Kinesiology Should Be Preferred to the Linear--A Case of Basketball.

    PubMed

    Trninić, Marko; Jeličić, Mario; Papić, Vladan

    2015-07-01

    In kinesiology, medicine, biology and psychology, in which research focus is on dynamical self-organized systems, complex connections exist between variables. Non-linear nature of complex systems has been discussed and explained by the example of non-linear anthropometric predictors of performance in basketball. Previous studies interpreted relations between anthropometric features and measures of effectiveness in basketball by (a) using linear correlation models, and by (b) including all basketball athletes in the same sample of participants regardless of their playing position. In this paper the significance and character of linear and non-linear relations between simple anthropometric predictors (AP) and performance criteria consisting of situation-related measures of effectiveness (SE) in basketball were determined and evaluated. The sample of participants consisted of top-level junior basketball players divided in three groups according to their playing time (8 minutes and more per game) and playing position: guards (N = 42), forwards (N = 26) and centers (N = 40). Linear (general model) and non-linear (general model) regression models were calculated simultaneously and separately for each group. The conclusion is viable: non-linear regressions are frequently superior to linear correlations when interpreting actual association logic among research variables.

  15. Mössbauer spectra linearity improvement by sine velocity waveform followed by linearization process

    NASA Astrophysics Data System (ADS)

    Kohout, Pavel; Frank, Tomas; Pechousek, Jiri; Kouril, Lukas

    2018-05-01

    This note reports the development of a new method for linearizing the Mössbauer spectra recorded with a sine drive velocity signal. Mössbauer spectra linearity is a critical parameter to determine Mössbauer spectrometer accuracy. Measuring spectra with a sine velocity axis and consecutive linearization increases the linearity of spectra in a wider frequency range of a drive signal, as generally harmonic movement is natural for velocity transducers. The obtained data demonstrate that linearized sine spectra have lower nonlinearity and line width parameters in comparison with those measured using a traditional triangle velocity signal.

  16. Assessing the Tangent Linear Behaviour of Common Tracer Transport Schemes and Their Use in a Linearised Atmospheric General Circulation Model

    NASA Technical Reports Server (NTRS)

    Holdaway, Daniel; Kent, James

    2015-01-01

    The linearity of a selection of common advection schemes is tested and examined with a view to their use in the tangent linear and adjoint versions of an atmospheric general circulation model. The schemes are tested within a simple offline one-dimensional periodic domain as well as using a simplified and complete configuration of the linearised version of NASA's Goddard Earth Observing System version 5 (GEOS-5). All schemes which prevent the development of negative values and preserve the shape of the solution are confirmed to have nonlinear behaviour. The piecewise parabolic method (PPM) with certain flux limiters, including that used by default in GEOS-5, is found to support linear growth near the shocks. This property can cause the rapid development of unrealistically large perturbations within the tangent linear and adjoint models. It is shown that these schemes with flux limiters should not be used within the linearised version of a transport scheme. The results from tests using GEOS-5 show that the current default scheme (a version of PPM) is not suitable for the tangent linear and adjoint model, and that using a linear third-order scheme for the linearised model produces better behaviour. Using the third-order scheme for the linearised model improves the correlations between the linear and non-linear perturbation trajectories for cloud liquid water and cloud liquid ice in GEOS-5.

  17. 17 CFR 143.8 - Inflation-adjusted civil monetary penalties.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 17 Commodity and Securities Exchanges 1 2010-04-01 2010-04-01 false Inflation-adjusted civil... JURISDICTION General Provisions § 143.8 Inflation-adjusted civil monetary penalties. (a) Unless otherwise amended by an act of Congress, the inflation-adjusted maximum civil monetary penalty for each violation of...

  18. Sources of stress for students in high school college preparatory and general education programs: group differences and associations with adjustment.

    PubMed

    Suldo, Shannon M; Shaunessy, Elizabeth; Thalji, Amanda; Michalowski, Jessica; Shaffer, Emily

    2009-01-01

    Navigating puberty while developing independent living skills may render adolescents particularly vulnerable to stress, which may ultimately contribute to mental health problems (Compas, Orosan, & Grant, 1993; Elgar, Arlett, & Groves, 2003). The academic transition to high school presents additional challenges as youth are required to interact with a new and larger peer group and manage greater academic expectations. For students enrolled in academically rigorous college preparatory programs, such as the International Baccalaureate (IB) program, the amount of stress perceived may be greater than typical (Suldo, Shaunessy, & Hardesty, 2008). This study investigated the environmental stressors and psychological adjustment of 162 students participating in the IB program and a comparison sample of 157 students in general education. Factor analysis indicated students experience 7 primary categories of stressors, which were examined in relation to students' adjustment specific to academic and psychological functioning. The primary source of stress experienced by IB students was related to academic requirements. In contrast, students in the general education program indicated higher levels of stressors associated with parent-child relations, academic struggles, conflict within family, and peer relations, as well as role transitions and societal problems. Comparisons of correlations between categories of stressors and students' adjustment by curriculum group reveal that students in the IB program reported more symptoms of psychopathology and reduced academic functioning as they experienced higher levels of stress, particularly stressors associated with academic requirements, transitions and societal problems, academic struggles, and extra-curricular activities. Applied implications stem from findings suggesting that students in college preparatory programs are more likely to (a) experience elevated stress related to academic demands as opposed to more typical adolescent

  19. Generalized Jeans' Escape of Pick-Up Ions in Quasi-Linear Relaxation

    NASA Technical Reports Server (NTRS)

    Moore, T. E.; Khazanov, G. V.

    2011-01-01

    Jeans escape is a well-validated formulation of upper atmospheric escape that we have generalized to estimate plasma escape from ionospheres. It involves the computation of the parts of particle velocity space that are unbound by the gravitational potential at the exobase, followed by a calculation of the flux carried by such unbound particles as they escape from the potential well. To generalize this approach for ions, we superposed an electrostatic ambipolar potential and a centrifugal potential, for motions across and along a divergent magnetic field. We then considered how the presence of superthermal electrons, produced by precipitating auroral primary electrons, controls the ambipolar potential. We also showed that the centrifugal potential plays a small role in controlling the mass escape flux from the terrestrial ionosphere. We then applied the transverse ion velocity distribution produced when ions, picked up by supersonic (i.e., auroral) ionospheric convection, relax via quasi-linear diffusion, as estimated for cometary comas [1]. The results provide a theoretical basis for observed ion escape response to electromagnetic and kinetic energy sources. They also suggest that super-sonic but sub-Alfvenic flow, with ion pick-up, is a unique and important regime of ion-neutral coupling, in which plasma wave-particle interactions are driven by ion-neutral collisions at densities for which the collision frequency falls near or below the gyro-frequency. As another possible illustration of this process, the heliopause ribbon discovered by the IBEX mission involves interactions between the solar wind ions and the interstellar neutral gas, in a regime that may be analogous [2].
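
    For reference, the classical thermal Jeans escape flux that this approach generalizes (for ions, the ambipolar and centrifugal potentials are added to the gravitational term) is usually written as

      \[
      \Phi_{J} = \frac{n_{c}\,\bar{v}}{2\sqrt{\pi}}\,(1 + \lambda_{c})\,e^{-\lambda_{c}},
      \qquad
      \lambda_{c} = \frac{G M m}{k_{B} T\, r_{c}}, \qquad
      \bar{v} = \sqrt{\frac{2 k_{B} T}{m}},
      \]

    where n_c, T, and r_c are the exobase density, temperature, and radius.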

  20. 26 CFR 301.6231(a)(6)-1 - Computational adjustments.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 26 Internal Revenue 18 2013-04-01 2013-04-01 false Computational adjustments. 301.6231(a)(6)-1... Computational adjustments. (a) Changes in a partner's tax liability—(1) In general. A change in the tax... 63 of the Internal Revenue Code is made through a computational adjustment. A computational...

  1. 26 CFR 301.6231(a)(6)-1 - Computational adjustments.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 26 Internal Revenue 18 2012-04-01 2012-04-01 false Computational adjustments. 301.6231(a)(6)-1... Computational adjustments. (a) Changes in a partner's tax liability—(1) In general. A change in the tax... 63 of the Internal Revenue Code is made through a computational adjustment. A computational...

  2. 26 CFR 301.6231(a)(6)-1 - Computational adjustments.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 26 Internal Revenue 18 2011-04-01 2011-04-01 false Computational adjustments. 301.6231(a)(6)-1... Computational adjustments. (a) Changes in a partner's tax liability—(1) In general. A change in the tax... 63 of the Internal Revenue Code is made through a computational adjustment. A computational...

  3. 26 CFR 301.6231(a)(6)-1 - Computational adjustments.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 26 Internal Revenue 18 2010-04-01 2010-04-01 false Computational adjustments. 301.6231(a)(6)-1... Computational adjustments. (a) Changes in a partner's tax liability—(1) In general. A change in the tax... 63 of the Internal Revenue Code is made through a computational adjustment. A computational...

  4. 26 CFR 301.6231(a)(6)-1 - Computational adjustments.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 26 Internal Revenue 18 2014-04-01 2014-04-01 false Computational adjustments. 301.6231(a)(6)-1... Computational adjustments. (a) Changes in a partner's tax liability—(1) In general. A change in the tax... 63 of the Internal Revenue Code is made through a computational adjustment. A computational...

  5. Non-Linear Concentration-Response Relationships between Ambient Ozone and Daily Mortality.

    PubMed

    Bae, Sanghyuk; Lim, Youn-Hee; Kashima, Saori; Yorifuji, Takashi; Honda, Yasushi; Kim, Ho; Hong, Yun-Chul

    2015-01-01

    Ambient ozone (O3) concentration has been reported to be significantly associated with mortality. However, the linearity of the relationship and the presence of a threshold have been controversial. The aim of the present study was to examine the concentration-response relationship and the threshold of the association between ambient O3 concentration and non-accidental mortality in 13 Japanese and Korean cities from 2000 to 2009. We selected Japanese and Korean cities with populations of over 1 million. We constructed Poisson regression models adjusting for daily mean temperature, daily mean PM10, humidity, time trend, season, year, day of the week, holidays and yearly population. The association between O3 concentration and mortality was examined using linear, spline and linear-threshold models. The thresholds were estimated for each city by constructing linear-threshold models. We also examined the city-combined association using a generalized additive mixed model. The mean O3 concentrations did not differ greatly between Korea and Japan (26.2 ppb and 24.2 ppb, respectively). Seven out of 13 cities showed better fits for the spline model than for the linear model, supporting a non-linear relationship between O3 concentration and mortality. All 7 of these cities showed J- or U-shaped associations, suggesting the existence of thresholds. City-specific thresholds ranged from 11 to 34 ppb. The city-combined analysis also showed a non-linear association with a threshold around 30-40 ppb. We observed non-linear concentration-response relationships with thresholds between daily mean ambient O3 concentration and the daily number of non-accidental deaths in Japanese and Korean cities.
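
    As a rough, hypothetical sketch of the two model families compared (spline versus linear-threshold) in a Poisson regression setting, using statsmodels in Python with placeholder data and variable names, and a single illustrative threshold of 30 ppb rather than the city-specific estimates:

      import numpy as np
      import pandas as pd
      import statsmodels.api as sm
      import statsmodels.formula.api as smf

      # Hypothetical daily series for one city: non-accidental deaths, ozone (ppb),
      # temperature, and PM10.  These are placeholders, not the study's data.
      rng = np.random.default_rng(1)
      n = 1000
      df = pd.DataFrame({
          "deaths": rng.poisson(40, n),
          "o3": rng.uniform(5, 60, n),
          "temp": rng.normal(15, 8, n),
          "pm10": rng.uniform(10, 80, n),
      })

      # Spline (non-linear) model for the ozone term, via a B-spline basis in the formula.
      spline_fit = smf.glm("deaths ~ bs(o3, df=4) + temp + pm10",
                           data=df, family=sm.families.Poisson()).fit()

      # Linear-threshold model: ozone contributes only above a candidate threshold (30 ppb here).
      df["o3_excess"] = np.maximum(df["o3"] - 30.0, 0.0)
      threshold_fit = smf.glm("deaths ~ o3_excess + temp + pm10",
                              data=df, family=sm.families.Poisson()).fit()

      # Compare fits, e.g. by AIC, as one simple way to gauge non-linearity.
      print(spline_fit.aic, threshold_fit.aic)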

  6. Robust Linear Models for Cis-eQTL Analysis.

    PubMed

    Rantalainen, Mattias; Lindgren, Cecilia M; Holmes, Christopher C

    2015-01-01

    Expression Quantitative Trait Loci (eQTL) analysis enables characterisation of functional genetic variation influencing expression levels of individual genes. In outbred populations, including humans, eQTLs are commonly analysed using the conventional linear model, adjusting for relevant covariates, assuming an allelic dosage model and a Gaussian error term. However, gene expression data generally have noise that induces heavy-tailed errors relative to the Gaussian distribution and often include atypical observations, or outliers. Such departures from modelling assumptions can lead to an increased rate of type II errors (false negatives), and to some extent also type I errors (false positives). Careful model checking can reduce the risk of type I errors but often not type II errors, since it is generally too time-consuming to carefully check all models with a non-significant effect in large-scale and genome-wide studies. Here we propose the application of a robust linear model for eQTL analysis to reduce adverse effects of deviations from the assumption of Gaussian residuals. We present results from a simulation study as well as results from the analysis of real eQTL data sets. Our findings suggest that in many situations robust models have the potential to provide more reliable eQTL results compared to conventional linear models, particularly with respect to reducing type II errors due to non-Gaussian noise. Post-genomic data, such as that generated in genome-wide eQTL studies, are often noisy and frequently contain atypical observations. Robust statistical models have the potential to provide more reliable results and increased statistical power under non-Gaussian conditions. The results presented here suggest that robust models should be considered routinely alongside other commonly used methodologies for eQTL analysis.
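
    A minimal sketch of the contrast between a conventional least-squares fit and a robust (Huber M-estimator) fit, using statsmodels in Python with simulated, heavy-tailed placeholder data; this illustrates the general idea rather than the paper's eQTL pipeline:

      import numpy as np
      import statsmodels.api as sm

      # Hypothetical eQTL-style data: expression of one gene, allele dosage (0/1/2),
      # and a covariate; heavy-tailed noise plus a few outliers are injected.
      rng = np.random.default_rng(2)
      n = 300
      dosage = rng.integers(0, 3, size=n).astype(float)
      covariate = rng.normal(size=n)
      expression = 0.4 * dosage + 0.2 * covariate + rng.standard_t(df=3, size=n)
      expression[:5] += 8.0  # outliers

      X = sm.add_constant(np.column_stack([dosage, covariate]))

      # Conventional least-squares fit versus a robust fit with Huber's M-estimator.
      ols_fit = sm.OLS(expression, X).fit()
      rlm_fit = sm.RLM(expression, X, M=sm.robust.norms.HuberT()).fit()

      print("OLS dosage effect:", ols_fit.params[1])
      print("RLM dosage effect:", rlm_fit.params[1])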

  7. On Fitting Generalized Linear Mixed-effects Models for Binary Responses using Different Statistical Packages

    PubMed Central

    Zhang, Hui; Lu, Naiji; Feng, Changyong; Thurston, Sally W.; Xia, Yinglin; Tu, Xin M.

    2011-01-01

    The generalized linear mixed-effects model (GLMM) is a popular paradigm to extend models for cross-sectional data to a longitudinal setting. When applied to modeling binary responses, different software packages and even different procedures within a package may give quite different results. In this report, we describe the statistical approaches that underlie these different procedures and discuss their strengths and weaknesses when applied to fit correlated binary responses. We then illustrate these considerations by applying these procedures implemented in some popular software packages to simulated and real study data. Our simulation results indicate a lack of reliability for most of the procedures considered, which carries significant implications for applying such popular software packages in practice. PMID:21671252

  8. Massive parallelization of serial inference algorithms for a complex generalized linear model

    PubMed Central

    Suchard, Marc A.; Simpson, Shawn E.; Zorych, Ivan; Ryan, Patrick; Madigan, David

    2014-01-01

    Following a series of high-profile drug safety disasters in recent years, many countries are redoubling their efforts to ensure the safety of licensed medical products. Large-scale observational databases such as claims databases or electronic health record systems are attracting particular attention in this regard, but present significant methodological and computational concerns. In this paper we show how high-performance statistical computation, including graphics processing units (relatively inexpensive, highly parallel computing devices), can enable complex methods in large databases. We focus on optimization and massive parallelization of cyclic coordinate descent approaches to fit a conditioned generalized linear model involving tens of millions of observations and thousands of predictors in a Bayesian context. We find orders-of-magnitude improvement in overall run-time. Coordinate descent approaches are ubiquitous in high-dimensional statistics and the algorithms we propose open up exciting new methodological possibilities with the potential to significantly improve drug safety. PMID:25328363
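
    To convey the update structure that is being parallelized, here is a minimal serial sketch of cyclic coordinate descent for an L2-penalized least-squares problem; the paper's target is a Bayesian conditioned GLM on GPUs, so this toy Python example only illustrates the coordinate-wise pattern:

      import numpy as np

      def cyclic_coordinate_descent(X, y, lam=1.0, n_sweeps=100):
          """Minimize 0.5*||y - X b||^2 + 0.5*lam*||b||^2 by cycling over coordinates."""
          n, p = X.shape
          b = np.zeros(p)
          col_sq = (X ** 2).sum(axis=0)          # precomputed x_j^T x_j
          resid = y - X @ b                       # running residual
          for _ in range(n_sweeps):
              for j in range(p):
                  # Partial residual that excludes coordinate j's current contribution.
                  resid += X[:, j] * b[j]
                  b_j_new = X[:, j] @ resid / (col_sq[j] + lam)
                  resid -= X[:, j] * b_j_new
                  b[j] = b_j_new
          return b

      # Small synthetic check against the closed-form ridge solution.
      rng = np.random.default_rng(3)
      X = rng.normal(size=(200, 10))
      y = X @ rng.normal(size=10) + 0.1 * rng.normal(size=200)
      b_cd = cyclic_coordinate_descent(X, y, lam=1.0)
      b_ridge = np.linalg.solve(X.T @ X + np.eye(10), X.T @ y)
      print(np.max(np.abs(b_cd - b_ridge)))  # should be small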

  9. A general parallel sparse-blocked matrix multiply for linear scaling SCF theory

    NASA Astrophysics Data System (ADS)

    Challacombe, Matt

    2000-06-01

    A general approach to the parallel sparse-blocked matrix-matrix multiply is developed in the context of linear scaling self-consistent-field (SCF) theory. The data-parallel message passing method uses non-blocking communication to overlap computation and communication. The space-filling curve heuristic is used to achieve data locality for sparse matrix elements that decay with “separation”. Load balance is achieved by solving the bin-packing problem for blocks with variable size. With this new method as the kernel, parallel performance of the simplified density matrix minimization (SDMM) for solution of the SCF equations is investigated for RHF/6-31G** water clusters and RHF/3-21G estane globules. Sustained rates above 5.7 GFLOPS for the SDMM have been achieved for (H2O)200 with 95 Origin 2000 processors. Scalability is found to be limited by load imbalance, which increases with decreasing granularity, due primarily to the inhomogeneous distribution of variable block sizes.
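
    A minimal serial NumPy sketch of the blocked (tiled) multiply pattern; the kernel described in the paper is parallel and sparse-blocked with variable block sizes and space-filling-curve ordering, none of which is reproduced here:

      import numpy as np

      def blocked_matmul(A, B, block=64):
          """Dense blocked matrix multiply C = A @ B, looping over square tiles.

          Illustrates only the tiling pattern; a sparse-blocked kernel would skip
          tiles whose norm falls below a drop tolerance and distribute tiles
          across processes.
          """
          n, k = A.shape
          k2, m = B.shape
          assert k == k2
          C = np.zeros((n, m))
          for i0 in range(0, n, block):
              for j0 in range(0, m, block):
                  for k0 in range(0, k, block):
                      C[i0:i0 + block, j0:j0 + block] += (
                          A[i0:i0 + block, k0:k0 + block] @ B[k0:k0 + block, j0:j0 + block]
                      )
          return C

      A = np.random.rand(300, 200)
      B = np.random.rand(200, 250)
      assert np.allclose(blocked_matmul(A, B), A @ B)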

  10. New Parents’ Psychological Adjustment and Trajectories of Early Parental Involvement

    PubMed Central

    Jia, Rongfang; Kotila, Letitia E.; Schoppe-Sullivan, Sarah J.; Kamp Dush, Claire M.

    2016-01-01

    Trajectories of parental involvement time (engagement and child care) across 3, 6, and 9 months postpartum and associations with parents’ own and their partners’ psychological adjustment (dysphoria, anxiety, and empathic personal distress) were examined using a sample of dual-earner couples experiencing first-time parenthood (N = 182 couples). Using time diary measures that captured intensive parenting moments, hierarchical linear modeling analyses revealed that patterns of associations between psychological adjustment and parental involvement time depended on the parenting domain, aspect of psychological adjustment, and parent gender. Psychological adjustment difficulties tended to bias the 2-parent system toward a gendered pattern of “mother step in” and “father step out,” as father involvement tended to decrease, and mother involvement either remained unchanged or increased, in response to their own and their partners’ psychological adjustment difficulties. In contrast, few significant effects were found in models using parental involvement to predict psychological adjustment. PMID:27397935

  11. Accuracy assessment of the linear Poisson-Boltzmann equation and reparametrization of the OBC generalized Born model for nucleic acids and nucleic acid-protein complexes.

    PubMed

    Fogolari, Federico; Corazza, Alessandra; Esposito, Gennaro

    2015-04-05

    The generalized Born model in the Onufriev, Bashford, and Case (Onufriev et al., Proteins: Struct Funct Genet 2004, 55, 383) implementation has emerged as one of the best compromises between accuracy and speed of computation. For simulations of nucleic acids, however, a number of issues should be addressed: (1) the generalized Born model is based on a linear model, and the linearization of the reference Poisson-Boltzmann equation may be questioned for highly charged systems such as nucleic acids; (2) although much attention has been given to potentials, solvation forces could be much less sensitive to linearization than the potentials; and (3) the accuracy of the Onufriev-Bashford-Case (OBC) model for nucleic acids depends on fine-tuning of parameters. Here, we show that the linearization of the Poisson-Boltzmann equation has mild effects on computed forces, and that with optimal choice of the OBC model parameters, solvation forces, essential for molecular dynamics simulations, agree well with those computed using the reference Poisson-Boltzmann model. © 2015 Wiley Periodicals, Inc.
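
    For context, the commonly used (Still-type) generalized Born expression on which the OBC variant builds is

      \[
      \Delta G_{\mathrm{GB}} = -\frac{1}{2}\left(\frac{1}{\epsilon_{\mathrm{in}}} - \frac{1}{\epsilon_{\mathrm{out}}}\right)
      \sum_{i,j} \frac{q_{i} q_{j}}{f_{\mathrm{GB}}(r_{ij})},
      \qquad
      f_{\mathrm{GB}}(r_{ij}) = \sqrt{\, r_{ij}^{2} + R_{i} R_{j}\, \exp\!\left(-\frac{r_{ij}^{2}}{4 R_{i} R_{j}}\right)},
      \]

    where the R_i are effective Born radii; the OBC parameters being tuned enter through the recipe used to compute those radii.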

  12. Linear fixed-field multipass arcs for recirculating linear accelerators

    DOE PAGES

    Morozov, V. S.; Bogacz, S. A.; Roblin, Y. R.; ...

    2012-06-14

    Recirculating Linear Accelerators (RLA's) provide a compact and efficient way of accelerating particle beams to medium and high energies by reusing the same linac for multiple passes. In the conventional scheme, after each pass, the different energy beams coming out of the linac are separated and directed into appropriate arcs for recirculation, with each pass requiring a separate fixed-energy arc. In this paper we present a concept of an RLA return arc based on linear combined-function magnets, in which two and potentially more consecutive passes with very different energies are transported through the same string of magnets. By adjusting the dipole and quadrupole components of the constituting linear combined-function magnets, the arc is designed to be achromatic and to have zero initial and final reference orbit offsets for all transported beam energies. We demonstrate the concept by developing a design for a droplet-shaped return arc for a dog-bone RLA capable of transporting two beam passes with momenta different by a factor of two. Finally, we present the results of tracking simulations of the two passes and lay out the path to end-to-end design and simulation of a complete dog-bone RLA.

  13. ALDO: A radiation-tolerant, low-noise, adjustable low drop-out linear regulator in 0.35 μm CMOS technology

    NASA Astrophysics Data System (ADS)

    Carniti, P.; Cassina, L.; Gotti, C.; Maino, M.; Pessina, G.

    2016-07-01

    In this work we present ALDO, an adjustable low drop-out linear regulator designed in AMS 0.35 μm CMOS technology. It is specifically tailored for use in the upgraded LHCb RICH detector in order to improve the power supply noise for the front end readout chip (CLARO). ALDO is designed with radiation-tolerant solutions such as an all-MOS band-gap voltage reference and layout techniques aiming to make it able to operate in harsh environments like High Energy Physics accelerators. It is capable of driving up to 200 mA while keeping an adequate power supply filtering capability in a very wide frequency range from 10 Hz up to 100 MHz. This property allows us to suppress the noise and high frequency spikes that could be generated by a DC/DC regulator, for example. ALDO also shows a very low noise of 11.6 μV RMS in the same frequency range. Its output is protected with over-current and short detection circuits for a safe integration in tightly packed environments. Design solutions and measurements of the first prototype are presented.

  14. Some comparisons of complexity in dictionary-based and linear computational models.

    PubMed

    Gnecco, Giorgio; Kůrková, Věra; Sanguineti, Marcello

    2011-03-01

    Neural networks provide a more flexible approximation of functions than traditional linear regression. In the latter, one can only adjust the coefficients in linear combinations of fixed sets of functions, such as orthogonal polynomials or Hermite functions, while for neural networks, one may also adjust the parameters of the functions which are being combined. However, some useful properties of linear approximators (such as uniqueness, homogeneity, and continuity of best approximation operators) are not satisfied by neural networks. Moreover, optimization of parameters in neural networks becomes more difficult than in linear regression. Experimental results suggest that these drawbacks of neural networks are offset by substantially lower model complexity, allowing accuracy of approximation even in high-dimensional cases. We give some theoretical results comparing requirements on model complexity for two types of approximators, the traditional linear ones and so-called variable-basis types, which include neural networks, radial, and kernel models. We compare upper bounds on worst-case errors in variable-basis approximation with lower bounds on such errors for any linear approximator. Using methods from nonlinear approximation and integral representations tailored to computational units, we describe some cases where neural networks outperform any linear approximator. Copyright © 2010 Elsevier Ltd. All rights reserved.

  15. Relationship of early-life stress and resilience to military adjustment in a young adulthood population.

    PubMed

    Choi, Kang; Im, Hyoungjune; Kim, Joohan; Choi, Kwang H; Jon, Duk-In; Hong, Hyunju; Hong, Narei; Lee, Eunjung; Seok, Jeong-Ho

    2013-11-01

    Early-life stress (ELS) may mediate adjustment problems while resilience may protect individuals against adjustment problems during military service. We investigated the relationship of ELS and resilience with adjustment problem factor scores in the Korea Military Personality Test (KMPT) in candidates for military service. Four hundred and sixty-one candidates participated in this study. Vulnerability traits for military adjustment, ELS, and resilience were assessed using the KMPT, the Korean Early-Life Abuse Experience Questionnaire, and the Resilience Quotient Test, respectively. Data were analyzed using multiple linear regression analyses. The final model of the multiple linear regression analyses explained 30.2% of the total variance of the sum of the adjustment problem factor scores of the KMPT. Neglect and exposure to domestic violence had a positive association with the total adjustment problem factor scores of the KMPT, but emotion control, impulse control, and optimism factor scores as well as education and occupational status were inversely associated with the total military adjustment problem score. ELS and resilience are important modulating factors in adjusting to military service. We suggest that neglect and exposure to domestic violence during early life may increase problems with adjustment, but the capacity to control emotions and impulses, as well as an optimistic attitude, may play protective roles in adjustment to military life. The screening procedures for ELS and the development of psychological interventions may be helpful for young adults to adjust to military service.

  16. Compact tunable silicon photonic differential-equation solver for general linear time-invariant systems.

    PubMed

    Wu, Jiayang; Cao, Pan; Hu, Xiaofeng; Jiang, Xinhong; Pan, Ting; Yang, Yuxing; Qiu, Ciyuan; Tremblay, Christine; Su, Yikai

    2014-10-20

    We propose and experimentally demonstrate an all-optical temporal differential-equation solver that can be used to solve ordinary differential equations (ODEs) characterizing general linear time-invariant (LTI) systems. The photonic device implemented by an add-drop microring resonator (MRR) with two tunable interferometric couplers is monolithically integrated on a silicon-on-insulator (SOI) wafer with a compact footprint of ~60 μm × 120 μm. By thermally tuning the phase shifts along the bus arms of the two interferometric couplers, the proposed device is capable of solving first-order ODEs with two variable coefficients. The operation principle is theoretically analyzed, and system testing of solving ODE with tunable coefficients is carried out for 10-Gb/s optical Gaussian-like pulses. The experimental results verify the effectiveness of the fabricated device as a tunable photonic ODE solver.

  17. Process Setting through General Linear Model and Response Surface Method

    NASA Astrophysics Data System (ADS)

    Senjuntichai, Angsumalin

    2010-10-01

    The objective of this study is to improve the efficiency of the flow-wrap packaging process in the soap industry through the reduction of defectives. At the 95% confidence level, regression analysis identifies the sealing temperature and the temperatures of the upper and lower crimpers as the significant factors for the flow-wrap process with respect to the number/percentage of defectives. Twenty-seven experiments were designed and performed according to three levels of each controllable factor. The general linear model (GLM) suggests values of 185, 85 and 85°C for the sealing temperature and the temperatures of the upper and lower crimpers, respectively, while the response surface method (RSM) places the optimal process conditions at 186, 89 and 88°C. Because the two methods make different assumptions about the relationship between the percentage of defectives and the three temperature parameters, their suggested conditions differ slightly. Fortunately, the estimated percentage of defectives of 5.51% under the GLM process condition and the predicted percentage of 4.62% under the RSM process condition are not significantly different. At the 95% confidence level, however, the percentage of defectives under the RSM condition can be much lower (approximately 2.16%) than under the GLM condition, albeit with wider variation. Lastly, the percentages of defectives under the conditions suggested by GLM and RSM are reduced by 55.81% and 62.95%, respectively.
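
    As a hypothetical sketch of fitting a full second-order response surface to such data with statsmodels in Python (simulated placeholder values and variable names, not the study's measurements):

      import numpy as np
      import pandas as pd
      import statsmodels.formula.api as smf

      # Hypothetical 3-factor, 3-level experiment: sealing temperature and the
      # temperatures of the upper and lower crimpers, with percentage of
      # defectives as the response.
      rng = np.random.default_rng(4)
      seal_levels = [175, 185, 195]
      crimp_levels = [80, 85, 90]
      grid = pd.DataFrame(
          [(s, u, l) for s in seal_levels for u in crimp_levels for l in crimp_levels],
          columns=["t_seal", "t_upper", "t_lower"],
      )
      grid["pct_defective"] = (
          5 + 0.002 * (grid["t_seal"] - 186) ** 2
            + 0.01 * (grid["t_upper"] - 88) ** 2
            + 0.01 * (grid["t_lower"] - 88) ** 2
            + rng.normal(0, 0.3, len(grid))
      )

      # Full second-order (response-surface) model: linear, interaction, and square terms.
      rsm = smf.ols(
          "pct_defective ~ (t_seal + t_upper + t_lower) ** 2"
          " + I(t_seal ** 2) + I(t_upper ** 2) + I(t_lower ** 2)",
          data=grid,
      ).fit()
      print(rsm.params)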

  18. Variational Bayesian Parameter Estimation Techniques for the General Linear Model

    PubMed Central

    Starke, Ludger; Ostwald, Dirk

    2017-01-01

    Variational Bayes (VB), variational maximum likelihood (VML), restricted maximum likelihood (ReML), and maximum likelihood (ML) are cornerstone parametric statistical estimation techniques in the analysis of functional neuroimaging data. However, the theoretical underpinnings of these model parameter estimation techniques are rarely covered in introductory statistical texts. Because of the widespread practical use of VB, VML, ReML, and ML in the neuroimaging community, we reasoned that a theoretical treatment of their relationships and their application in a basic modeling scenario may be helpful for both neuroimaging novices and practitioners alike. In this technical study, we thus revisit the conceptual and formal underpinnings of VB, VML, ReML, and ML and provide a detailed account of their mathematical relationships and implementational details. We further apply VB, VML, ReML, and ML to the general linear model (GLM) with non-spherical error covariance as commonly encountered in the first-level analysis of fMRI data. To this end, we explicitly derive the corresponding free energy objective functions and ensuing iterative algorithms. Finally, in the applied part of our study, we evaluate the parameter and model recovery properties of VB, VML, ReML, and ML, first in an exemplary setting and then in the analysis of experimental fMRI data acquired from a single participant under visual stimulation. PMID:28966572

  19. Development and validation of a general purpose linearization program for rigid aircraft models

    NASA Technical Reports Server (NTRS)

    Duke, E. L.; Antoniewicz, R. F.

    1985-01-01

    A FORTRAN program that provides the user with a powerful and flexible tool for the linearization of aircraft models is discussed. The program LINEAR numerically determines a linear system model using nonlinear equations of motion and a user-supplied, nonlinear aerodynamic model. The system model determined by LINEAR consists of matrices for both the state and observation equations. The program has been designed to allow easy selection and definition of the state, control, and observation variables to be used in a particular model. Also included in the report is a comparison of linear and nonlinear models for a high-performance aircraft.
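
    The linear system model that such a program produces follows the standard Jacobian (small-perturbation) form about a trim point (x0, u0), shown here only as general background:

      \[
      \dot{x} = f(x, u), \qquad y = g(x, u),
      \]
      \[
      \delta\dot{x} = A\,\delta x + B\,\delta u, \qquad \delta y = C\,\delta x + D\,\delta u,
      \qquad
      A = \left.\frac{\partial f}{\partial x}\right|_{x_{0}, u_{0}},\;
      B = \left.\frac{\partial f}{\partial u}\right|_{x_{0}, u_{0}},\;
      C = \left.\frac{\partial g}{\partial x}\right|_{x_{0}, u_{0}},\;
      D = \left.\frac{\partial g}{\partial u}\right|_{x_{0}, u_{0}} .
      \]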

  20. Suboptimal decision making by children with ADHD in the face of risk: Poor risk adjustment and delay aversion rather than general proneness to taking risks.

    PubMed

    Sørensen, Lin; Sonuga-Barke, Edmund; Eichele, Heike; van Wageningen, Heidi; Wollschlaeger, Daniel; Plessen, Kerstin Jessica

    2017-02-01

    Suboptimal decision making in the face of risk (DMR) in children with attention-deficit hyperactivity disorder (ADHD) may be mediated by deficits in a number of different neuropsychological processes. We investigated DMR in children with ADHD using the Cambridge Gambling Task (CGT) to distinguish difficulties in adjusting to changing probabilities of choice outcomes (so-called risk adjustment) from general risk proneness, and to distinguish these 2 processes from delay aversion (the tendency to choose the least delayed option) and impairments in the ability to reflect on choice options. Based on previous research, we predicted that suboptimal performance on this task in children with ADHD would primarily relate to problems with risk adjustment and delay aversion rather than general risk proneness. Drug-naïve children with ADHD (n = 36), 8 to 12 years, and an age-matched group of typically developing children (n = 34) performed the CGT. As predicted, children with ADHD were not more prone to making risky choices (i.e., risk proneness). However, they had difficulty adjusting to changing risk levels and were more delay aversive, with these 2 effects being correlated. Our findings add to the growing body of evidence that children with ADHD do not favor risk taking per se when performing gambling tasks, but rather may lack the cognitive skills or motivational style to appraise changing patterns of risk effectively. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  1. 16 CFR 1.98 - Adjustment of civil monetary penalty amounts.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... OF PRACTICE GENERAL PROCEDURES Civil Penalty Adjustments Under the Federal Civil Penalties Inflation... monetary penalty amounts. This section makes inflation adjustments in the dollar amounts of civil monetary...

  2. Injunctive and Descriptive Peer Group Norms and the Academic Adjustment of Rural Early Adolescents

    ERIC Educational Resources Information Center

    Hamm, Jill V.; Schmid, Lorrie; Farmer, Thomas W.; Locke, Belinda

    2011-01-01

    This study integrates diverse literatures on peer group influence by conceptualizing and examining the relationship of peer group injunctive norms to the academic adjustment of a large and ethnically diverse sample of rural early adolescents' academic adjustment. Results of three-level hierarchical linear modeling indicated that peer groups were…

  3. Solar radiation increases suicide rate after adjusting for other climate factors in South Korea.

    PubMed

    Jee, Hee-Jung; Cho, Chul-Hyun; Lee, Yu Jin; Choi, Nari; An, Hyonggin; Lee, Heon-Jeong

    2017-03-01

    Previous studies have indicated that suicide rates have significant seasonal variations. There is seasonal discordance between temperature and solar radiation due to the monsoon season in South Korea. We investigated the seasonality of suicide and assessed its association with climate variables in South Korea. Suicide rates were obtained from the National Statistical Office of South Korea, and climatic data were obtained from the Korea Meteorological Administration for the period of 1992-2010. We conducted analyses using a generalized additive model (GAM). First, we explored the seasonality of suicide and climate variables such as mean temperature, daily temperature range, solar radiation, and relative humidity. Next, we identified confounding climate variables associated with suicide rate. To estimate the adjusted effect of solar radiation on the suicide rate, we investigated the confounding variables using a multivariable GAM. Suicide rate showed seasonality with a pattern similar to that of solar radiation. We found that the suicide rate increased 1.008 times when solar radiation increased by 1 MJ/m² after adjusting for other confounding climate factors (P < 0.001). Solar radiation has a significant linear relationship with suicide after adjusting for region, other climate variables, and time trends. © 2016 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  4. On fitting generalized linear mixed-effects models for binary responses using different statistical packages.

    PubMed

    Zhang, Hui; Lu, Naiji; Feng, Changyong; Thurston, Sally W; Xia, Yinglin; Zhu, Liang; Tu, Xin M

    2011-09-10

    The generalized linear mixed-effects model (GLMM) is a popular paradigm to extend models for cross-sectional data to a longitudinal setting. When applied to modeling binary responses, different software packages and even different procedures within a package may give quite different results. In this report, we describe the statistical approaches that underlie these different procedures and discuss their strengths and weaknesses when applied to fit correlated binary responses. We then illustrate these considerations by applying these procedures implemented in some popular software packages to simulated and real study data. Our simulation results indicate a lack of reliability for most of the procedures considered, which carries significant implications for applying such popular software packages in practice. Copyright © 2011 John Wiley & Sons, Ltd.

  5. 78 FR 56868 - Adjustment of Indemnification for Inflation

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-09-16

    ... subsection 170d. of the Atomic Energy Act of 1954 (AEA), 42 U.S.C. 2210d., commonly known as the Price... DEPARTMENT OF ENERGY Adjustment of Indemnification for Inflation AGENCY: Office of General Counsel, U.S Department of Energy. ACTION: Notice of adjusted indemnification amount. SUMMARY: The Department...

  6. ASSOCIATIVE ADJUSTMENTS TO REDUCE ERRORS IN DOCUMENT SEARCHING.

    ERIC Educational Resources Information Center

    Bryant, Edward C.; And Others

    Associative adjustments to a document file are considered as a means for improving retrieval. A theoretical investigation of the statistical properties of a generalized mismatch measure was carried out, and improvements in retrieval resulting from performing associative regression adjustments on the data file were examined both from the theoretical and…

  7. 37 CFR 1.705 - Patent term adjustment determination.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 37 Patents, Trademarks, and Copyrights 1 2012-07-01 2012-07-01 false Patent term adjustment determination. 1.705 Section 1.705 Patents, Trademarks, and Copyrights UNITED STATES PATENT AND TRADEMARK OFFICE, DEPARTMENT OF COMMERCE GENERAL RULES OF PRACTICE IN PATENT CASES Adjustment and Extension of Patent Term...

  8. 37 CFR 1.705 - Patent term adjustment determination.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 37 Patents, Trademarks, and Copyrights 1 2013-07-01 2013-07-01 false Patent term adjustment determination. 1.705 Section 1.705 Patents, Trademarks, and Copyrights UNITED STATES PATENT AND TRADEMARK OFFICE, DEPARTMENT OF COMMERCE GENERAL RULES OF PRACTICE IN PATENT CASES Adjustment and Extension of Patent Term...

  9. 37 CFR 1.705 - Patent term adjustment determination.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 37 Patents, Trademarks, and Copyrights 1 2011-07-01 2011-07-01 false Patent term adjustment determination. 1.705 Section 1.705 Patents, Trademarks, and Copyrights UNITED STATES PATENT AND TRADEMARK OFFICE, DEPARTMENT OF COMMERCE GENERAL RULES OF PRACTICE IN PATENT CASES Adjustment and Extension of Patent Term...

  10. 37 CFR 1.705 - Patent term adjustment determination.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 37 Patents, Trademarks, and Copyrights 1 2014-07-01 2014-07-01 false Patent term adjustment determination. 1.705 Section 1.705 Patents, Trademarks, and Copyrights UNITED STATES PATENT AND TRADEMARK OFFICE, DEPARTMENT OF COMMERCE GENERAL RULES OF PRACTICE IN PATENT CASES Adjustment and Extension of Patent Term...

  11. Risk-adjusted models for adverse obstetric outcomes and variation in risk-adjusted outcomes across hospitals.

    PubMed

    Bailit, Jennifer L; Grobman, William A; Rice, Madeline Murguia; Spong, Catherine Y; Wapner, Ronald J; Varner, Michael W; Thorp, John M; Leveno, Kenneth J; Caritis, Steve N; Shubert, Phillip J; Tita, Alan T; Saade, George; Sorokin, Yoram; Rouse, Dwight J; Blackwell, Sean C; Tolosa, Jorge E; Van Dorsten, J Peter

    2013-11-01

    Regulatory bodies and insurers evaluate hospital quality using obstetrical outcomes; however, meaningful comparisons should take preexisting patient characteristics into account. Furthermore, if risk-adjusted outcomes are consistent within a hospital, fewer measures and resources would be needed to assess obstetrical quality. Our objective was to establish risk-adjusted models for 5 obstetric outcomes and assess hospital performance across these outcomes. We studied a cohort of 115,502 women and their neonates born in 25 hospitals in the United States from March 2008 through February 2011. Hospitals were ranked according to their unadjusted and risk-adjusted frequency of venous thromboembolism, postpartum hemorrhage, peripartum infection, severe perineal laceration, and a composite neonatal adverse outcome. Correlations between hospital risk-adjusted outcome frequencies were assessed. Venous thromboembolism occurred too infrequently (0.03%; 95% confidence interval [CI], 0.02-0.04%) for meaningful assessment. Other outcomes occurred frequently enough for assessment (postpartum hemorrhage, 2.29% [95% CI, 2.20-2.38]; peripartum infection, 5.06% [95% CI, 4.93-5.19]; severe perineal laceration at spontaneous vaginal delivery, 2.16% [95% CI, 2.06-2.27]; neonatal composite, 2.73% [95% CI, 2.63-2.84]). Although there was high concordance between unadjusted and adjusted hospital rankings, several individual hospitals had an adjusted rank that was substantially different (by as much as 12 rank tiers) from their unadjusted rank. None of the correlations between hospital-adjusted outcome frequencies was significant. For example, the hospital with the lowest adjusted frequency of peripartum infection had the highest adjusted frequency of severe perineal laceration. Evaluations based on a single risk-adjusted outcome cannot be generalized to overall hospital obstetric performance. Copyright © 2013 Mosby, Inc. All rights reserved.

  12. General methods for determining the linear stability of coronal magnetic fields

    NASA Technical Reports Server (NTRS)

    Craig, I. J. D.; Sneyd, A. D.; Mcclymont, A. N.

    1988-01-01

    A time integration of a linearized plasma equation of motion has been performed to calculate the ideal linear stability of arbitrary three-dimensional magnetic fields. The convergence rates of the explicit and implicit power methods employed are speeded up by using sequences of cyclic shifts. Growth rates are obtained for Gold-Hoyle force-free equilibria, and the corkscrew-kink instability is found to be very weak.

  13. [Adjustment disorder and DSM-5: A review].

    PubMed

    Appart, A; Lange, A-K; Sievert, I; Bihain, F; Tordeurs, D

    2017-02-01

    higher than for the adjustment disorder. According to their relevance and their content, we have split the articles into seven subcategories: 1. General description: most scientific articles generally describe the adjustment disorder as being a transition diagnosis, which is ambiguous, marginal and difficult to detect. The findings claim that only a few studies have been conducted on the adjustment disorder despite a high prevalence in the general population and in the clinical field. 2. The DSM-5 defined the adjustment disorder as a set of different outcomes and syndromes induced by stress after a difficult life event. While the link to other disorders has not been mentioned, the diagnosis of this disorder is no longer excluded or perceived as a secondary diagnosis. The DSM-5 faced criticism from three points of view: the operationalization of the concept of stress, the differential diagnosis and the description. 3. Prevalence: different samples have shown a significantly high prevalence of the adjustment disorder within the population. In addition to the psychiatric pain induced by difficult life events, we need to emphasize that 12.5 to 19.4 percent of the patients faced heavy and severe pathologies and depended on clinical care and treatment. 4. Etiology, comorbidity or associated symptomatology: the literature identified the tendency to commit suicide and stressful life events as being two fundamental characteristics of adjustment disorder. The third one is the personality profile. 5. What motivates researchers to focus on the adjustment disorder: the differentiation approach with respect to major depression. Indeed, its aetiology, symptomatology and treatment differ from those of the adjustment disorder. 6. Very recently, Dutch researchers have developed and validated the Diagnostic Interview Adjustment Disorder (DIAD). 7. In 2014, no data or meta-analysis recommended drug treatment in addition to therapy. In fact, several authors have demonstrated the

  14. Adjustment Issues Affecting Employment for Immigrants from the Former Soviet Union.

    ERIC Educational Resources Information Center

    Yost, Anastasia Dimun; Lucas, Margaretha S.

    2002-01-01

    Describes major issues, including culture shock and loss of status, that affect general adjustment of immigrants and refugees from the former Soviet Union who are resettling in the United States. Issues that affect career and employment adjustment are described and the interrelatedness of general and career issues is explored. (Contains 39…

  15. 77 FR 69442 - Federal Acquisition Regulation; Information Collection; Economic Price Adjustment

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-11-19

    ...; Information Collection; Economic Price Adjustment AGENCIES: Department of Defense (DOD), General Services... economic price adjustment. Public comments are particularly invited on: Whether this collection of..., Economic Price Adjustment by any of the following methods: Regulations.gov : http://www.regulations.gov...

  16. Low dose radiation risks for women surviving the a-bombs in Japan: generalized additive model.

    PubMed

    Dropkin, Greg

    2016-11-24

    Analyses of cancer mortality and incidence in Japanese A-bomb survivors have been used to estimate radiation risks, which are generally higher for women. Relative Risk (RR) is usually modelled as a linear function of dose. Extrapolation from data including high doses predicts small risks at low doses. Generalized Additive Models (GAMs) are flexible methods for modelling non-linear behaviour. GAMs are applied to cancer incidence in female low dose subcohorts, using anonymous public data for the 1958-1998 Life Span Study, to test for linearity, explore interactions, adjust for the skewed dose distribution, examine significance below 100 mGy, and estimate risks at 10 mGy. For all solid cancer incidence, RR estimated from 0-100 mGy and 0-20 mGy subcohorts is significantly raised. The response tapers above 150 mGy. At low doses, RR increases with age-at-exposure and decreases with time-since-exposure, the preferred covariate. Using the empirical cumulative distribution of dose improves model fit, and capacity to detect non-linear responses. RR is elevated over wide ranges of covariate values. Results are stable under simulation, or when removing exceptional data cells, or adjusting neutron RBE. Estimates of Excess RR at 10 mGy using the cumulative dose distribution are 10-45 times higher than extrapolations from a linear model fitted to the full cohort. Below 100 mGy, quasi-Poisson models find significant effects for all solid, squamous, uterus, corpus, and thyroid cancers, and for respiratory cancers when age-at-exposure > 35 yrs. Results for the thyroid are compatible with studies of children treated for tinea capitis, and Chernobyl survivors. Results for the uterus are compatible with studies of UK nuclear workers and the Techa River cohort. Non-linear models find large, significant cancer risks for Japanese women exposed to low dose radiation from the atomic bombings. The risks should be reflected in protection standards.

  17. General methods for determining the linear stability of coronal magnetic fields

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Craig, I.J.D.; Sneyd, A.D.; McClymont, A.N.

    1988-12-01

    A time integration of a linearized plasma equation of motion has been performed to calculate the ideal linear stability of arbitrary three-dimensional magnetic fields. The convergence rates of the explicit and implicit power methods employed are speeded up by using sequences of cyclic shifts. Growth rates are obtained for Gold-Hoyle force-free equilibria, and the corkscrew-kink instability is found to be very weak. 19 references.

  18. Airfoil profiles for minimum pressure drag at supersonic velocities -- general analysis with application to linearized supersonic flow

    NASA Technical Reports Server (NTRS)

    Chapman, Dean R

    1952-01-01

    A theoretical investigation is made of the airfoil profile for minimum pressure drag at zero lift in supersonic flow. In the first part of the report a general method is developed for calculating the profile having the least pressure drag for a given auxiliary condition, such as a given structural requirement or a given thickness ratio. The various structural requirements considered include bending strength, bending stiffness, torsional strength, and torsional stiffness. No assumption is made regarding the trailing-edge thickness; the optimum value is determined in the calculations as a function of the base pressure. To illustrate the general method, the optimum airfoil, defined as the airfoil having minimum pressure drag for a given auxiliary condition, is calculated in a second part of the report using the equations of linearized supersonic flow.

  19. 5 CFR 9701.322 - Setting and adjusting rate ranges.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... MANAGEMENT SYSTEM (DEPARTMENT OF HOMELAND SECURITY-OFFICE OF PERSONNEL MANAGEMENT) DEPARTMENT OF HOMELAND SECURITY HUMAN RESOURCES MANAGEMENT SYSTEM Pay and Pay Administration Setting and Adjusting Rate Ranges... operational reasons, these adjustments will become effective on or about the date of the annual General...

  20. Observer-based distributed adaptive fault-tolerant containment control of multi-agent systems with general linear dynamics.

    PubMed

    Ye, Dan; Chen, Mengmeng; Li, Kui

    2017-11-01

    In this paper, we consider the distributed containment control problem of multi-agent systems with actuator bias faults based on an observer method. The objective is to drive the followers into the convex hull spanned by the dynamic leaders, whose input is unknown but bounded. By constructing an observer to estimate the states and bias faults, an effective distributed adaptive fault-tolerant controller is developed. Different from the traditional method, an auxiliary controller gain is designed to deal with the unknown inputs and bias faults together. Moreover, the coupling gain can be adjusted online through the adaptive mechanism without using global information. Furthermore, the proposed control protocol guarantees that all the signals of the closed-loop systems are bounded and that all the followers converge, with bounded residual errors, to the convex hull formed by the dynamic leaders. Finally, a decoupled linearized longitudinal motion model of the F-18 aircraft is used to demonstrate the effectiveness of the proposed approach. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.

  1. MGMRES: A generalization of GMRES for solving large sparse nonsymmetric linear systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Young, D.M.; Chen, J.Y.

    1994-12-31

    The authors are concerned with the solution of the linear system (1): Au = b, where A is a real square nonsingular matrix which is large, sparse and nonsymmetric. They consider the use of Krylov subspace methods. They first choose an initial approximation u^(0) to the solution ū = A^(-1)b of (1). They also choose an auxiliary nonsingular matrix Z. For n = 1, 2, ... they determine u^(n) such that u^(n) - u^(0) ∈ K_n(r^(0), A), where K_n(r^(0), A) is the (Krylov) subspace spanned by the Krylov vectors r^(0), Ar^(0), ..., A^(n-1)r^(0), and where r^(0) = b - Au^(0). If ZA is SPD they also require that (u^(n) - ū, ZA(u^(n) - ū)) be minimized. If, on the other hand, ZA is not SPD, then they require that the Galerkin condition (Zr^(n), v) = 0 be satisfied for all v ∈ K_n(r^(0), A), where r^(n) = b - Au^(n). In this paper the authors consider a generalization of GMRES. This generalized method, which they refer to as MGMRES, is very similar to GMRES except that they let Z = A^T Y, where Y is a nonsingular matrix which is symmetric but not necessarily SPD.
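    To fix the notation, a minimal generic GMRES-style sketch (not the authors' MGMRES) is given below: it builds K_n(r^(0), A) by Arnoldi orthogonalisation and returns the iterate that minimises the residual norm over the Krylov subspace. The matrix, right-hand side and iteration count are invented for illustration.

        # Minimal restart-free GMRES sketch (illustration only, not the MGMRES of the paper).
        import numpy as np

        def gmres_sketch(A, b, u0, n_iter=20):
            """Return the iterate u^(n) minimising ||b - A u|| over u0 + K_n(r0, A)."""
            r0 = b - A @ u0
            beta = np.linalg.norm(r0)
            m = n_iter
            Q = np.zeros((len(b), m + 1))
            H = np.zeros((m + 1, m))
            Q[:, 0] = r0 / beta
            for n in range(m):
                w = A @ Q[:, n]                       # next Krylov direction
                for j in range(n + 1):                # Arnoldi orthogonalisation
                    H[j, n] = Q[:, j] @ w
                    w = w - H[j, n] * Q[:, j]
                H[n + 1, n] = np.linalg.norm(w)
                Q[:, n + 1] = w / H[n + 1, n]
            # Small least-squares problem min ||beta*e1 - H y|| gives the residual minimiser.
            e1 = np.zeros(m + 1); e1[0] = beta
            y, *_ = np.linalg.lstsq(H, e1, rcond=None)
            return u0 + Q[:, :m] @ y

        rng = np.random.default_rng(1)
        A = rng.standard_normal((50, 50)) + 5 * np.eye(50)   # nonsymmetric, well conditioned
        b = rng.standard_normal(50)
        u = gmres_sketch(A, b, np.zeros(50), n_iter=30)
        print("residual norm:", np.linalg.norm(b - A @ u))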

  2. A General Linear Method for Equating with Small Samples

    ERIC Educational Resources Information Center

    Albano, Anthony D.

    2015-01-01

    Research on equating with small samples has shown that methods with stronger assumptions and fewer statistical estimates can lead to decreased error in the estimated equating function. This article introduces a new approach to linear observed-score equating, one which provides flexible control over how form difficulty is assumed versus estimated…

  3. FAST TRACK PAPER: Non-iterative multiple-attenuation methods: linear inverse solutions to non-linear inverse problems - II. BMG approximation

    NASA Astrophysics Data System (ADS)

    Ikelle, Luc T.; Osen, Are; Amundsen, Lasse; Shen, Yunqing

    2004-12-01

    The classical linear solutions to the problem of multiple attenuation, like predictive deconvolution, τ-p filtering, or F-K filtering, are generally fast, stable, and robust compared to non-linear solutions, which are generally either iterative or in the form of a series with an infinite number of terms. These qualities have made the linear solutions more attractive to seismic data-processing practitioners. However, most linear solutions, including predictive deconvolution or F-K filtering, contain severe assumptions about the model of the subsurface and the class of free-surface multiples they can attenuate. These assumptions limit their usefulness. In a recent paper, we described an exception to this assertion for OBS data. We showed in that paper that a linear and non-iterative solution to the problem of attenuating free-surface multiples which is as accurate as iterative non-linear solutions can be constructed for OBS data. We here present a similar linear and non-iterative solution for attenuating free-surface multiples in towed-streamer data. For most practical purposes, this linear solution is as accurate as the non-linear ones.

  4. Finite-dimensional linear approximations of solutions to general irregular nonlinear operator equations and equations with quadratic operators

    NASA Astrophysics Data System (ADS)

    Kokurin, M. Yu.

    2010-11-01

    A general scheme for improving approximate solutions to irregular nonlinear operator equations in Hilbert spaces is proposed and analyzed in the presence of errors. A modification of this scheme designed for equations with quadratic operators is also examined. The technique of universal linear approximations of irregular equations is combined with the projection onto finite-dimensional subspaces of a special form. It is shown that, for finite-dimensional quadratic problems, the proposed scheme provides information about the global geometric properties of the intersections of quadrics.

  5. 39 CFR 3010.14 - Contents of notice of rate adjustment.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 39 Postal Service 1 2010-07-01 2010-07-01 false Contents of notice of rate adjustment. 3010.14... Adjustments) § 3010.14 Contents of notice of rate adjustment. (a) General. The Postal Service notice of rate... sources; (4) The amount of new unused rate authority, if any, that will be generated by the rate...

  6. Property-process relations in simulated clinical abrasive adjusting of dental ceramics.

    PubMed

    Yin, Ling

    2012-12-01

    This paper reports on property-process correlations in simulated clinical abrasive adjusting of a wide range of dental restorative ceramics using a dental handpiece and diamond burs. The seven materials studied included four mica-containing glass ceramics, a feldspathic porcelain, a glass-infiltrated alumina, and a yttria-stabilized tetragonal zirconia. The abrasive adjusting process was conducted under simulated clinical conditions using diamond burs and a clinical dental handpiece. An attempt was made to establish correlations between process characteristics, in terms of removal rate, chipping damage, and surface finish, and the mechanical properties of hardness, fracture toughness and Young's modulus. The results show that the removal rate is mainly a function of hardness, decreasing nonlinearly as hardness increases. No correlations were noted between the removal rates and more complex combinations of hardness, Young's modulus and fracture toughness. Surface roughness was primarily a linear function of diamond grit size and was relatively independent of the materials. Chipping damage, in terms of the average chipping width, decreased with fracture toughness except for the glass-infiltrated alumina. It also had higher linear correlations with critical strain energy release rates (R²=0.66) and brittleness (R²=0.62) and a lower linear correlation with indices of brittleness (R²=0.32). These results can guide the microstructural design of dental ceramics, optimize performance, and inform the selection of technical parameters in clinical abrasive adjusting by dental practitioners. Copyright © 2012 Elsevier Ltd. All rights reserved.

  7. ALPS: A Linear Program Solver

    NASA Technical Reports Server (NTRS)

    Ferencz, Donald C.; Viterna, Larry A.

    1991-01-01

    ALPS is a computer program which can be used to solve general linear program (optimization) problems. ALPS was designed for those who have minimal linear programming (LP) knowledge and features a menu-driven scheme to guide the user through the process of creating and solving LP formulations. Once created, the problems can be edited and stored in standard DOS ASCII files to provide portability to various word processors or even other linear programming packages. Unlike many math-oriented LP solvers, ALPS contains an LP parser that reads through the LP formulation and reports several types of errors to the user. ALPS provides a large amount of solution data which is often useful in problem solving. In addition to pure linear programs, ALPS can solve integer, mixed integer, and binary problems. Pure linear programs are solved with the revised simplex method. Integer or mixed integer programs are solved initially with the revised simplex method and then completed using the branch-and-bound technique. Binary programs are solved with the method of implicit enumeration. This manual describes how to use ALPS to create, edit, and solve linear programming problems. Instructions for installing ALPS on a PC compatible computer are included in the appendices along with a general introduction to linear programming. A programmer's guide is also included for assistance in modifying and maintaining the program.

  8. Adjusting Expected Mortality Rates Using Information From a Control Population: An Example Using Socioeconomic Status.

    PubMed

    Bower, Hannah; Andersson, Therese M-L; Crowther, Michael J; Dickman, Paul W; Lambe, Mats; Lambert, Paul C

    2018-04-01

    Expected or reference mortality rates are commonly used in the calculation of measures such as relative survival in population-based cancer survival studies and standardized mortality ratios. These expected rates are usually presented according to age, sex, and calendar year. In certain situations, stratification of expected rates by other factors is required to avoid potential bias if interest lies in quantifying measures according to such factors as, for example, socioeconomic status. If data are not available on a population level, information from a control population could be used to adjust expected rates. We have presented two approaches for adjusting expected mortality rates using information from a control population: a Poisson generalized linear model and a flexible parametric survival model. We used a control group from BCBaSe-a register-based, matched breast cancer cohort in Sweden with diagnoses between 1992 and 2012-to illustrate the two methods using socioeconomic status as a risk factor of interest. Results showed that Poisson and flexible parametric survival approaches estimate similar adjusted mortality rates according to socioeconomic status. Additional uncertainty involved in the methods to estimate stratified, expected mortality rates described in this study can be accounted for using a parametric bootstrap, but this might make little difference if using a large control population.
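    The first of the two approaches, the Poisson generalized linear model, can be sketched in a few lines. The sketch below uses invented control-population data (not BCBaSe): deaths are modelled with a log person-time offset and age, calendar year and socioeconomic-status terms, and the fitted rate ratios then stratify the expected rates by socioeconomic status.

        # Hedged sketch of the Poisson-GLM route to SES-stratified expected rates (data invented).
        import numpy as np
        import pandas as pd
        import statsmodels.api as sm
        import statsmodels.formula.api as smf

        rng = np.random.default_rng(2)
        n = 2000
        ctrl = pd.DataFrame({
            "age": rng.integers(40, 90, n),
            "year": rng.integers(1992, 2013, n),
            "ses": rng.choice(["low", "medium", "high"], n),
            "pyrs": rng.uniform(0.5, 5.0, n),          # person-years at risk
        })
        base = np.exp(-9.0 + 0.085 * ctrl["age"])      # invented baseline mortality rate
        mult = ctrl["ses"].map({"low": 1.4, "medium": 1.0, "high": 0.8}).astype(float)
        ctrl["deaths"] = rng.poisson(base * mult * ctrl["pyrs"])

        # The log person-time offset turns the linear predictor into a log mortality rate.
        fit = smf.glm("deaths ~ age + year + C(ses)", data=ctrl,
                      family=sm.families.Poisson(),
                      offset=np.log(ctrl["pyrs"])).fit()
        print(np.exp(fit.params.filter(like="ses")))   # mortality rate ratios by SES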

  9. An Efficient Test for Gene-Environment Interaction in Generalized Linear Mixed Models with Family Data.

    PubMed

    Mazo Lopera, Mauricio A; Coombes, Brandon J; de Andrade, Mariza

    2017-09-27

    Gene-environment (GE) interaction has important implications in the etiology of complex diseases that are caused by a combination of genetic factors and environment variables. Several authors have developed GE analysis in the context of independent subjects or longitudinal data using a gene-set. In this paper, we propose to analyze GE interaction for discrete and continuous phenotypes in family studies by incorporating the relatedness among the relatives for each family into a generalized linear mixed model (GLMM) and by using a gene-based variance component test. In addition, we deal with collinearity problems arising from linkage disequilibrium among single nucleotide polymorphisms (SNPs) by considering their coefficients as random effects under the null model estimation. We show that the best linear unbiased predictor (BLUP) of such random effects in the GLMM is equivalent to the ridge regression estimator. This equivalence provides a simple method to estimate the ridge penalty parameter in comparison to other computationally-demanding estimation approaches based on cross-validation schemes. We evaluated the proposed test using simulation studies and applied it to real data from the Baependi Heart Study consisting of 76 families. Using our approach, we identified an interaction between BMI and the Peroxisome Proliferator Activated Receptor Gamma ( PPARG ) gene associated with diabetes.
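    The stated equivalence between the BLUP of the SNP random effects and a ridge-regression estimator can be checked numerically in the linear (identity-link) special case. The sketch below is illustrative only, with invented genotype data, and sets the ridge penalty to the error-to-genetic variance ratio implied by that equivalence.

        # Numerical check of BLUP == ridge in the identity-link special case (data invented).
        import numpy as np

        rng = np.random.default_rng(3)
        n, p = 200, 30                                        # individuals, SNPs in the gene set
        Z = rng.integers(0, 3, size=(n, p)).astype(float)     # 0/1/2 genotype codes
        sigma_g2, sigma_e2 = 0.05, 1.0
        g = rng.normal(0.0, np.sqrt(sigma_g2), p)
        y = Z @ g + rng.normal(0.0, np.sqrt(sigma_e2), n)

        lam = sigma_e2 / sigma_g2                             # ridge penalty from the variance components
        ridge = np.linalg.solve(Z.T @ Z + lam * np.eye(p), Z.T @ y)

        # BLUP from the mixed-model equations: g_hat = sigma_g2 * Z' V^{-1} y,
        # with V = sigma_g2 * Z Z' + sigma_e2 * I.
        V = sigma_g2 * (Z @ Z.T) + sigma_e2 * np.eye(n)
        blup = sigma_g2 * Z.T @ np.linalg.solve(V, y)

        print(np.allclose(ridge, blup))                       # True: the two estimators coincide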

  10. Ancestral haplotype-based association mapping with generalized linear mixed models accounting for stratification.

    PubMed

    Zhang, Z; Guillaume, F; Sartelet, A; Charlier, C; Georges, M; Farnir, F; Druet, T

    2012-10-01

    In many situations, genome-wide association studies are performed in populations presenting stratification. Mixed models including a kinship matrix accounting for genetic relatedness among individuals have been shown to correct for population and/or family structure. Here we extend this methodology to generalized linear mixed models, which properly model data under various distributions. In addition, we perform association with ancestral haplotypes inferred using a hidden Markov model. The method was shown to properly account for stratification under various simulated scenarios presenting population and/or family structure. Use of ancestral haplotypes resulted in higher power than SNPs on simulated datasets. Application to real data demonstrates the usefulness of the developed model. Full analysis of a dataset with 4600 individuals and 500 000 SNPs was performed in 2 h 36 min and required 2.28 Gb of RAM. The software GLASCOW can be freely downloaded from www.giga.ulg.ac.be/jcms/prod_381171/software. francois.guillaume@jouy.inra.fr Supplementary data are available at Bioinformatics online.

  11. Detection of genomic loci associated with environmental variables using generalized linear mixed models.

    PubMed

    Lobréaux, Stéphane; Melodelima, Christelle

    2015-02-01

    We tested the use of Generalized Linear Mixed Models to detect associations between genetic loci and environmental variables, taking into account the population structure of sampled individuals. We used a simulation approach to generate datasets under demographically and selectively explicit models. These datasets were used to analyze and optimize GLMM capacity to detect the association between markers and selective coefficients as environmental data in terms of false and true positive rates. Different sampling strategies were tested, maximizing the number of populations sampled, sites sampled per population, or individuals sampled per site, and the effect of different selective intensities on the efficiency of the method was determined. Finally, we apply these models to an Arabidopsis thaliana SNP dataset from different accessions, looking for loci associated with spring minimal temperature. We identified 25 regions that exhibit unusual correlations with the climatic variable and contain genes with functions related to temperature stress. Copyright © 2014 Elsevier Inc. All rights reserved.

  12. Short Round Sub-Linear Zero-Knowledge Argument for Linear Algebraic Relations

    NASA Astrophysics Data System (ADS)

    Seo, Jae Hong

    Zero-knowledge arguments allow one party to prove that a statement is true without leaking any information other than the truth of the statement. In many applications, such as verifiable shuffle (a practical application) and circuit satisfiability (a theoretical application), zero-knowledge arguments for mathematical statements related to linear algebra are essential building blocks. Groth proposed (at CRYPTO 2009) an elegant methodology for zero-knowledge arguments for linear algebraic relations over finite fields. He obtained sub-linear-size zero-knowledge arguments for linear algebra using reductions from linear algebraic relations to equations of the form z = x *' y, where x, y ∈ F_p^n are committed vectors, z ∈ F_p is a committed element, and *' : F_p^n × F_p^n → F_p is a bilinear map. These reductions impose additional rounds on the sub-linear-size zero-knowledge arguments. The round complexity of interactive zero-knowledge arguments is an important measure along with communication and computational complexities. We focus on minimizing the round complexity of sub-linear zero-knowledge arguments for linear algebra. To reduce round complexity, we propose a general transformation from a t-round zero-knowledge argument, satisfying mild conditions, to a (t - 2)-round zero-knowledge argument; this transformation is of independent interest.

  13. 8 CFR 280.53 - Civil monetary penalties inflation adjustment.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 8 Aliens and Nationality 1 2010-01-01 2010-01-01 false Civil monetary penalties inflation... REGULATIONS IMPOSITION AND COLLECTION OF FINES § 280.53 Civil monetary penalties inflation adjustment. (a) In general. In accordance with the requirements of the Federal Civil Penalties Inflation Adjustment Act of...

  14. A Generalized Simple Formulation of Convective Adjustment Timescale for Cumulus Convection Parameterizations

    EPA Science Inventory

    Convective adjustment timescale (τ) for cumulus clouds is one of the most influential parameters controlling parameterized convective precipitation in climate and weather simulation models at global and regional scales. Due to the complex nature of deep convection, a pres...

  15. A GLM Post-processor to Adjust Ensemble Forecast Traces

    NASA Astrophysics Data System (ADS)

    Thiemann, M.; Day, G. N.; Schaake, J. C.; Draijer, S.; Wang, L.

    2011-12-01

    The skill of hydrologic ensemble forecasts has improved in recent years through a better understanding of climate variability, better climate forecasts and new data assimilation techniques. These forecasts have been used extensively for probabilistic water supply forecasting, and interest is growing in using them for operational decision making. Hydrologic ensemble forecast members typically have inherent biases in flow timing and volume caused by (1) structural errors in the models used, (2) systematic errors in the data used to calibrate those models, (3) uncertain initial hydrologic conditions, and (4) uncertainties in the forcing datasets. Furthermore, hydrologic models have often not been developed for operational decision points, so ensemble forecasts are not always available where needed. A statistical post-processor can be used to address these issues. The post-processor should (1) correct for systematic biases in flow timing and volume, (2) preserve the skill of the available raw forecasts, (3) preserve spatial and temporal correlation as well as the uncertainty in the forecasted flow data, (4) produce adjusted forecast ensembles that represent the variability of the observed hydrograph to be predicted, and (5) preserve individual forecast traces as equally likely. The post-processor should also allow for the translation of available ensemble forecasts to hydrologically similar locations where forecasts are not available. This paper introduces an ensemble post-processor (EPP) developed in support of New York City water supply operations. The EPP employs a general linear model (GLM) to (1) adjust available ensemble forecast traces and (2) create new ensembles for (nearby) locations where only historic flow observations are available. The EPP is calibrated by developing daily and aggregated statistical relationships from historical flow observations and model simulations. These are then used in operation to obtain the conditional probability density
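    A minimal sketch of the trace-adjustment idea (not the operational EPP; all data invented) is given below: regress historical observations on historical simulations with an ordinary linear model, then apply the fitted relationship to every member of a raw forecast ensemble so that systematic volume bias is reduced while the traces remain equally likely.

        # Hedged sketch of a GLM-style post-processor for ensemble traces (data invented).
        import numpy as np

        rng = np.random.default_rng(4)
        n_hist = 1000
        sim_hist = rng.gamma(2.0, 50.0, n_hist)                        # historical model simulations
        obs_hist = 0.8 * sim_hist + 10.0 + rng.normal(0, 15, n_hist)   # biased relationship to observations

        # Calibrate a simple ordinary least-squares relationship on the historical record.
        X = np.column_stack([np.ones(n_hist), sim_hist])
        coef, *_ = np.linalg.lstsq(X, obs_hist, rcond=None)

        # Apply the same adjustment to every member of a raw forecast ensemble.
        raw_ensemble = rng.gamma(2.0, 50.0, size=(30, 14))             # 30 traces x 14 daily values
        adjusted = coef[0] + coef[1] * raw_ensemble
        print(adjusted.shape)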

  16. 5 CFR 530.307 - OPM review and adjustment of special rate schedules.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    .... 5305(d), special rate schedule adjustments made by OPM have the force and effect of statute. (d)(1) For... 5 Administrative Personnel 1 2010-01-01 2010-01-01 false OPM review and adjustment of special rate... REGULATIONS PAY RATES AND SYSTEMS (GENERAL) Special Rate Schedules for Recruitment and Retention General...

  17. The Mediating Role of Psychological Adjustment between Peer Victimization and Social Adjustment in Adolescence

    PubMed Central

    Romera, Eva M.; Gómez-Ortiz, Olga; Ortega-Ruiz, Rosario

    2016-01-01

    There is extensive scientific evidence of the serious psychological and social effects that peer victimization may have on students, among them internalizing problems such as anxiety or negative self-esteem, difficulties related to low self-efficacy and lower levels of social adjustment. Although a direct relationship has been observed between victimization and these effects, it has not yet been analyzed whether there is a relationship of interdependence between all these measures of psychosocial adjustment. The aim of this study was to examine the relationship between victimization and difficulties related to social adjustment among high school students. To do so, various explanatory models were tested to determine whether psychological adjustment (negative self-esteem, social anxiety and social self-efficacy) could play a mediating role in this relationship, as suggested by other studies on academic adjustment. The sample comprised 2060 Spanish high school students (47.9% girls; mean age = 14.34). The instruments used were the scale of victimization from European Bullying Intervention Project Questionnaire, the negative scale from Rosenberg Self-Esteem Scale, Social Anxiety Scale for Adolescents and a general item about social self-efficacy, all of them self-reports. Structural equation modeling was used to analyze the data. The results confirmed the partial mediating role of negative self-esteem, social anxiety and social self-efficacy between peer victimization and social adjustment and highlight the importance of empowering victimized students to improve their self-esteem and self-efficacy and prevent social anxiety. Such problems lead to the avoidance of social interactions and social reinforcement, thus making it difficult for these students to achieve adequate social adjustment. PMID:27891108

  18. The Mediating Role of Psychological Adjustment between Peer Victimization and Social Adjustment in Adolescence.

    PubMed

    Romera, Eva M; Gómez-Ortiz, Olga; Ortega-Ruiz, Rosario

    2016-01-01

    There is extensive scientific evidence of the serious psychological and social effects that peer victimization may have on students, among them internalizing problems such as anxiety or negative self-esteem, difficulties related to low self-efficacy and lower levels of social adjustment. Although a direct relationship has been observed between victimization and these effects, it has not yet been analyzed whether there is a relationship of interdependence between all these measures of psychosocial adjustment. The aim of this study was to examine the relationship between victimization and difficulties related to social adjustment among high school students. To do so, various explanatory models were tested to determine whether psychological adjustment (negative self-esteem, social anxiety and social self-efficacy) could play a mediating role in this relationship, as suggested by other studies on academic adjustment. The sample comprised 2060 Spanish high school students (47.9% girls; mean age = 14.34). The instruments used were the scale of victimization from European Bullying Intervention Project Questionnaire , the negative scale from Rosenberg Self-Esteem Scale, Social Anxiety Scale for Adolescents and a general item about social self-efficacy, all of them self-reports. Structural equation modeling was used to analyze the data. The results confirmed the partial mediating role of negative self-esteem, social anxiety and social self-efficacy between peer victimization and social adjustment and highlight the importance of empowering victimized students to improve their self-esteem and self-efficacy and prevent social anxiety. Such problems lead to the avoidance of social interactions and social reinforcement, thus making it difficult for these students to achieve adequate social adjustment.

  19. Generalizing a categorization of students' interpretations of linear kinematics graphs

    NASA Astrophysics Data System (ADS)

    Bollen, Laurens; De Cock, Mieke; Zuza, Kristina; Guisasola, Jenaro; van Kampen, Paul

    2016-06-01

    We have investigated whether and how a categorization of responses to questions on linear distance-time graphs, based on a study of Irish students enrolled in an algebra-based course, could be adopted and adapted to responses from students enrolled in calculus-based physics courses at universities in Flanders, Belgium (KU Leuven) and the Basque Country, Spain (University of the Basque Country). We discuss how we adapted the categorization to accommodate a much more diverse student cohort and explain how the prior knowledge of students may account for many differences in the prevalence of approaches and success rates. Although calculus-based physics students make fewer mistakes than algebra-based physics students, they encounter similar difficulties that are often related to incorrectly dividing two coordinates. We verified that a qualitative understanding of kinematics is an important but not sufficient condition for students to determine a correct value for the speed. When comparing responses to questions on linear distance-time graphs with responses to isomorphic questions on linear water level versus time graphs, we observed that the context of a question influences the approach students use. Neither qualitative understanding nor an ability to find the slope of a context-free graph proved to be a reliable predictor for the approach students use when they determine the instantaneous speed.

  20. Associations between frequency of bullying involvement and adjustment in adolescence.

    PubMed

    Gower, Amy L; Borowsky, Iris W

    2013-01-01

    To examine whether infrequent bullying perpetration and victimization (once or twice a month) are associated with elevated levels of internalizing and externalizing problems and to assess evidence for a minimum frequency threshold for bullying involvement. The analytic sample included 128,681 6th, 9th, and 12th graders who completed the 2010 Minnesota Student Survey. Logistic regression and general linear models examined the association between bullying frequency and adjustment correlates including emotional distress, self-harm, physical fighting, and substance use while controlling for demographic characteristics. Gender and grade were included as moderators. Infrequent bullying perpetration and victimization were associated with increased levels of all adjustment problems relative to those who did not engage in bullying in the past 30 days. Grade moderated many of these findings, with perpetration frequency being more strongly related to substance use, self-harm, and suicidal ideation for 6th graders than 12th graders, whereas victimization frequency was associated with self-harm more strongly for 12th graders than 6th graders. Evidence for minimum thresholds for bullying involvement across all outcomes, grades, and bullying roles was inconsistent. Infrequent bullying involvement may pose risks to adolescent adjustment; thus, clinicians and school personnel should address even isolated instances of bullying behavior. Researchers should reexamine the use of cut points in bullying research in order to more fully understand the nature of bullying in adolescence. These data indicate the need for prevention and intervention programs that target diverse internalizing and externalizing problems for bullies and victims, regardless of bullying frequency. Copyright © 2013 Academic Pediatric Association. Published by Elsevier Inc. All rights reserved.

  1. 8 CFR 1280.53 - Civil monetary penalties inflation adjustment.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 8 Aliens and Nationality 1 2010-01-01 2010-01-01 false Civil monetary penalties inflation... penalties inflation adjustment. (a) In general. In accordance with the requirements of the Federal Civil Penalties Inflation Adjustment Act of 1990, Pub. L. 101-410, 104 Stat. 890, as amended by the Debt...

  2. Convex set and linear mixing model

    NASA Technical Reports Server (NTRS)

    Xu, P.; Greeley, R.

    1993-01-01

    A major goal of optical remote sensing is to determine surface compositions of the earth and other planetary objects. For assessment of composition, single pixels in multi-spectral images usually record a mixture of the signals from various materials within the corresponding surface area. In this report, we introduce a closed and bounded convex set as a mathematical model for linear mixing. This model has a clear geometric implication because the closed and bounded convex set is a natural generalization of a triangle in n-space. The endmembers are extreme points of the convex set. Every point in the convex closure of the endmembers is a linear mixture of those endmembers, which is exactly how linear mixing is defined. With this model, some general criteria for selecting endmembers could be described. This model can lead to a better understanding of linear mixing models.
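    The convex-set picture translates directly into a small numerical sketch: a mixed pixel is a convex combination of endmember spectra, and abundances can be recovered with a constrained least-squares solve. The endmember spectra below are invented, and the sum-to-one weighting trick is one common device, not necessarily the report's own procedure.

        # Hedged sketch of linear unmixing under the convex (simplex) mixing model.
        import numpy as np
        from scipy.optimize import nnls

        rng = np.random.default_rng(5)
        n_bands, n_end = 20, 3
        E = rng.uniform(0.1, 0.9, size=(n_bands, n_end))     # invented endmember spectra
        true_abund = np.array([0.5, 0.3, 0.2])               # lies in the convex hull of the endmembers
        pixel = E @ true_abund + rng.normal(0, 0.005, n_bands)

        # Enforce the sum-to-one constraint by appending a heavily weighted row,
        # and non-negativity via NNLS; together these keep the solution in the simplex.
        w = 100.0
        E_aug = np.vstack([E, w * np.ones(n_end)])
        p_aug = np.append(pixel, w * 1.0)
        abund, _ = nnls(E_aug, p_aug)
        print(np.round(abund, 3))                            # close to the true abundances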

  3. The British Sign Language Versions of the Patient Health Questionnaire, the Generalized Anxiety Disorder 7-Item Scale, and the Work and Social Adjustment Scale

    ERIC Educational Resources Information Center

    Rogers, Katherine D.; Young, Alys; Lovell, Karina; Campbell, Malcolm; Scott, Paul R.; Kendal, Sarah

    2013-01-01

    The present study is aimed to translate 3 widely used clinical assessment measures into British Sign Language (BSL), to pilot the BSL versions, and to establish their validity and reliability. These were the Patient Health Questionnaire (PHQ-9), the Generalized Anxiety Disorder 7-item (GAD-7) scale, and the Work and Social Adjustment Scale (WSAS).…

  4. The Unified Levelling Network of Sarawak and its Adjustment

    NASA Astrophysics Data System (ADS)

    Som, Z. A. M.; Yazid, A. M.; Ming, T. K.; Yazid, N. M.

    2016-09-01

    The height reference network of Sarawak has seen major improvement over the past two decades. The most significant improvement was the establishment of an extended precise levelling network which is now able to connect all three major datum points, at Pulau Lakei, Original Miri and Bintulu, by following the major accessible routes across Sarawak. This means the levelling network in Sarawak has now been inter-connected and unified. Having such a unified network makes it possible to perform a common single least squares adjustment for the first time. The least squares adjustment of this unified levelling network was attempted in order to compute the heights of all bench marks established in the entire levelling network. The adjustment was done using the MoreFix levelling adjustment package developed at FGHT UTM. The computational procedure adopted is linear parametric adjustment by minimum constraint. Since Sarawak has three separate datums, three separate adjustments were implemented, utilizing the Pulau Lakei, Original Miri and Bintulu datums respectively. Results of the MoreFix unified adjustment agreed very well with the adjustment repeated using Starnet. Further, the results were compared with the solution given by Jupem and are in good agreement as well. The differences in height analysed were within 10 mm for the case of minimum constraint at the Pulau Lakei datum, with much better agreement in the case of the Original Miri datum.
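    The "linear parametric adjustment by minimum constraint" referred to here can be illustrated with a toy levelling network (invented observations, not the Sarawak network or MoreFix): each observed height difference gives one linear observation equation, one benchmark is held fixed as the datum, and weighted least squares yields the remaining heights.

        # Toy minimum-constraint least-squares adjustment of a levelling network (data invented).
        import numpy as np

        # Benchmarks: A is held fixed (datum); B, C, D have unknown heights.
        H_A = 10.000
        unknowns = {"B": 0, "C": 1, "D": 2}
        # Observed height differences dh = H_to - H_from, with line length (km) for weighting.
        obs = [("A", "B", 2.345, 4.0), ("B", "C", 1.012, 3.0), ("C", "D", -0.507, 5.0),
               ("A", "C", 3.349, 6.0), ("B", "D", 0.498, 7.0)]

        A_mat, l_vec, wts = [], [], []
        for frm, to, dh, dist_km in obs:
            row = np.zeros(len(unknowns))
            rhs = dh
            if frm in unknowns:
                row[unknowns[frm]] -= 1.0
            else:
                rhs += H_A                      # known datum height moves to the right-hand side
            row[unknowns[to]] += 1.0            # every 'to' point is unknown in this toy network
            A_mat.append(row); l_vec.append(rhs); wts.append(1.0 / dist_km)

        A_mat, l_vec, W = np.array(A_mat), np.array(l_vec), np.diag(wts)
        x_hat = np.linalg.solve(A_mat.T @ W @ A_mat, A_mat.T @ W @ l_vec)
        print({bm: round(float(h), 4) for bm, h in zip(unknowns, x_hat)})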

  5. Serum osteoprotegerin and renal function in the general population: the Tromsø Study.

    PubMed

    Vik, Anders; Brodin, Ellen E; Mathiesen, Ellisiv B; Brox, Jan; Jørgensen, Lone; Njølstad, Inger; Brækkan, Sigrid K; Hansen, John-Bjarne

    2017-02-01

    Serum osteoprotegerin (OPG) is elevated in patients with chronic kidney disease (CKD) and increases with decreasing renal function. However, there are limited data regarding the association between OPG and renal function in the general population. The aim of the present study was to explore the relation between serum OPG and renal function in subjects recruited from the general population. We conducted a cross-sectional study with 6689 participants recruited from the general population in Tromsø, Norway. Estimated glomerular filtration rate (eGFR) was calculated using the Chronic Kidney Disease Epidemiology Collaboration equations. OPG was modelled both as a continuous and categorical variable. General linear models and linear regression with adjustment for possible confounders were used to study the association between OPG and eGFR. Analyses were stratified by the median age, as serum OPG and age displayed a significant interaction on eGFR. In participants ≤62.2 years with normal renal function (eGFR ≥90 mL/min/1.73 m²) eGFR increased by 0.35 mL/min/1.73 m² (95% CI 0.13-0.56) per 1 standard deviation (SD) increase in serum OPG after multiple adjustment. In participants older than the median age with impaired renal function (eGFR <90 mL/min/1.73 m²), eGFR decreased by 1.54 (95% CI -2.06 to -1.01) per 1 SD increase in serum OPG. OPG was associated with an increased eGFR in younger subjects with normal renal function and with a decreased eGFR in older subjects with reduced renal function. Our findings imply that the association between OPG and eGFR varies with age and renal function.

  6. A generalized linear factor model approach to the hierarchical framework for responses and response times.

    PubMed

    Molenaar, Dylan; Tuerlinckx, Francis; van der Maas, Han L J

    2015-05-01

    We show how the hierarchical model for responses and response times as developed by van der Linden (2007), Fox, Klein Entink, and van der Linden (2007), Klein Entink, Fox, and van der Linden (2009), and Glas and van der Linden (2010) can be simplified to a generalized linear factor model with only the mild restriction that there is no hierarchical model at the item side. This result is valuable as it enables all well-developed modelling tools and extensions that come with these methods. We show that the restriction we impose on the hierarchical model does not influence parameter recovery under realistic circumstances. In addition, we present two illustrative real data analyses to demonstrate the practical benefits of our approach. © 2014 The British Psychological Society.

  7. Use of instrumental variables in the analysis of generalized linear models in the presence of unmeasured confounding with applications to epidemiological research.

    PubMed

    Johnston, K M; Gustafson, P; Levy, A R; Grootendorst, P

    2008-04-30

    A major, often unstated, concern of researchers carrying out epidemiological studies of medical therapy is the potential impact on validity if estimates of treatment are biased due to unmeasured confounders. One technique for obtaining consistent estimates of treatment effects in the presence of unmeasured confounders is instrumental variables analysis (IVA). This technique has been well developed in the econometrics literature and is being increasingly used in epidemiological studies. However, the approach to IVA that is most commonly used in such studies is based on linear models, while many epidemiological applications make use of non-linear models, specifically generalized linear models (GLMs) such as logistic or Poisson regression. Here we present a simple method for applying IVA within the class of GLMs using the generalized method of moments approach. We explore some of the theoretical properties of the method and illustrate its use within both a simulation example and an epidemiological study where unmeasured confounding is suspected to be present. We estimate the effects of beta-blocker therapy on one-year all-cause mortality after an incident hospitalization for heart failure, in the absence of data describing disease severity, which is believed to be a confounder. 2008 John Wiley & Sons, Ltd
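    The generalized-method-of-moments idea behind this approach can be sketched for a Poisson outcome with one binary treatment, one unmeasured confounder and one instrument. Everything below is invented for illustration (not the beta-blocker analysis), and the multiplicative residual used is one common choice of moment condition, not necessarily the authors' exact specification.

        # Hedged GMM-style IV sketch for a Poisson GLM with unmeasured confounding (data invented).
        import numpy as np
        from scipy.optimize import root

        rng = np.random.default_rng(6)
        n = 5000
        u = rng.normal(size=n)                          # unmeasured confounder
        z = rng.binomial(1, 0.5, n)                     # instrument: affects treatment, not outcome
        treat = rng.binomial(1, 1.0 / (1.0 + np.exp(-(0.8 * z + u))))
        y = rng.poisson(np.exp(0.2 + 0.5 * treat + 0.7 * u))

        X = np.column_stack([np.ones(n), treat])        # regressors: constant + treatment
        Z = np.column_stack([np.ones(n), z])            # instruments: constant + z

        def iv_moments(beta):
            # Multiplicative moment condition: E[Z' (y * exp(-X beta) - 1)] = 0.
            return Z.T @ (y * np.exp(-(X @ beta)) - 1.0) / n

        def ml_score(beta):
            # Ordinary Poisson score equations, ignoring the confounder.
            return X.T @ (y - np.exp(X @ beta)) / n

        beta_iv = root(iv_moments, x0=np.zeros(2)).x
        beta_naive = root(ml_score, x0=np.zeros(2)).x
        print("treatment log-rate-ratio  IV:", round(beta_iv[1], 3),
              " naive:", round(beta_naive[1], 3))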

  8. Tooth loss and general quality of life in dentate adults from Southern Brazil.

    PubMed

    Haag, Dandara Gabriela; Peres, Karen Glazer; Brennan, David Simon

    2017-10-01

    This study aimed to estimate the association between the number of teeth and general quality of life in adults. A population-based study was conducted with 1720 individuals aged 20-59 years residing in Florianópolis, Brazil, in 2009. Data were collected at participants' households using a structured questionnaire. In 2012, a second wave was undertaken with 1222 individuals. Oral examinations were performed for number of teeth, prevalence of functional dentition (≥21 natural teeth), and shortened dental arch (SDA), which were considered the main exposures. General quality of life was the outcome and was assessed with the WHO Abbreviated Instrument for Quality of Life (WHOQOL-BREF). Covariates included sociodemographic factors, health-related behaviors, and chronic diseases. Multivariable linear regression models were performed to test the associations between the main exposures and the outcome adjusted for covariates. In 2012, 1222 individuals participated in the study (response rate = 71.1%). Having more teeth was associated with greater scores on physical domain of the WHOQOL-BREF [β = 0.24 (95% CI 0.01; 0.46)] after adjustment for covariates. Absence of functional dentition was associated with lower scores on the physical domain [β = -3.94 (95% CI -7.40; -0.48)] in the adjusted analysis. There was no association between both SDA definitions and the domains of general quality of life. Oral health as measured by tooth loss was associated with negative impacts on general quality of life assessed by the WHOQOL-BREF. There was a lack of evidence that SDA is a condition that negatively affects general quality of life.

  9. 50 CFR 648.108 - Framework adjustments to management measures.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... Management Measures for the Summer Flounder Fisheries § 648.108 Framework adjustments to management measures... Council, at any time, may initiate action to add or adjust management measures within the Summer Flounder... revised text is set forth as follows: § 648.108 Summer flounder gear restrictions. (a) General. (1) Otter...

  10. 45 CFR 153.350 - Risk adjustment data validation standards.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 45 Public Welfare 1 2013-10-01 2013-10-01 false Risk adjustment data validation standards. 153.350... validation standards. (a) General requirement. The State, or HHS on behalf of the State, must ensure proper implementation of any risk adjustment software and ensure proper validation of a statistically valid sample of...

  11. 45 CFR 153.350 - Risk adjustment data validation standards.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 45 Public Welfare 1 2014-10-01 2014-10-01 false Risk adjustment data validation standards. 153.350... validation standards. (a) General requirement. The State, or HHS on behalf of the State, must ensure proper implementation of any risk adjustment software and ensure proper validation of a statistically valid sample of...

  12. 5 CFR 531.207 - Applying annual pay adjustments.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 5 Administrative Personnel 1 2010-01-01 2010-01-01 false Applying annual pay adjustments. 531.207 Section 531.207 Administrative Personnel OFFICE OF PERSONNEL MANAGEMENT CIVIL SERVICE REGULATIONS PAY UNDER THE GENERAL SCHEDULE Determining Rate of Basic Pay General Provisions § 531.207 Applying annual...

  13. A mathematical theory of learning control for linear discrete multivariable systems

    NASA Technical Reports Server (NTRS)

    Phan, Minh; Longman, Richard W.

    1988-01-01

    When tracking control systems are used in repetitive operations such as robots in various manufacturing processes, the controller will make the same errors repeatedly. Here consideration is given to learning controllers that look at the tracking errors in each repetition of the process and adjust the control to decrease these errors in the next repetition. A general formalism is developed for learning control of discrete-time (time-varying or time-invariant) linear multivariable systems. Methods of specifying a desired trajectory (such that the trajectory can actually be performed by the discrete system) are discussed, and learning controllers are developed. Stability criteria are obtained which are relatively easy to use to insure convergence of the learning process, and proper gain settings are discussed in light of measurement noise and system uncertainties.
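    The repetition-to-repetition update described here can be made concrete with a minimal discrete-time sketch (invented first-order plant and gains, not the paper's general time-varying multivariable formalism): after each run, the control sequence is corrected in proportion to the previous run's tracking error.

        # Minimal P-type iterative learning control sketch (invented first-order plant).
        import numpy as np

        a, b = 0.9, 0.5                                 # plant: y[t+1] = a*y[t] + b*u[t]
        T = 50
        y_des = np.sin(np.linspace(0, np.pi, T + 1))    # desired trajectory, y_des[0] = 0
        gamma = 1.2                                     # learning gain; converges since |1 - gamma*b| < 1

        u = np.zeros(T)
        for rep in range(30):                           # repeated executions of the same task
            y = np.zeros(T + 1)
            for t in range(T):
                y[t + 1] = a * y[t] + b * u[t]
            e = y_des - y                               # tracking error of this repetition
            u = u + gamma * e[1:]                       # correct the next repetition's control
            if rep % 10 == 0:
                print(f"repetition {rep:2d}: max |error| = {np.abs(e[1:]).max():.2e}")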

  14. 13 CFR 307.6 - Economic Adjustment Assistance post-approval requirements.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 13 Business Credit and Assistance 1 2010-01-01 2010-01-01 false Economic Adjustment Assistance post-approval requirements. 307.6 Section 307.6 Business Credit and Assistance ECONOMIC DEVELOPMENT ADMINISTRATION, DEPARTMENT OF COMMERCE ECONOMIC ADJUSTMENT ASSISTANCE INVESTMENTS General § 307.6 Economic...

  15. 13 CFR 307.6 - Economic Adjustment Assistance post-approval requirements.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 13 Business Credit and Assistance 1 2011-01-01 2011-01-01 false Economic Adjustment Assistance post-approval requirements. 307.6 Section 307.6 Business Credit and Assistance ECONOMIC DEVELOPMENT ADMINISTRATION, DEPARTMENT OF COMMERCE ECONOMIC ADJUSTMENT ASSISTANCE INVESTMENTS General § 307.6 Economic...

  16. 13 CFR 307.6 - Economic Adjustment Assistance post-approval requirements.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 13 Business Credit and Assistance 1 2012-01-01 2012-01-01 false Economic Adjustment Assistance post-approval requirements. 307.6 Section 307.6 Business Credit and Assistance ECONOMIC DEVELOPMENT ADMINISTRATION, DEPARTMENT OF COMMERCE ECONOMIC ADJUSTMENT ASSISTANCE INVESTMENTS General § 307.6 Economic...

  17. 13 CFR 307.6 - Economic Adjustment Assistance post-approval requirements.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 13 Business Credit and Assistance 1 2014-01-01 2014-01-01 false Economic Adjustment Assistance post-approval requirements. 307.6 Section 307.6 Business Credit and Assistance ECONOMIC DEVELOPMENT ADMINISTRATION, DEPARTMENT OF COMMERCE ECONOMIC ADJUSTMENT ASSISTANCE INVESTMENTS General § 307.6 Economic...

  18. 13 CFR 307.6 - Economic Adjustment Assistance post-approval requirements.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 13 Business Credit and Assistance 1 2013-01-01 2013-01-01 false Economic Adjustment Assistance post-approval requirements. 307.6 Section 307.6 Business Credit and Assistance ECONOMIC DEVELOPMENT ADMINISTRATION, DEPARTMENT OF COMMERCE ECONOMIC ADJUSTMENT ASSISTANCE INVESTMENTS General § 307.6 Economic...

  19. Developing a Measure of General Academic Ability: An Application of Maximal Reliability and Optimal Linear Combination to High School Students' Scores

    ERIC Educational Resources Information Center

    Dimitrov, Dimiter M.; Raykov, Tenko; AL-Qataee, Abdullah Ali

    2015-01-01

    This article is concerned with developing a measure of general academic ability (GAA) for high school graduates who apply to colleges, as well as with the identification of optimal weights of the GAA indicators in a linear combination that yields a composite score with maximal reliability and maximal predictive validity, employing the framework of…

  20. Progress in linear optics, non-linear optics and surface alignment of liquid crystals

    NASA Astrophysics Data System (ADS)

    Ong, H. L.; Meyer, R. B.; Hurd, A. J.; Karn, A. J.; Arakelian, S. M.; Shen, Y. R.; Sanda, P. N.; Dove, D. B.; Jansen, S. A.; Hoffmann, R.

    We first discuss the progress in linear optics, in particular, the formulation and application of geometrical-optics approximation and its generalization. We then discuss the progress in non-linear optics, in particular, the enhancement of a first-order Freedericksz transition and intrinsic optical bistability in homeotropic and parallel oriented nematic liquid crystal cells. Finally, we discuss the liquid crystal alignment and surface effects on field-induced Freedericksz transition.

  1. Linear positioning laser calibration setup of CNC machine tools

    NASA Astrophysics Data System (ADS)

    Sui, Xiulin; Yang, Congjing

    2002-10-01

    The linear positioning laser calibration setup of CNC machine tools is capable of executing machine tool laser calibration and backlash compensation. Using this setup, hole locations on CNC machine tools will be correct and machine tool geometry will be evaluated and adjusted. Machine tool laser calibration and backlash compensation is a simple and straightforward process. The first step is to find the stroke limits of the axis; the laser head is then brought into correct alignment. The second step is to move the machine axis to the other extreme; the laser head is now aligned using rotation and elevation adjustments. Finally, the machine is moved to the start position and the final alignment is verified. The stroke of the machine and the machine compensation interval dictate the amount of data required for each axis. These factors determine the amount of time required for a thorough compensation of the linear positioning accuracy. The Laser Calibrator System monitors the material temperature and the air density; this takes into consideration machine thermal growth and laser beam frequency. This linear positioning laser calibration setup can be used on CNC machine tools, CNC lathes, horizontal centers and vertical machining centers.

  2. Embodied linearity of speed control in Drosophila melanogaster.

    PubMed

    Medici, V; Fry, S N

    2012-12-07

    Fruitflies regulate flight speed by adjusting their body angle. To understand how low-level posture control serves an overall linear visual speed control strategy, we visually induced free-flight acceleration responses in a wind tunnel and measured the body kinematics using high-speed videography. Subsequently, we reverse engineered the transfer function mapping body pitch angle onto flight speed. A linear model is able to reproduce the behavioural data with good accuracy. Our results show that linearity in speed control is realized already at the level of body posture-mediated speed control and is therefore embodied at the level of the complex aerodynamic mechanisms of body and wings. Together with previous results, this study reveals the existence of a linear hierarchical control strategy, which can provide relevant control principles for biomimetic implementations, such as autonomous flying micro air vehicles.

  3. Embodied linearity of speed control in Drosophila melanogaster

    PubMed Central

    Medici, V.; Fry, S. N.

    2012-01-01

    Fruitflies regulate flight speed by adjusting their body angle. To understand how low-level posture control serves an overall linear visual speed control strategy, we visually induced free-flight acceleration responses in a wind tunnel and measured the body kinematics using high-speed videography. Subsequently, we reverse engineered the transfer function mapping body pitch angle onto flight speed. A linear model is able to reproduce the behavioural data with good accuracy. Our results show that linearity in speed control is realized already at the level of body posture-mediated speed control and is therefore embodied at the level of the complex aerodynamic mechanisms of body and wings. Together with previous results, this study reveals the existence of a linear hierarchical control strategy, which can provide relevant control principles for biomimetic implementations, such as autonomous flying micro air vehicles. PMID:22933185

  4. Solving a class of generalized fractional programming problems using the feasibility of linear programs.

    PubMed

    Shen, Peiping; Zhang, Tongli; Wang, Chunfeng

    2017-01-01

    This article presents a new approximation algorithm for globally solving a class of generalized fractional programming problems (P) whose objective functions are defined as an appropriate composition of ratios of affine functions. To solve this problem, the algorithm solves an equivalent optimization problem (Q) via an exploration of a suitably defined nonuniform grid. The main work of the algorithm involves checking the feasibility of linear programs associated with the interesting grid points. It is proved that the proposed algorithm is a fully polynomial time approximation scheme as the ratio terms are fixed in the objective function to problem (P), based on the computational complexity result. In contrast to existing results in literature, the algorithm does not require the assumptions on quasi-concavity or low-rank of the objective function to problem (P). Numerical results are given to illustrate the feasibility and effectiveness of the proposed algorithm.
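    For the single-ratio special case, the core device of checking the feasibility of a linear program at candidate objective values can be sketched as follows (bisection over the target value rather than the paper's nonuniform grid; problem data invented): maximising (c'x + c0)/(d'x + d0) over a polytope with positive denominator reduces to asking, for each λ, whether some feasible x satisfies c'x + c0 ≥ λ(d'x + d0).

        # Hedged sketch: bisection + LP feasibility checks for a single linear-fractional objective.
        import numpy as np
        from scipy.optimize import linprog

        c, c0 = np.array([3.0, 1.0]), 1.0       # numerator   c'x + c0
        d, d0 = np.array([1.0, 2.0]), 4.0       # denominator d'x + d0 (> 0 on the feasible set)
        A_ub = np.array([[1.0, 1.0], [2.0, 1.0]])
        b_ub = np.array([4.0, 6.0])
        bounds = [(0, None), (0, None)]

        def ratio_at_least(lam):
            """Is there a feasible x with c'x + c0 >= lam * (d'x + d0)?"""
            res = linprog(c=np.zeros(2),
                          A_ub=np.vstack([A_ub, lam * d - c]),
                          b_ub=np.append(b_ub, c0 - lam * d0),
                          bounds=bounds, method="highs")
            return res.status == 0              # 0 = feasible point found

        lo, hi = 0.0, 10.0                      # bracket for the optimal ratio
        for _ in range(50):
            mid = 0.5 * (lo + hi)
            lo, hi = (mid, hi) if ratio_at_least(mid) else (lo, mid)
        print("optimal ratio ~", round(lo, 4))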

  5. 13 CFR 307.3 - Use of Economic Adjustment Assistance Investments.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 13 Business Credit and Assistance 1 2010-01-01 2010-01-01 false Use of Economic Adjustment Assistance Investments. 307.3 Section 307.3 Business Credit and Assistance ECONOMIC DEVELOPMENT ADMINISTRATION, DEPARTMENT OF COMMERCE ECONOMIC ADJUSTMENT ASSISTANCE INVESTMENTS General § 307.3 Use of Economic...

  6. 13 CFR 307.2 - Criteria for Economic Adjustment Assistance Investments.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 13 Business Credit and Assistance 1 2010-01-01 2010-01-01 false Criteria for Economic Adjustment Assistance Investments. 307.2 Section 307.2 Business Credit and Assistance ECONOMIC DEVELOPMENT ADMINISTRATION, DEPARTMENT OF COMMERCE ECONOMIC ADJUSTMENT ASSISTANCE INVESTMENTS General § 307.2 Criteria for...

  7. 13 CFR 307.2 - Criteria for Economic Adjustment Assistance Investments.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 13 Business Credit and Assistance 1 2014-01-01 2014-01-01 false Criteria for Economic Adjustment Assistance Investments. 307.2 Section 307.2 Business Credit and Assistance ECONOMIC DEVELOPMENT ADMINISTRATION, DEPARTMENT OF COMMERCE ECONOMIC ADJUSTMENT ASSISTANCE INVESTMENTS General § 307.2 Criteria for...

  8. 13 CFR 307.2 - Criteria for Economic Adjustment Assistance Investments.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 13 Business Credit and Assistance 1 2012-01-01 2012-01-01 false Criteria for Economic Adjustment Assistance Investments. 307.2 Section 307.2 Business Credit and Assistance ECONOMIC DEVELOPMENT ADMINISTRATION, DEPARTMENT OF COMMERCE ECONOMIC ADJUSTMENT ASSISTANCE INVESTMENTS General § 307.2 Criteria for...

  9. 13 CFR 307.2 - Criteria for Economic Adjustment Assistance Investments.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 13 Business Credit and Assistance 1 2013-01-01 2013-01-01 false Criteria for Economic Adjustment Assistance Investments. 307.2 Section 307.2 Business Credit and Assistance ECONOMIC DEVELOPMENT ADMINISTRATION, DEPARTMENT OF COMMERCE ECONOMIC ADJUSTMENT ASSISTANCE INVESTMENTS General § 307.2 Criteria for...

  10. 13 CFR 307.3 - Use of Economic Adjustment Assistance Investments.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 13 Business Credit and Assistance 1 2012-01-01 2012-01-01 false Use of Economic Adjustment Assistance Investments. 307.3 Section 307.3 Business Credit and Assistance ECONOMIC DEVELOPMENT ADMINISTRATION, DEPARTMENT OF COMMERCE ECONOMIC ADJUSTMENT ASSISTANCE INVESTMENTS General § 307.3 Use of Economic...

  11. 13 CFR 307.3 - Use of Economic Adjustment Assistance Investments.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 13 Business Credit and Assistance 1 2011-01-01 2011-01-01 false Use of Economic Adjustment Assistance Investments. 307.3 Section 307.3 Business Credit and Assistance ECONOMIC DEVELOPMENT ADMINISTRATION, DEPARTMENT OF COMMERCE ECONOMIC ADJUSTMENT ASSISTANCE INVESTMENTS General § 307.3 Use of Economic...

  12. 13 CFR 307.3 - Use of Economic Adjustment Assistance Investments.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 13 Business Credit and Assistance 1 2013-01-01 2013-01-01 false Use of Economic Adjustment Assistance Investments. 307.3 Section 307.3 Business Credit and Assistance ECONOMIC DEVELOPMENT ADMINISTRATION, DEPARTMENT OF COMMERCE ECONOMIC ADJUSTMENT ASSISTANCE INVESTMENTS General § 307.3 Use of Economic...

  13. 13 CFR 307.3 - Use of Economic Adjustment Assistance Investments.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 13 Business Credit and Assistance 1 2014-01-01 2014-01-01 false Use of Economic Adjustment Assistance Investments. 307.3 Section 307.3 Business Credit and Assistance ECONOMIC DEVELOPMENT ADMINISTRATION, DEPARTMENT OF COMMERCE ECONOMIC ADJUSTMENT ASSISTANCE INVESTMENTS General § 307.3 Use of Economic...

  14. 13 CFR 307.2 - Criteria for Economic Adjustment Assistance Investments.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 13 Business Credit and Assistance 1 2011-01-01 2011-01-01 false Criteria for Economic Adjustment Assistance Investments. 307.2 Section 307.2 Business Credit and Assistance ECONOMIC DEVELOPMENT ADMINISTRATION, DEPARTMENT OF COMMERCE ECONOMIC ADJUSTMENT ASSISTANCE INVESTMENTS General § 307.2 Criteria for...

  15. Summary goodness-of-fit statistics for binary generalized linear models with noncanonical link functions.

    PubMed

    Canary, Jana D; Blizzard, Leigh; Barry, Ronald P; Hosmer, David W; Quinn, Stephen J

    2016-05-01

    Generalized linear models (GLM) with a canonical logit link function are the primary modeling technique used to relate a binary outcome to predictor variables. However, noncanonical links can offer more flexibility, producing convenient analytical quantities (e.g., probit GLMs in toxicology) and desired measures of effect (e.g., relative risk from log GLMs). Many summary goodness-of-fit (GOF) statistics exist for logistic GLM. Their properties make the development of GOF statistics relatively straightforward, but it can be more difficult under noncanonical links. Although GOF tests for logistic GLM with continuous covariates (GLMCC) have been applied to GLMCCs with log links, we know of no GOF tests in the literature specifically developed for GLMCCs that can be applied regardless of link function chosen. We generalize the Tsiatis GOF statistic originally developed for logistic GLMCCs (TG) so that it can be applied under any link function. Further, we show that the algebraically related Hosmer-Lemeshow (HL) and Pigeon-Heyse (J²) statistics can be applied directly. In a simulation study, TG, HL, and J² were used to evaluate the fit of probit, log-log, complementary log-log, and log models, all calculated with a common grouping method. The TG statistic consistently maintained Type I error rates, while those of HL and J² were often lower than expected if terms with little influence were included. Generally, the statistics had similar power to detect an incorrect model. An exception occurred when a log GLMCC was incorrectly fit to data generated from a logistic GLMCC. In this case, TG had more power than HL or J². © 2015 John Wiley & Sons Ltd/London School of Economics.
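    The grouping idea behind the HL statistic, one of the three statistics compared, can be sketched in a few lines. The code below is a generic decile-of-risk computation for a logistic fit with invented data; it is not the authors' common grouping method, and it does not implement the TG or J² statistics.

        # Generic Hosmer-Lemeshow-type grouped GOF check for a binary GLM (data invented).
        import numpy as np
        from scipy import stats
        import statsmodels.api as sm

        rng = np.random.default_rng(7)
        n = 2000
        x = rng.normal(size=(n, 2))
        p_true = 1.0 / (1.0 + np.exp(-(0.3 + 0.8 * x[:, 0] - 0.5 * x[:, 1])))
        y = rng.binomial(1, p_true)

        fit = sm.GLM(y, sm.add_constant(x), family=sm.families.Binomial()).fit()
        p_hat = fit.fittedvalues

        g = 10                                           # deciles of estimated risk
        cuts = np.quantile(p_hat, np.linspace(0, 1, g + 1)[1:-1])
        groups = np.digitize(p_hat, cuts)
        o_k = np.array([y[groups == k].sum() for k in range(g)])          # observed events
        e_k = np.array([p_hat[groups == k].sum() for k in range(g)])      # expected events
        n_k = np.array([(groups == k).sum() for k in range(g)])
        hl = np.sum((o_k - e_k) ** 2 / (e_k * (1.0 - e_k / n_k)))
        print(f"HL = {hl:.2f}, p = {stats.chi2.sf(hl, df=g - 2):.3f}")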

  16. 49 CFR 393.47 - Brake actuators, slack adjusters, linings/pads and drums/rotors.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 49 Transportation 5 2010-10-01 2010-10-01 false Brake actuators, slack adjusters, linings/pads and..., slack adjusters, linings/pads and drums/rotors. (a) General requirements. Brake components must be... same size. (c) Slack adjusters. The effective length of the slack adjuster on each end of an axle must...

  17. Generalized Multilevel Structural Equation Modeling

    ERIC Educational Resources Information Center

    Rabe-Hesketh, Sophia; Skrondal, Anders; Pickles, Andrew

    2004-01-01

    A unifying framework for generalized multilevel structural equation modeling is introduced. The models in the framework, called generalized linear latent and mixed models (GLLAMM), combine features of generalized linear mixed models (GLMM) and structural equation models (SEM) and consist of a response model and a structural model for the latent…

  18. Bounded Linear Stability Margin Analysis of Nonlinear Hybrid Adaptive Control

    NASA Technical Reports Server (NTRS)

    Nguyen, Nhan T.; Boskovic, Jovan D.

    2008-01-01

    This paper presents a bounded linear stability analysis for a hybrid adaptive control that blends both direct and indirect adaptive control. Stability and convergence of nonlinear adaptive control are analyzed using an approximate linear equivalent system. A stability margin analysis shows that a large adaptive gain can lead to a reduced phase margin. This method can enable metrics-driven adaptive control whereby the adaptive gain is adjusted to meet stability margin requirements.

  19. Remote control for anode-cathode adjustment

    DOEpatents

    Roose, Lars D.

    1991-01-01

    An apparatus for remotely adjusting the anode-cathode gap in a pulse power machine has an electric motor located within a hollow cathode inside the vacuum chamber of the pulse power machine. Input information for controlling the motor for adjusting the anode-cathode gap is fed into the apparatus using optical waveguides. The motor, controlled by the input information, drives a worm gear that moves a cathode tip. When the motor drives in one rotational direction, the cathode is moved toward the anode and the size of the anode-cathode gap is diminished. When the motor drives in the other direction, the cathode is moved away from the anode and the size of the anode-cathode gap is increased. The motor is powered by batteries housed in the hollow cathode. The batteries may be rechargeable, and they may be recharged by a photovoltaic cell in combination with an optical waveguide that receives recharging energy from outside the hollow cathode. Alternatively, the anode-cathode gap can be remotely adjusted by a manually-turned handle connected to mechanical linkage which is connected to a jack assembly. The jack assembly converts rotational motion of the handle and mechanical linkage to linear motion of the cathode moving toward or away from the anode.

  20. Efficient techniques for forced response involving linear modal components interconnected by discrete nonlinear connection elements

    NASA Astrophysics Data System (ADS)

    Avitabile, Peter; O'Callahan, John

    2009-01-01

    Generally, response analysis of systems containing discrete nonlinear connection elements, such as typical mounting connections, requires the physical finite element system matrices to be used in a direct integration algorithm to compute the nonlinear response solution. Due to the large size of these physical matrices, forced nonlinear response analysis requires significant computational resources. Usually, the individual components of the system are analyzed and tested separately, and their individual behavior may be essentially linear compared with that of the total assembled system. However, joining these linear subsystems with highly nonlinear connection elements causes the entire system to become nonlinear. It would be advantageous if these linear modal subsystems could be utilized in the forced nonlinear response analysis, since much effort has usually been expended in fine-tuning and adjusting the analytical models to reflect the tested subsystem configuration. Several more efficient techniques have been developed to address this class of problem. Three of these techniques, the equivalent reduced model technique (ERMT), the modal modification response technique (MMRT), and the component element method (CEM), are presented in this paper and compared to traditional methods.

  1. Large deformation image classification using generalized locality-constrained linear coding.

    PubMed

    Zhang, Pei; Wee, Chong-Yaw; Niethammer, Marc; Shen, Dinggang; Yap, Pew-Thian

    2013-01-01

    Magnetic resonance (MR) imaging has been demonstrated to be very useful for clinical diagnosis of Alzheimer's disease (AD). A common approach to using MR images for AD detection is to spatially normalize the images by non-rigid image registration and then perform statistical analysis on the resulting deformation fields. Due to the high nonlinearity of the deformation field, recent studies suggest using the initial momentum instead, as it lies in a linear space and fully encodes the deformation field. In this paper we explore the use of initial momentum for image classification, focusing on the problem of AD detection. Experiments on the public ADNI dataset show that the initial momentum, together with a simple sparse coding technique, locality-constrained linear coding (LLC), can achieve a classification accuracy that is comparable to or even better than the state of the art. We also show that the performance of LLC can be greatly improved by introducing proper weights to the codebook.
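
    For readers unfamiliar with LLC, the sketch below shows the standard approximated LLC encoding step (k nearest codewords with a sum-to-one constraint). It is a generic illustration with a random codebook, not the paper's generalized LLC or its momentum-based features.

```python
# Standard approximated LLC encoding: code each feature with its k nearest codebook atoms.
import numpy as np

def llc_encode(x, codebook, k=5, eps=1e-4):
    """Return a sparse LLC code for feature x given a (n_atoms, dim) codebook."""
    dists = np.linalg.norm(codebook - x, axis=1)
    idx = np.argsort(dists)[:k]                  # k nearest codewords
    z = codebook[idx] - x                        # shift selected atoms to the feature
    C = z @ z.T                                  # local covariance
    C += eps * np.trace(C) * np.eye(k)           # regularize for numerical stability
    w = np.linalg.solve(C, np.ones(k))
    w /= w.sum()                                 # enforce the sum-to-one constraint
    code = np.zeros(codebook.shape[0])
    code[idx] = w
    return code

rng = np.random.default_rng(1)
codebook = rng.normal(size=(128, 64))            # hypothetical learned codebook
feature = rng.normal(size=64)
print(llc_encode(feature, codebook, k=5)[:10])
```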

  2. 26 CFR 1.9001-4 - Adjustments required in computing excess-profits credit.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 26 Internal Revenue 13 2010-04-01 2010-04-01 false Adjustments required in computing excess... Adjustments required in computing excess-profits credit. (a) In general. Subsection (f) of the Act provides adjustments required to be made in computing the excess-profits credit for any taxable year under the Excess...

  3. 28 CFR 85.1 - In general.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 28 Judicial Administration 2 2010-07-01 2010-07-01 false In general. 85.1 Section 85.1 Judicial Administration DEPARTMENT OF JUSTICE (CONTINUED) CIVIL MONETARY PENALTIES INFLATION ADJUSTMENT § 85.1 In general. (a) In accordance with the requirements of the Federal Civil Penalties Inflation Adjustment Act of...

  4. Resonance Parameter Adjustment Based on Integral Experiments

    DOE PAGES

    Sobes, Vladimir; Leal, Luiz; Arbanas, Goran; ...

    2016-06-02

    Our project seeks to allow coupling of differential and integral data evaluation in a continuous-energy framework and to use the generalized linear least-squares (GLLS) methodology in the TSURFER module of the SCALE code package to update the parameters of a resolved resonance region evaluation. Because the GLLS methodology in TSURFER is identical to the mathematical description of a Bayesian update in SAMMY, the SAMINT code was created to use the mathematical machinery of SAMMY to update resolved resonance parameters based on integral data. Traditionally, SAMMY used differential experimental data to adjust nuclear data parameters. Integral experimental data, such as in the International Criticality Safety Benchmark Experiments Project, remain a tool for validation of completed nuclear data evaluations. SAMINT extracts information from integral benchmarks to aid the nuclear data evaluation process. Later, integral data can be used to resolve any remaining ambiguity between differential data sets, highlight troublesome energy regions, determine key nuclear data parameters for integral benchmark calculations, and improve the nuclear data covariance matrix evaluation. Moreover, SAMINT is not intended to bias nuclear data toward specific integral experiments but should be used to supplement the evaluation of differential experimental data. Using GLLS ensures proper weight is given to the differential data.
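
    The GLLS/Bayesian update referred to above has a compact closed form. The sketch below is a generic GLLS-style adjustment with illustrative names and shapes; it is not the TSURFER or SAMINT implementation.

```python
# Generic GLLS (Bayesian) parameter adjustment: prior parameters p with covariance M are
# pulled toward measured responses r_meas through the sensitivity matrix S = dr/dp.
import numpy as np

def glls_update(p, M, S, r_meas, r_calc, V):
    """One GLLS adjustment given response covariance V; returns updated p and covariance."""
    residual = r_meas - r_calc
    gain = M @ S.T @ np.linalg.inv(S @ M @ S.T + V)   # Kalman-like gain
    p_new = p + gain @ residual
    M_new = M - gain @ S @ M                          # reduced posterior covariance
    return p_new, M_new

# Tiny illustrative call with two parameters and three integral responses
p, M = np.zeros(2), np.eye(2)
S = np.array([[1.0, 0.5], [0.2, 1.0], [0.7, 0.7]])
print(glls_update(p, M, S, r_meas=np.array([1.0, 0.8, 1.2]),
                  r_calc=S @ p, V=0.1 * np.eye(3)))
```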

  5. Characterizing the performance of the Conway-Maxwell Poisson generalized linear model.

    PubMed

    Francis, Royce A; Geedipally, Srinivas Reddy; Guikema, Seth D; Dhavala, Soma Sekhar; Lord, Dominique; LaRocca, Sarah

    2012-01-01

    Count data are pervasive in many areas of risk analysis; deaths, adverse health outcomes, infrastructure system failures, and traffic accidents are all recorded as count events, for example. Risk analysts often wish to estimate the probability distribution for the number of discrete events as part of doing a risk assessment. Traditional count data regression models of the type often used in risk assessment for this problem suffer from limitations due to the assumed variance structure. A more flexible model based on the Conway-Maxwell Poisson (COM-Poisson) distribution was recently proposed, a model that has the potential to overcome the limitations of the traditional model. However, the statistical performance of this new model has not yet been fully characterized. This article assesses the performance of a maximum likelihood estimation method for fitting the COM-Poisson generalized linear model (GLM). The objectives of this article are to (1) characterize the parameter estimation accuracy of the MLE implementation of the COM-Poisson GLM, and (2) estimate the prediction accuracy of the COM-Poisson GLM using simulated data sets. The results of the study indicate that the COM-Poisson GLM is flexible enough to model under-, equi-, and overdispersed data sets with different sample mean values. The results also show that the COM-Poisson GLM yields accurate parameter estimates. The COM-Poisson GLM provides a promising and flexible approach for performing count data regression. © 2011 Society for Risk Analysis.
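
    To make the fitting problem concrete, the sketch below writes down the COM-Poisson log-likelihood with a truncated normalizing sum and maximizes it for a one-covariate regression with a log link on lambda and a shared nu. It is a simplified illustration with simulated data, not the MLE implementation evaluated in the article.

```python
# COM-Poisson GLM by direct maximum likelihood (illustrative; truncated normalizer).
import numpy as np
from scipy.optimize import minimize

def com_poisson_logpmf(y, lam, nu, jmax=200):
    j = np.arange(jmax)
    log_fact_j = np.cumsum(np.log(np.maximum(j, 1)))            # log(j!) for j = 0..jmax-1
    log_terms = j[None, :] * np.log(lam)[:, None] - nu * log_fact_j[None, :]
    log_z = np.logaddexp.reduce(log_terms, axis=1)               # log Z(lambda, nu)
    log_fact_y = np.array([np.sum(np.log(np.arange(1, yi + 1))) for yi in y])
    return y * np.log(lam) - nu * log_fact_y - log_z

def neg_loglik(params, y, x):
    b0, b1, log_nu = params
    lam = np.exp(b0 + b1 * x)                                    # log link on lambda
    return -np.sum(com_poisson_logpmf(y, lam, np.exp(log_nu)))

rng = np.random.default_rng(2)
x = rng.uniform(-1, 1, 500)
y = rng.poisson(np.exp(0.5 + 0.8 * x))                           # Poisson data (nu = 1 case)
res = minimize(neg_loglik, x0=[0.0, 0.0, 0.0], args=(y, x), method="Nelder-Mead")
print(res.x)   # b0, b1, log(nu); log(nu) should be near 0 for this equidispersed example
```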

  6. Enhanced dielectric-wall linear accelerator

    DOEpatents

    Sampayan, Stephen E.; Caporaso, George J.; Kirbie, Hugh C.

    1998-01-01

    A dielectric-wall linear accelerator is enhanced by a high-voltage, fast e-time switch that includes a pair of electrodes between which are laminated alternating layers of isolated conductors and insulators. A high voltage is placed between the electrodes sufficient to stress the voltage breakdown of the insulator on command. A light trigger, such as a laser, is focused along at least one line along the edge surface of the laminated alternating layers of isolated conductors and insulators extending between the electrodes. The laser is energized to initiate a surface breakdown by a fluence of photons, thus causing the electrical switch to close very promptly. Such insulators and lasers are incorporated in a dielectric wall linear accelerator with Blumlein modules, and phasing is controlled by adjusting the length of fiber optic cables that carry the laser light to the insulator surface.

  7. Marital adjustment, marital discord over childrearing, and child behavior problems: moderating effects of child age.

    PubMed

    Mahoney, A; Jouriles, E N; Scavone, J

    1997-12-01

    Examined whether marital discord over childrearing contributes to child behavior problems after taking into account general marital adjustment, and if child age moderates associations between child behavior problems and either general marital adjustment or marital discord over childrearing. Participants were 146 two-parent families seeking services for their child's (4 to 9 years of age) conduct problems. Data on marital functioning and child behavior problems were collected from both parents. Mothers' and fathers' reports of marital discord over childrearing related positively to child externalizing problems after accounting for general marital adjustment. Child age moderated associations between fathers' reports of general marital adjustment and both internalizing and externalizing child problems, with associations being stronger in families with younger children. The discussion highlights the role that developmental factors may play in understanding the link between marital and child behavior problems in clinic-referred families.

  8. YADCLAN: yet another digitally-controlled linear artificial neuron.

    PubMed

    Frenger, Paul

    2003-01-01

    This paper updates the author's 1999 RMBS presentation on digitally controlled linear artificial neuron design. Each neuron is based on a standard operational amplifier having excitatory and inhibitory inputs, variable gain, an amplified linear analog output and an adjustable threshold comparator for digital output. This design employs a 1-wire serial network of digitally controlled potentiometers and resistors whose resistance values are set and read back under microprocessor supervision. This system embodies several unique and useful features, including enhanced neuronal stability, dynamic reconfigurability and network extensibility. This artificial neuron is being employed for feature extraction and pattern recognition in an advanced robotic application.

  9. 26 CFR 25.2701-5 - Adjustments to mitigate double taxation.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 26 Internal Revenue 14 2010-04-01 2010-04-01 false Adjustments to mitigate double taxation. 25....2701-5 Adjustments to mitigate double taxation. (a) Reduction of transfer tax base—(1) In general. This... − $187,500). (g) Double taxation otherwise avoided. No reduction is available under this section if— (1...

  10. 26 CFR 25.2701-5 - Adjustments to mitigate double taxation.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 26 Internal Revenue 14 2011-04-01 2010-04-01 true Adjustments to mitigate double taxation. 25.2701... Adjustments to mitigate double taxation. (a) Reduction of transfer tax base—(1) In general. This section... − $187,500). (g) Double taxation otherwise avoided. No reduction is available under this section if— (1...

  11. Economic evaluation of an implementation strategy for the management of low back pain in general practice.

    PubMed

    Jensen, Cathrine Elgaard; Riis, Allan; Petersen, Karin Dam; Jensen, Martin Bach; Pedersen, Kjeld Møller

    2017-05-01

    In connection with the publication of a clinical practice guideline on the management of low back pain (LBP) in general practice in Denmark, a cluster randomised controlled trial was conducted. In this trial, a multifaceted guideline implementation strategy to improve general practitioners' treatment of patients with LBP was compared with a usual implementation strategy. The aim was to determine whether the multifaceted strategy was cost effective, as compared with the usual implementation strategy. The economic evaluation was conducted as a cost-utility analysis in which costs, collected from a societal perspective, and quality-adjusted life years were used as outcome measures. The analysis was conducted as a within-trial analysis with a 12-month time horizon consistent with the follow-up period of the clinical trial. To adjust for a priori selected covariates, generalised linear models with a gamma family were used to estimate incremental costs and quality-adjusted life years. Furthermore, both deterministic and probabilistic sensitivity analyses were conducted. Results showed that costs associated with primary health care were higher, whereas secondary health care costs were lower, for the intervention group when compared with the control group. When adjusting for covariates, the intervention was less costly, and there was no significant difference in effect between the 2 groups. Sensitivity analyses showed that results were sensitive to uncertainty. In conclusion, the multifaceted implementation strategy was cost saving when compared with the usual strategy for implementing LBP clinical practice guidelines in general practice. Furthermore, there was no significant difference in effect, and the estimate was sensitive to uncertainty.

  12. Relation of Parental Transitions to Boys' Adjustment Problems: I.A. Linear Hypothesis II. Mothers at Risk for Transitions and Unskilled Parenting.

    ERIC Educational Resources Information Center

    Capaldi, D. M.; Patterson, G. R.

    1991-01-01

    Examined the adjustment of boys from intact, single-mother, stepfather, and multiple-transition families. Boys who had experienced multiple transitions showed the poorest adjustment. The antisocial mother was most at risk for transitions and unskilled parenting practices, which in turn placed her son at risk for poor adjustment. (BC)

  13. College student engaging in cyberbullying victimization: cognitive appraisals, coping strategies, and psychological adjustments.

    PubMed

    Na, Hyunjoo; Dancy, Barbara L; Park, Chang

    2015-06-01

    The study's purpose was to explore whether frequency of cyberbullying victimization, cognitive appraisals, and coping strategies were associated with psychological adjustments among college student cyberbullying victims. A convenience sample of 121 students completed questionnaires. Linear regression analyses found frequency of cyberbullying victimization, cognitive appraisals, and coping strategies respectively explained 30%, 30%, and 27% of the variance in depression, anxiety, and self-esteem. Frequency of cyberbullying victimization and approach and avoidance coping strategies were associated with psychological adjustments, with avoidance coping strategies being associated with all three psychological adjustments. Interventions should focus on teaching cyberbullying victims to not use avoidance coping strategies. Copyright © 2015 Elsevier Inc. All rights reserved.

  14. Smooth individual level covariates adjustment in disease mapping.

    PubMed

    Huque, Md Hamidul; Anderson, Craig; Walton, Richard; Woolford, Samuel; Ryan, Louise

    2018-05-01

    Spatial models for disease mapping should ideally account for covariates measured both at individual and area levels. The newly available "indiCAR" model fits the popular conditional autoregressive (CAR) model by accommodating both individual and group level covariates while adjusting for spatial correlation in the disease rates. This algorithm has been shown to be effective but assumes log-linear associations between individual level covariates and outcome. In many studies, the relationship between individual level covariates and the outcome may be non-log-linear, and methods to capture such nonlinearity between individual level covariates and the outcome in spatial regression modeling are not well developed. In this paper, we propose a new algorithm, smooth-indiCAR, to fit an extension to the popular conditional autoregressive model that can accommodate both linear and nonlinear individual level covariate effects while adjusting for group level covariates and spatial correlation in the disease rates. In this formulation, the effect of a continuous individual level covariate is accommodated via penalized splines. We describe a two-step estimation procedure to obtain reliable estimates of individual and group level covariate effects where both individual and group level covariate effects are estimated separately. This distributed computing framework enhances its application in the Big Data domain with a large number of individual/group level covariates. We evaluate the performance of smooth-indiCAR through simulation. Our results indicate that the smooth-indiCAR method provides reliable estimates of all regression and random effect parameters. We illustrate our proposed methodology with an analysis of data on neutropenia admissions in New South Wales (NSW), Australia. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
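
    The "smooth covariate effect" idea can be illustrated with a spline-based GLM. The sketch below is a Gaussian-family toy using statsmodels' GLMGam with hypothetical variable names; it omits the CAR spatial term and the two-step distributed estimation of smooth-indiCAR, so it only shows how a continuous individual-level covariate can enter through a penalized spline basis.

```python
# Penalized-spline GAM for one continuous covariate plus a linear group-level covariate.
import numpy as np
import pandas as pd
from statsmodels.gam.api import GLMGam, BSplines

rng = np.random.default_rng(8)
n = 1000
data = pd.DataFrame({"age": rng.uniform(20, 90, n),
                     "area_deprivation": rng.normal(0, 1, n)})
data["outcome"] = (np.sin(data["age"] / 15) + 0.3 * data["area_deprivation"]
                   + rng.normal(0, 0.3, n))

bs = BSplines(data[["age"]], df=[10], degree=[3])        # spline basis for the smooth term
gam = GLMGam.from_formula("outcome ~ area_deprivation", data=data,
                          smoother=bs, alpha=[1.0])      # alpha is the penalty weight
res = gam.fit()
print(res.params.head())
```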

  15. Effect of Facet Displacement on Radiation Field and Its Application for Panel Adjustment of Large Reflector Antenna

    NASA Astrophysics Data System (ADS)

    Wang, Wei; Lian, Peiyuan; Zhang, Shuxin; Xiang, Binbin; Xu, Qian

    2017-05-01

    Large reflector antennas are widely used in radar, satellite communication, radio astronomy, and other fields. Rapid developments in these fields have created demands for better performance and higher surface accuracy. However, low accuracy and low efficiency are common disadvantages of traditional panel alignment and adjustment. In order to improve the surface accuracy of large reflector antennas, a new method is presented to determine panel adjustment values from the far field pattern. Based on the method of Physical Optics (PO), the effect of panel facet displacement on the radiation field value is derived. A linear system is then constructed between the panel adjustment vector and the far field pattern. Using the method of Singular Value Decomposition (SVD), the adjustment values for all panel adjusters are obtained by solving the linear equations. An experiment is conducted on a 3.7 m reflector antenna with 12 segmented panels. The results of simulation and test are similar, which shows that the presented method is feasible. Moreover, the discussion of validation shows that the method can be used for many reflector shapes. The proposed research provides guidance for adjusting surface panels efficiently and accurately.
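
    Once a linear relation between adjuster motions and field (or surface-error) samples is available, solving it with a truncated-SVD pseudo-inverse is straightforward. The sketch below uses a random sensitivity matrix and synthetic measurements purely for illustration; it is not the paper's PO-derived system.

```python
# Solve d = A @ delta for panel adjuster corrections via a truncated SVD pseudo-inverse.
import numpy as np

rng = np.random.default_rng(3)
n_samples, n_adjusters = 200, 36
A = rng.normal(size=(n_samples, n_adjusters))          # sensitivity matrix (assumed known)
true_delta = rng.normal(scale=0.1, size=n_adjusters)   # unknown panel misalignments
d = A @ true_delta + rng.normal(scale=0.01, size=n_samples)

U, s, Vt = np.linalg.svd(A, full_matrices=False)
keep = s > 1e-3 * s[0]                                 # discard small singular values
delta_est = Vt.T[:, keep] @ ((U[:, keep].T @ d) / s[keep])
adjustment = -delta_est                                # move adjusters to cancel the error
print(np.max(np.abs(delta_est - true_delta)))
```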

  16. Time-lapse joint AVO inversion using generalized linear method based on exact Zoeppritz equations

    NASA Astrophysics Data System (ADS)

    Zhi, Longxiao; Gu, Hanming

    2018-03-01

    The conventional method of time-lapse AVO (Amplitude Versus Offset) inversion is mainly based on the approximate expression of the Zoeppritz equations. Though the approximate expression is concise and convenient to use, it has certain limitations: it is applicable only when the difference in elastic parameters between the upper and lower media is small and the incident angle is small. In addition, the inversion of density is not stable. Therefore, we develop a method of time-lapse joint AVO inversion based on the exact Zoeppritz equations. In this method, we apply the exact Zoeppritz equations to calculate the reflection coefficient of the PP wave, and in the construction of the objective function for inversion, we use a Taylor series expansion to linearize the inversion problem. Through the joint AVO inversion of seismic data in the baseline survey and monitor survey, we can obtain the P-wave velocity, S-wave velocity, and density in the baseline survey and their time-lapse changes simultaneously. We can also estimate the oil saturation change from the inversion results. Compared with time-lapse difference inversion, the joint inversion does not require certain assumptions, can estimate more parameters simultaneously, and has better applicability. Meanwhile, by using the generalized linear method, the inversion is easily implemented and its computational cost is small. We use a theoretical model to generate synthetic seismic records to test the method and analyze the influence of random noise. The results demonstrate the validity and noise robustness of our method. We also apply the inversion to actual field data and demonstrate the feasibility of our method in a real-world setting.
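
    The "linearize with a Taylor expansion, then solve" strategy is the familiar Gauss-Newton pattern. The sketch below shows that pattern for a generic nonlinear forward model; the toy model F and its parameters are stand-ins, not the exact Zoeppritz forward model of the paper.

```python
# Generic iterative linearization (Gauss-Newton with light damping) for d = F(m).
import numpy as np

def F(m):                                          # toy nonlinear forward model (assumed)
    return np.array([m[0] * np.exp(m[1]), m[0] + m[1] ** 2])

def jacobian(m, h=1e-6):
    J = np.zeros((2, 2))
    for j in range(2):
        dm = np.zeros(2); dm[j] = h
        J[:, j] = (F(m + dm) - F(m)) / h           # finite-difference Taylor linearization
    return J

d_obs = F(np.array([1.5, 0.4]))                    # synthetic "observed" data
m = np.array([1.0, 0.0])                           # initial model
for _ in range(20):
    J = jacobian(m)
    r = d_obs - F(m)
    m = m + np.linalg.solve(J.T @ J + 1e-6 * np.eye(2), J.T @ r)
print(m)                                           # converges toward [1.5, 0.4]
```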

  17. 47 CFR 61.47 - Adjustments to the SBI; pricing bands.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 47 Telecommunication 3 2010-10-01 2010-10-01 false Adjustments to the SBI; pricing bands. 61.47... (CONTINUED) TARIFFS General Rules for Dominant Carriers § 61.47 Adjustments to the SBI; pricing bands. (a) In...) Pricing bands shall be established each tariff year for each service category and subcategory within a...

  18. 47 CFR 61.47 - Adjustments to the SBI; pricing bands.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 47 Telecommunication 3 2011-10-01 2011-10-01 false Adjustments to the SBI; pricing bands. 61.47... (CONTINUED) TARIFFS General Rules for Dominant Carriers § 61.47 Adjustments to the SBI; pricing bands. (a) In...) Pricing bands shall be established each tariff year for each service category and subcategory within a...

  19. Generalized two-dimensional (2D) linear system analysis metrics (GMTF, GDQE) for digital radiography systems including the effect of focal spot, magnification, scatter, and detector characteristics.

    PubMed

    Jain, Amit; Kuhls-Gilcrist, Andrew T; Gupta, Sandesh K; Bednarek, Daniel R; Rudin, Stephen

    2010-03-01

    The MTF, NNPS, and DQE are standard linear system metrics used to characterize intrinsic detector performance. To evaluate total system performance for actual clinical conditions, generalized linear system metrics (GMTF, GNNPS and GDQE) that include the effect of the focal spot distribution, scattered radiation, and geometric unsharpness are more meaningful and appropriate. In this study, a two-dimensional (2D) generalized linear system analysis was carried out for a standard flat panel detector (FPD) (194-micron pixel pitch and 600-micron thick CsI) and a newly-developed, high-resolution, micro-angiographic fluoroscope (MAF) (35-micron pixel pitch and 300-micron thick CsI). Realistic clinical parameters and x-ray spectra were used. The 2D detector MTFs were calculated using the new Noise Response method and slanted edge method and 2D focal spot distribution measurements were done using a pin-hole assembly. The scatter fraction, generated for a uniform head equivalent phantom, was measured and the scatter MTF was simulated with a theoretical model. Different magnifications and scatter fractions were used to estimate the 2D GMTF, GNNPS and GDQE for both detectors. Results show spatial non-isotropy for the 2D generalized metrics which provide a quantitative description of the performance of the complete imaging system for both detectors. This generalized analysis demonstrated that the MAF and FPD have similar capabilities at lower spatial frequencies, but that the MAF has superior performance over the FPD at higher frequencies even when considering focal spot blurring and scatter. This 2D generalized performance analysis is a valuable tool to evaluate total system capabilities and to enable optimized design for specific imaging tasks.

  20. Generalizing a Categorization of Students' Interpretations of Linear Kinematics Graphs

    ERIC Educational Resources Information Center

    Bollen, Laurens; De Cock, Mieke; Zuza, Kristina; Guisasola, Jenaro; van Kampen, Paul

    2016-01-01

    We have investigated whether and how a categorization of responses to questions on linear distance-time graphs, based on a study of Irish students enrolled in an algebra-based course, could be adopted and adapted to responses from students enrolled in calculus-based physics courses at universities in Flanders, Belgium (KU Leuven) and the Basque…

  1. Communications circuit including a linear quadratic estimator

    DOEpatents

    Ferguson, Dennis D.

    2015-07-07

    A circuit includes a linear quadratic estimator (LQE) configured to receive a plurality of measurements of a signal. The LQE is configured to weight the measurements based on their respective uncertainties to produce weighted averages. The circuit further includes a controller coupled to the LQE and configured to selectively adjust at least one data link parameter associated with a communication channel in response to receiving the weighted averages.
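
    The uncertainty-weighted averaging described above amounts to a minimum-variance (inverse-variance) combination, as a static linear quadratic estimator with no dynamics would compute. The sketch below is an illustration of that idea only, with made-up numbers, not the patented circuit.

```python
# Inverse-variance weighting of independent measurements of the same quantity.
import numpy as np

def weighted_estimate(measurements, variances):
    """Minimum-variance linear combination and the variance of the combined estimate."""
    variances = np.asarray(variances, dtype=float)
    w = 1.0 / variances
    w /= w.sum()
    estimate = np.dot(w, measurements)
    est_variance = 1.0 / np.sum(1.0 / variances)
    return estimate, est_variance

print(weighted_estimate([1.02, 0.97, 1.10], [0.01, 0.04, 0.09]))
```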

  2. 78 FR 12318 - Federal Acquisition Regulation; Submission for OMB Review; Economic Price Adjustment

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-02-22

    ...; Submission for OMB Review; Economic Price Adjustment AGENCY: Department of Defense (DOD), General Services... economic price adjustment. A notice was published in the Federal Register at 77 FR 69442, on November 19...: Submit comments identified by Information Collection 9000- 0068, Economic Price Adjustment by any of the...

  3. The Influence of Non-linear 3-D Mantle Rheology on Predictions of Glacial Isostatic Adjustment Models

    NASA Astrophysics Data System (ADS)

    Van Der Wal, W.; Barnhoorn, A.; Stocchi, P.; Drury, M. R.; Wu, P. P.; Vermeersen, B. L.

    2011-12-01

    Ice melting in Greenland and Antarctica can be estimated from GRACE satellite measurements. The largest source of error in these estimates is uncertainty in models for Glacial Isostatic Adjustment (GIA). GIA models that are used to correct the GRACE data have several shortcomings, including that (i) mantle viscosity is varied only with depth and (ii) the stress dependence of viscosity is ignored. Here we attempt to improve on these two issues with the ultimate goal of providing more realistic GIA predictions in areas that are currently ice covered. The improved model is first tested against observations in Fennoscandia, where there is good coverage with GIA observations, before applying it to Greenland. Deformation laws for diffusion and dislocation creep in olivine are taken from a compilation of laboratory experiments. Temperature is obtained from two different sources: surface heat flow maps as input for the heat transfer equation, and seismic velocity anomalies converted to upper mantle temperatures. Grain size and olivine water content are kept as free parameters. Surface loading is provided by an ice loading history that is constructed from constraints on past ice margins and input from climatology. The finite element model includes self-gravitation but not compressibility and background stresses. It is found that the viscosity in Fennoscandia changes in time by two orders of magnitude for a wet rheology with large grain size. The wet rheology provides the best fit to historic sea level data. However, present-day uplift and gravity rates are too low for such a rheology. We apply a wet rheology on Greenland and simulate a Little Ice Age (LIA) increase in thickness on top of the ICE-5G ice loading history. Preliminary results show a negative geoid rate of magnitude more than 0.5 mm/year due to the LIA increase in ice thickness in combination with the non-linear upper mantle rheology. More tests are necessary to determine the influence of mantle rheology on GIA model
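
    For reference, laboratory-derived creep laws for olivine of the kind compiled in such studies are commonly written in the general power-law form below. This is the standard textbook expression with the usual symbols (A prefactor, sigma differential stress, d grain size, water fugacity term, E* and V* activation energy and volume), assumed here rather than quoted from this particular study.

```latex
% General olivine flow law (standard form; not necessarily this study's parameterization)
\dot{\varepsilon} \;=\; A \,\sigma^{\,n}\, d^{-p}\, f_{\mathrm{H_2O}}^{\,r}\,
\exp\!\left(-\,\frac{E^{*} + P V^{*}}{R\,T}\right)
```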

  4. Linear Logistic Test Modeling with R

    ERIC Educational Resources Information Center

    Baghaei, Purya; Kubinger, Klaus D.

    2015-01-01

    The present paper gives a general introduction to the linear logistic test model (Fischer, 1973), an extension of the Rasch model with linear constraints on item parameters, along with eRm (an R package to estimate different types of Rasch models; Mair, Hatzinger, & Mair, 2014) functions to estimate the model and interpret its parameters. The…

  5. 40 CFR 86.1603 - General requirements.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... certified to meet the appropriate high-altitude emission standards. (2) High-altitude adjustment... the general public. EPA encourages manufacturers to notify vehicle owners in high-altitude areas of the availability of high-altitude adjustments. (g) If altitude adjustments are performed according to...

  6. 40 CFR 86.1603 - General requirements.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... certified to meet the appropriate high-altitude emission standards. (2) High-altitude adjustment... the general public. EPA encourages manufacturers to notify vehicle owners in high-altitude areas of the availability of high-altitude adjustments. (g) If altitude adjustments are performed according to...

  7. Gender Identity and Adjustment in Black, Hispanic, and White Preadolescents

    ERIC Educational Resources Information Center

    Corby, Brooke C.; Hodges, Ernest V. E.; Perry, David G.

    2007-01-01

    The generality of S. K. Egan and D. G. Perry's (2001) model of gender identity and adjustment was evaluated by examining associations between gender identity (felt gender typicality, felt gender contentedness, and felt pressure for gender conformity) and social adjustment in 863 White, Black, and Hispanic 5th graders (mean age = 11.1 years).…

  8. Linear approximations of nonlinear systems

    NASA Technical Reports Server (NTRS)

    Hunt, L. R.; Su, R.

    1983-01-01

    The development of a method for designing an automatic flight controller for short and vertical takeoff aircraft is discussed. This technique involves transformations of nonlinear systems to controllable linear systems and takes into account the nonlinearities of the aircraft. In general, the transformations cannot always be given in closed form. Using partial differential equations, an approximate linear system called the modified tangent model was introduced. A linear transformation of this tangent model to Brunovsky canonical form can be constructed, and from this the linear part (about a state space point x_0) of an exact transformation for the nonlinear system can be found. It is shown that a canonical expansion in Lie brackets about the point x_0 yields the same modified tangent model.
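
    As background for the "tangent model" terminology, the standard Jacobian linearization about a state-space point x_0 is shown below; this is the generic baseline expression, not the modified tangent model derived in the report.

```latex
% Standard Jacobian (tangent) linearization of an affine-in-control nonlinear system
% about a state-space point x_0 (generic form, assumed here as background):
\dot{x} \;=\; f(x) + g(x)\,u
\;\approx\;
f(x_0) \;+\; \underbrace{\left.\frac{\partial f}{\partial x}\right|_{x_0}}_{A}\,(x - x_0)
\;+\; \underbrace{g(x_0)}_{B}\,u .
```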

  9. Blood pressure in relation to general and central adiposity among 500 000 adult Chinese men and women.

    PubMed

    Chen, Zhengming; Smith, Margaret; Du, Huaidong; Guo, Yu; Clarke, Robert; Bian, Zheng; Collins, Rory; Chen, Junshi; Qian, Yijian; Wang, Xiaoping; Chen, Xiaofang; Tian, Xiaocao; Wang, Xiaohuan; Peto, Richard; Li, Liming

    2015-08-01

    Greater adiposity is associated with higher blood pressure. Substantial uncertainty remains, however, about which measures of adiposity most strongly predict blood pressure and whether these associations differ materially between populations. We examined cross-sectional data on 500 000 adults recruited from 10 diverse localities across China during 2004-08. Multiple linear regression was used to estimate the effects on systolic blood pressure (SBP) of general adiposity [e.g. body mass index (BMI), body fat percentage, height-adjusted weight] vs central adiposity [e.g. waist circumference (WC), hip circumference (HC), waist-hip ratio (WHR)], before and after adjustment for each other. The main analyses excluded those who reported taking any antihypertensive medication, and were adjusted for age, region and education. The overall mean [standard deviation (SD)] BMI was 23.6 (3.3) kg/m² and mean WC was 80.0 (9.5) cm. The differences in SBP (men/women, mmHg) per 1 SD higher general adiposity (height-adjusted weight: 6.6/5.6; BMI: 5.5/4.9; body fat percentage: 5.5/5.0) were greater than for central adiposity (WC: 5.0/4.3; HC: 4.8/4.1; WHR: 3.7/3.2), with a 10 kg/m² greater BMI being associated on average with 16 (men/women: 17/14) mmHg higher SBP. The associations of blood pressure with measures of general adiposity were not materially altered by adjusting for WC and HC, but those for central adiposity were significantly attenuated after adjusting for BMI (WC: 1.1/0.7; HC: 0.3/-0.2; WHR: 0.6/0.6). In adult Chinese, blood pressure is more strongly associated with general adiposity than with central adiposity, and the associations with BMI were about 50% stronger than those observed in Western populations. © The Author 2015. Published by Oxford University Press on behalf of the International Epidemiological Association.

  10. Rational-spline approximation with automatic tension adjustment

    NASA Technical Reports Server (NTRS)

    Schiess, J. R.; Kerr, P. A.

    1984-01-01

    An algorithm for weighted least-squares approximation with rational splines is presented. A rational spline is a cubic function containing a distinct tension parameter for each interval defined by two consecutive knots. For zero tension, the rational spline is identical to a cubic spline; for very large tension, the rational spline is a linear function. The approximation algorithm incorporates an algorithm which automatically adjusts the tension on each interval to fulfill a user-specified criterion. Finally, an example is presented comparing results of the rational spline with those of the cubic spline.

  11. 24 CFR 902.44 - Adjustment for physical condition and neighborhood environment.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... and neighborhood environment. 902.44 Section 902.44 Housing and Urban Development REGULATIONS RELATING... Operations Indicator § 902.44 Adjustment for physical condition and neighborhood environment. (a) General. In... environment factors are: (1) Physical condition adjustment applies to projects at least 28 years old, based on...

  12. 24 CFR 902.44 - Adjustment for physical condition and neighborhood environment.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... and neighborhood environment. 902.44 Section 902.44 Housing and Urban Development REGULATIONS RELATING... Operations Indicator § 902.44 Adjustment for physical condition and neighborhood environment. (a) General. In... environment factors are: (1) Physical condition adjustment applies to projects at least 28 years old, based on...

  13. 24 CFR 902.44 - Adjustment for physical condition and neighborhood environment.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... and neighborhood environment. 902.44 Section 902.44 Housing and Urban Development REGULATIONS RELATING... Operations Indicator § 902.44 Adjustment for physical condition and neighborhood environment. (a) General. In... environment factors are: (1) Physical condition adjustment applies to projects at least 28 years old, based on...

  14. Handling of computational in vitro/in vivo correlation problems by Microsoft Excel: IV. Generalized matrix analysis of linear compartment systems.

    PubMed

    Langenbucher, Frieder

    2005-01-01

    A linear system comprising n compartments is completely defined by the rate constants between any of the compartments and the initial condition specifying in which compartment(s) the drug is present at the beginning. The generalized solution is the time profiles of drug amount in each compartment, described by polyexponential equations. Based on standard matrix operations, an Excel worksheet computes the rate constants and the coefficients, and finally the full time profiles for a specified range of time values.
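
    The same generalized matrix solution can be written in a few lines outside a spreadsheet: with rate matrix K, the amounts follow x(t) = expm(K t) x0, which is exactly the polyexponential solution in the eigenvalues of K. The two-compartment rate constants below are illustrative, not taken from the paper.

```python
# Matrix-exponential solution of a linear compartment system dx/dt = K x, x(0) = x0.
import numpy as np
from scipy.linalg import expm

k12, k21, k10 = 0.5, 0.3, 0.1                  # hypothetical first-order rate constants
K = np.array([[-(k12 + k10), k21],
              [k12,          -k21]])
x0 = np.array([100.0, 0.0])                    # all drug initially in compartment 1

t = np.linspace(0, 24, 7)
profiles = np.array([expm(K * ti) @ x0 for ti in t])
print(profiles.round(2))                       # amount in each compartment over time
```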

  15. The Need and Keys for a New Generation Network Adjustment Software

    NASA Astrophysics Data System (ADS)

    Colomina, I.; Blázquez, M.; Navarro, J. A.; Sastre, J.

    2012-07-01

    Orientation and calibration of photogrammetric and remote sensing instruments is a fundamental capability of current mapping systems and a fundamental research topic. Neither digital remote sensing acquisition systems nor direct orientation gear, like INS and GNSS technologies, have made block adjustment obsolete. On the contrary, the continuous flow of new primary data acquisition systems has challenged the capacity of legacy block adjustment systems, and network adjustment systems in general, in many aspects: extensibility, genericity, portability, large data set capacity, metadata support and many others. In this article, we concentrate on the extensibility and genericity challenges that current and future network systems will face. For this purpose we propose a number of software design strategies, with emphasis on rigorous abstract modeling, that help in achieving simplicity, genericity and extensibility together with the protection of intellectual property rights in a flexible manner. We illustrate our suggestions with the general design approach of GENA, the generic extensible network adjustment system of GeoNumerics.

  16. Enhanced dielectric-wall linear accelerator

    DOEpatents

    Sampayan, S.E.; Caporaso, G.J.; Kirbie, H.C.

    1998-09-22

    A dielectric-wall linear accelerator is enhanced by a high-voltage, fast e-time switch that includes a pair of electrodes between which are laminated alternating layers of isolated conductors and insulators. A high voltage is placed between the electrodes sufficient to stress the voltage breakdown of the insulator on command. A light trigger, such as a laser, is focused along at least one line along the edge surface of the laminated alternating layers of isolated conductors and insulators extending between the electrodes. The laser is energized to initiate a surface breakdown by a fluence of photons, thus causing the electrical switch to close very promptly. Such insulators and lasers are incorporated in a dielectric wall linear accelerator with Blumlein modules, and phasing is controlled by adjusting the length of fiber optic cables that carry the laser light to the insulator surface. 6 figs.

  17. Diagnostic Risk Adjustment for Medicaid: The Disability Payment System

    PubMed Central

    Kronick, Richard; Dreyfus, Tony; Lee, Lora; Zhou, Zhiyuan

    1996-01-01

    This article describes a system of diagnostic categories that Medicaid programs can use for adjusting capitation payments to health plans that enroll people with disability. Medicaid claims from Colorado, Michigan, Missouri, New York, and Ohio are analyzed to demonstrate that the greater predictability of costs among people with disabilities makes risk adjustment more feasible than for a general population and more critical to creating health systems for people with disability. The application of our diagnostic categories to State claims data is described, including estimated effects on subsequent-year costs of various diagnoses. The challenges of implementing adjustment by diagnosis are explored. PMID:10172665

  18. 42 CFR 419.43 - Adjustments to national program payment and beneficiary copayment amounts.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ...) Drugs and biologicals that are paid under a separate APC; and (2) Items and services paid at charges... excluded from qualification for the payment adjustment in paragraph (g)(2) of this section: (i) Drugs and...) Payment adjustment for certain cancer hospitals—(1) General rule. CMS provides for a payment adjustment...

  19. 42 CFR 419.43 - Adjustments to national program payment and beneficiary copayment amounts.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ...) Drugs and biologicals that are paid under a separate APC; and (2) Items and services paid at charges... excluded from qualification for the payment adjustment in paragraph (g)(2) of this section: (i) Drugs and...) Payment adjustment for certain cancer hospitals.—(1) General rule. CMS provides for a payment adjustment...

  20. Real-time imaging of human brain function by near-infrared spectroscopy using an adaptive general linear model

    PubMed Central

    Abdelnour, A. Farras; Huppert, Theodore

    2009-01-01

    Near-infrared spectroscopy is a non-invasive neuroimaging method which uses light to measure changes in cerebral blood oxygenation associated with brain activity. In this work, we demonstrate the ability to record and analyze images of brain activity in real-time using a 16-channel continuous wave optical NIRS system. We propose a novel real-time analysis framework using an adaptive Kalman filter and a state–space model based on a canonical general linear model of brain activity. We show that our adaptive model has the ability to estimate single-trial brain activity events as we apply this method to track and classify experimental data acquired during an alternating bilateral self-paced finger tapping task. PMID:19457389
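
    The state-space idea above can be illustrated with a scalar Kalman filter that tracks a time-varying activation amplitude in a regression y_t = x_t * beta_t + noise, where x_t plays the role of a canonical regressor. All signals, noise levels, and the random-walk state model are simulated assumptions; this is not the authors' 16-channel adaptive filter.

```python
# Scalar Kalman filter tracking a regression coefficient (toy single-channel example).
import numpy as np

rng = np.random.default_rng(7)
T = 300
x = np.convolve(rng.binomial(1, 0.05, T), np.hanning(20), mode="same")  # toy regressor
beta_true = 0.8
y = x * beta_true + rng.normal(0, 0.1, T)

beta, P = 0.0, 1.0          # state estimate and its variance
q, r = 1e-4, 0.1 ** 2       # process and measurement noise variances (assumed)
for t in range(T):
    P += q                                   # predict (random-walk state model)
    k = P * x[t] / (x[t] ** 2 * P + r)       # Kalman gain
    beta += k * (y[t] - x[t] * beta)         # update with the innovation
    P *= (1 - k * x[t])
print(f"final beta estimate: {beta:.3f}")    # close to 0.8 for this simulation
```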

  1. 7 CFR 1580.101 - General statement.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... OF AGRICULTURE TRADE ADJUSTMENT ASSISTANCE FOR FARMERS § 1580.101 General statement. This part provides regulations for the Trade Adjustment Assistance for Farmers program. Under these provisions...

  2. Gain optimization with non-linear controls

    NASA Technical Reports Server (NTRS)

    Slater, G. L.; Kandadai, R. D.

    1984-01-01

    An algorithm has been developed for the analysis and design of controls for non-linear systems. The technical approach is to use statistical linearization to model the non-linear dynamics of a system by a quasi-Gaussian model. A covariance analysis is performed to determine the behavior of the dynamical system and a quadratic cost function. Expressions for the cost function and its derivatives are determined so that numerical optimization techniques can be applied to determine optimal feedback laws. The primary application in this paper is the design of controls for nominally linear systems in which the controls are saturated or limited by fixed constraints. The analysis is general, however, and numerical computation requires only that the specific non-linearity be considered in the analysis.
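
    As background on what statistical linearization computes (a standard result stated here for context, not a formula quoted from the paper): for a static nonlinearity f acting on a zero-mean Gaussian signal x with variance sigma_x^2, the equivalent gain minimizing the mean-square approximation error is

```latex
% Statistically linearized (random-input describing function) gain for Gaussian x:
N_{\mathrm{eq}} \;=\; \frac{\mathbb{E}\!\left[x\, f(x)\right]}{\mathbb{E}\!\left[x^{2}\right]}
\;=\; \frac{1}{\sigma_x^{2}} \int_{-\infty}^{\infty} x\, f(x)\,
\frac{1}{\sqrt{2\pi}\,\sigma_x}\, e^{-x^{2}/(2\sigma_x^{2})}\, dx ,
```

    so that f(x) is replaced by N_eq x in the covariance analysis.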

  3. 24 CFR 902.44 - Adjustment for physical condition and neighborhood environment.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 24 Housing and Urban Development 4 2013-04-01 2013-04-01 false Adjustment for physical condition... Operations Indicator § 902.44 Adjustment for physical condition and neighborhood environment. (a) General. In... situations outside the control of the project. These situations are related to the poor physical condition of...

  4. Time-lapse joint AVO inversion using generalized linear method based on exact Zoeppritz equations

    NASA Astrophysics Data System (ADS)

    Zhi, L.; Gu, H.

    2017-12-01

    The conventional method of time-lapse AVO (Amplitude Versus Offset) inversion is mainly based on the approximate expression of the Zoeppritz equations. Though the approximate expression is concise and convenient to use, it has certain limitations: it is applicable only when the difference in elastic parameters between the upper and lower media is small and the incident angle is small. In addition, the inversion of density is not stable. Therefore, we develop a method of time-lapse joint AVO inversion based on the exact Zoeppritz equations. In this method, we apply the exact Zoeppritz equations to calculate the reflection coefficient of the PP wave, and in the construction of the objective function for inversion, we use a Taylor expansion to linearize the inversion problem. Through the joint AVO inversion of seismic data in the baseline survey and monitor survey, we can obtain the P-wave velocity, S-wave velocity, and density in the baseline survey and their time-lapse changes simultaneously. We can also estimate the oil saturation change from the inversion results. Compared with time-lapse difference inversion, the joint inversion has better applicability: it requires fewer assumptions and can estimate more parameters simultaneously. Meanwhile, by using the generalized linear method, the inversion is easily implemented and its computational cost is small. We use the Marmousi model to generate synthetic seismic records to test the method and analyze the influence of random noise. Without noise, all estimation results are relatively accurate. As noise increases, the P-wave velocity change and the oil saturation change remain stable and are less affected by noise, whereas the S-wave velocity change is the most affected. Finally, we apply the method to actual time-lapse seismic field data, and the results demonstrate its validity and feasibility in practice.

  5. Instrumental variables as bias amplifiers with general outcome and confounding.

    PubMed

    Ding, P; VanderWeele, T J; Robins, J M

    2017-06-01

    Drawing causal inference with observational studies is the central pillar of many disciplines. One sufficient condition for identifying the causal effect is that the treatment-outcome relationship is unconfounded conditional on the observed covariates. It is often believed that the more covariates we condition on, the more plausible this unconfoundedness assumption is. This belief has had a huge impact on practical causal inference, suggesting that we should adjust for all pretreatment covariates. However, when there is unmeasured confounding between the treatment and outcome, estimators adjusting for some pretreatment covariate might have greater bias than estimators without adjusting for this covariate. This kind of covariate is called a bias amplifier, and includes instrumental variables that are independent of the confounder, and affect the outcome only through the treatment. Previously, theoretical results for this phenomenon have been established only for linear models. We fill in this gap in the literature by providing a general theory, showing that this phenomenon happens under a wide class of models satisfying certain monotonicity assumptions. We further show that when the treatment follows an additive or multiplicative model conditional on the instrumental variable and the confounder, these monotonicity assumptions can be interpreted as the signs of the arrows of the causal diagrams.
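
    The linear-model special case of bias amplification is easy to reproduce in a few lines. The simulation below is purely illustrative (a zero true treatment effect, unit coefficients, Gaussian noise): adjusting for the instrument Z increases the bias of the treatment coefficient relative to the unadjusted estimate.

```python
# Illustrative simulation of bias amplification by an instrument Z with confounder U.
import numpy as np

rng = np.random.default_rng(4)
n = 200_000
Z = rng.normal(size=n)                      # instrument: affects treatment only
U = rng.normal(size=n)                      # unmeasured confounder
A = 1.0 * Z + 1.0 * U + rng.normal(size=n)  # treatment
Y = 0.0 * A + 1.0 * U + rng.normal(size=n)  # true treatment effect is zero

def ols_coef(y, cols):
    X = np.column_stack([np.ones(len(y))] + list(cols))
    return np.linalg.lstsq(X, y, rcond=None)[0]

beta_unadj = ols_coef(Y, [A])[1]            # biased by U alone (about 1/3 here)
beta_adj = ols_coef(Y, [A, Z])[1]           # conditioning on Z amplifies bias (about 1/2)
print(f"unadjusted: {beta_unadj:.3f}, Z-adjusted: {beta_adj:.3f}")
```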

  6. Whole-body PET parametric imaging employing direct 4D nested reconstruction and a generalized non-linear Patlak model

    NASA Astrophysics Data System (ADS)

    Karakatsanis, Nicolas A.; Rahmim, Arman

    2014-03-01

    Graphical analysis is employed in the research setting to provide quantitative estimation of PET tracer kinetics from dynamic images at a single bed. Recently, we proposed a multi-bed dynamic acquisition framework enabling clinically feasible whole-body parametric PET imaging by employing post-reconstruction parameter estimation. In addition, by incorporating linear Patlak modeling within the system matrix, we enabled direct 4D reconstruction in order to effectively circumvent noise amplification in dynamic whole-body imaging. However, direct 4D Patlak reconstruction exhibits a relatively slow convergence due to the presence of non-sparse spatial correlations in temporal kinetic analysis. In addition, the standard Patlak model does not account for reversible uptake, thus underestimating the influx rate Ki. We have developed a novel whole-body PET parametric reconstruction framework in the STIR platform, a widely employed open-source reconstruction toolkit, a) enabling accelerated convergence of direct 4D multi-bed reconstruction, by employing a nested algorithm to decouple the temporal parameter estimation from the spatial image update process, and b) enhancing the quantitative performance particularly in regions with reversible uptake, by pursuing a non-linear generalized Patlak 4D nested reconstruction algorithm. A set of published kinetic parameters and the XCAT phantom were employed for the simulation of dynamic multi-bed acquisitions. Quantitative analysis on the Ki images demonstrated considerable acceleration in the convergence of the nested 4D whole-body Patlak algorithm. In addition, our simulated and patient whole-body data in the postreconstruction domain indicated the quantitative benefits of our extended generalized Patlak 4D nested reconstruction for tumor diagnosis and treatment response monitoring.
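
    For orientation, the standard (linear) Patlak graphical analysis that the direct 4D and generalized non-linear methods above extend can be sketched as follows. The input function, frame times, and kinetic values are simulated assumptions; the point is only that Ki is read off as the slope of the late linear portion of the Patlak plot.

```python
# Standard linear Patlak analysis: regress Ct/Cp against (running integral of Cp)/Cp.
import numpy as np

t = np.linspace(1, 60, 30)                         # minutes (illustrative frame times)
Cp = 10 * np.exp(-0.1 * t) + 1.0                   # hypothetical plasma input function
Ki_true, V = 0.05, 0.3
int_Cp = np.cumsum(Cp) * (t[1] - t[0])             # crude running integral of the input
Ct = Ki_true * int_Cp + V * Cp                     # irreversible-uptake tissue curve

x = int_Cp / Cp                                    # "Patlak time"
y = Ct / Cp
slope, intercept = np.polyfit(x[10:], y[10:], 1)   # fit the late, linear portion
print(f"estimated Ki = {slope:.4f}, V = {intercept:.3f}")
```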

  7. Flow adjustment inside homogeneous canopies after a leading edge – An analytical approach backed by LES

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kroniger, Konstantin; Banerjee, Tirtha; De Roo, Frederik

    A two-dimensional analytical model for describing the mean flow behavior inside a vegetation canopy after a leading edge in neutral conditions was developed and tested by means of large eddy simulations (LES) employing the LES code PALM. The analytical model is developed for the region directly after the canopy edge, the adjustment region, where one-dimensional canopy models fail due to the sharp change in roughness. The derivation of this adjustment region model is based on an analytic solution of the two-dimensional Reynolds averaged Navier–Stokes equation in neutral conditions for a canopy with constant plant area density (PAD). The main assumptions for solving the governing equations are the separability of the velocity components with respect to the spatial variables and the neglect of the Reynolds stress gradients. These two assumptions are verified by means of LES. To determine the emerging model parameters, a simultaneous fitting scheme was applied to the velocity and pressure data of a reference LES simulation. Furthermore, a sensitivity analysis of the adjustment region model, equipped with the previously calculated parameters, was performed by varying the three relevant lengths, the canopy height (h), the canopy length, and the adjustment length (Lc), in additional LES. Even if the model parameters are, in general, functions of h/Lc, it was found that the model is capable of predicting the flow quantities in various cases when using constant parameters. Subsequently, the adjustment region model is combined with the one-dimensional model of Massman, which is applicable for the interior of the canopy, to attain an analytical model capable of describing the mean flow for the full canopy domain. Finally, the model is tested against an analytical model based on a linearization approach.

  8. Flow adjustment inside homogeneous canopies after a leading edge – An analytical approach backed by LES

    DOE PAGES

    Kroniger, Konstantin; Banerjee, Tirtha; De Roo, Frederik; ...

    2017-10-06

    A two-dimensional analytical model for describing the mean flow behavior inside a vegetation canopy after a leading edge in neutral conditions was developed and tested by means of large eddy simulations (LES) employing the LES code PALM. The analytical model is developed for the region directly after the canopy edge, the adjustment region, where one-dimensional canopy models fail due to the sharp change in roughness. The derivation of this adjustment region model is based on an analytic solution of the two-dimensional Reynolds averaged Navier–Stokes equation in neutral conditions for a canopy with constant plant area density (PAD). The main assumptions for solving the governing equations are the separability of the velocity components with respect to the spatial variables and the neglect of the Reynolds stress gradients. These two assumptions are verified by means of LES. To determine the emerging model parameters, a simultaneous fitting scheme was applied to the velocity and pressure data of a reference LES simulation. Furthermore, a sensitivity analysis of the adjustment region model, equipped with the previously calculated parameters, was performed by varying the three relevant lengths, the canopy height (h), the canopy length, and the adjustment length (Lc), in additional LES. Even if the model parameters are, in general, functions of h/Lc, it was found that the model is capable of predicting the flow quantities in various cases when using constant parameters. Subsequently, the adjustment region model is combined with the one-dimensional model of Massman, which is applicable for the interior of the canopy, to attain an analytical model capable of describing the mean flow for the full canopy domain. Finally, the model is tested against an analytical model based on a linearization approach.

  9. 34 CFR 668.209 - Uncorrected data adjustments.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 34 Education 3 2010-07-01 2010-07-01 false Uncorrected data adjustments. 668.209 Section 668.209 Education Regulations of the Offices of the Department of Education (Continued) OFFICE OF POSTSECONDARY EDUCATION, DEPARTMENT OF EDUCATION STUDENT ASSISTANCE GENERAL PROVISIONS Cohort Default Rates § 668.209...

  10. 34 CFR 668.190 - Uncorrected data adjustments.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 34 Education 3 2010-07-01 2010-07-01 false Uncorrected data adjustments. 668.190 Section 668.190 Education Regulations of the Offices of the Department of Education (Continued) OFFICE OF POSTSECONDARY EDUCATION, DEPARTMENT OF EDUCATION STUDENT ASSISTANCE GENERAL PROVISIONS Two Year Cohort Default Rates § 668...

  11. Direct Linearization and Adjoint Approaches to Evaluation of Atmospheric Weighting Functions and Surface Partial Derivatives: General Principles, Synergy and Areas of Application

    NASA Technical Reports Server (NTRS)

    Ustino, Eugene A.

    2006-01-01

    This slide presentation reviews the observable radiances as functions of atmospheric and surface parameters, presents the mathematics of atmospheric weighting functions (WFs) and surface partial derivatives (PDs), and presents the equation of the forward radiative transfer (RT) problem. For non-scattering atmospheres this can be done analytically, and all WFs and PDs can be computed analytically using the direct linearization approach. For scattering atmospheres, in the general case, the solution of the forward RT problem can be obtained only numerically, but only two numerical solutions are needed, one of the forward RT problem and one of the adjoint RT problem, to compute all WFs and PDs of interest. In this presentation we discuss applications of both the linearization and adjoint approaches.
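
    The adjoint argument above can be summarized with the standard perturbation identity for a linear transport problem; the notation below is assumed for illustration and is not taken from the presentation. If the forward problem is L I = S and the observed radiance is the functional I_obs = <R, I>, then one adjoint solve L† I† = R gives every partial derivative:

```latex
% Generic adjoint sensitivity identity (notation assumed, not from the presentation):
% forward problem L I = S, observable I_obs = <R, I>, adjoint problem L^{\dagger} I^{\dagger} = R.
\frac{\partial I_{\mathrm{obs}}}{\partial x_j}
\;=\;
\left\langle I^{\dagger},\;
\frac{\partial S}{\partial x_j} \;-\; \frac{\partial L}{\partial x_j}\, I
\right\rangle ,
```

    so one forward solve and one adjoint solve yield all weighting functions and surface partial derivatives.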

  12. Advanced statistics: linear regression, part II: multiple linear regression.

    PubMed

    Marill, Keith A

    2004-01-01

    The applications of simple linear regression in medical research are limited, because in most situations, there are multiple relevant predictor variables. Univariate statistical techniques such as simple linear regression use a single predictor variable, and although they may be mathematically correct, they can be clinically misleading. Multiple linear regression is a mathematical technique used to model the relationship between multiple independent predictor variables and a single dependent outcome variable. It is used in medical research to model observational data, as well as in diagnostic and therapeutic studies in which the outcome is dependent on more than one factor. Although the technique generally is limited to data that can be expressed with a linear function, it benefits from a well-developed mathematical framework that yields unique solutions and exact confidence intervals for regression coefficients. Building on Part I of this series, this article acquaints the reader with some of the important concepts in multiple regression analysis. These include multicollinearity, interaction effects, and an expansion of the discussion of inference testing, leverage, and variable transformations to multivariate models. Examples from the first article in this series are expanded on using a primarily graphic, rather than mathematical, approach. The importance of the relationships among the predictor variables and the dependence of the multivariate model coefficients on the choice of these variables are stressed. Finally, concepts in regression model building are discussed.
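
    As a concrete illustration of the technique (not drawn from the article), the following sketch fits a multiple linear regression with two hypothetical predictors using statsmodels; the variable names and data are invented.

```python
# Minimal sketch (synthetic data): multiple linear regression with two
# hypothetical predictors (age, dose) for a continuous outcome.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 200
age = rng.normal(60, 10, n)
dose = rng.normal(50, 15, n)
outcome = 2.0 + 0.05 * age - 0.03 * dose + rng.normal(0, 1, n)

X = sm.add_constant(np.column_stack([age, dose]))  # intercept + predictors
model = sm.OLS(outcome, X).fit()
print(model.summary())      # coefficients, t-tests, and exact confidence intervals
print(model.conf_int())     # CIs for the regression coefficients
```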

  13. Return to work from long-term sick leave: a six-year prospective study of the importance of adjustment latitudes at work and home.

    PubMed

    Dellve, Lotta; Fallman, Sara L; Ahlstrom, Linda

    2016-01-01

    The aim was to investigate the long-term importance of adjustment latitude for increased work ability and return to work among female human service workers on long-term sick leave. A cohort of female human service workers on long-term sick leave (>60 days) was given a questionnaire four times (0, 6, 12, 60 months). Linear mixed models were used for longitudinal analysis of the repeated measurements of work ability and return to work. Having a higher level of adjustment latitude was associated with both increased work ability and return to work. Adjustments related to work pace were strongly associated with increased work ability, as were adjustments to the work place. Having individual opportunities for taking short breaks and a general acceptance of taking short breaks were associated with increased work ability. At home, a higher level of responsibility for household work was related to increased work ability and return to work. Individuals with possibilities for adjustment latitude, especially pace and place at work, and an acceptance of taking breaks showed a greater increase in work ability over time and greater overall work ability compared with individuals who did not have such opportunities. This study highlights the importance of opportunities for adjustment latitude at work to increase work ability and return to work among female human service workers who have been on long-term sick leave. The results support push and pull theories for individual decision-making on return to work.
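
    The longitudinal analysis described above rests on linear mixed models for repeated measurements. A minimal sketch of such a random-intercept model, fitted to synthetic data with hypothetical variable names rather than the study's questionnaire items, is:

```python
# Minimal sketch (hypothetical data, not the study's): a random-intercept
# linear mixed model for repeated measurements of work ability over time.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
subjects = np.repeat(np.arange(80), 4)              # 80 workers, 4 visits each
months = np.tile([0, 6, 12, 60], 80)
latitude = np.repeat(rng.integers(0, 2, 80), 4)     # adjustment latitude yes/no
ability = (4 + 0.01 * months + 1.5 * latitude
           + np.repeat(rng.normal(0, 1, 80), 4)     # subject-level random effect
           + rng.normal(0, 0.5, 320))               # residual noise

df = pd.DataFrame({"subject": subjects, "months": months,
                   "latitude": latitude, "ability": ability})
m = smf.mixedlm("ability ~ months + latitude", df, groups=df["subject"]).fit()
print(m.summary())
```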

  14. Signal location using generalized linear constraints

    NASA Astrophysics Data System (ADS)

    Griffiths, Lloyd J.; Feldman, D. D.

    1992-01-01

    This report has presented a two-part method for estimating the directions of arrival (DOAs) of uncorrelated narrowband sources when there are arbitrary phase errors and angle-independent gain errors. The signal steering vectors are estimated in the first part of the method; in the second part, the arrival directions are estimated. It should be noted that the second part of the method can be tailored to incorporate additional information about the nature of the phase errors. For example, if the phase errors are known to be caused solely by element misplacement, the element locations can be estimated concurrently with the DOAs by trying to match the theoretical steering vectors to the estimated ones. Simulation results suggest that, for general perturbation, the method can resolve closely spaced sources under conditions for which a standard high-resolution DOA method such as MUSIC fails.
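
    For context, the standard high-resolution baseline named above, MUSIC, can be sketched as follows for an ideal 8-element, half-wavelength uniform linear array; this is not the authors' two-part calibration method, only the conventional estimator they compare against, on invented data.

```python
# Sketch of the standard MUSIC baseline (not the report's method), assuming an
# ideal 8-element, half-wavelength-spaced uniform linear array and two sources.
import numpy as np
from scipy.signal import find_peaks

M, d, snapshots = 8, 0.5, 500                   # sensors, spacing (wavelengths), snapshots
true_doas = np.deg2rad([-10.0, 12.0])
rng = np.random.default_rng(2)

def steering(theta):
    """M x len(theta) matrix of array steering vectors."""
    return np.exp(2j * np.pi * d * np.arange(M)[:, None] * np.sin(theta))

A = steering(true_doas)
S = rng.normal(size=(2, snapshots)) + 1j * rng.normal(size=(2, snapshots))
noise = 0.1 * (rng.normal(size=(M, snapshots)) + 1j * rng.normal(size=(M, snapshots)))
X = A @ S + noise

R = X @ X.conj().T / snapshots                  # sample covariance matrix
_, V = np.linalg.eigh(R)                        # eigenvectors, ascending eigenvalues
En = V[:, : M - 2]                              # noise subspace (2 sources assumed)

grid = np.deg2rad(np.linspace(-90, 90, 1801))
spectrum = 1.0 / np.sum(np.abs(En.conj().T @ steering(grid)) ** 2, axis=0)
peaks, _ = find_peaks(spectrum)
top_two = peaks[np.argsort(spectrum[peaks])[-2:]]
print("estimated DOAs (deg):", np.sort(np.rad2deg(grid[top_two])))
```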

  15. ELAS: A general-purpose computer program for the equilibrium problems of linear structures. Volume 2: Documentation of the program. [subroutines and flow charts

    NASA Technical Reports Server (NTRS)

    Utku, S.

    1969-01-01

    A general purpose digital computer program for the in-core solution of linear equilibrium problems of structural mechanics is documented. The program requires minimum input for the description of the problem. The solution is obtained by means of the displacement method and the finite element technique. Almost any geometry and structure may be handled because of the availability of linear, triangular, quadrilateral, tetrahedral, hexahedral, conical, triangular torus, and quadrilateral torus elements. The assumption of piecewise linear deflection distribution ensures monotonic convergence of the deflections from the stiffer side with decreasing mesh size. The stresses are obtained from least-squares best-fit strain tensors at the mesh points where the deflections are computed. The selection of local coordinate systems whenever necessary is automatic. The core memory is used by means of dynamic memory allocation, an optional mesh-point relabelling scheme, and imposition of the boundary conditions during assembly.

  16. 34 CFR 668.210 - New data adjustments.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 34 Education 3 2012-07-01 2012-07-01 false New data adjustments. 668.210 Section 668.210 Education Regulations of the Offices of the Department of Education (Continued) OFFICE OF POSTSECONDARY EDUCATION, DEPARTMENT OF EDUCATION STUDENT ASSISTANCE GENERAL PROVISIONS Cohort Default Rates § 668.210 New data...

  17. 34 CFR 668.191 - New data adjustments.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 34 Education 3 2012-07-01 2012-07-01 false New data adjustments. 668.191 Section 668.191 Education Regulations of the Offices of the Department of Education (Continued) OFFICE OF POSTSECONDARY EDUCATION, DEPARTMENT OF EDUCATION STUDENT ASSISTANCE GENERAL PROVISIONS Two Year Cohort Default Rates § 668.191 New...

  18. 34 CFR 668.210 - New data adjustments.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 34 Education 3 2011-07-01 2011-07-01 false New data adjustments. 668.210 Section 668.210 Education Regulations of the Offices of the Department of Education (Continued) OFFICE OF POSTSECONDARY EDUCATION, DEPARTMENT OF EDUCATION STUDENT ASSISTANCE GENERAL PROVISIONS Cohort Default Rates § 668.210 New data...

  19. 34 CFR 668.191 - New data adjustments.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 34 Education 3 2011-07-01 2011-07-01 false New data adjustments. 668.191 Section 668.191 Education Regulations of the Offices of the Department of Education (Continued) OFFICE OF POSTSECONDARY EDUCATION, DEPARTMENT OF EDUCATION STUDENT ASSISTANCE GENERAL PROVISIONS Two Year Cohort Default Rates § 668.191 New...

  20. 34 CFR 668.210 - New data adjustments.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 34 Education 3 2010-07-01 2010-07-01 false New data adjustments. 668.210 Section 668.210 Education Regulations of the Offices of the Department of Education (Continued) OFFICE OF POSTSECONDARY EDUCATION, DEPARTMENT OF EDUCATION STUDENT ASSISTANCE GENERAL PROVISIONS Cohort Default Rates § 668.210 New data...

  1. 34 CFR 668.191 - New data adjustments.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 34 Education 3 2010-07-01 2010-07-01 false New data adjustments. 668.191 Section 668.191 Education Regulations of the Offices of the Department of Education (Continued) OFFICE OF POSTSECONDARY EDUCATION, DEPARTMENT OF EDUCATION STUDENT ASSISTANCE GENERAL PROVISIONS Two Year Cohort Default Rates § 668.191 New...

  2. Order-constrained linear optimization.

    PubMed

    Tidwell, Joe W; Dougherty, Michael R; Chrabaszcz, Jeffrey S; Thomas, Rick P

    2017-11-01

    Despite the fact that data and theories in the social, behavioural, and health sciences are often represented on an ordinal scale, there has been relatively little emphasis on modelling ordinal properties. The most common analytic framework used in psychological science is the general linear model, whose variants include ANOVA, MANOVA, and ordinary linear regression. While these methods are designed to provide the best fit to the metric properties of the data, they are not designed to maximally model ordinal properties. In this paper, we develop an order-constrained linear least-squares (OCLO) optimization algorithm that maximizes the linear least-squares fit to the data conditional on maximizing the ordinal fit based on Kendall's τ. The algorithm builds on the maximum rank correlation estimator (Han, 1987, Journal of Econometrics, 35, 303) and the general monotone model (Dougherty & Thomas, 2012, Psychological Review, 119, 321). Analyses of simulated data indicate that when modelling data that adhere to the assumptions of ordinary least squares, OCLO shows minimal bias, little increase in variance, and almost no loss in out-of-sample predictive accuracy. In contrast, under conditions in which data include a small number of extreme scores (fat-tailed distributions), OCLO shows less bias and variance, and substantially better out-of-sample predictive accuracy, even when the outliers are removed. We show that the advantages of OCLO over ordinary least squares in predicting new observations hold across a variety of scenarios in which researchers must decide to retain or eliminate extreme scores when fitting data. © 2017 The British Psychological Society.
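
    The published OCLO algorithm builds on the maximum rank correlation estimator; the following is only a crude two-stage illustration of the underlying idea, maximizing Kendall's τ over weight directions by random search and then recovering scale and offset by least squares. All data and settings are invented.

```python
# Crude two-stage illustration of the OCLO idea (not the published algorithm):
# search weight directions for maximal Kendall's tau between Xw and y, then
# recover scale and intercept by ordinary least squares.
import numpy as np
from scipy.stats import kendalltau

rng = np.random.default_rng(3)
n, p = 100, 3
X = rng.normal(size=(n, p))
y = X @ np.array([1.0, 0.5, -0.7]) + rng.standard_t(df=2, size=n)   # fat-tailed noise

best_tau, best_w = -np.inf, None
for _ in range(2000):                          # naive random search over directions
    w = rng.normal(size=p)
    w /= np.linalg.norm(w)
    tau = kendalltau(X @ w, y)[0]              # ordinal fit of the linear index
    if tau > best_tau:
        best_tau, best_w = tau, w

slope, intercept = np.polyfit(X @ best_w, y, 1)   # metric fit, conditional on the ordering
print("max Kendall tau:", round(best_tau, 3))
print("rescaled weights:", np.round(slope * best_w, 3), " intercept:", round(intercept, 3))
```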

  3. Symposium on General Linear Model Approach to the Analysis of Experimental Data in Educational Research (Athens, Georgia, June 29-July 1, 1967). Final Report.

    ERIC Educational Resources Information Center

    Bashaw, W. L., Ed.; Findley, Warren G., Ed.

    This volume contains the five major addresses and subsequent discussion from the Symposium on the General Linear Models Approach to the Analysis of Experimental Data in Educational Research, which was held in 1967 in Athens, Georgia. The symposium was designed to produce systematic information, including new methodology, for dissemination to the…

  4. Non-linear behavior of fiber composite laminates

    NASA Technical Reports Server (NTRS)

    Hashin, Z.; Bagchi, D.; Rosen, B. W.

    1974-01-01

    The non-linear behavior of fiber composite laminates which results from lamina non-linear characteristics was examined. The analysis uses a Ramberg-Osgood representation of the lamina transverse and shear stress strain curves in conjunction with deformation theory to describe the resultant laminate non-linear behavior. A laminate having an arbitrary number of oriented layers and subjected to a general state of membrane stress was treated. Parametric results and comparison with experimental data and prior theoretical results are presented.
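
    A Ramberg-Osgood lamina curve of the kind referred to above can be written, in one common parameterization, as γ = (τ/G)[1 + α(τ/τ0)^(n-1)]. The sketch below evaluates this form with illustrative constants; the paper's fitted values and exact parameterization may differ.

```python
# Sketch of a common Ramberg-Osgood form for a lamina shear stress-strain
# curve, gamma = (tau/G) * (1 + alpha*(tau/tau0)**(n-1)), with made-up constants.
import numpy as np

G = 5.0e9        # initial shear modulus, Pa (illustrative)
tau0 = 40.0e6    # reference (yield-like) stress, Pa (illustrative)
alpha = 3.0 / 7.0
n = 5.0          # hardening exponent (illustrative)

tau = np.linspace(0.0, 60e6, 7)                              # applied shear stress, Pa
gamma = (tau / G) * (1.0 + alpha * (tau / tau0) ** (n - 1))  # elastic + non-linear part
for t, g in zip(tau, gamma):
    print(f"tau = {t/1e6:5.1f} MPa   gamma = {g:.4e}")
```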

  5. A Constrained Linear Estimator for Multiple Regression

    ERIC Educational Resources Information Center

    Davis-Stober, Clintin P.; Dana, Jason; Budescu, David V.

    2010-01-01

    "Improper linear models" (see Dawes, Am. Psychol. 34:571-582, "1979"), such as equal weighting, have garnered interest as alternatives to standard regression models. We analyze the general circumstances under which these models perform well by recasting a class of "improper" linear models as "proper" statistical models with a single predictor. We…

  6. Krylov Subspace Methods for Complex Non-Hermitian Linear Systems. Thesis

    NASA Technical Reports Server (NTRS)

    Freund, Roland W.

    1991-01-01

    We consider Krylov subspace methods for the solution of large sparse linear systems Ax = b with complex non-Hermitian coefficient matrices. Such linear systems arise in important applications, such as inverse scattering, numerical solution of time-dependent Schrodinger equations, underwater acoustics, eddy current computations, numerical computations in quantum chromodynamics, and numerical conformal mapping. Typically, the resulting coefficient matrices A exhibit special structures, such as complex symmetry, or they are shifted Hermitian matrices. In this paper, we first describe a Krylov subspace approach with iterates defined by a quasi-minimal residual property, the QMR method, for solving general complex non-Hermitian linear systems. Then, we study special Krylov subspace methods designed for the two families of complex symmetric and shifted Hermitian linear systems, respectively. We also include some results concerning the obvious approach to general complex linear systems by solving equivalent real linear systems for the real and imaginary parts of x. Finally, numerical experiments for linear systems arising from the complex Helmholtz equation are reported.
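
    SciPy exposes a QMR iteration; the sketch below applies it to a complex symmetric (hence non-Hermitian) system, using a complex-shifted 1-D Laplacian as a simple stand-in for the Helmholtz-type matrices mentioned in the abstract.

```python
# Minimal sketch: solving a complex symmetric, non-Hermitian linear system with
# SciPy's QMR iteration. The complex-shifted 1-D Laplacian is only an
# illustrative stand-in for the Helmholtz-type matrices in the abstract.
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import qmr

n = 400
shift = 0.5 + 0.5j                              # complex shift (illustrative)
main = (2.0 + shift) * np.ones(n)
off = -np.ones(n - 1)
A = sp.diags([off, main, off], [-1, 0, 1], format="csr")   # complex symmetric, A != A^H

b = np.ones(n, dtype=complex)
x, info = qmr(A, b)                             # info == 0 signals convergence
print("converged:", info == 0,
      "relative residual:", np.linalg.norm(b - A @ x) / np.linalg.norm(b))
```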

  7. Parenting Practices, Child Adjustment, and Family Diversity.

    ERIC Educational Resources Information Center

    Amato, Paul R.; Fowler, Frieda

    2002-01-01

    Uses data from the National Survey of Families and Households to test the generality of the links between parenting practices and child outcomes. Parents' reports of support, monitoring, and harsh punishment were associated in the expected direction with parents' reports of children's adjustment, school grades, and behavior problems, and with…

  8. Light-adjustable lens.

    PubMed Central

    Schwartz, Daniel M

    2003-01-01

    PURPOSE: First, to determine whether a silicone light-adjustable intraocular lens (IOL) can be fabricated and adjusted precisely with a light delivery device (LDD). Second, to determine the biocompatibility of an adjustable IOL and whether the lens can be adjusted precisely in vivo. METHODS: After fabrication of a light-adjustable silicone formulation, IOLs were made and tested in vitro for cytotoxicity, leaching, precision of adjustment, optical quality after adjustment, and mechanical properties. Light-adjustable IOLs were then tested in vivo for biocompatibility and precision of adjustment in a rabbit model. In collaboration with Zeiss-Meditec, a digital LDD was developed and tested to correct for higher-order aberrations in light-adjustable IOLs. RESULTS: The results establish that a biocompatible silicone IOL can be fabricated and adjusted using safe levels of light. There was no evidence of cytotoxicity or leaching. Testing of mechanical properties revealed no significant differences from commercial controls. Implantation of light-adjustable lenses in rabbits demonstrated excellent biocompatibility after 6 months, comparable to a commercially available IOL. In vivo spherical (hyperopic and myopic) adjustment in rabbits was achieved using an analog light delivery system. The digital light delivery system was tested and achieved correction of higher-order aberrations. CONCLUSION: A silicone light-adjustable IOL and LDD have been developed to enable postoperative, noninvasive adjustment of lens power. The ability to correct higher-order aberrations in these materials has broad potential applicability for optimization of vision in patients undergoing cataract and refractive surgery. PMID:14971588

  9. 37 CFR 258.1 - General.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... Patents, Trademarks, and Copyrights COPYRIGHT OFFICE, LIBRARY OF CONGRESS COPYRIGHT ARBITRATION ROYALTY PANEL RULES AND PROCEDURES ADJUSTMENT OF ROYALTY FEE FOR SECONDARY TRANSMISSIONS BY SATELLITE CARRIERS § 258.1 General. This part 258 adjusts the rates of royalties payable under the compulsory license for...

  10. 37 CFR 256.1 - General.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... Patents, Trademarks, and Copyrights COPYRIGHT OFFICE, LIBRARY OF CONGRESS COPYRIGHT ARBITRATION ROYALTY PANEL RULES AND PROCEDURES ADJUSTMENT OF ROYALTY FEE FOR CABLE COMPULSORY LICENSE § 256.1 General. This part establishes adjusted terms and rates for royalty payments in accordance with the provisions of 17...

  11. Hospital costs associated with surgical site infections in general and vascular surgery patients.

    PubMed

    Boltz, Melissa M; Hollenbeak, Christopher S; Julian, Kathleen G; Ortenzi, Gail; Dillon, Peter W

    2011-11-01

    Although much has been written about excess cost and duration of stay (DOS) associated with surgical site infections (SSIs) after cardiothoracic surgery, less has been reported after vascular and general surgery. We used data from the National Surgical Quality Improvement Program (NSQIP) to estimate the total cost and DOS associated with SSIs in patients undergoing general and vascular surgery. Using standard NSQIP practices, data were collected on patients undergoing general and vascular surgery at a single academic center between 2007 and 2009 and were merged with fully loaded operating costs obtained from the hospital accounting database. Logistic regression was used to determine which patient and preoperative variables influenced the occurrence of SSIs. After adjusting for patient characteristics, costs and DOS were fit to linear regression models to determine the effect of SSIs. Of the 2,250 general and vascular surgery patients sampled, SSIs were observed in 186 inpatients. Predisposing factors of SSIs were male sex, insulin-dependent diabetes, steroid use, wound classification, and operative time (P < .05). After adjusting for those characteristics, the total excess cost and DOS attributable to SSIs were $10,497 (P < .0001) and 4.3 days (P < .0001), respectively. SSIs complicating general and vascular surgical procedures share many risk factors with SSIs after cardiothoracic surgery. Although the excess costs and DOS associated with SSIs after general and vascular surgery are somewhat less, they still represent substantial financial and opportunity costs to hospitals and suggest, along with the implications for patient care, a continuing need for cost-effective quality improvement and programs of infection prevention. Copyright © 2011 Mosby, Inc. All rights reserved.

  12. 78 FR 24336 - Rules of Practice and Procedure; Adjusting Civil Money Penalties for Inflation

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-04-25

    ... courts. \\4\\ The CPI is published by the Department of Labor, Bureau of Statistics, and is available at.... Mathematical Calculation In general, the adjustment calculation required by the Inflation Adjustment Act is... adjusted in 2009. According to the Bureau of Labor Statistics, the CPI for June 1996 and June 2009 was 156...

  13. The Linear Bias in the Zeldovich Approximation and a Relation between the Number Density and the Linear Bias of Dark Halos

    NASA Astrophysics Data System (ADS)

    Fan, Zuhui

    2000-01-01

    The linear bias of the dark halos from a model under the Zeldovich approximation is derived and compared with the fitting formula of simulation results. While qualitatively similar to the Press-Schechter formula, this model gives a better description for the linear bias around the turnaround point. This advantage, however, may be compromised by the large uncertainty of the actual behavior of the linear bias near the turnaround point. For a broad class of structure formation models in the cold dark matter framework, a general relation exists between the number density and the linear bias of dark halos. This relation can be readily tested by numerical simulations. Thus, instead of laboriously checking these models one by one, numerical simulation studies can falsify a whole category of models. The general validity of this relation is important in identifying key physical processes responsible for the large-scale structure formation in the universe.

  14. 45 CFR 153.630 - Data validation requirements when HHS operates risk adjustment.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 45 Public Welfare 1 2013-10-01 2013-10-01 false Data validation requirements when HHS operates... Program § 153.630 Data validation requirements when HHS operates risk adjustment. (a) General requirement... performed on its risk adjustment data as described in this section. (b) Initial validation audit. (1) An...

  15. 45 CFR 153.630 - Data validation requirements when HHS operates risk adjustment.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 45 Public Welfare 1 2014-10-01 2014-10-01 false Data validation requirements when HHS operates... Program § 153.630 Data validation requirements when HHS operates risk adjustment. (a) General requirement... performed on its risk adjustment data as described in this section. (b) Initial validation audit. (1) An...

  16. 37 CFR 1.704 - Reduction of period of adjustment of patent term.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 37 Patents, Trademarks, and Copyrights 1 2014-07-01 2014-07-01 false Reduction of period of adjustment of patent term. 1.704 Section 1.704 Patents, Trademarks, and Copyrights UNITED STATES PATENT AND TRADEMARK OFFICE, DEPARTMENT OF COMMERCE GENERAL RULES OF PRACTICE IN PATENT CASES Adjustment and Extension...

  17. Surgeon length of service and risk-adjusted outcomes: linked observational analysis of the UK National Adult Cardiac Surgery Audit Registry and General Medical Council Register.

    PubMed

    Hickey, Graeme L; Grant, Stuart W; Freemantle, Nick; Cunningham, David; Munsch, Christopher M; Livesey, Steven A; Roxburgh, James; Buchan, Iain; Bridgewater, Ben

    2014-09-01

    To explore the relationship between in-hospital mortality following adult cardiac surgery and the time since primary clinical qualification for the responsible consultant cardiac surgeon (a proxy for experience). Retrospective analysis of prospectively collected national registry data over a 10-year period using mixed-effects multiple logistic regression modelling. Surgeon experience was defined as the time between the date of surgery and award of primary clinical qualification. UK National Health Service hospitals performing cardiac surgery between January 2003 and December 2012. All patients undergoing coronary artery bypass grafts and/or valve surgery under the care of a consultant cardiac surgeon. All-cause in-hospital mortality. A total of 292,973 operations performed by 273 consultant surgeons (with lengths of service from 11.2 to 42.0 years) were included. Crude mortality increased approximately linearly until 33 years of service, before decreasing. After adjusting for case-mix and year of surgery, there remained a statistically significant (p=0.002) association between length of service and in-hospital mortality (odds ratio 1.013; 95% CI 1.005-1.021 for each year of 'experience'). Consultant cardiac surgeons take on increasingly complex surgery as they gain experience. With this progression, the incidence of adverse outcomes is expected to increase, as is demonstrated in this study. After adjusting for case-mix using the EuroSCORE, we observed an increased risk of mortality in patients operated on by longer-serving surgeons. This finding may reflect under-adjustment for risk, unmeasured confounding or a real association. Further research into outcomes over the time course of surgeons' careers is required. © The Royal Society of Medicine.

  18. Linear systems with structure group and their feedback invariants

    NASA Technical Reports Server (NTRS)

    Martin, C.; Hermann, R.

    1977-01-01

    A general method described by Hermann and Martin (1976) for the study of the feedback invariants of linear systems is considered. It is shown that this method, which makes use of ideas of topology and algebraic geometry, is very useful in the investigation of feedback problems for which the classical methods are not suitable. The transfer function as a curve in the Grassmanian is examined. The general concepts studied in the context of specific systems and applications are organized in terms of the theory of Lie groups and algebraic geometry. Attention is given to linear systems which have a structure group, linear mechanical systems, and feedback invariants. The investigation shows that Lie group techniques are powerful and useful tools for analysis of the feedback structure of linear systems.

  19. A General Method for Solving Systems of Non-Linear Equations

    NASA Technical Reports Server (NTRS)

    Nachtsheim, Philip R.; Deiss, Ron (Technical Monitor)

    1995-01-01

    The method of steepest descent is modified so that accelerated convergence is achieved near a root. It is assumed that the function of interest can be approximated near a root by a quadratic form. An eigenvector of the quadratic form is found by evaluating the function and its gradient at an arbitrary point and another suitably selected point. The terminal point of the eigenvector is chosen to lie on the line segment joining the two points. The terminal point found lies on an axis of the quadratic form. The selection of a suitable step size at this point leads directly to the root in the direction of steepest descent in a single step. Newton's root finding method not infrequently diverges if the starting point is far from the root. However, the current method in these regions merely reverts to the method of steepest descent with an adaptive step size. The current method's performance should match that of the Levenberg-Marquardt root finding method since they both share the ability to converge from a starting point far from the root and both exhibit quadratic convergence near a root. The Levenberg-Marquardt method requires storage for coefficients of linear equations. The current method, which does not require the solution of linear equations, requires more time for additional function and gradient evaluations. The classic trade-off of time for space separates the two methods.
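
    The fallback behavior described above, steepest descent on F(x) = ½‖f(x)‖² with an adaptive step size, can be sketched as follows; the example system and the simple backtracking rule are illustrative and do not reproduce the report's accelerated eigenvector construction.

```python
# Sketch of steepest descent with an adaptive (backtracking) step applied to
# F(x) = 0.5*||f(x)||^2 for a small non-linear system; this is only the
# fallback component described in the abstract, not the full accelerated scheme.
import numpy as np

def f(x):                             # example system: circle intersected with a parabola
    return np.array([x[0]**2 + x[1]**2 - 4.0,
                     x[1] - x[0]**2])

def jac(x):
    return np.array([[2.0 * x[0], 2.0 * x[1]],
                     [-2.0 * x[0], 1.0]])

def solve(x, tol=1e-10, max_iter=500):
    for _ in range(max_iter):
        r = f(x)
        F = 0.5 * r @ r
        if F < tol:
            break
        g = jac(x).T @ r              # gradient of F
        step = 1.0
        while 0.5 * np.sum(f(x - step * g)**2) >= F and step > 1e-12:
            step *= 0.5               # adaptive step: backtrack until F decreases
        x = x - step * g
    return x

root = solve(np.array([2.0, 2.0]))
print(root, f(root))                  # residual should be near zero
```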

  20. Computer-aided linear-circuit design.

    NASA Technical Reports Server (NTRS)

    Penfield, P.

    1971-01-01

    Usually computer-aided design (CAD) refers to programs that analyze circuits conceived by the circuit designer. Among the services such programs should perform are direct network synthesis, analysis, optimization of network parameters, formatting, storage of miscellaneous data, and related calculations. The program should be embedded in a general-purpose conversational language such as BASIC, JOSS, or APL. Such a program is MARTHA, a general-purpose linear-circuit analyzer embedded in APL.

  1. 14 CFR Appendix - Example of SIFL Adjustment

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 14 Aeronautics and Space 4 2012-01-01 2012-01-01 false Example of SIFL Adjustment Aeronautics and Space OFFICE OF THE SECRETARY, DEPARTMENT OF TRANSPORTATION (AVIATION PROCEEDINGS) POLICY STATEMENTS STATEMENTS OF GENERAL POLICY Policies Relating to Rates and Tariffs Treatment of deferred Federal income taxes for rate purposes. Pt. 399, Subpt. C,...

  2. 14 CFR Appendix - Example of SIFL Adjustment

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 14 Aeronautics and Space 4 2013-01-01 2013-01-01 false Example of SIFL Adjustment Aeronautics and Space OFFICE OF THE SECRETARY, DEPARTMENT OF TRANSPORTATION (AVIATION PROCEEDINGS) POLICY STATEMENTS STATEMENTS OF GENERAL POLICY Policies Relating to Rates and Tariffs Treatment of deferred Federal income taxes for rate purposes. Pt. 399, Subpt. C,...

  3. 14 CFR Appendix - Example of SIFL Adjustment

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 14 Aeronautics and Space 4 2010-01-01 2010-01-01 false Example of SIFL Adjustment Aeronautics and Space OFFICE OF THE SECRETARY, DEPARTMENT OF TRANSPORTATION (AVIATION PROCEEDINGS) POLICY STATEMENTS STATEMENTS OF GENERAL POLICY Policies Relating to Rates and Tariffs Treatment of deferred Federal income taxes for rate purposes. Pt. 399, Subpt. C,...

  4. Elliptically polarizing adjustable phase insertion device

    DOEpatents

    Carr, Roger

    1995-01-01

    An insertion device for extracting polarized electromagnetic energy from a beam of particles is disclosed. The insertion device includes four linear arrays of magnets which are aligned with the particle beam. The magnetic field strength to which the particles are subjected is adjusted by altering the relative alignment of the arrays in a direction parallel to that of the particle beam. Both the energy and polarization of the extracted energy may be varied by moving the relevant arrays parallel to the beam direction. The present invention requires a substantially simpler and more economical superstructure than insertion devices in which the magnetic field strength is altered by changing the gap between arrays of magnets.

  5. Elliptically polarizing adjustable phase insertion device

    DOEpatents

    Carr, R.

    1995-01-17

    An insertion device for extracting polarized electromagnetic energy from a beam of particles is disclosed. The insertion device includes four linear arrays of magnets which are aligned with the particle beam. The magnetic field strength to which the particles are subjected is adjusted by altering the relative alignment of the arrays in a direction parallel to that of the particle beam. Both the energy and polarization of the extracted energy may be varied by moving the relevant arrays parallel to the beam direction. The present invention requires a substantially simpler and more economical superstructure than insertion devices in which the magnetic field strength is altered by changing the gap between arrays of magnets. 3 figures.

  6. Relationship between neighbourhood socioeconomic position and neighbourhood public green space availability: An environmental inequality analysis in a large German city applying generalized linear models.

    PubMed

    Schüle, Steffen Andreas; Gabriel, Katharina M A; Bolte, Gabriele

    2017-06-01

    The environmental justice framework states that besides environmental burdens also resources may be social unequally distributed both on the individual and on the neighbourhood level. This ecological study investigated whether neighbourhood socioeconomic position (SEP) was associated with neighbourhood public green space availability in a large German city with more than 1 million inhabitants. Two different measures were defined for green space availability. Firstly, percentage of green space within neighbourhoods was calculated with the additional consideration of various buffers around the boundaries. Secondly, percentage of green space was calculated based on various radii around the neighbourhood centroid. An index of neighbourhood SEP was calculated with principal component analysis. Log-gamma regression from the group of generalized linear models was applied in order to consider the non-normal distribution of the response variable. All models were adjusted for population density. Low neighbourhood SEP was associated with decreasing neighbourhood green space availability for buffers of 200 m up to 1000 m around the neighbourhood boundaries. Low neighbourhood SEP was also associated with decreasing green space availability based on catchment areas measured from neighbourhood centroids with different radii (1000 m up to 3000 m). With an increasing radius the strength of the associations decreased. Socially unequally distributed green space may amplify environmental health inequalities in an urban context. Thus, the identification of vulnerable neighbourhoods and population groups plays an important role for epidemiological research and healthy city planning. As a methodical aspect, log-gamma regression offers an adequate parametric modelling strategy for positively distributed environmental variables. Copyright © 2017 Elsevier GmbH. All rights reserved.
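
    A log-gamma regression of the kind used here is a generalized linear model with a Gamma family and log link. The sketch below fits one on synthetic data with hypothetical covariate names; it is not the study's dataset.

```python
# Minimal sketch (synthetic data, not the study's): a Gamma GLM with log link
# ("log-gamma regression") for a positive, right-skewed availability measure.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(4)
n = 300
sep_index = rng.normal(0, 1, n)              # neighbourhood SEP index (hypothetical)
pop_density = rng.normal(0, 1, n)            # adjustment covariate (hypothetical)
mu = np.exp(2.0 + 0.3 * sep_index - 0.1 * pop_density)
green_pct = rng.gamma(shape=5.0, scale=mu / 5.0)       # positive, skewed outcome

X = sm.add_constant(np.column_stack([sep_index, pop_density]))
glm = sm.GLM(green_pct, X, family=sm.families.Gamma(link=sm.families.links.Log()))
res = glm.fit()
print(res.summary())         # exp(coef) gives multiplicative effects on the mean
```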

  7. A Comparative Theoretical and Computational Study on Robust Counterpart Optimization: I. Robust Linear Optimization and Robust Mixed Integer Linear Optimization

    PubMed Central

    Li, Zukui; Ding, Ran; Floudas, Christodoulos A.

    2011-01-01

    Robust counterpart optimization techniques for linear optimization and mixed integer linear optimization problems are studied in this paper. Different uncertainty sets, including those studied in the literature (i.e., interval set; combined interval and ellipsoidal set; combined interval and polyhedral set) and new ones (i.e., adjustable box; pure ellipsoidal; pure polyhedral; combined interval, ellipsoidal, and polyhedral set) are studied in this work and their geometric relationship is discussed. For uncertainty in the left-hand side, right-hand side, and objective function of the optimization problems, robust counterpart optimization formulations induced by those different uncertainty sets are derived. Numerical studies are performed to compare the solutions of the robust counterpart optimization models, and applications to refinery production planning and batch process scheduling problems are presented. PMID:21935263
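
    For the simplest of the uncertainty sets discussed, the interval (box) set, the robust counterpart of an uncertain constraint a·x ≤ b with x ≥ 0 replaces each coefficient by its upper bound. The toy sketch below contrasts the nominal and robust solutions; the numbers are invented and unrelated to the paper's refinery or scheduling case studies.

```python
# Sketch of the interval (box) uncertainty-set robust counterpart for a single
# uncertain constraint a.x <= b with x >= 0: the worst case over
# a_j in [abar_j - delta_j, abar_j + delta_j] is (abar + delta).x <= b.
import numpy as np
from scipy.optimize import linprog

c = np.array([-3.0, -2.0])            # maximize 3*x1 + 2*x2  ->  minimize c.x
abar = np.array([2.0, 1.0])           # nominal constraint coefficients
delta = np.array([0.4, 0.2])          # interval half-widths

nominal = linprog(c, A_ub=[abar], b_ub=[10.0], bounds=[(0, None)] * 2)
robust = linprog(c, A_ub=[abar + delta], b_ub=[10.0], bounds=[(0, None)] * 2)
print("nominal optimum:", nominal.x, "objective:", -nominal.fun)
print("robust  optimum:", robust.x, "objective:", -robust.fun)   # more conservative
```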

  8. Linearization instability for generic gravity in AdS spacetime

    NASA Astrophysics Data System (ADS)

    Altas, Emel; Tekin, Bayram

    2018-01-01

    In general relativity, perturbation theory about a background solution fails if the background spacetime has a Killing symmetry and a compact spacelike Cauchy surface. This failure, dubbed linearization instability, shows itself as non-integrability of the perturbative infinitesimal deformation to a finite deformation of the background. Namely, the linearized field equations have spurious solutions which cannot be obtained from the linearization of exact solutions. In practice, one can show the failure of the linear perturbation theory by showing that a certain quadratic (integral) constraint on the linearized solutions is not satisfied. For non-compact Cauchy surfaces the situation is different; for example, Minkowski space, which has a non-compact Cauchy surface, is linearization stable. Here we study the linearization instability in generic metric theories of gravity where Einstein's theory is modified with additional curvature terms. We show that, unlike the case of general relativity, some modified theories show linearization instability about their anti-de Sitter backgrounds even in the non-compact Cauchy surface case. Recent D-dimensional critical and three-dimensional chiral gravity theories are two such examples. This observation sheds light on the paradoxical behavior of vanishing conserved charges (mass, angular momenta) for non-vacuum solutions, such as black holes, in these theories.

  9. A generalization of the Becker model in linear viscoelasticity: creep, relaxation and internal friction

    NASA Astrophysics Data System (ADS)

    Mainardi, Francesco; Masina, Enrico; Spada, Giorgio

    2018-02-01

    We present a new rheological model depending on a real parameter ν ∈ [0, 1], which reduces to the Maxwell body for ν = 0 and to the Becker body for ν = 1. The corresponding creep law is expressed in an integral form in which the exponential function of the Becker model is replaced and generalized by a Mittag-Leffler function of order ν. Then the corresponding non-dimensional creep function and its rate are studied as functions of time for different values of ν in order to visualize the transition from the classical Maxwell body to the Becker body. Based on the hereditary theory of linear viscoelasticity, we also approximate the relaxation function by solving numerically a Volterra integral equation of the second kind. In turn, the relaxation function is shown versus time for different values of ν to visualize again the transition from the classical Maxwell body to the Becker body. Furthermore, we provide a full characterization of the new model by computing, in addition to the creep and relaxation functions, the so-called specific dissipation Q⁻¹ as a function of frequency, which is of particular relevance for geophysical applications.

  10. Parental adjustment and attitudes to parenting after in vitro fertilization.

    PubMed

    Gibson, F L; Ungerer, J A; Tennant, C C; Saunders, D M

    2000-03-01

    To examine the psychosocial and parenthood-specific adjustment and attitudes to parenting at 1 year postpartum of IVF parents. Prospective, controlled study. Volunteers in a teaching hospital environment. Sixty-five primiparous women with singleton IVF pregnancies and their partners, and a control group of 61 similarly aged primiparous women with no history of infertility and their partners. Completion of questionnaires and interviews. Parent reports of general and parenthood-specific adjustment and attitudes to parenting. The IVF mothers tended to report lower self-esteem and less parenting competence than control mothers. Although there were no group differences on protectiveness, IVF mothers saw their children as significantly more vulnerable and "special" compared with controls. The IVF fathers reported significantly lower self-esteem and marital satisfaction, although not less competence in parenting. Both IVF mothers and fathers did not differ from control parents on other measures of general adjustment (mood) or those more specific to parenthood (e.g., attachment to the child and attitudes to child rearing). The IVF parents' adjustment to parenthood is similar to that of naturally conceiving comparison families. Nonetheless, there are minor IVF differences that reflect heightened child-focused concern and less confidence in parenting for mothers, less satisfaction with the marriage for the fathers, and vulnerable self-esteem for both parents.

  11. Wronskian solutions of the T-, Q- and Y-systems related to infinite dimensional unitarizable modules of the general linear superalgebra gl (M | N)

    NASA Astrophysics Data System (ADS)

    Tsuboi, Zengo

    2013-05-01

    In [1] (Z. Tsuboi, Nucl. Phys. B 826 (2010) 399, arXiv:0906.2039), we proposed Wronskian-like solutions of the T-system for the [M,N]-hook of the general linear superalgebra gl(M|N). We have generalized these Wronskian-like solutions to the ones for the general T-hook, which is a union of the [M1,N1]-hook and the [M2,N2]-hook (M = M1 + M2, N = N1 + N2). These solutions are related to Weyl-type supercharacter formulas of infinite dimensional unitarizable modules of gl(M|N). Our solutions also include a Wronskian-like solution discussed in [2] (N. Gromov, V. Kazakov, S. Leurent, Z. Tsuboi, JHEP 1101 (2011) 155, arXiv:1010.2720) in relation to the AdS5/CFT4 spectral problem.

  12. Generalized Heisenberg algebra and (non linear) pseudo-bosons

    NASA Astrophysics Data System (ADS)

    Bagarello, F.; Curado, E. M. F.; Gazeau, J. P.

    2018-04-01

    We propose a deformed version of the generalized Heisenberg algebra by using techniques borrowed from the theory of pseudo-bosons. In particular, this analysis is relevant when non-self-adjoint Hamiltonians are needed to describe a given physical system. We also discuss relations with nonlinear pseudo-bosons. Several examples are discussed.

  13. The brain adjusts grip forces differently according to gravity and inertia: a parabolic flight experiment

    PubMed Central

    White, Olivier

    2015-01-01

    In everyday life, one of the most frequent activities involves accelerating and decelerating an object held in precision grip. In many contexts, humans scale and synchronize their grip force (GF), normal to the finger/object contact, in anticipation of the expected tangential load force (LF), resulting from the combination of the gravitational and the inertial forces. In many of these contexts, GF and LF are linearly coupled. A few studies have examined how we adjust the parameters of this linear relationship, its gain and offset. However, the question remains open as to how the brain adjusts GF when LF is generated by different combinations of weight and inertia. Here, we designed conditions to generate equivalent magnitudes of LF by independently varying mass and movement frequency. In a control experiment, we directly manipulated gravity in parabolic flights, while other factors remained constant. We show with a simple computational approach that, to adjust GF, the brain is sensitive to how LFs are produced at the fingertips. This provides clear evidence that the analysis of the origin of LF is performed centrally, and not only at the periphery. PMID:25717293

  14. ADHD Symptomatology and Adjustment to College in China and the United States

    ERIC Educational Resources Information Center

    Norvilitis, Jill M.; Sun, Ling; Zhang, Jie

    2010-01-01

    This study examined ADHD symptomatology and college adjustment in 420 participants--147 from the United States and 273 from China. It was hypothesized that higher levels of ADHD symptoms in general and the inattentive symptom group in particular would be related to decreased academic and social adjustment, career decision-making self-efficacy, and…

  15. 26 CFR 1.56-1 - Adjustment for the book income of corporations.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 26 Internal Revenue 1 2013-04-01 2013-04-01 false Adjustment for the book income of corporations... TAX INCOME TAXES Tax Surcharge § 1.56-1 Adjustment for the book income of corporations. (a) Computation of the book income adjustment—(1) In general. For taxable years beginning in 1987, 1988, and 1989...

  16. 26 CFR 1.56-1 - Adjustment for the book income of corporations.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 26 Internal Revenue 1 2012-04-01 2012-04-01 false Adjustment for the book income of corporations... TAX INCOME TAXES Tax Surcharge § 1.56-1 Adjustment for the book income of corporations. (a) Computation of the book income adjustment—(1) In general. For taxable years beginning in 1987, 1988, and 1989...

  17. Finite-time H∞ filtering for non-linear stochastic systems

    NASA Astrophysics Data System (ADS)

    Hou, Mingzhe; Deng, Zongquan; Duan, Guangren

    2016-09-01

    This paper describes the robust H∞ filtering analysis and the synthesis of general non-linear stochastic systems with finite settling time. We assume that the system dynamic is modelled by Itô-type stochastic differential equations of which the state and the measurement are corrupted by state-dependent noises and exogenous disturbances. A sufficient condition for non-linear stochastic systems to have the finite-time H∞ performance with gain less than or equal to a prescribed positive number is established in terms of a certain Hamilton-Jacobi inequality. Based on this result, the existence of a finite-time H∞ filter is given for the general non-linear stochastic system by a second-order non-linear partial differential inequality, and the filter can be obtained by solving this inequality. The effectiveness of the obtained result is illustrated by a numerical example.

  18. Breadth of Extracurricular Participation and Adolescent Adjustment Among African-American and European-American Youth

    PubMed Central

    Fredricks, Jennifer A.; Eccles, Jacquelynne S.

    2012-01-01

    We examined the linear and nonlinear relations between breadth of extracurricular participation in 11th grade and developmental outcomes at 11th grade and 1 year after high school in an economically diverse sample of African-American and European-American youth. In general, controlling for demographic factors, children's motivation, and the dependent variable measured 3 years earlier, breadth was positively associated with indicators of academic adjustment at 11th grade and at 1 year after high school. In addition, for the three academic outcomes (i.e., grades, educational expectations, and educational status) the nonlinear function was significant; at high levels of involvement the well-being of youth leveled off or declined slightly. In addition, breadth of participation at 11th grade predicted lower internalizing behavior, externalizing behavior, alcohol use, and marijuana use at 11th grade. Finally, the total number of extracurricular activities at 11th grade was associated with civic engagement 2 years later. PMID:22837637

  19. Breadth of Extracurricular Participation and Adolescent Adjustment Among African-American and European-American Youth.

    PubMed

    Fredricks, Jennifer A; Eccles, Jacquelynne S

    2010-06-01

    We examined the linear and nonlinear relations between breadth of extracurricular participation in 11th grade and developmental outcomes at 11th grade and 1 year after high school in an economically diverse sample of African-American and European-American youth. In general, controlling for demographic factors, children's motivation, and the dependent variable measured 3 years earlier, breadth was positively associated with indicators of academic adjustment at 11th grade and at 1 year after high school. In addition, for the three academic outcomes (i.e., grades, educational expectations, and educational status) the nonlinear function was significant; at high levels of involvement the well-being of youth leveled off or declined slightly. In addition, breadth of participation at 11th grade predicted lower internalizing behavior, externalizing behavior, alcohol use, and marijuana use at 11th grade. Finally, the total number of extracurricular activities at 11th grade was associated with civic engagement 2 years later.

  20. Context Specificity of Post-Error and Post-Conflict Cognitive Control Adjustments

    PubMed Central

    Forster, Sarah E.; Cho, Raymond Y.

    2014-01-01

    There has been accumulating evidence that cognitive control can be adaptively regulated by monitoring for processing conflict as an index of online control demands. However, it is not yet known whether top-down control mechanisms respond to processing conflict in a manner specific to the operative task context or confer a more generalized benefit. While previous studies have examined the taskset-specificity of conflict adaptation effects, yielding inconsistent results, control-related performance adjustments following errors have been largely overlooked. This gap in the literature underscores recent debate as to whether post-error performance represents a strategic, control-mediated mechanism or a nonstrategic consequence of attentional orienting. In the present study, evidence of generalized control following both high conflict correct trials and errors was explored in a task-switching paradigm. Conflict adaptation effects were not found to generalize across tasksets, despite a shared response set. In contrast, post-error slowing effects were found to extend to the inactive taskset and were predictive of enhanced post-error accuracy. In addition, post-error performance adjustments were found to persist for several trials and across multiple task switches, a finding inconsistent with attentional orienting accounts of post-error slowing. These findings indicate that error-related control adjustments confer a generalized performance benefit and suggest dissociable mechanisms of post-conflict and post-error control. PMID:24603900

  1. Substantial shifts in ranking of California hospitals by hospital-associated methicillin-resistant Staphylococcus aureus infection following adjustment for hospital characteristics and case mix.

    PubMed

    Tehrani, David M; Phelan, Michael J; Cao, Chenghua; Billimek, John; Datta, Rupak; Nguyen, Hoanglong; Kwark, Homin; Huang, Susan S

    2014-10-01

    States have established public reporting of hospital-associated (HA) infections, including those caused by methicillin-resistant Staphylococcus aureus (MRSA), but do not account for hospital case mix or postdischarge events. The objective was to identify facility-level characteristics associated with HA-MRSA infection admissions and to create adjusted hospital rankings, using a retrospective cohort study of 2009-2010 California acute care hospitals. We defined HA-MRSA admissions as involving MRSA pneumonia or septicemia events arising during hospitalization or within 30 days after discharge. We used mandatory hospitalization and US Census data sets to generate hospital population characteristics by summarizing across admissions. Facility-level factors associated with hospitals' proportions of HA-MRSA infection admissions were identified using generalized linear models. Using state methodology, hospitals were categorized into 3 tiers of HA-MRSA infection prevention performance, using raw and adjusted values. Among 323 hospitals, a median of 16 HA-MRSA infections (range, 0-102) per 10,000 admissions was found. Hospitals serving greater proportions of patients with serious comorbidities, patients from low-education zip codes, and patients discharged to locations other than home were associated with higher HA-MRSA infection risk. Total concordance between all raw and adjusted hospital rankings was 0.45 (95% confidence interval, 0.40-0.51). Among 53 community hospitals in the poor-performance category, more than 20% moved into the average-performance category after adjustment. Similarly, among 71 hospitals in the superior-performance category, half moved into the average-performance category after adjustment. When adjusting for nonmodifiable facility characteristics and case mix, hospital rankings based on HA-MRSA infections substantially changed. Quality indicators for hospitals require adequate adjustment for patient population characteristics for valid interhospital performance comparisons.

  2. Generalized massive optimal data compression

    NASA Astrophysics Data System (ADS)

    Alsing, Justin; Wandelt, Benjamin

    2018-05-01

    In this paper, we provide a general procedure for optimally compressing N data down to n summary statistics, where n is equal to the number of parameters of interest. We show that compression to the score function - the gradient of the log-likelihood with respect to the parameters - yields n compressed statistics that are optimal in the sense that they preserve the Fisher information content of the data. Our method generalizes earlier work on linear Karhunen-Loève compression for Gaussian data whilst recovering both lossless linear compression and quadratic estimation as special cases when they are optimal. We give a unified treatment that also includes the general non-Gaussian case as long as mild regularity conditions are satisfied, producing optimal non-linear summary statistics when appropriate. As a worked example, we derive explicitly the n optimal compressed statistics for Gaussian data in the general case where both the mean and covariance depend on the parameters.
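
    As a toy instance of score compression (much simpler than the paper's general setting), N i.i.d. Gaussian draws with unknown mean and variance can be compressed to the two components of the score vector evaluated at fiducial parameter values:

```python
# Minimal sketch of score compression for N i.i.d. Gaussian draws with unknown
# mean mu and variance sigma^2: the score vector at a fiducial (mu0, sigma0)
# compresses the data to n = 2 numbers while preserving Fisher information.
# Toy illustration only, not the paper's general likelihood setting.
import numpy as np

rng = np.random.default_rng(5)
data = rng.normal(loc=1.3, scale=2.0, size=10_000)

mu0, sigma0 = 1.0, 2.0                     # fiducial parameter values
# Analytic score of the Gaussian log-likelihood, evaluated at the fiducial point:
t_mu = np.sum(data - mu0) / sigma0**2
t_sigma2 = np.sum((data - mu0) ** 2 - sigma0**2) / (2 * sigma0**4)

print("compressed statistics:", t_mu, t_sigma2)
# Both are simple functions of the sample mean and variance, so no information
# about (mu, sigma^2) is lost relative to the full 10,000-point data vector.
```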

  3. How can general practitioners establish 'place attachment' in Australia's Northern Territory? Adjustment trumps adaptation.

    PubMed

    Auer, K; Carson, D

    2010-01-01

    Retention of GPs in the more remote parts of Australia remains an important issue in workforce planning. The Northern Territory of Australia experiences very high rates of staff turnover. This research examined how the process of forming 'place attachment' between GP and practice location might influence prospects for retention. It examines whether GPs use 'adjustment' (short term trade-offs between work and lifestyle ambitions) or 'adaptation' (attempts to change themselves and their environment to fulfil lifestyle ambitions) strategies to cope with the move to new locations. 19 semi-structured interviews were conducted mostly with GPs who had been in the Northern Territory for less than 3 years. Participants were asked about the strategies they used in an attempt to establish place attachment. Strategies could be structural (work related), personal, social or environmental. There were strong structural motivators for GPs to move to the Northern Territory. These factors were seen as sufficiently attractive to permit the setting aside of other lifestyle ambitions for a short period of time. Respondents found the environmental aspects of life in remote areas to be the most satisfying outside work. Social networks were temporary and the need to re-establish previous networks was the primary driver of out migration. GPs primarily use adjustment strategies to temporarily secure their position within their practice community. There were few examples of adaptation strategies that would facilitate a longer term match between the GPs' overall life ambitions and the characteristics of the community. While this suggests that lengths of stay will continue to be short, better adjustment skills might increase the potential for repeat service and limit the volume of unplanned early exits.

  4. Quality-of-life-adjusted hazard of death: a formulation of the quality-adjusted life-years model of use in benefit-risk assessment.

    PubMed

    Garcia-Hernandez, Alberto

    2014-03-01

    Although the quality-adjusted life-years (QALY) model is standard in health technology assessment, quantitative methods are less frequent but increasingly used for benefit-risk assessment (BRA) at earlier stages of drug development. A frequent challenge when implementing metrics for BRA is to weigh the importance of effects on a chronic condition against the risk of severe events during the trial. The lifetime component of the QALY model has a counterpart in the BRA context, namely, the risk of dying during the study. A new concept is presented, the hazard of death function that a subject is willing to accept instead of the baseline hazard to improve his or her chronic health status, which we have called the quality-of-life-adjusted hazard of death. It has been proven that if assumptions of the linear QALY model hold, the excess mortality rate tolerated by a subject for a chronic health improvement is inversely proportional to the mean residual life. This result leads to a new representation of the linear QALY model in terms of hazard rate functions and allows utilities obtained by using standard methods involving trade-offs of life duration to be translated into thresholds of tolerated mortality risk during a short period of time, thereby avoiding direct trade-offs using small probabilities of events during the study, which is known to lead to bias and variability. Copyright © 2014 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.

  5. 26 CFR 1.56-1 - Adjustment for the book income of corporations.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 26 Internal Revenue 1 2011-04-01 2009-04-01 true Adjustment for the book income of corporations. 1... INCOME TAXES Tax Surcharge § 1.56-1 Adjustment for the book income of corporations. (a) Computation of the book income adjustment—(1) In general. For taxable years beginning in 1987, 1988, and 1989, the...

  6. 26 CFR 1.56-1 - Adjustment for the book income of corporations.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 26 Internal Revenue 1 2014-04-01 2013-04-01 true Adjustment for the book income of corporations. 1... INCOME TAXES Tax Surcharge § 1.56-1 Adjustment for the book income of corporations. (a) Computation of the book income adjustment—(1) In general. For taxable years beginning in 1987, 1988, and 1989, the...

  7. Measuring cancer-specific child adjustment difficulties: Development and validation of the Children's Oncology Child Adjustment Scale (ChOCs).

    PubMed

    Burke, Kylie; McCarthy, Maria; Lowe, Cherie; Sanders, Matthew R; Lloyd, Erin; Bowden, Madeleine; Williams, Lauren

    2017-03-01

    Childhood cancer is associated with child adjustment difficulties, including eating and sleep disturbance and emotional and other behavioral difficulties. However, there is a lack of validated instruments to measure the specific child adjustment issues associated with pediatric cancer treatments. The aim of this study was to develop and evaluate the reliability and validity of a parent-reported child adjustment scale. One hundred thirty-two parents from two pediatric oncology centers who had children (aged 2-10 years) diagnosed with cancer completed the newly developed measure and additional measures of child behavior, sleep, diet, and quality of life. Children were more than 4 weeks postdiagnosis and less than 12 months postactive treatment. Factor structure, internal consistency, and construct (convergent) validity analyses were conducted. Principal component analysis revealed five distinct and theoretically coherent factors: Sleep Difficulties, Impact of Child's Illness, Eating Difficulties, Hospital-Related Behavior Difficulties, and General Behavior Difficulties. The final 25-item measure, the Children's Oncology Child Adjustment Scale (ChOCs), demonstrated good internal consistency (α = 0.79-0.91). Validity of the ChOCs was demonstrated by significant correlations between the subscales and measures of corresponding constructs. The ChOCs provides a new measure of child adjustment difficulties designed specifically for pediatric oncology. Preliminary analyses indicate strong theoretical and psychometric properties. Future studies are required to further examine reliability and validity of the scale, including test-retest reliability, discriminant validity, as well as change sensitivity and generalizability across different oncology samples and ages of children. The ChOCs shows promise as a measure of child adjustment relevant for oncology clinical settings and research purposes. © 2016 Wiley Periodicals, Inc.

  8. On differences of linear positive operators

    NASA Astrophysics Data System (ADS)

    Aral, Ali; Inoan, Daniela; Raşa, Ioan

    2018-04-01

    In this paper we consider two different general linear positive operators defined on an unbounded interval and obtain estimates for the differences of these operators in quantitative form. Our estimates involve an appropriate K-functional and a weighted modulus of smoothness. Similar estimates are obtained for the Chebyshev functional of these operators as well. All considerations are based on a rearrangement of the remainder in Taylor's formula. The obtained results are applied to some well-known linear positive operators.

  9. Soil-adjusted sorption isotherms for arsenic(V) and vanadium(V)

    NASA Astrophysics Data System (ADS)

    Rückamp, Daniel; Utermann, Jens; Florian Stange, Claus

    2017-04-01

    The sorption characteristic of a soil is usually determined by fitting a sorption isotherm model to laboratory data. However, such sorption isotherms are only valid for the studied soil and cannot be transferred to other soils. For this reason, a soil-adjusted sorption isotherm can be calculated by using the data of several soils. Such soil-adjusted sorption isotherms exist for cationic heavy metals, but are lacking for heavy metal oxyanions. Hence, the aim of this study is to establish soil-adjusted sorption isotherms for the oxyanions arsenate (arsenic(V)) and vanadate (vanadium(V)). For the laboratory experiment, 119 soils (samples from top- and subsoils) typical of Germany were chosen. The batch experiments were conducted with six concentrations of arsenic(V) and vanadium(V), respectively. By using the laboratory data, sorption isotherms for each soil were derived. Then, the soil-adjusted sorption isotherms were calculated by non-linear regression of the sorption isotherms with additional soil parameters. The results indicated a correlation between the sorption strength and oxalate-extractable iron, organic carbon, clay, and electrical conductivity for both arsenic and vanadium. However, organic carbon had a negative regression coefficient. As total organic carbon was correlated with dissolved organic carbon, we attribute this observation to an effect of higher amounts of dissolved organic substances. We conclude that these soil-adjusted sorption isotherms can be used to assess the potential of soils to adsorb arsenic(V) and vanadium(V) without performing time-consuming sorption experiments.
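
    As a concrete (and purely illustrative) companion to the isotherm-fitting step described above, the sketch below fits a Freundlich-type isotherm to made-up batch data for a single soil using scipy; the Freundlich form, the data values, and the starting guesses are assumptions, not the study's actual model or measurements. The soil-adjusted step would then relate the fitted parameters to soil properties such as oxalate-extractable iron, organic carbon, clay, and electrical conductivity.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    # Hypothetical batch data for one soil: equilibrium concentration c (mg/L)
    # and sorbed amount q (mg/kg). Values are invented for illustration.
    c = np.array([0.5, 1.0, 2.0, 4.0, 8.0, 16.0])
    q = np.array([12.0, 21.0, 35.0, 58.0, 95.0, 150.0])

    def freundlich(c, K, n):
        # q = K * c**n, a common empirical sorption isotherm (assumed form)
        return K * c ** n

    (K, n), _ = curve_fit(freundlich, c, q, p0=(10.0, 0.8))
    print(f"fitted Freundlich parameters: K = {K:.1f}, n = {n:.2f}")
    ```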

  10. Measuring the individual benefit of a medical or behavioral treatment using generalized linear mixed-effects models.

    PubMed

    Diaz, Francisco J

    2016-10-15

    We propose statistical definitions of the individual benefit of a medical or behavioral treatment and of the severity of a chronic illness. These definitions are used to develop a graphical method that can be used by statisticians and clinicians in the data analysis of clinical trials from the perspective of personalized medicine. The method focuses on assessing and comparing individual effects of treatments rather than average effects and can be used with continuous and discrete responses, including dichotomous and count responses. The method is based on new developments in generalized linear mixed-effects models, which are introduced in this article. To illustrate, analyses of data from the Sequenced Treatment Alternatives to Relieve Depression clinical trial of sequences of treatments for depression and data from a clinical trial of respiratory treatments are presented. The estimation of individual benefits is also explained. Copyright © 2016 John Wiley & Sons, Ltd.
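
    One simple way to picture an individual-level (rather than average) treatment effect in a generalized linear mixed-effects model is sketched below. It is not the paper's definition of individual benefit; the random-intercept logistic model, the parameter values, and the chosen contrast (event probability untreated minus treated, given each subject's own random effect) are all assumptions made for illustration.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Toy random-intercept logistic GLMM: logit P(Y=1) = b0 + b_trt*treated + b_i
    b0, b_trt, sigma_b = -0.5, -1.0, 0.8      # hypothetical fixed effects and random-effect SD
    b = rng.normal(0.0, sigma_b, size=5)      # random intercepts for 5 subjects

    def p_event(treated, b_i):
        eta = b0 + b_trt * treated + b_i
        return 1.0 / (1.0 + np.exp(-eta))

    # An individual-level contrast: reduction in event probability for subject i
    # when treated versus untreated, conditional on that subject's b_i.
    for i, b_i in enumerate(b):
        print(f"subject {i}: individual benefit = {p_event(0, b_i) - p_event(1, b_i):.3f}")
    ```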

  11. Derivation and definition of a linear aircraft model

    NASA Technical Reports Server (NTRS)

    Duke, Eugene L.; Antoniewicz, Robert F.; Krambeer, Keith D.

    1988-01-01

    A linear aircraft model for a rigid aircraft of constant mass flying over a flat, nonrotating earth is derived and defined. The derivation makes no assumptions of reference trajectory or vehicle symmetry. The linear system equations are derived and evaluated along a general trajectory and include both aircraft dynamics and observation variables.
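
    The report concerns the derivation of a linear model rather than software, but the basic operation (obtaining the A and B matrices of x_dot ≈ A dx + B du about a reference condition) can be sketched generically. The toy dynamics and the finite-difference linearization below are illustrative assumptions, not the report's aircraft equations.

    ```python
    import numpy as np

    def f(x, u):
        # Placeholder 2-state, 1-input nonlinear dynamics standing in for a vehicle model.
        return np.array([x[1], -np.sin(x[0]) - 0.1 * x[1] + u[0]])

    def linearize(f, x0, u0, eps=1e-6):
        # Finite-difference Jacobians of f with respect to the state and the control.
        n, m = len(x0), len(u0)
        f0 = f(x0, u0)
        A = np.zeros((n, n))
        B = np.zeros((n, m))
        for j in range(n):
            dx = np.zeros(n); dx[j] = eps
            A[:, j] = (f(x0 + dx, u0) - f0) / eps
        for j in range(m):
            du = np.zeros(m); du[j] = eps
            B[:, j] = (f(x0, u0 + du) - f0) / eps
        return A, B

    A, B = linearize(f, np.zeros(2), np.zeros(1))
    print("A =\n", A, "\nB =\n", B)
    ```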

  12. Personality predictors of dimensions of psychosocial adjustment after surgery.

    PubMed

    Weinryb, R M; Gustavsson, J P; Barber, J P

    1997-01-01

    Although many studies have examined the relationship between personality factors and adjustment after surgery, most of them have had very short follow-up periods. The present prospective study examines whether preoperative psychodynamic assessment of personality traits enhances prediction of various areas of psychosocial adjustment assessed at least 1 year after surgery. In 53 patients undergoing pelvic pouch surgery for ulcerative colitis, we examined the relationship between personality traits measured before surgery, and postoperative psychosocial adjustment assessed at a median of 17 months postoperatively, controlling for the effect of surgical functional outcome. Personality traits were assessed with the Karolinska Psychodynamic Profile (KAPP). Surgical functional outcome scales and the Psychosocial Adjustment to Illness Scale (PAIS) were used. Problems with sexual satisfaction, perfectionistic body ideals, lack of alexithymia, and poor frustration tolerance predicted poor postoperative adjustment in various areas, beyond what was predicted by surgical functional outcome alone. Moreover, moderate preoperative levels of alexithymia were beneficial to postoperative adjustment in the area of psychological distress. The findings suggest that the preoperative assessment of the patient's long-term sexual functioning and satisfaction, the importance attached to his or her appearance, level of alexithymia, and general capacity to tolerate frustration and set-backs in life, might alert both the surgeon and the patient to potential risk factors for poor postsurgical adjustment.

  13. Asymptotic Stability of Interconnected Passive Non-Linear Systems

    NASA Technical Reports Server (NTRS)

    Isidori, A.; Joshi, S. M.; Kelkar, A. G.

    1999-01-01

    This paper addresses the problem of stabilization of a class of internally passive non-linear time-invariant dynamic systems. A class of non-linear marginally strictly passive (MSP) systems is defined, which is less restrictive than input-strictly passive systems. It is shown that the interconnection of a non-linear passive system and a non-linear MSP system is globally asymptotically stable. The result generalizes and weakens the conditions of the passivity theorem, which requires one of the systems to be input-strictly passive. In the case of linear time-invariant systems, it is shown that the MSP property is equivalent to the marginally strictly positive real (MSPR) property, which is much simpler to check.

  14. Meta-analysis for the comparison of two diagnostic tests to a common gold standard: A generalized linear mixed model approach.

    PubMed

    Hoyer, Annika; Kuss, Oliver

    2018-05-01

    Meta-analysis of diagnostic studies is still a rapidly developing area of biostatistical research. Especially, there is an increasing interest in methods to compare different diagnostic tests to a common gold standard. Restricting to the case of two diagnostic tests, in these meta-analyses the parameters of interest are the differences of sensitivities and specificities (with their corresponding confidence intervals) between the two diagnostic tests while accounting for the various associations across single studies and between the two tests. We propose statistical models with a quadrivariate response (where sensitivity of test 1, specificity of test 1, sensitivity of test 2, and specificity of test 2 are the four responses) as a sensible approach to this task. Using a quadrivariate generalized linear mixed model naturally generalizes the common standard bivariate model of meta-analysis for a single diagnostic test. If information on several thresholds of the tests is available, the quadrivariate model can be further generalized to yield a comparison of full receiver operating characteristic (ROC) curves. We illustrate our model by an example where two screening methods for the diagnosis of type 2 diabetes are compared.

  15. MO-F-16A-02: Simulation of a Medical Linear Accelerator for Teaching Purposes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Carlone, M; Lamey, M; Anderson, R

    Purpose: Detailed functioning of linear accelerator physics is well known. Less well developed is the basic understanding of how the adjustment of the linear accelerator's electrical components affects the resulting radiation beam. Other than the text by Karzmark, there is very little literature devoted to the practical understanding of linear accelerator functionality targeted at the radiotherapy clinic level. The purpose of this work is to describe a simulation environment for medical linear accelerators intended for teaching linear accelerator physics. Methods: Varian-type linacs were simulated. Klystron saturation and peak output were modelled analytically. The energy gain of an electron beam was modelled using load line expressions. The bending magnet was assumed to be a perfect solenoid whose pass-through energy varied linearly with solenoid current. The dose rate calculated at depth in water was assumed to be a simple function of the target's beam current. The flattening filter was modelled as an attenuator with conical shape, and the time-averaged dose rate at a depth in water was determined by calculating kerma. Results: Fifteen analytical models were combined into a single model called SIMAC. Performance was verified systematically by adjusting typical linac control parameters. Increasing klystron pulse voltage increased dose rate to a peak, which then decreased as the beam energy was further increased due to the fixed pass-through energy of the bending magnet. Increasing accelerator beam current led to a higher dose per pulse. However, the energy of the electron beam decreased due to beam loading, so the dose rate eventually reached a maximum and then decreased as the beam current was further increased. Conclusion: SIMAC can realistically simulate the functionality of a linear accelerator. It is expected to have value as a teaching tool for both medical physicists and linear accelerator service personnel.
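
    The qualitative behaviour described for SIMAC (dose rate rising with klystron voltage, peaking, then falling because the bending magnet only passes a fixed energy) can be mimicked with a toy analytic chain. Everything below, including the linear energy gain, the Gaussian magnet acceptance, and all numbers, is invented purely for illustration and is not SIMAC's model.

    ```python
    import numpy as np

    def beam_energy(voltage_kv):
        return 0.06 * voltage_kv  # MeV; toy linear energy gain with klystron voltage

    def magnet_transmission(energy_mev, pass_energy=6.0, width=0.3):
        # Toy acceptance: only energies near the fixed pass-through energy get through.
        return np.exp(-0.5 * ((energy_mev - pass_energy) / width) ** 2)

    def relative_dose_rate(voltage_kv, beam_current_ma=100.0):
        return beam_current_ma * magnet_transmission(beam_energy(voltage_kv))

    for v in (80, 90, 100, 110, 120):
        print(f"klystron voltage {v} kV -> relative dose rate {relative_dose_rate(v):6.1f}")
    ```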

  16. Goal management tendencies predict trajectories of adjustment to lower limb amputation up to 15 months post rehabilitation discharge.

    PubMed

    Coffey, Laura; Gallagher, Pamela; Desmond, Deirdre; Ryall, Nicola; Wegener, Stephen T

    2014-10-01

    To explore patterns of change in positive affect, general adjustment to lower-limb amputation, and self-reported disability from rehabilitation admission to 15 months postdischarge, and to examine whether goal pursuit and goal adjustment tendencies predict either initial status or rates of change in these outcomes, controlling for sociodemographic and clinical covariates. Prospective cohort study with 4 time points (t1: on admission; t2: 6wk postdischarge; t3: 6mo postdischarge; t4: 15mo postdischarge). Inpatient rehabilitation. Consecutive sample (N=98) of persons aged ≥18 years with major lower-limb amputation. Not applicable. Positive affect subscale of the Positive and Negative Affect Schedule; general adjustment subscale of the Trinity Amputation and Prosthesis Experience Scales-Revised; and World Health Organization Disability Assessment Schedule 2.0. Positive affect decreased from t1 to t4 for the overall sample, whereas general adjustment increased. Self-reported disability scores remained stable over this period. Stronger goal pursuit tendencies were associated with greater positive affect at t1, and stronger goal adjustment tendencies were associated with more favorable initial scores on each outcome examined. With regard to rates of change, stronger goal pursuit tendencies buffered against decreases in positive affect and promoted decreases in self-reported disability over time, whereas stronger goal adjustment tendencies enhanced increases in general adjustment to lower-limb amputation. Greater use of goal pursuit and goal adjustment strategies appears to promote more favorable adjustment to lower-limb amputation over time across a range of important rehabilitation outcomes. Copyright © 2014 American Congress of Rehabilitation Medicine. Published by Elsevier Inc. All rights reserved.

  17. Modeling Learning in Doubly Multilevel Binary Longitudinal Data Using Generalized Linear Mixed Models: An Application to Measuring and Explaining Word Learning.

    PubMed

    Cho, Sun-Joo; Goodwin, Amanda P

    2016-04-01

    When word learning is supported by instruction in experimental studies for adolescents, word knowledge outcomes tend to be collected from a complex data structure, such as multiple aspects of word knowledge, multilevel reader data, multilevel item data, longitudinal design, and multiple groups. This study illustrates how generalized linear mixed models can be used to measure and explain word learning for data having such complexity. Results from this application provide a deeper understanding of word knowledge than could be attained from simpler models and show that word knowledge is multidimensional and depends on word characteristics and instructional contexts.

  18. Linear and Nonlinear Thinking: A Multidimensional Model and Measure

    ERIC Educational Resources Information Center

    Groves, Kevin S.; Vance, Charles M.

    2015-01-01

    Building upon previously developed and more general dual-process models, this paper provides empirical support for a multidimensional thinking style construct comprised of linear thinking and multiple dimensions of nonlinear thinking. A self-report assessment instrument (Linear/Nonlinear Thinking Style Profile; LNTSP) is presented and…

  19. 20 CFR 229.51 - Adjustment of age reduction.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 20 Employees' Benefits 1 2011-04-01 2011-04-01 false Adjustment of age reduction. 229.51 Section... age reduction. (a) General. If an age reduced employee or spouse overall minimum benefit is not paid for certain months before the employee or spouse attains retirement age, or the employee becomes...

  20. 20 CFR 229.51 - Adjustment of age reduction.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 20 Employees' Benefits 1 2013-04-01 2012-04-01 true Adjustment of age reduction. 229.51 Section... age reduction. (a) General. If an age reduced employee or spouse overall minimum benefit is not paid for certain months before the employee or spouse attains retirement age, or the employee becomes...

  1. 20 CFR 229.51 - Adjustment of age reduction.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 20 Employees' Benefits 1 2014-04-01 2012-04-01 true Adjustment of age reduction. 229.51 Section... age reduction. (a) General. If an age reduced employee or spouse overall minimum benefit is not paid for certain months before the employee or spouse attains retirement age, or the employee becomes...

  2. 20 CFR 229.51 - Adjustment of age reduction.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 20 Employees' Benefits 1 2012-04-01 2012-04-01 false Adjustment of age reduction. 229.51 Section... age reduction. (a) General. If an age reduced employee or spouse overall minimum benefit is not paid for certain months before the employee or spouse attains retirement age, or the employee becomes...

  3. 20 CFR 229.51 - Adjustment of age reduction.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 20 Employees' Benefits 1 2010-04-01 2010-04-01 false Adjustment of age reduction. 229.51 Section... age reduction. (a) General. If an age reduced employee or spouse overall minimum benefit is not paid for certain months before the employee or spouse attains retirement age, or the employee becomes...

  4. Associations between faith, distress and mental adjustment--a Danish survivorship study.

    PubMed

    Johannessen-Henry, Christine Tind; Deltour, Isabelle; Bidstrup, Pernille Envold; Dalton, Susanne O; Johansen, Christoffer

    2013-02-01

    Several studies have suggested that religion and spirituality are important for overcoming psychological distress and adjusting mentally to cancer, but these studies did not differentiate between spiritual well-being and specific aspects of faith. We examined the extent to which spiritual well-being, the faith dimension of spiritual well-being and aspects of performed faith are associated with distress and mental adjustment among cancer patients. In a cross-sectional design, 1043 survivors of various cancers filled in a questionnaire on spiritual well-being (FACIT-Sp-12), specific aspects of faith ('belief in a god', 'belief in a god with whom I can talk' and 'experiences of god or a higher power'), religious community and church attendance (DUREL), distress (POMS-SF), adjustment to cancer (Mini-MAC) and sociodemographic factors. Linear regression models were used to analyze the associations between exposure (spiritual well-being and specific faith aspects) and outcome (distress and adjustment to cancer) with adjustment for age, gender, cancer diagnosis and physical and social well-being. Higher spiritual well-being was associated with less total distress (β = -0.79, CI -0.92; -0.66) and increased adjustment to cancer (fighting spirit, anxious preoccupation, helplessness-hopelessness). Specific aspects of faith were associated with higher confusion-bewilderment and tension-anxiety, but also a lower score on vigor-activity, and with higher anxious preoccupation, both higher and lower cognitive avoidance, but also more fighting spirit. As hypothesized, spiritual well-being was associated with less distress and better mental adjustment. However, specific aspects of faith were both positively and negatively associated with distress and mental adjustment. The results illustrate the complexity of associations between spiritual well-being and specific aspects of faith with psychological function among cancer survivors.

  5. Fuzzy C-mean clustering on kinetic parameter estimation with generalized linear least square algorithm in SPECT

    NASA Astrophysics Data System (ADS)

    Choi, Hon-Chit; Wen, Lingfeng; Eberl, Stefan; Feng, Dagan

    2006-03-01

    Dynamic Single Photon Emission Computed Tomography (SPECT) has the potential to quantitatively estimate physiological parameters by fitting compartment models to the tracer kinetics. The generalized linear least squares method (GLLS) is an efficient method to estimate unbiased kinetic parameters and parametric images. However, due to the low sensitivity of SPECT, noisy data can cause voxel-wise parameter estimation by GLLS to fail. Fuzzy C-Mean (FCM) clustering and modified FCM, which also utilizes information from the immediate neighboring voxels, are proposed to improve the voxel-wise parameter estimation of GLLS. Monte Carlo simulations were performed to generate dynamic SPECT data with different noise levels, which were then processed by general and modified FCM clustering. Parametric images were estimated by Logan and Yokoi graphical analysis and GLLS. The influx rate (K_I) and volume of distribution (V_d) were estimated for the cerebellum, thalamus and frontal cortex. Our results show that (1) FCM reduces the bias and improves the reliability of parameter estimates for noisy data, (2) GLLS provides estimates of micro parameters (K_I-k_4) as well as macro parameters, such as volume of distribution (V_d) and binding potential (BP_I and BP_II), and (3) FCM clustering incorporating neighboring voxel information does not improve the parameter estimates, but reduces noise in the parametric images. These findings indicate that pre-segmentation with traditional FCM clustering is desirable for generating voxel-wise parametric images with GLLS from dynamic SPECT data.
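
    For readers unfamiliar with fuzzy C-means, a minimal NumPy implementation of the standard algorithm is sketched below; it does not include the paper's neighbourhood-aware modification, and the synthetic 2-D data are only for demonstration.

    ```python
    import numpy as np

    def fcm(X, n_clusters=3, m=2.0, n_iter=100, seed=0):
        # Standard fuzzy C-means: alternate membership and centroid updates.
        rng = np.random.default_rng(seed)
        U = rng.random((n_clusters, len(X)))
        U /= U.sum(axis=0)                                   # memberships sum to 1 per sample
        for _ in range(n_iter):
            Um = U ** m
            centers = Um @ X / Um.sum(axis=1, keepdims=True)
            d = np.linalg.norm(X[None, :, :] - centers[:, None, :], axis=2) + 1e-12
            U = d ** (-2.0 / (m - 1.0))
            U /= U.sum(axis=0)
        return centers, U

    rng = np.random.default_rng(1)
    X = np.vstack([rng.normal(mu, 0.2, size=(50, 2)) for mu in (0.0, 2.0, 4.0)])
    centers, U = fcm(X)
    print(np.round(centers, 2))   # three centroids near (0,0), (2,2) and (4,4)
    ```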

  6. Linear Equating for the NEAT Design: Parameter Substitution Models and Chained Linear Relationship Models

    ERIC Educational Resources Information Center

    Kane, Michael T.; Mroch, Andrew A.; Suh, Youngsuk; Ripkey, Douglas R.

    2009-01-01

    This paper analyzes five linear equating models for the "nonequivalent groups with anchor test" (NEAT) design with internal anchors (i.e., the anchor test is part of the full test). The analysis employs a two-dimensional framework. The first dimension contrasts two general approaches to developing the equating relationship. Under a "parameter…

  7. Adjustment Disorders as a Stress-Related Disorder: A Longitudinal Study of the Associations among Stress, Resources, and Mental Health

    PubMed Central

    Kocalevent, Rüya-Daniela; Mierke, Annett; Danzer, Gerhard; Klapp, Burghard F.

    2014-01-01

    Objective Adjustment disorders are re-conceptualized in the DSM-5 as a stress-related disorder; however, besides the impact of an identifiable stressor, the specification of a stress concept remains unclear. This study is the first to examine an existing stress-model from the general population, in patients diagnosed with adjustment disorders, using a longitudinal design. Methods The study sample consisted of 108 patients consecutively admitted for adjustment disorders. Associations of stress perception, emotional distress, resources, and mental health were measured at three time points: the outpatients’ presentation, admission for inpatient treatment, and discharge from the hospital. To evaluate a longitudinal stress model of ADs, we examined whether stress at admission predicted mental health at each of the three time points using multiple linear regressions and structural equation modeling. A series of repeated-measures one-way analyses of variance (rANOVAs) was performed to assess change over time. Results Significant within-participant changes from baseline were observed between hospital admission and discharge with regard to mental health, stress perception, and emotional distress (p < 0.001). Stress perception explained nearly half of the total variance (44%) of mental health at baseline; the adjusted R² increased (to 0.48) when taking emotional distress (i.e., depressive symptoms) into account. The best predictor of mental health at discharge was the level of emotional distress (i.e., anxiety level) at baseline (β = −0.23, corrected R² = 0.56, p < 0.001). With a CFI of 0.86 and an NFI of 0.86, the fit indices did not allow for acceptance of the stress-model (C_min/df = 15.26; RMSEA = 0.21). Conclusions Stress perception is an important predictor in adjustment disorders, and mental health-related treatment goals are dependent on and significantly impacted by stress perception and emotional distress. PMID:24825165

  8. The overlooked potential of Generalized Linear Models in astronomy-II: Gamma regression and photometric redshifts

    NASA Astrophysics Data System (ADS)

    Elliott, J.; de Souza, R. S.; Krone-Martins, A.; Cameron, E.; Ishida, E. E. O.; Hilbe, J.; COIN Collaboration

    2015-04-01

    Machine learning techniques offer a precious tool box for use within astronomy to solve problems involving so-called big data. They provide a means to make accurate predictions about a particular system without prior knowledge of the underlying physical processes of the data. In this article, and the companion papers of this series, we present the set of Generalized Linear Models (GLMs) as a fast alternative method for tackling general astronomical problems, including the ones related to the machine learning paradigm. To demonstrate the applicability of GLMs to inherently positive and continuous physical observables, we explore their use in estimating the photometric redshifts of galaxies from their multi-wavelength photometry. Using the gamma family with a log link function we predict redshifts from the PHoto-z Accuracy Testing simulated catalogue and a subset of the Sloan Digital Sky Survey from Data Release 10. We obtain fits that result in catastrophic outlier rates as low as ∼1% for simulated and ∼2% for real data. Moreover, we can easily obtain such levels of precision within a matter of seconds on a normal desktop computer and with training sets that contain merely thousands of galaxies. Our software is made publicly available as a user-friendly package developed in Python, R and via an interactive web application. This software allows users to apply a set of GLMs to their own photometric catalogues and generates publication quality plots with minimum effort. By facilitating their ease of use to the astronomical community, this paper series aims to make GLMs widely known and to encourage their implementation in future large-scale projects, such as the Large Synoptic Survey Telescope.
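
    A gamma GLM with a log link of the kind described above can be fitted in a few lines with statsmodels; the sketch below uses synthetic data in place of real photometry, and the coefficients, sample size, and "colour" covariates are arbitrary assumptions. (Depending on the statsmodels version, the link class may be spelled links.Log() or links.log().)

    ```python
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    n = 500
    colors = rng.normal(size=(n, 2))                     # stand-in photometric colours
    eta = 0.2 + 0.5 * colors[:, 0] - 0.3 * colors[:, 1]  # linear predictor
    z = rng.gamma(shape=10.0, scale=np.exp(eta) / 10.0)  # positive response with mean exp(eta)

    X = sm.add_constant(colors)
    fit = sm.GLM(z, X, family=sm.families.Gamma(link=sm.families.links.Log())).fit()
    print(fit.params)   # should recover roughly (0.2, 0.5, -0.3)
    ```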

  9. A Thermodynamic Theory Of Solid Viscoelasticity. Part 1: Linear Viscoelasticity.

    NASA Technical Reports Server (NTRS)

    Freed, Alan D.; Leonov, Arkady I.

    2002-01-01

    The present series of three consecutive papers develops a general theory for linear and finite solid viscoelasticity. Because the most important objects for nonlinear studies are rubber-like materials, the general approach is specified in a form convenient for solving problems important for many industries that involve rubber-like materials. General linear and nonlinear theories for non-isothermal deformations of viscoelastic solids are developed based on the quasi-linear approach of non-equilibrium thermodynamics. In this, the first paper of the series, we analyze non-isothermal linear viscoelasticity, which is applicable in a range of small strains not only to all synthetic polymers and bio-polymers but also to some non-polymeric materials. Although the linear case seems to be well developed, there still are some reasons to implement a thermodynamic derivation of constitutive equations for solid-like, non-isothermal, linear viscoelasticity. The most important is the thermodynamic modeling of thermo-rheological complexity, i.e., different temperature dependences of relaxation parameters in various parts of the relaxation spectrum. A special structure of interaction matrices is established for different physical mechanisms contributing to the normal relaxation modes. This structure seems to be in accord with observations, and creates a simple mathematical framework for both continuum and molecular theories of thermo-rheologically complex relaxation phenomena. Finally, a unified approach is briefly discussed that, in principle, allows combining both the long time (discrete) and short time (continuous) descriptions of relaxation behaviors for polymers in the rubbery and glassy regions.
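
    As a small, generic illustration of linear viscoelasticity (not of the paper's thermodynamic derivation), the sketch below evaluates a Prony-series relaxation modulus and the stress response to a constant strain rate via the hereditary integral; all moduli and relaxation times are made up.

    ```python
    import numpy as np

    E_inf = 1.0                                  # long-time (equilibrium) modulus, hypothetical
    E_k = np.array([2.0, 0.5])                   # Prony moduli (hypothetical)
    tau_k = np.array([0.1, 1.0])                 # relaxation times (hypothetical)

    def relaxation_modulus(t):
        # E(t) = E_inf + sum_k E_k * exp(-t / tau_k)
        return E_inf + np.sum(E_k * np.exp(-np.atleast_1d(t)[:, None] / tau_k), axis=1)

    t = np.linspace(0.0, 3.0, 301)
    strain_rate = 0.01
    # sigma(t) = integral_0^t E(t - s) * d(strain)/ds ds, evaluated numerically
    sigma = np.array([np.trapz(relaxation_modulus(t[i] - t[: i + 1]) * strain_rate,
                               t[: i + 1]) for i in range(len(t))])
    print(f"stress at t = {t[-1]:.1f}: {sigma[-1]:.5f}")
    ```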

  10. 42 CFR 403.750 - Estimate of expenditures and adjustments.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 42 Public Health 2 2011-10-01 2011-10-01 false Estimate of expenditures and adjustments. 403.750 Section 403.750 Public Health CENTERS FOR MEDICARE & MEDICAID SERVICES, DEPARTMENT OF HEALTH AND HUMAN SERVICES GENERAL PROVISIONS SPECIAL PROGRAMS AND PROJECTS Religious Nonmedical Health Care Institutions...

  11. 42 CFR 403.750 - Estimate of expenditures and adjustments.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 42 Public Health 2 2014-10-01 2014-10-01 false Estimate of expenditures and adjustments. 403.750 Section 403.750 Public Health CENTERS FOR MEDICARE & MEDICAID SERVICES, DEPARTMENT OF HEALTH AND HUMAN SERVICES GENERAL PROVISIONS SPECIAL PROGRAMS AND PROJECTS Religious Nonmedical Health Care Institutions...

  12. 42 CFR 403.750 - Estimate of expenditures and adjustments.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 42 Public Health 2 2010-10-01 2010-10-01 false Estimate of expenditures and adjustments. 403.750 Section 403.750 Public Health CENTERS FOR MEDICARE & MEDICAID SERVICES, DEPARTMENT OF HEALTH AND HUMAN SERVICES GENERAL PROVISIONS SPECIAL PROGRAMS AND PROJECTS Religious Nonmedical Health Care Institutions...

  13. 42 CFR 403.750 - Estimate of expenditures and adjustments.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 42 Public Health 2 2012-10-01 2012-10-01 false Estimate of expenditures and adjustments. 403.750 Section 403.750 Public Health CENTERS FOR MEDICARE & MEDICAID SERVICES, DEPARTMENT OF HEALTH AND HUMAN SERVICES GENERAL PROVISIONS SPECIAL PROGRAMS AND PROJECTS Religious Nonmedical Health Care Institutions...

  14. 42 CFR 403.750 - Estimate of expenditures and adjustments.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 42 Public Health 2 2013-10-01 2013-10-01 false Estimate of expenditures and adjustments. 403.750 Section 403.750 Public Health CENTERS FOR MEDICARE & MEDICAID SERVICES, DEPARTMENT OF HEALTH AND HUMAN SERVICES GENERAL PROVISIONS SPECIAL PROGRAMS AND PROJECTS Religious Nonmedical Health Care Institutions...

  15. 26 CFR 1.545-2 - Adjustments to taxable income.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ....545-2 Internal Revenue INTERNAL REVENUE SERVICE, DEPARTMENT OF THE TREASURY (CONTINUED) INCOME TAX (CONTINUED) INCOME TAXES (CONTINUED) Personal Holding Companies § 1.545-2 Adjustments to taxable income. (a) Taxes—(1) General rule. (i) In computing undistributed personal holding company income for any taxable...

  16. Automated control of linear constricted plasma source array

    DOEpatents

    Anders, Andre; Maschwitz, Peter A.

    2000-01-01

    An apparatus and method for controlling an array of constricted glow discharge chambers are disclosed. More particularly, the disclosure concerns a linear array of constricted glow plasma sources whose polarity and geometry are set so that the contamination and energy of the ions discharged from the sources are minimized. The several sources can be mounted in parallel and in series to provide a sustained ultra-low source of ions in a plasma with contamination below practical detection limits. The quality of film along deposition "tracks" opposite the plasma sources can be measured and compared to desired absolute or relative values by optical and/or electrical sensors. Plasma quality can then be adjusted by adjusting the power current values, gas feed pressure/flow, gas mixtures, or a combination of some or all of these to improve the match between the measured values and the desired values.

  17. Cotton-type and joint invariants for linear elliptic systems.

    PubMed

    Aslam, A; Mahomed, F M

    2013-01-01

    Cotton-type invariants for a subclass of a system of two linear elliptic equations, obtainable from a complex base linear elliptic equation, are derived both by splitting of the corresponding complex Cotton invariants of the base complex equation and from the Laplace-type invariants of the system of linear hyperbolic equations equivalent to the system of linear elliptic equations via linear complex transformations of the independent variables. It is shown that Cotton-type invariants derived from these two approaches are identical. Furthermore, Cotton-type and joint invariants for a general system of two linear elliptic equations are also obtained from the Laplace-type and joint invariants for a system of two linear hyperbolic equations equivalent to the system of linear elliptic equations by complex changes of the independent variables. Examples are presented to illustrate the results.

  18. Cotton-Type and Joint Invariants for Linear Elliptic Systems

    PubMed Central

    Aslam, A.; Mahomed, F. M.

    2013-01-01

    Cotton-type invariants for a subclass of a system of two linear elliptic equations, obtainable from a complex base linear elliptic equation, are derived both by splitting of the corresponding complex Cotton invariants of the base complex equation and from the Laplace-type invariants of the system of linear hyperbolic equations equivalent to the system of linear elliptic equations via linear complex transformations of the independent variables. It is shown that Cotton-type invariants derived from these two approaches are identical. Furthermore, Cotton-type and joint invariants for a general system of two linear elliptic equations are also obtained from the Laplace-type and joint invariants for a system of two linear hyperbolic equations equivalent to the system of linear elliptic equations by complex changes of the independent variables. Examples are presented to illustrate the results. PMID:24453871

  19. Linear and non-linear Modified Gravity forecasts with future surveys

    NASA Astrophysics Data System (ADS)

    Casas, Santiago; Kunz, Martin; Martinelli, Matteo; Pettorino, Valeria

    2017-12-01

    Modified Gravity theories generally affect the Poisson equation and the gravitational slip in an observable way that can be parameterized by two generic functions (η and μ) of time and space. We bin their time dependence in redshift and present forecasts on each bin for future surveys like Euclid. We consider both Galaxy Clustering and Weak Lensing surveys, showing the impact of the non-linear regime, with two different semi-analytical approximations. In addition to these future observables, we use a prior covariance matrix derived from the Planck observations of the Cosmic Microwave Background. In this work we neglect the information from the cross correlation of these observables, and treat them as independent. Our results show that η and μ in different redshift bins are significantly correlated, but including non-linear scales reduces or even eliminates the correlation, breaking the degeneracy between Modified Gravity parameters and the overall amplitude of the matter power spectrum. We further apply a Zero-phase Component Analysis and identify which combinations of the Modified Gravity parameter amplitudes, in different redshift bins, are best constrained by future surveys. We extend the analysis to two particular parameterizations of μ and η and consider, in addition to Euclid, also SKA1, SKA2, DESI: we find in this case that future surveys will be able to constrain the current values of η and μ at the 2-5% level when using only linear scales (wavevector k < 0.15 h/Mpc), depending on the specific time parameterization; sensitivity improves to about 1% when non-linearities are included.

  20. Highly Adjustable Systems: An Architecture for Future Space Observatories

    NASA Astrophysics Data System (ADS)

    Arenberg, Jonathan; Conti, Alberto; Redding, David; Lawrence, Charles R.; Hachkowski, Roman; Laskin, Robert; Steeves, John

    2017-06-01

    Mission costs for groundbreaking space astronomical observatories are increasing to the point of unsustainability. We are investigating the use of adjustable or correctable systems as a means to reduce development costs and therefore mission costs. The poster introduces the promise and possibility of realizing a “net zero CTE” system for the general problem of observatory design and presents the basic systems architecture we are considering. This poster concludes with an overview of our planned study and demonstrations for proving the value and worth of highly adjustable telescopes and systems ahead of the upcoming decadal survey.

  1. Adjusting Health Expenditures for Inflation: A Review of Measures for Health Services Research in the United States.

    PubMed

    Dunn, Abe; Grosse, Scott D; Zuvekas, Samuel H

    2018-02-01

    To provide guidance on selecting the most appropriate price index for adjusting health expenditures or costs for inflation. Major price index series produced by federal statistical agencies. We compare the key characteristics of each index and develop suggestions on specific indexes to use in many common situations and general guidance in others. Price series and methodological documentation were downloaded from federal websites and supplemented with literature scans. The gross domestic product implicit price deflator or the overall Personal Consumption Expenditures (PCE) index is preferable to the Consumer Price Index (CPI-U) to adjust for general inflation, in most cases. The Personal Health Care (PHC) index or the PCE health-by-function index is generally preferred to adjust total medical expenditures for inflation. The CPI medical care index is preferred for the adjustment of consumer out-of-pocket expenditures for inflation. A new, experimental disease-specific Medical Care Expenditure Index is now available to adjust payments for disease treatment episodes. There is no single gold standard for adjusting health expenditures for inflation. Our discussion of best practices can help researchers select the index best suited to their study. © Published 2016. This article is a U.S. Government work and is in the public domain in the USA.

  2. Simulation of dynamics of beam structures with bolted joints using adjusted Iwan beam elements

    NASA Astrophysics Data System (ADS)

    Song, Y.; Hartwigsen, C. J.; McFarland, D. M.; Vakakis, A. F.; Bergman, L. A.

    2004-05-01

    Mechanical joints often affect structural response, causing localized non-linear stiffness and damping changes. As many structures are assemblies, incorporating the effects of joints is necessary to produce predictive finite element models. In this paper, we present an adjusted Iwan beam element (AIBE) for dynamic response analysis of beam structures containing joints. The adjusted Iwan model consists of a combination of springs and frictional sliders that exhibits non-linear behavior due to the stick-slip characteristic of the latter. The beam element developed is two-dimensional and consists of two adjusted Iwan models and maintains the usual complement of degrees of freedom: transverse displacement and rotation at each of the two nodes. The resulting element includes six parameters, which must be determined. To circumvent the difficulty arising from the non-linear nature of the inverse problem, a multi-layer feed-forward neural network (MLFF) is employed to extract joint parameters from measured structural acceleration responses. A parameter identification procedure is implemented on a beam structure with a bolted joint. In this procedure, acceleration responses at one location on the beam structure due to one known impulsive forcing function are simulated for sets of combinations of varying joint parameters. A MLFF is developed and trained using the patterns of envelope data corresponding to these acceleration histories. The joint parameters are identified through the trained MLFF applied to the measured acceleration response. Then, using the identified joint parameters, acceleration responses of the jointed beam due to a different impulsive forcing function are predicted. The validity of the identified joint parameters is assessed by comparing simulated acceleration responses with experimental measurements. The capability of the AIBE to capture the effects of bolted joints on the dynamic responses of beam structures, and the efficacy of the MLFF parameter
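
    The stick-slip mechanism at the heart of an Iwan model (a parallel array of Jenkins elements, each a spring in series with a Coulomb slider) can be sketched in a few lines. The code below is a generic discrete Iwan-type element with made-up stiffnesses and slip forces; it is not the adjusted Iwan beam element or its identification procedure.

    ```python
    import numpy as np

    k = np.array([100.0, 100.0, 100.0, 100.0])   # Jenkins-element stiffnesses (hypothetical)
    fy = np.array([0.5, 1.0, 1.5, 2.0])          # slip forces of the Coulomb sliders

    def iwan_force_history(displacements):
        f = np.zeros_like(k)                     # internal force carried by each slider
        u_prev, total = 0.0, []
        for u in displacements:
            trial = f + k * (u - u_prev)         # elastic (stick) prediction
            f = np.where(np.abs(trial) > fy, np.sign(trial) * fy, trial)  # enforce slip limit
            total.append(f.sum())
            u_prev = u
        return np.array(total)

    u = 0.05 * np.sin(np.linspace(0.0, 4.0 * np.pi, 200))   # cyclic displacement input
    forces = iwan_force_history(u)
    print(forces.min(), forces.max())            # hysteretic force bounded by sum(fy)
    ```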

  3. Using Linear and Quadratic Functions to Teach Number Patterns in Secondary School

    ERIC Educational Resources Information Center

    Kenan, Kok Xiao-Feng

    2017-01-01

    This paper outlines an approach to definitively find the general term in a number pattern, of either a linear or quadratic form, by using the general equation of a linear or quadratic function. This approach is governed by four principles: (1) identifying the position of the term (input) and the term itself (output); (2) recognising that each…
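
    A numerical version of this idea (position n as input, term as output, fit the general quadratic T(n) = a*n^2 + b*n + c) is sketched below; the example sequence is invented and is not taken from the paper.

    ```python
    import numpy as np

    n = np.arange(1, 7)
    terms = np.array([3, 6, 11, 18, 27, 38])     # example sequence with constant 2nd difference

    coeffs = np.polyfit(n, terms, 2)             # least-squares fit of a*n**2 + b*n + c
    print(coeffs.round(3))                       # approximately [1, 0, 2], i.e. T(n) = n**2 + 2
    ```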

  4. Rank-based estimation in the ℓ1-regularized partly linear model for censored outcomes with application to integrated analyses of clinical predictors and gene expression data.

    PubMed

    Johnson, Brent A

    2009-10-01

    We consider estimation and variable selection in the partial linear model for censored data. The partial linear model for censored data is a direct extension of the accelerated failure time model, the latter of which is a very important alternative model to the proportional hazards model. We extend rank-based lasso-type estimators to a model that may contain nonlinear effects. Variable selection in such a partial linear model has a direct application to high-dimensional survival analyses that attempt to adjust for clinical predictors. In the microarray setting, previous methods can adjust for other clinical predictors by assuming that clinical and gene expression data enter the model linearly in the same fashion. Here, we select important variables after adjusting for prognostic clinical variables, but the clinical effects are assumed to be nonlinear. Our estimator is based on stratification and can be extended naturally to account for multiple nonlinear effects. We illustrate the utility of our method through simulation studies and application to the Wisconsin prognostic breast cancer data set.

  5. A sequential linear optimization approach for controller design

    NASA Technical Reports Server (NTRS)

    Horta, L. G.; Juang, J.-N.; Junkins, J. L.

    1985-01-01

    A linear optimization approach with a simple real arithmetic algorithm is presented for reliable controller design and vibration suppression of flexible structures. Using first order sensitivity of the system eigenvalues with respect to the design parameters in conjunction with a continuation procedure, the method converts a nonlinear optimization problem into a maximization problem with linear inequality constraints. The method of linear programming is then applied to solve the converted linear optimization problem. The general efficiency of the linear programming approach allows the method to handle structural optimization problems with a large number of inequality constraints on the design vector. The method is demonstrated using a truss beam finite element model for the optimal sizing and placement of active/passive-structural members for damping augmentation. Results using both the sequential linear optimization approach and nonlinear optimization are presented and compared. The insensitivity to initial conditions of the linear optimization approach is also demonstrated.
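
    The core step, replacing the nonlinear design problem at the current design point by a linear program built from first-order sensitivities and move limits, can be sketched generically with scipy's linprog. The gradient, constraint sensitivities, and move limit below are invented numbers, and the sketch shows a single linearized step rather than the paper's eigenvalue-sensitivity formulation or its continuation procedure.

    ```python
    import numpy as np
    from scipy.optimize import linprog

    x0 = np.array([1.0, 2.0, 0.5])               # current design (hypothetical)
    g = np.array([0.4, -1.2, 0.3])               # objective gradient at x0
    J = np.array([[1.0, 0.5, -0.2],              # constraint sensitivities at x0
                  [-0.3, 0.8, 1.0]])
    c0 = np.array([-0.4, -0.1])                  # constraint values at x0 (c(x) <= 0)
    move = 0.2                                   # move limit keeps the linearization valid

    # minimize g.d  subject to  c0 + J d <= 0  and  |d_i| <= move
    res = linprog(c=g, A_ub=J, b_ub=-c0, bounds=[(-move, move)] * 3, method="highs")
    print("design step d =", res.x, "-> new design:", x0 + res.x)
    ```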

  6. Do insurers respond to risk adjustment? A long-term, nationwide analysis from Switzerland.

    PubMed

    von Wyl, Viktor; Beck, Konstantin

    2016-03-01

    Community rating in social health insurance calls for risk adjustment in order to eliminate incentives for risk selection. Swiss risk adjustment is known to be insufficient, and substantial risk selection incentives remain. This study develops five indicators to monitor residual risk selection. Three indicators target activities of conglomerates of insurers (with the same ownership), which steer enrollees into specific carriers based on applicants' risk profiles. As a proxy for their market power, those indicators estimate the amount of premium-, health care cost-, and risk-adjustment transfer variability that is attributable to conglomerates. Two additional indicators, derived from linear regression, describe the amount of residual cost differences between insurers that are not covered by risk adjustment. All indicators measuring conglomerate-based risk selection activities showed increases between 1996 and 2009, paralleling the establishment of new conglomerates. At their maxima in 2009, the indicator values imply that 56% of the net risk adjustment volume, 34% of premium variability, and 51% cost variability in the market were attributable to conglomerates. From 2010 onwards, all indicators decreased, coinciding with a pre-announced risk adjustment reform implemented in 2012. Likewise, the regression-based indicators suggest that the volume and variance of residual cost differences between insurers that are not equaled out by risk adjustment have decreased markedly since 2009 as a result of the latest reform. Our analysis demonstrates that risk-selection, especially by conglomerates, is a real phenomenon in Switzerland. However, insurers seem to have reduced risk selection activities to optimize their losses and gains from the latest risk adjustment reform.

  7. Significance of adjusting salt intake by body weight in the evaluation of dietary salt and blood pressure.

    PubMed

    Hashimoto, Tomomi; Takase, Hiroyuki; Okado, Tateo; Sugiura, Tomonori; Yamashita, Sumiyo; Kimura, Genjiro; Ohte, Nobuyuki; Dohi, Yasuaki

    2016-08-01

    The close association between dietary salt and hypertension is well established. However, previous studies generally assessed salt intake without adjustment for body weight. Herein, we investigated the significance of body weight-adjusted salt intake in the general population. The present cross-sectional study included 7629 participants from our yearly physical checkup program, and their salt intake was assessed using a spot urine test to estimate 24-hour urinary salt excretion. Total salt intake increased with increasing body weight. Body weight-adjusted salt intake was greater in participants with hypertension than in those without hypertension. Systolic blood pressure, estimated glomerular filtration rate, and urinary albumin were independently correlated with body weight-adjusted salt intake after adjustment for possible cardiovascular risk factors. Excessive body weight-adjusted salt intake could be related to an increase in blood pressure and hypertensive organ damage. Adjustment for body weight might therefore provide clinically important information when assessing individual salt intake. Copyright © 2016 American Society of Hypertension. Published by Elsevier Inc. All rights reserved.

  8. Just Another Club? The Distinctiveness of the Relation between Religious Service Attendance and Adolescent Psychosocial Adjustment

    ERIC Educational Resources Information Center

    Good, Marie; Willoughby, Teena; Fritjers, Jan

    2009-01-01

    This study used hierarchical linear modeling to compare longitudinal patterns of adolescent religious service attendance and club attendance, and to contrast the longitudinal relations between adolescent adjustment and religious service versus club attendance. Participants included 1050 students (47% girls) encompassing a school district in…

  9. CPU time optimization and precise adjustment of the Geant4 physics parameters for a VARIAN 2100 C/D gamma radiotherapy linear accelerator simulation using GAMOS.

    PubMed

    Arce, Pedro; Lagares, Juan Ignacio

    2018-01-25

    We have verified the GAMOS/Geant4 simulation model of a 6 MV VARIAN Clinac 2100 C/D linear accelerator by the procedure of adjusting the initial beam parameters to fit the percentage depth dose and cross-profile dose experimental data at different depths in a water phantom. Thanks to the use of a wide range of field sizes, from 2 × 2 cm² to 40 × 40 cm², a small phantom voxel size and high statistics, fine precision in the determination of the beam parameters has been achieved. This precision has allowed us to make a thorough study of the different physics models and parameters that Geant4 offers. The three Geant4 electromagnetic physics sets of models, i.e. Standard, Livermore and Penelope, have been compared to the experiment, testing the four different models of angular bremsstrahlung distributions as well as the three available multiple-scattering models, and optimizing the most relevant Geant4 electromagnetic physics parameters. Before the fitting, a comprehensive CPU time optimization has been done, using several of the Geant4 efficiency improvement techniques plus a few more developed in GAMOS.

  10. Internet-Based Self-Help Intervention for ICD-11 Adjustment Disorder: Preliminary Findings.

    PubMed

    Eimontas, Jonas; Rimsaite, Zivile; Gegieckaite, Goda; Zelviene, Paulina; Kazlauskas, Evaldas

    2018-06-01

    Adjustment disorder is one of the most commonly diagnosed mental disorders. However, there is a lack of studies of specialized internet-based psychosocial interventions for adjustment disorder. We aimed to analyze the outcomes of an internet-based unguided self-help psychosocial intervention (BADI) for adjustment disorder in a two-armed randomized controlled trial with a waiting-list control group. In total, 284 adult participants were randomized in this study. We measured adjustment disorder as a primary outcome, and psychological well-being as a secondary outcome, at pre-intervention (T1) and one month after the intervention (T2). We found a medium effect size of the intervention on adjustment disorder symptoms for the completer sample. The intervention was effective for those participants who used it at least once in the 30-day period. Our results revealed the potential of an unguided internet-based self-help intervention for adjustment disorder. However, the high dropout rate in the study limits generalization of the intervention's outcomes to completers.

  11. Generalized Predictive and Neural Generalized Predictive Control of Aerospace Systems

    NASA Technical Reports Server (NTRS)

    Kelkar, Atul G.

    2000-01-01

    The research work presented in this thesis addresses the problem of robust control of uncertain linear and nonlinear systems using the Neural network-based Generalized Predictive Control (NGPC) methodology. A brief overview of predictive control and its comparison with Linear Quadratic (LQ) control is given to emphasize the advantages and drawbacks of predictive control methods. It is shown that the Generalized Predictive Control (GPC) methodology overcomes the drawbacks associated with traditional LQ control as well as conventional predictive control methods. It is shown that, in spite of the model-based nature of GPC, it has good robustness properties as a special case of receding horizon control. The conditions for choosing tuning parameters for GPC to ensure closed-loop stability are derived. A neural network-based GPC architecture is proposed for the control of linear and nonlinear uncertain systems. A methodology to account for parametric uncertainty in the system is proposed using the on-line training capability of a multi-layer neural network. Several simulation examples and results from real-time experiments are given to demonstrate the effectiveness of the proposed methodology.
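
    To make the receding-horizon idea behind GPC concrete, the sketch below builds prediction matrices for a known linear state-space model, minimizes a quadratic tracking-plus-control-effort cost over the horizon, and applies only the first input at each step. The plant, horizon, and weights are arbitrary illustrative choices, not the thesis's NGPC formulation.

    ```python
    import numpy as np

    A = np.array([[1.0, 0.1], [0.0, 0.9]])       # toy plant x_{k+1} = A x + B u, y = C x
    B = np.array([[0.0], [0.1]])
    C = np.array([[1.0, 0.0]])
    N, lam, r = 20, 0.1, 1.0                     # horizon, control weight, setpoint

    # Stack predictions over the horizon: y_pred = F x_k + G u_seq
    F = np.vstack([C @ np.linalg.matrix_power(A, i + 1) for i in range(N)])
    G = np.zeros((N, N))
    for i in range(N):
        for j in range(i + 1):
            G[i, j] = (C @ np.linalg.matrix_power(A, i - j) @ B)[0, 0]

    x = np.zeros((2, 1))
    for k in range(60):
        u_seq = np.linalg.solve(G.T @ G + lam * np.eye(N), G.T @ (r - F @ x))
        x = A @ x + B * u_seq[0]                 # receding horizon: apply only the first move
    print("output after 60 steps:", (C @ x).item())  # should settle near the setpoint r = 1.0
    ```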

  12. Alternative evaluation metrics for risk adjustment methods.

    PubMed

    Park, Sungchul; Basu, Anirban

    2018-06-01

    Risk adjustment is instituted to counter risk selection by accurately equating payments with expected expenditures. Traditional risk-adjustment methods are designed to estimate accurate payments at the group level. However, this generates residual risks at the individual level, especially for high-expenditure individuals, thereby inducing health plans to avoid those with high residual risks. To identify an optimal risk-adjustment method, we perform a comprehensive comparison of prediction accuracies at the group level, at the tail distributions, and at the individual level across 19 estimators: 9 parametric regression, 7 machine learning, and 3 distributional estimators. Using the 2013-2014 MarketScan database, we find that no one estimator performs best in all prediction accuracies. Generally, machine learning and distribution-based estimators achieve higher group-level prediction accuracy than parametric regression estimators. However, parametric regression estimators show higher tail distribution prediction accuracy and individual-level prediction accuracy, especially at the tails of the distribution. This suggests that there is a trade-off in selecting an appropriate risk-adjustment method between estimating accurate payments at the group level and lower residual risks at the individual level. Our results indicate that an optimal method cannot be determined solely on the basis of statistical metrics but rather needs to account for simulating plans' risk selective behaviors. Copyright © 2018 John Wiley & Sons, Ltd.

  13. Deformation-Aware Log-Linear Models

    NASA Astrophysics Data System (ADS)

    Gass, Tobias; Deselaers, Thomas; Ney, Hermann

    In this paper, we present a novel deformation-aware discriminative model for handwritten digit recognition. Unlike previous approaches our model directly considers image deformations and allows discriminative training of all parameters, including those accounting for non-linear transformations of the image. This is achieved by extending a log-linear framework to incorporate a latent deformation variable. The resulting model has an order of magnitude less parameters than competing approaches to handling image deformations. We tune and evaluate our approach on the USPS task and show its generalization capabilities by applying the tuned model to the MNIST task. We gain interesting insights and achieve highly competitive results on both tasks.

  14. Linear-time reconstruction of zero-recombinant Mendelian inheritance on pedigrees without mating loops.

    PubMed

    Liu, Lan; Jiang, Tao

    2007-01-01

    With the launch of the international HapMap project, the haplotype inference problem has attracted a great deal of attention in the computational biology community recently. In this paper, we study the question of how to efficiently infer haplotypes from genotypes of individuals related by a pedigree without mating loops, assuming that the hereditary process was free of mutations (i.e. the Mendelian law of inheritance) and recombinants. We model the haplotype inference problem as a system of linear equations as in [10] and present an (optimal) linear-time (i.e. O(mn) time) algorithm to generate a particular solution (A particular solution of any linear system is an assignment of numerical values to the variables in the system which satisfies the equations in the system.) to the haplotype inference problem, where m is the number of loci (or markers) in a genotype and n is the number of individuals in the pedigree. Moreover, the algorithm also provides a general solution (A general solution of any linear system is denoted by the span of a basis in the solution space to its associated homogeneous system, offset from the origin by a vector, namely by any particular solution. A general solution for ZRHC is very useful in practice because it allows the end user to efficiently enumerate all solutions for ZRHC and perform tasks such as random sampling.) in O(mn²) time, which is optimal because the size of a general solution could be as large as Θ(mn²). The key ingredients of our construction are (i) a fast consistency checking procedure for the system of linear equations introduced in [10], based on a careful investigation of the relationship between the equations, and (ii) a novel linear-time method for solving linear equations without invoking the Gaussian elimination method. Although such a fast method for solving equations is not known for general systems of linear equations, we take advantage of the underlying loop-free pedigree graph and some special properties of the

  15. General Multivariate Linear Modeling of Surface Shapes Using SurfStat

    PubMed Central

    Chung, Moo K.; Worsley, Keith J.; Nacewicz, Brendon, M.; Dalton, Kim M.; Davidson, Richard J.

    2010-01-01

    Although there are many imaging studies on traditional ROI-based amygdala volumetry, there are very few studies on modeling amygdala shape variations. This paper presents a unified computational and statistical framework for modeling amygdala shape variations in a clinical population. The weighted spherical harmonic representation is used to parameterize, smooth, and normalize amygdala surfaces. The representation is subsequently used as an input for multivariate linear models accounting for nuisance covariates such as age and brain size differences using the SurfStat package, which completely avoids the complexity of specifying design matrices. The methodology has been applied to quantify abnormal local amygdala shape variations in 22 high-functioning autistic subjects. PMID:20620211

  16. Linear SFM: A hierarchical approach to solving structure-from-motion problems by decoupling the linear and nonlinear components

    NASA Astrophysics Data System (ADS)

    Zhao, Liang; Huang, Shoudong; Dissanayake, Gamini

    2018-07-01

    This paper presents a novel hierarchical approach to solving structure-from-motion (SFM) problems. The algorithm begins with small local reconstructions based on nonlinear bundle adjustment (BA). These are then joined in a hierarchical manner using a strategy that requires solving a linear least squares optimization problem followed by a nonlinear transform. The algorithm can handle ordered monocular and stereo image sequences. Two stereo images or three monocular images are adequate for building each initial reconstruction. The bulk of the computation involves solving a linear least squares problem and, therefore, the proposed algorithm avoids three major issues associated with most of the nonlinear optimization algorithms currently used for SFM: the need for a reasonably accurate initial estimate, the need for iterations, and the possibility of being trapped in a local minimum. Also, by summarizing all the original observations into the small local reconstructions with associated information matrices, the proposed Linear SFM manages to preserve all the information contained in the observations. The paper also demonstrates that the proposed problem formulation results in a sparse structure that leads to an efficient numerical implementation. The experimental results using publicly available datasets show that the proposed algorithm yields solutions that are very close to those obtained using a global BA starting with an accurate initial estimate. The C/C++ source code of the proposed algorithm is publicly available at https://github.com/LiangZhaoPKUImperial/LinearSFM.

  17. Age, Acculturation, Cultural Adjustment, and Mental Health Symptoms of Chinese, Korean, and Japanese Immigrant Youths.

    ERIC Educational Resources Information Center

    Yeh, Christine J.

    2003-01-01

    This study of Japanese, Chinese, and Korean immigrant junior high and high school students investigated the association between age, acculturation, cultural adjustment difficulties, and general mental health concerns. Analyses determined that age, acculturation, and cultural adjustment difficulties had significant predictive effects on mental…

  18. Large-Scale functional network overlap is a general property of brain functional organization: Reconciling inconsistent fMRI findings from general-linear-model-based analyses

    PubMed Central

    Xu, Jiansong; Potenza, Marc N.; Calhoun, Vince D.; Zhang, Rubin; Yip, Sarah W.; Wall, John T.; Pearlson, Godfrey D.; Worhunsky, Patrick D.; Garrison, Kathleen A.; Moran, Joseph M.

    2016-01-01

    Functional magnetic resonance imaging (fMRI) studies regularly use univariate general-linear-model-based analyses (GLM). Their findings are often inconsistent across different studies, perhaps because of several fundamental brain properties including functional heterogeneity, balanced excitation and inhibition (E/I), and sparseness of neuronal activities. These properties stipulate heterogeneous neuronal activities in the same voxels and likely limit the sensitivity and specificity of GLM. This paper selectively reviews findings of histological and electrophysiological studies and fMRI spatial independent component analysis (sICA) and reports new findings by applying sICA to two existing datasets. The extant and new findings consistently demonstrate several novel features of brain functional organization not revealed by GLM. They include overlap of large-scale functional networks (FNs) and their concurrent opposite modulations, and no significant modulations in activity of most FNs across the whole brain during any task conditions. These novel features of brain functional organization are highly consistent with the brain’s properties of functional heterogeneity, balanced E/I, and sparseness of neuronal activity, and may help reconcile inconsistent GLM findings. PMID:27592153

  19. Surgery for left ventricular aneurysm: early and late survival after simple linear repair and endoventricular patch plasty.

    PubMed

    Lundblad, Runar; Abdelnoor, Michel; Svennevig, Jan Ludvig

    2004-09-01

    Simple linear resection and endoventricular patch plasty are alternative techniques to repair postinfarction left ventricular aneurysm. The aim of the study was to compare these 2 methods with regard to early mortality and long-term survival. We retrospectively reviewed 159 patients undergoing operations between 1989 and 2003. The epidemiologic design was of an exposed (simple linear repair, n = 74) versus nonexposed (endoventricular patch plasty, n = 85) cohort with 2 endpoints: early mortality and long-term survival. The crude effect of aneurysm repair technique versus endpoint was estimated by odds ratio, rate ratio, or relative risk and their 95% confidence intervals. Stratification analysis by using the Mantel-Haenszel method was done to quantify confounders and pinpoint effect modifiers. Adjustment for multiconfounders was performed by using logistic regression and Cox regression analysis. Survival curves were analyzed with the Breslow test and the log-rank test. Early mortality was 8.2% for all patients, 13.5% after linear repair and 3.5% after endoventricular patch plasty. When adjusted for multiconfounders, the risk of early mortality was significantly higher after simple linear repair than after endoventricular patch plasty (odds ratio, 4.4; 95% confidence interval, 1.1-17.8). Mean follow-up was 5.8 +/- 3.8 years (range, 0-14.0 years). Overall 5-year cumulative survival was 78%, 70.1% after linear repair and 91.4% after endoventricular patch plasty. The risk of total mortality was significantly higher after linear repair than after endoventricular patch plasty when controlled for multiconfounders (relative risk, 4.5; 95% confidence interval, 2.0-9.7). Linear repair dominated early in the series and patch plasty dominated later, giving a possible learning-curve bias in favor of patch plasty that could not be adjusted for in the regression analysis. Postinfarction left ventricular aneurysm can be repaired with satisfactory early and late results. Surgical

  20. 26 CFR 1.56-0 - Table of contents to § 1.56-1, adjustment for book income of corporations.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... book income of corporations. 1.56-0 Section 1.56-0 Internal Revenue INTERNAL REVENUE SERVICE..., adjustment for book income of corporations. (a) Computation of the book income adjustment. (1) In general. (2) Taxpayers subject to the book income adjustment. (3) Consolidated returns. (4) Examples. (b) Adjusted net...

  1. Wave Response during Hydrostatic and Geostrophic Adjustment. Part I: Transient Dynamics.

    NASA Astrophysics Data System (ADS)

    Chagnon, Jeffrey M.; Bannon, Peter R.

    2005-05-01

    The adjustment of a compressible, stably stratified atmosphere to sources of hydrostatic and geostrophic imbalance is investigated using a linear model. Imbalance is produced by prescribed, time-dependent injections of mass, heat, or momentum that model those processes considered “external” to the scales of motion on which the linearization and other model assumptions are justifiable. Solutions are demonstrated in response to a localized warming characteristic of small isolated clouds, larger thunderstorms, and convective systems. For a semi-infinite atmosphere, solutions consist of a set of vertical modes of continuously varying wavenumber, each of which contains time dependencies classified as steady, acoustic wave, and buoyancy wave contributions. Additionally, a rigid lower-boundary condition implies the existence of a discrete mode—the Lamb mode—containing only a steady and acoustic wave contribution. The forced solutions are generalized in terms of a temporal Green's function, which represents the response to an instantaneous injection. The response to an instantaneous warming with geometry representative of a small, isolated cloud takes place in two stages. Within the first few minutes, acoustic and Lamb waves accomplish an expansion of the heated region. Within the first quarter-hour, nonhydrostatic buoyancy waves accomplish an upward displacement inside of the heated region with inflow below, outflow above, and weak subsidence on the periphery—all mainly accomplished by the lowest vertical wavenumber modes, which have the largest horizontal group speed. More complicated transient patterns of inflow aloft and outflow along the lower boundary are accomplished by higher vertical wavenumber modes. Among these is an outwardly propagating rotor along the lower boundary that effectively displaces the low-level inflow upward and outward. A warming of 20 min duration with geometry representative of a large thunderstorm generates only a weak acoustic

  2. Statistical Methods for Quality Control of Steel Coils Manufacturing Process using Generalized Linear Models

    NASA Astrophysics Data System (ADS)

    García-Díaz, J. Carlos

    2009-11-01

    Fault detection and diagnosis is an important problem in process engineering. Process equipment is subject to malfunctions during operation. Galvanized steel is a value-added product, furnishing effective performance by combining the corrosion resistance of zinc with the strength and formability of steel. Fault detection and diagnosis is an important problem in continuous hot dip galvanizing, and the increasingly stringent quality requirements in the automotive industry have also demanded ongoing efforts in process control to make the process more robust. When faults occur, they change the relationship among the observed variables. This work compares different statistical regression models proposed in the literature for estimating the quality of galvanized steel coils on the basis of short time histories. Data for 26 batches were available. Five variables were selected for monitoring the process: the steel strip velocity, four bath temperatures and bath level. The entire data set of 48 galvanized steel coils was divided into two sets: the first, used for training, contained 25 conforming coils, and the second contained 23 nonconforming coils. Logistic regression is a modeling tool in which the dependent variable is categorical; in most applications, the dependent variable is binary. The results show that logistic generalized linear models provide good estimates of coil quality and can be useful for quality control in the manufacturing process.
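
    As a hedged illustration of the kind of logistic generalized linear model described above (using synthetic data and placeholder variables, not the study's measurements), a fit might look like this:

    import numpy as np
    import statsmodels.api as sm

    # Synthetic stand-ins for the monitored process variables (strip velocity,
    # bath temperatures, bath level).
    rng = np.random.default_rng(1)
    n = 48
    X = rng.normal(size=(n, 6))                       # hypothetical process variables
    logits = 0.8 * X[:, 0] - 1.2 * X[:, 3]            # arbitrary true effects for the sketch
    y = rng.binomial(1, 1 / (1 + np.exp(-logits)))    # 1 = nonconforming coil, 0 = conforming

    # Binomial GLM with logit link, i.e. logistic regression.
    model = sm.GLM(y, sm.add_constant(X), family=sm.families.Binomial())
    result = model.fit()
    print(result.summary())
    print(result.predict(sm.add_constant(X))[:5])     # estimated probability of nonconformance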

  3. Population Decoding of Motor Cortical Activity using a Generalized Linear Model with Hidden States

    PubMed Central

    Lawhern, Vernon; Wu, Wei; Hatsopoulos, Nicholas G.; Paninski, Liam

    2010-01-01

    Generalized linear models (GLMs) have been developed for modeling and decoding population neuronal spiking activity in the motor cortex. These models provide reasonable characterizations of the relationship between neural activity and motor behavior. However, they lack a description of movement-related terms that are not observed directly in these experiments, such as muscular activation, the subject's level of attention, and other internal or external states. Here we propose to include a multi-dimensional hidden state to address these states in a GLM framework where the spike count at each time is described as a function of the hand state (position, velocity, and acceleration), truncated spike history, and the hidden state. The model can be identified by an Expectation-Maximization algorithm. We tested this new method in two datasets where spikes were simultaneously recorded using a multi-electrode array in the primary motor cortex of two monkeys. It was found that this method significantly improves the model fitting over the classical GLM, for hidden dimensions varying from 1 to 4. This method also provides more accurate decoding of hand state (lowering the Mean Square Error by up to 29% in some cases), while retaining real-time computational efficiency. These improvements in representation and decoding over the classical GLM suggest that this new approach could contribute as a useful tool to motor cortical decoding and prosthetic applications. PMID:20359500
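
    A minimal sketch of the baseline model this work extends, a Poisson GLM relating spike counts to hand kinematics, is shown below. The hidden states, spike history, and EM fitting described in the paper are omitted, and all variable names and coefficients are hypothetical.

    import numpy as np
    import statsmodels.api as sm

    # Hypothetical design matrix: hand position, velocity, and acceleration per time bin.
    rng = np.random.default_rng(2)
    T = 1000
    kinematics = rng.normal(size=(T, 6))
    X = sm.add_constant(kinematics)
    rate = np.exp(0.1 + kinematics @ np.array([0.3, -0.2, 0.4, 0.0, 0.1, -0.1]))
    spikes = rng.poisson(rate)                        # simulated spike counts per bin

    # Classical Poisson GLM with log link; the paper's extension adds a multi-dimensional
    # hidden state and truncated spike history, fitted with EM, which is not shown here.
    glm = sm.GLM(spikes, X, family=sm.families.Poisson())
    fit = glm.fit()
    print(fit.params)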

  4. Quality measurement in the shunt treatment of hydrocephalus: analysis and risk adjustment of the Revision Quotient.

    PubMed

    Piatt, Joseph H; Freibott, Christina E

    2014-07-01

    OBJECT: The Revision Quotient (RQ) has been defined as the ratio of the number of CSF shunt revisions to the number of new shunt insertions for a particular neurosurgical practice in a unit of time. The RQ has been proposed as a quality measure in the treatment of childhood hydrocephalus. The authors examined the construct validity of the RQ and explored the feasibility of risk stratification under this metric. The Kids' Inpatient Database for 1997, 2000, 2003, 2006, and 2009 was queried for admissions with diagnostic codes for hydrocephalus and procedural codes for CSF shunt insertion or revision. Revision quotients were calculated for hospitals that performed 12 or more shunt insertions annually. The univariate associations of hospital RQs with a variety of institutional descriptors were analyzed, and a generalized linear model of the RQ was constructed. There were 12,244 admissions (34%) during which new shunts were inserted, and there were 23,349 admissions (66%) for shunt revision. Three hundred thirty-four annual RQs were calculated for 152 different hospitals. Analysis of variance in hospital RQs over the 5 years of study data supports the construct validity of the metric. The following factors were incorporated into a generalized linear model that accounted for 41% of the variance of the measured RQs: degree of pediatric specialization, proportion of initial case mix in the infant age group, and proportion with neoplastic hydrocephalus. The RQ has construct validity. Risk adjustment is feasible, but the risk factors that were identified relate predominantly to patterns of patient flow through the health care system. Possible advantages of an alternative metric, the Surgical Activity Ratio, are discussed.

  5. Application of Bounded Linear Stability Analysis Method for Metrics-Driven Adaptive Control

    NASA Technical Reports Server (NTRS)

    Bakhtiari-Nejad, Maryam; Nguyen, Nhan T.; Krishnakumar, Kalmanje

    2009-01-01

    This paper presents the application of the Bounded Linear Stability Analysis (BLSA) method to metrics-driven adaptive control. The bounded linear stability analysis method is used for analyzing the stability of adaptive control models without linearizing the adaptive laws. Metrics-driven adaptive control introduces the notion that adaptation should be driven by stability metrics to achieve robustness. By applying the bounded linear stability analysis method, the adaptive gain is adjusted during adaptation in order to meet certain phase margin requirements. Analysis of metrics-driven adaptive control is evaluated for a second-order system that represents the pitch attitude control of a generic transport aircraft. The analysis shows that the system with the metrics-conforming variable adaptive gain becomes more robust to unmodeled dynamics or time delay. The effect of the analysis time window for BLSA is also evaluated with respect to meeting the stability margin criteria.

  6. Linear transformation and oscillation criteria for Hamiltonian systems

    NASA Astrophysics Data System (ADS)

    Zheng, Zhaowen

    2007-08-01

    Using a linear transformation similar to the Kummer transformation, some new oscillation criteria for linear Hamiltonian systems are established. These results generalize and improve the oscillation criteria due to I.S. Kumari and S. Umanaheswaram [I. Sowjaya Kumari, S. Umanaheswaram, Oscillation criteria for linear matrix Hamiltonian systems, J. Differential Equations 165 (2000) 174-198], Q. Yang et al. [Q. Yang, R. Mathsen, S. Zhu, Oscillation theorems for self-adjoint matrix Hamiltonian systems, J. Differential Equations 190 (2003) 306-329], and S. Chen and Z. Zheng [Shaozhu Chen, Zhaowen Zheng, Oscillation criteria of Yan type for linear Hamiltonian systems, Comput. Math. Appl. 46 (2003) 855-862]. These criteria also unify many of known criteria in literature and simplify the proofs.

  7. Perfect commuting-operator strategies for linear system games

    NASA Astrophysics Data System (ADS)

    Cleve, Richard; Liu, Li; Slofstra, William

    2017-01-01

    Linear system games are a generalization of Mermin's magic square game introduced by Cleve and Mittal. They show that perfect strategies for linear system games in the tensor-product model of entanglement correspond to finite-dimensional operator solutions of a certain set of non-commutative equations. We investigate linear system games in the commuting-operator model of entanglement, where Alice and Bob's measurement operators act on a joint Hilbert space, and Alice's operators must commute with Bob's operators. We show that perfect strategies in this model correspond to possibly infinite-dimensional operator solutions of the non-commutative equations. The proof is based around a finitely presented group associated with the linear system which arises from the non-commutative equations.
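
    As a concrete illustration (recalled here from the standard Mermin–Peres construction rather than quoted from the paper), the magic square game corresponds to the binary linear system below, which has no classical solution but admits a perfect quantum strategy built from the usual 3×3 array of two-qubit Pauli observables.

    % 3x3 array of binary variables x_{ij}: every row has even parity, the first two
    % columns have even parity, and the third column has odd parity. Summing all six
    % equations gives 0 = 1 over GF(2), so no scalar (classical) solution exists.
    \[
      \sum_{j=1}^{3} x_{ij} \equiv 0 \pmod 2 \ \ (i=1,2,3), \qquad
      \sum_{i=1}^{3} x_{ij} \equiv 0 \pmod 2 \ \ (j=1,2), \qquad
      \sum_{i=1}^{3} x_{i3} \equiv 1 \pmod 2 .
    \]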

  8. 26 CFR 1.1016-5 - Miscellaneous adjustments to basis.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 26 Internal Revenue 11 2011-04-01 2011-04-01 false Miscellaneous adjustments to basis. 1.1016-5 Section 1.1016-5 Internal Revenue INTERNAL REVENUE SERVICE, DEPARTMENT OF THE TREASURY (CONTINUED) INCOME TAX (CONTINUED) INCOME TAXES (CONTINUED) Basis Rules of General Application § 1.1016-5 Miscellaneous...

  9. Gender identity and adjustment in black, Hispanic, and white preadolescents.

    PubMed

    Corby, Brooke C; Hodges, Ernest V E; Perry, David G

    2007-01-01

    The generality of S. K. Egan and D. G. Perry's (2001) model of gender identity and adjustment was evaluated by examining associations between gender identity (felt gender typicality, felt gender contentedness, and felt pressure for gender conformity) and social adjustment in 863 White, Black, and Hispanic 5th graders (mean age = 11.1 years). Relations between gender identity and adjustment varied across ethnic/racial groups, indicating that S. K. Egan and D. G. Perry's model requires amendment. It is suggested that the implications of gender identity for adjustment depend on the particular meanings that a child attaches to gender (e.g., the specific attributes the child regards as desirable for each sex); these meanings may vary across and within ethnic/racial groups. Cross-ethnic/racial investigation can aid theory building by pointing to constructs that are neglected in research with a single ethnic/racial group but that are crucial components of basic developmental processes. Copyright 2006 APA, all rights reserved.

  10. Comparison of dynamical approximation schemes for non-linear gravitational clustering

    NASA Technical Reports Server (NTRS)

    Melott, Adrian L.

    1994-01-01

    We have recently conducted a controlled comparison of a number of approximations for gravitational clustering against the same n-body simulations. These include ordinary linear perturbation theory (Eulerian), the adhesion approximation, the frozen-flow approximation, the Zel'dovich approximation (describable as first-order Lagrangian perturbation theory), and its second-order generalization. In the last two cases we also created new versions of approximation by truncation, i.e., smoothing the initial conditions by various smoothing window shapes and varying their sizes. The primary tool for comparing simulations to approximation schemes was crosscorrelation of the evolved mass density fields, testing the extent to which mass was moved to the right place. The Zel'dovich approximation, with initial convolution with a Gaussian exp(−k²/k_G²), where k_G is adjusted to be just into the nonlinear regime of the evolved model (details in text), worked extremely well. Its second-order generalization worked slightly better. All other schemes, including those proposed as generalizations of the Zel'dovich approximation created by adding forces, were in fact generally worse by this measure. By explicitly checking, we verified that the success of our best-choice was a result of the best treatment of the phases of nonlinear Fourier components. Of all schemes tested, the adhesion approximation produced the most accurate nonlinear power spectrum and density distribution, but its phase errors suggest mass condensations were moved to slightly the wrong location. Due to its better reproduction of the mass density distribution function and power spectrum, it might be preferred for some uses. We recommend either n-body simulations or our modified versions of the Zel'dovich approximation, depending upon the purpose. The theoretical implication is that pancaking is implicit in all cosmological gravitational clustering, at least from Gaussian initial conditions, even

  11. Modeling and Control of the Redundant Parallel Adjustment Mechanism on a Deployable Antenna Panel

    PubMed Central

    Tian, Lili; Bao, Hong; Wang, Meng; Duan, Xuechao

    2016-01-01

    With the aim of developing multiple input and multiple output (MIMO) coupling systems with a redundant parallel adjustment mechanism on the deployable antenna panel, a structural control integrated design methodology is proposed in this paper. First, the modal information from the finite element model of the antenna panel structure is extracted, and the mathematical model is established with the Hamilton principle. Second, a discrete Linear Quadratic Regulator (LQR) controller is added to the model in order to control the actuators and adjust the shape of the panel. Finally, the engineering practicality of the modeling and control method based on finite element analysis simulation is verified. PMID:27706076
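
    A hedged sketch of the discrete LQR gain computation mentioned above, using a generic placeholder state-space model rather than the paper's finite-element-derived one:

    import numpy as np
    from scipy.linalg import solve_discrete_are

    # Placeholder discrete-time model x[k+1] = A x[k] + B u[k]; in the paper this would
    # come from the modal finite element model of the antenna panel.
    A = np.array([[1.0, 0.1],
                  [0.0, 0.98]])
    B = np.array([[0.0],
                  [0.1]])
    Q = np.diag([10.0, 1.0])   # state weighting (e.g. surface shape error)
    R = np.array([[0.01]])     # actuator effort weighting

    # Discrete LQR: solve the discrete algebraic Riccati equation, then form the gain.
    P = solve_discrete_are(A, B, Q, R)
    K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)   # control law u[k] = -K x[k]
    print("LQR gain:", K)

    # Closed-loop check: eigenvalues of (A - B K) should lie inside the unit circle.
    print("closed-loop eigenvalues:", np.linalg.eigvals(A - B @ K))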

  12. 49 CFR 393.53 - Automatic brake adjusters and brake adjustment indicators.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... indicators. 393.53 Section 393.53 Transportation Other Regulations Relating to Transportation (Continued... brake adjustment indicators. (a) Automatic brake adjusters (hydraulic brake systems). Each commercial... vehicle at the time it was manufactured. (c) Brake adjustment indicator (air brake systems). On each...

  13. 49 CFR 393.53 - Automatic brake adjusters and brake adjustment indicators.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... indicators. 393.53 Section 393.53 Transportation Other Regulations Relating to Transportation (Continued... brake adjustment indicators. (a) Automatic brake adjusters (hydraulic brake systems). Each commercial... vehicle at the time it was manufactured. (c) Brake adjustment indicator (air brake systems). On each...

  14. A linear accelerator for simulated micrometeors.

    NASA Technical Reports Server (NTRS)

    Slattery, J. C.; Becker, D. G.; Hamermesh, B.; Roy, N. L.

    1973-01-01

    Review of the theory, design parameters, and construction details of a linear accelerator designed to impart meteoric velocities to charged microparticles in the 1- to 10-micron diameter range. The described linac is of the Sloan Lawrence type and, in a significant departure from conventional accelerator practice, is adapted to single particle operation by employing a square wave driving voltage with the frequency automatically adjusted from 12.5 to 125 kHz according to the variable velocity of each injected particle. Any output velocity up to about 30 km/sec can easily be selected, with a repetition rate of approximately two particles per minute.

  15. Implementing Linear Algebra Related Algorithms on the TI-92+ Calculator.

    ERIC Educational Resources Information Center

    Alexopoulos, John; Abraham, Paul

    2001-01-01

    Demonstrates a less utilized feature of the TI-92+: its natural and powerful programming language. Shows how to implement several linear algebra related algorithms including the Gram-Schmidt process, Least Squares Approximations, Wronskians, Cholesky Decompositions, and Generalized Linear Least Square Approximations with QR Decompositions.…
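
    For readers without a TI-92+, the Gram-Schmidt process mentioned above is easy to sketch in any language; here is a minimal Python version (the classical variant, for teaching rather than numerical robustness):

    import numpy as np

    def gram_schmidt(vectors):
        """Classical Gram-Schmidt: orthonormalize the rows of `vectors` (teaching sketch)."""
        basis = []
        for v in np.asarray(vectors, dtype=float):
            w = v - sum(np.dot(v, b) * b for b in basis)  # remove components along earlier vectors
            norm = np.linalg.norm(w)
            if norm > 1e-12:                              # skip (numerically) dependent vectors
                basis.append(w / norm)
        return np.array(basis)

    Q = gram_schmidt([[1.0, 1.0, 0.0], [1.0, 0.0, 1.0], [0.0, 1.0, 1.0]])
    print(np.round(Q @ Q.T, 6))                           # should be close to the identity matrix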

  16. 29 CFR 301.6 - General.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 29 Labor 2 2014-07-01 2014-07-01 false General. 301.6 Section 301.6 Labor Regulations Relating to Labor NATIONAL RAILROAD ADJUSTMENT BOARD RULES OF PROCEDURE § 301.6 General. (a) To conserve time and...) All submissions shall be typewritten or machine prepared, addressed to the Secretary of the...

  17. 29 CFR 301.6 - General.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 29 Labor 2 2013-07-01 2013-07-01 false General. 301.6 Section 301.6 Labor Regulations Relating to Labor NATIONAL RAILROAD ADJUSTMENT BOARD RULES OF PROCEDURE § 301.6 General. (a) To conserve time and...) All submissions shall be typewritten or machine prepared, addressed to the Secretary of the...

  18. 29 CFR 301.6 - General.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 29 Labor 2 2012-07-01 2012-07-01 false General. 301.6 Section 301.6 Labor Regulations Relating to Labor NATIONAL RAILROAD ADJUSTMENT BOARD RULES OF PROCEDURE § 301.6 General. (a) To conserve time and...) All submissions shall be typewritten or machine prepared, addressed to the Secretary of the...

  19. 29 CFR 301.6 - General.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 29 Labor 2 2010-07-01 2010-07-01 false General. 301.6 Section 301.6 Labor Regulations Relating to Labor NATIONAL RAILROAD ADJUSTMENT BOARD RULES OF PROCEDURE § 301.6 General. (a) To conserve time and...) All submissions shall be typewritten or machine prepared, addressed to the Secretary of the...

  20. 29 CFR 301.6 - General.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 29 Labor 2 2011-07-01 2011-07-01 false General. 301.6 Section 301.6 Labor Regulations Relating to Labor NATIONAL RAILROAD ADJUSTMENT BOARD RULES OF PROCEDURE § 301.6 General. (a) To conserve time and...) All submissions shall be typewritten or machine prepared, addressed to the Secretary of the...

  1. Academic adjustment across middle school: the role of public regard and parenting.

    PubMed

    McGill, Rebecca Kang; Hughes, Diane; Alicea, Stacey; Way, Niobe

    2012-07-01

    In the current longitudinal study, we examined associations between Black and Latino youths' perceptions of the public's opinion of their racial/ethnic group (i.e., public regard) and changes in academic adjustment outcomes across middle school. We also tested combinations of racial/ethnic socialization and parent involvement in academic activities as moderators of this association. We used a 2nd-order latent trajectory model to test changes in academic adjustment outcomes in a sample of 345 Black and Latino urban youth across 6th, 7th, and 8th grades (51% female). Results revealed a significant average linear decline in academic adjustment from 6th to 8th grade, as well as significant variation around this decline. We found that parenting moderated the association between public regard and the latent trajectory of academic adjustment. Specifically, for youth who reported high racial/ethnic socialization and low parent academic involvement, lower public regard predicted lower academic adjustment in 6th grade. For youth who reported both low racial/ethnic socialization and low parent academic involvement, lower public regard predicted a steeper decline in academic adjustment over time. Finally, among youth who reported high racial/ethnic socialization and high parent academic involvement, public regard was not associated with either the intercept or the slope of academic adjustment. Thus, the combination of high racial/ethnic socialization and parent academic involvement may protect youths' academic motivation and performance from the negative effects of believing the public has low opinions of one's racial/ethnic group. Implications for protecting Black and Latino youths' academic outcomes from decline during middle school are discussed.

  2. A meta-analysis of cambium phenology and growth: linear and non-linear patterns in conifers of the northern hemisphere

    PubMed Central

    Rossi, Sergio; Anfodillo, Tommaso; Čufar, Katarina; Cuny, Henri E.; Deslauriers, Annie; Fonti, Patrick; Frank, David; Gričar, Jožica; Gruber, Andreas; King, Gregory M.; Krause, Cornelia; Morin, Hubert; Oberhuber, Walter; Prislan, Peter; Rathgeber, Cyrille B. K.

    2013-01-01

    Background and Aims Ongoing global warming has been implicated in shifting phenological patterns such as the timing and duration of the growing season across a wide variety of ecosystems. Linear models are routinely used to extrapolate these observed shifts in phenology into the future and to estimate changes in associated ecosystem properties such as net primary productivity. Yet, in nature, linear relationships may be special cases. Biological processes frequently follow more complex, non-linear patterns according to limiting factors that generate shifts and discontinuities, or contain thresholds beyond which responses change abruptly. This study investigates to what extent cambium phenology is associated with xylem growth and differentiation across conifer species of the northern hemisphere. Methods Xylem cell production is compared with the periods of cambial activity and cell differentiation assessed on a weekly time scale on histological sections of cambium and wood tissue collected from the stems of nine species in Canada and Europe over 1–9 years per site from 1998 to 2011. Key Results The dynamics of xylogenesis were surprisingly homogeneous among conifer species, although dispersions from the average were obviously observed. Within the range analysed, the relationships between the phenological timings were linear, with several slopes showing values close to or not statistically different from 1. The relationships between the phenological timings and cell production were distinctly non-linear, and involved an exponential pattern. Conclusions The trees adjust their phenological timings according to linear patterns. Thus, shifts of one phenological phase are associated with synchronous and comparable shifts of the successive phases. However, small increases in the duration of xylogenesis could correspond to a substantial increase in cell production. The findings suggest that the length of the growing season and the resulting amount of growth could respond

  3. A meta-analysis of cambium phenology and growth: linear and non-linear patterns in conifers of the northern hemisphere.

    PubMed

    Rossi, Sergio; Anfodillo, Tommaso; Cufar, Katarina; Cuny, Henri E; Deslauriers, Annie; Fonti, Patrick; Frank, David; Gricar, Jozica; Gruber, Andreas; King, Gregory M; Krause, Cornelia; Morin, Hubert; Oberhuber, Walter; Prislan, Peter; Rathgeber, Cyrille B K

    2013-12-01

    Ongoing global warming has been implicated in shifting phenological patterns such as the timing and duration of the growing season across a wide variety of ecosystems. Linear models are routinely used to extrapolate these observed shifts in phenology into the future and to estimate changes in associated ecosystem properties such as net primary productivity. Yet, in nature, linear relationships may be special cases. Biological processes frequently follow more complex, non-linear patterns according to limiting factors that generate shifts and discontinuities, or contain thresholds beyond which responses change abruptly. This study investigates to what extent cambium phenology is associated with xylem growth and differentiation across conifer species of the northern hemisphere. Xylem cell production is compared with the periods of cambial activity and cell differentiation assessed on a weekly time scale on histological sections of cambium and wood tissue collected from the stems of nine species in Canada and Europe over 1-9 years per site from 1998 to 2011. The dynamics of xylogenesis were surprisingly homogeneous among conifer species, although dispersions from the average were obviously observed. Within the range analysed, the relationships between the phenological timings were linear, with several slopes showing values close to or not statistically different from 1. The relationships between the phenological timings and cell production were distinctly non-linear, and involved an exponential pattern. The trees adjust their phenological timings according to linear patterns. Thus, shifts of one phenological phase are associated with synchronous and comparable shifts of the successive phases. However, small increases in the duration of xylogenesis could correspond to a substantial increase in cell production. The findings suggest that the length of the growing season and the resulting amount of growth could respond differently to changes in environmental conditions.

  4. Just another club? The distinctiveness of the relation between religious service attendance and adolescent psychosocial adjustment.

    PubMed

    Good, Marie; Willoughby, Teena; Fritjers, Jan

    2009-10-01

    This study used hierarchical linear modeling to compare longitudinal patterns of adolescent religious service attendance and club attendance, and to contrast the longitudinal relations between adolescent adjustment and religious service versus club attendance. Participants included 1050 students (47% girls) encompassing a school district in Canada, who completed the survey first in grade nine and again in grades 11 and 12. Results demonstrated that patterns of religious service attendance over time were quite different from other clubs. Religious attendance was uniquely associated with several indicators of positive as well as negative adjustment. Club involvement, conversely, was only associated with positive adjustment--particularly for individuals who reported sustained involvement over time. Findings suggest that religious services may provide some unique experiences--both positive and negative--over and above what may be provided in other clubs, and that sustained, rather than sporadic participation in clubs, may be especially important for adolescent adjustment.

  5. Key-Generation Algorithms for Linear Piece In Hand Matrix Method

    NASA Astrophysics Data System (ADS)

    Tadaki, Kohtaro; Tsujii, Shigeo

    The linear Piece In Hand (PH, for short) matrix method with random variables was proposed in our former work. It is a general prescription which can be applicable to any type of multivariate public-key cryptosystems for the purpose of enhancing their security. Actually, we showed, in an experimental manner, that the linear PH matrix method with random variables can certainly enhance the security of HFE against the Gröbner basis attack, where HFE is one of the major variants of multivariate public-key cryptosystems. In 1998 Patarin, Goubin, and Courtois introduced the plus method as a general prescription which aims to enhance the security of any given MPKC, just like the linear PH matrix method with random variables. In this paper we prove the equivalence between the plus method and the primitive linear PH matrix method, which is introduced by our previous work to explain the notion of the PH matrix method in general in an illustrative manner and not for a practical use to enhance the security of any given MPKC. Based on this equivalence, we show that the linear PH matrix method with random variables has the substantial advantage over the plus method with respect to the security enhancement. In the linear PH matrix method with random variables, the three matrices, including the PH matrix, play a central role in the secret-key and public-key. In this paper, we clarify how to generate these matrices and thus present two probabilistic polynomial-time algorithms to generate these matrices. In particular, the second one has a concise form, and is obtained as a byproduct of the proof of the equivalence between the plus method and the primitive linear PH matrix method.

  6. Three estimates of the association between linear growth failure and cognitive ability.

    PubMed

    Cheung, Y B; Lam, K F

    2009-09-01

    To compare three estimators of association between growth stunting as measured by height-for-age Z-score and cognitive ability in children, and to examine the extent to which statistical adjustment for covariates is useful for removing confounding due to socio-economic status. Three estimators, namely random-effects, within- and between-cluster estimators, for panel data were used to estimate the association in a survey of 1105 pairs of siblings who were assessed for anthropometry and cognition. Furthermore, a 'combined' model was formulated to simultaneously provide the within- and between-cluster estimates. Random-effects and between-cluster estimators showed strong association between linear growth and cognitive ability, even after adjustment for a range of socio-economic variables. In contrast, the within-cluster estimator showed a much more modest association: for every increase of one Z-score in linear growth, cognitive ability increased by about 0.08 standard deviation (P < 0.001). The combined model verified that the between-cluster estimate was significantly larger than the within-cluster estimate (P = 0.004). Residual confounding by socio-economic situations may explain a substantial proportion of the observed association between linear growth and cognition in studies that attempt to control the confounding by means of multivariable regression analysis. The within-cluster estimator provides more convincing and modest results about the strength of association.

  7. Estimating organ doses from tube current modulated CT examinations using a generalized linear model.

    PubMed

    Bostani, Maryam; McMillan, Kyle; Lu, Peiyun; Kim, Grace Hyun J; Cody, Dianna; Arbique, Gary; Greenberg, S Bruce; DeMarco, John J; Cagnon, Chris H; McNitt-Gray, Michael F

    2017-04-01

    Currently, available Computed Tomography dose metrics are mostly based on fixed tube current Monte Carlo (MC) simulations and/or physical measurements such as the size specific dose estimate (SSDE). In addition to not being able to account for Tube Current Modulation (TCM), these dose metrics do not represent actual patient dose. The purpose of this study was to generate and evaluate a dose estimation model based on the Generalized Linear Model (GLM), which extends the ability to estimate organ dose from tube current modulated examinations by incorporating regional descriptors of patient size, scanner output, and other scan-specific variables as needed. The collection of a total of 332 patient CT scans at four different institutions was approved by each institution's IRB and used to generate and test organ dose estimation models. The patient population consisted of pediatric and adult patients and included thoracic and abdomen/pelvis scans. The scans were performed on three different CT scanner systems. Manual segmentation of organs, depending on the examined anatomy, was performed on each patient's image series. In addition to the collected images, detailed TCM data were collected for all patients scanned on Siemens CT scanners, while for all GE and Toshiba patients, data representing z-axis-only TCM, extracted from the DICOM header of the images, were used for TCM simulations. A validated MC dosimetry package was used to perform detailed simulation of CT examinations on all 332 patient models to estimate dose to each segmented organ (lungs, breasts, liver, spleen, and kidneys), denoted as reference organ dose values. Approximately 60% of the data were used to train a dose estimation model, while the remaining 40% was used to evaluate performance. Two different methodologies were explored using GLM to generate a dose estimation model: (a) using the conventional exponential relationship between normalized organ dose and size with regional water equivalent diameter
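
    The "conventional exponential relationship" referred to in (a) is commonly written as normalized dose ≈ A·exp(−B·d_w), with d_w a water equivalent diameter. The sketch below fits that form on synthetic data; the coefficients, variable names, and the GLM specification are illustrative assumptions, not the study's model.

    import numpy as np
    import statsmodels.api as sm

    # Synthetic stand-in: water equivalent diameter (cm) and normalized organ dose.
    rng = np.random.default_rng(3)
    d_w = rng.uniform(15, 45, size=200)
    norm_dose = 3.0 * np.exp(-0.04 * d_w) * rng.lognormal(sigma=0.05, size=200)

    # Taking logs turns dose = A * exp(-B * d_w) into a linear model: ln(dose) = ln(A) - B * d_w.
    X = sm.add_constant(d_w)
    ols_fit = sm.OLS(np.log(norm_dose), X).fit()
    ln_A, slope = ols_fit.params
    print("A ~", np.exp(ln_A), "B ~", -slope)

    # A GLM variant (log link) fits the same mean structure without transforming the
    # response; extra scan-specific covariates would be appended as columns of X.
    glm_fit = sm.GLM(norm_dose, X, family=sm.families.Gaussian(link=sm.families.links.Log())).fit()
    print(glm_fit.params)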

  8. Tribal vs. Public Schools: Perceived Discrimination and School Adjustment among Indigenous Children from Early to Mid-Adolescence.

    PubMed

    Crawford, Devan M; Cheadle, Jacob E; Whitbeck, Les B

    2010-04-01

    The purpose of this study is to assess the differential effects of perceived discrimination by type of school on positive school adjustment among Indigenous children during late elementary and early middle school years. The analysis utilizes a sample of 654 Indigenous children from four reservations in the Northern Midwest and four Canadian First Nation reserves. Multiple group linear growth modeling within a structural equation framework is employed to investigate the moderating effects of school type on the relationship between discrimination and positive school adjustment. Results show that students in all school types score relatively high on positive school adjustment at time one (ages 10-12). However, in contrast to students in tribal schools for whom positive school adjustment remains stable, those attending public schools and those moving between school types show a decline in school adjustment over time. Furthermore, the negative effects of discrimination on positive school adjustment are greater for those attending public schools and those moving between schools. Possible reasons for this finding and potential explanations for why tribal schools may provide protection from the negative effects of discrimination are discussed.

  9. On isocentre adjustment and quality control in linear accelerator based radiosurgery with circular collimators and room lasers.

    PubMed

    Treuer, H; Hoevels, M; Luyken, K; Gierich, A; Kocher, M; Müller, R P; Sturm, V

    2000-08-01

    We have developed a densitometric method for measuring the isocentric accuracy and the accuracy of marking the isocentre position for linear accelerator based radiosurgery with circular collimators and room lasers. Isocentric shots are used to determine the accuracy of marking the isocentre position with room lasers and star shots are used to determine the wobble of the gantry and table rotation movement, the effect of gantry sag, the stereotactic collimator alignment, and the minimal distance between gantry and table rotation axes. Since the method is based on densitometric measurements, beam spot stability is implicitly tested. The method developed is also suitable for quality assurance and has proved to be useful in optimizing isocentric accuracy. The method is simple to perform and only requires a film box and film scanner for instrumentation. Thus, the method has the potential to become widely available and may therefore be useful in standardizing the description of linear accelerator based radiosurgical systems.

  10. Contract and ownership type of general practices and patient experience in England: multilevel analysis of a national cross-sectional survey.

    PubMed

    Cowling, Thomas E; Laverty, Anthony A; Harris, Matthew J; Watt, Hilary C; Greaves, Felix; Majeed, Azeem

    2017-11-01

    Objective To examine associations between the contract and ownership type of general practices and patient experience in England. Design Multilevel linear regression analysis of a national cross-sectional patient survey (General Practice Patient Survey). Setting All general practices in England in 2013-2014 ( n = 8017). Participants 903,357 survey respondents aged 18 years or over and registered with a general practice for six months or more (34.3% of 2,631,209 questionnaires sent). Main outcome measures Patient reports of experience across five measures: frequency of consulting a preferred doctor; ability to get a convenient appointment; rating of doctor communication skills; ease of contacting the practice by telephone; and overall experience (measured on four- or five-level interval scales from 0 to 100). Models adjusted for demographic and socioeconomic characteristics of respondents and general practice populations and a random intercept for each general practice. Results Most practices had a centrally negotiated contract with the UK government ('General Medical Services' 54.6%; 4337/7949). Few practices were limited companies with locally negotiated 'Alternative Provider Medical Services' contracts (1.2%; 98/7949); these practices provided worse overall experiences than General Medical Services practices (adjusted mean difference -3.04, 95% CI -4.15 to -1.94). Associations were consistent in direction across outcomes and largest in magnitude for frequency of consulting a preferred doctor (-12.78, 95% CI -15.17 to -10.39). Results were similar for practices owned by large organisations (defined as having ≥20 practices) which were uncommon (2.2%; 176/7949). Conclusions Patients registered to general practices owned by limited companies, including large organisations, reported worse experiences of their care than other patients in 2013-2014.

  11. Amplitudes for multiphoton quantum processes in linear optics

    NASA Astrophysics Data System (ADS)

    Urías, Jesús

    2011-07-01

    The prominent role that linear optical networks have acquired in the engineering of photon states calls for physically intuitive and automatic methods to compute the probability amplitudes for the multiphoton quantum processes occurring in linear optics. A version of Wick's theorem is proved for the expectation value, on any vector state, of products of general linear operators. We use it to extract the combinatorics of any multiphoton quantum process in linear optics. The result is presented as a concise rule for writing down explicit formulae for the probability amplitude of any multiphoton process in linear optics. The rule achieves a considerable simplification and provides intuitive physical insight into quantum multiphoton processes. The methodology is applied to the generation of high-photon-number entangled states by interferometrically mixing coherent light with spontaneously down-converted light.

  12. Parent illness appraisals, parent adjustment, and parent-reported child quality of life in pediatric cancer.

    PubMed

    Mullins, Larry L; Cushing, Christopher C; Suorsa, Kristina I; Tackett, Alayna P; Molzon, Elizabeth S; Mayes, Sunnye; McNall-Knapp, Rene; Mullins, Alexandria J; Gamwell, Kaitlyn L; Chaney, John M

    2016-08-01

    Psychosocial distress is a salient construct experienced by families of children with newly diagnosed cancer, but little is known about parental appraisal of the child's illness and the subsequent impact this may have on child and parent functioning. The goal of the present study was to examine the interrelationships among multiple parent illness appraisals, parent adjustment outcomes, and parent-reported child quality of life in parents of children diagnosed with cancer. Parents completed measures of illness appraisal (illness uncertainty and attitude toward illness), parent adjustment (general distress, posttraumatic stress, parenting stress), and child quality of life (general and cancer-related). Path analysis revealed direct effects for parent illness uncertainty and illness attitudes on all 3 measures of parent adjustment. Illness uncertainty, but not illness attitudes, demonstrated a direct effect on parent-reported child general quality of life; parenting stress had direct effects on general and cancer-related quality of life. Exploratory analyses indicated that parent illness uncertainty and illness attitudes conferred indirect effects on parent-reported general and cancer-related quality of life through parenting stress. Negative parent illness appraisals appear to have adverse impacts on parents' psychosocial functioning and have implications for the well-being of their child with cancer.

  13. Latent log-linear models for handwritten digit classification.

    PubMed

    Deselaers, Thomas; Gass, Tobias; Heigold, Georg; Ney, Hermann

    2012-06-01

    We present latent log-linear models, an extension of log-linear models incorporating latent variables, and we propose two applications thereof: log-linear mixture models and image deformation-aware log-linear models. The resulting models are fully discriminative, can be trained efficiently, and the model complexity can be controlled. Log-linear mixture models offer additional flexibility within the log-linear modeling framework. Unlike previous approaches, the image deformation-aware model directly considers image deformations and allows for a discriminative training of the deformation parameters. Both are trained using alternating optimization. For certain variants, convergence to a stationary point is guaranteed and, in practice, even variants without this guarantee converge and find models that perform well. We tune the methods on the USPS data set and evaluate on the MNIST data set, demonstrating the generalization capabilities of our proposed models. Our models, although using significantly fewer parameters, are able to obtain competitive results with models proposed in the literature.
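
    A basic (non-latent) log-linear classifier of the kind these models extend can be sketched as multinomial logistic regression on pixel features. The toy data and training loop below are illustrative only; the latent-variable and deformation-aware extensions described above are not reproduced.

    import numpy as np

    def softmax(z):
        z = z - z.max(axis=1, keepdims=True)          # subtract the max for numerical stability
        e = np.exp(z)
        return e / e.sum(axis=1, keepdims=True)

    # Toy stand-in for digit data: 100 "images" with 64 pixel features, 10 classes.
    rng = np.random.default_rng(4)
    X = rng.normal(size=(100, 64))
    y = rng.integers(0, 10, size=100)
    Y = np.eye(10)[y]                                 # one-hot labels

    # Log-linear model p(c | x) proportional to exp(w_c . x), trained by gradient ascent
    # on the log-likelihood (plain maximum likelihood, no latent variables).
    W = np.zeros((64, 10))
    for _ in range(200):
        P = softmax(X @ W)
        W += 0.01 * X.T @ (Y - P) / len(X)            # gradient of the mean log-likelihood

    print("training accuracy:", (softmax(X @ W).argmax(axis=1) == y).mean())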

  14. Risk adjustment alternatives in paying for behavioral health care under Medicaid.

    PubMed Central

    Ettner, S L; Frank, R G; McGuire, T G; Hermann, R C

    2001-01-01

    OBJECTIVE: To compare the performance of various risk adjustment models in behavioral health applications such as setting mental health and substance abuse (MH/SA) capitation payments or overall capitation payments for populations including MH/SA users. DATA SOURCES/STUDY DESIGN: The 1991-93 administrative data from the Michigan Medicaid program were used. We compared mean absolute prediction error for several risk adjustment models and simulated the profits and losses that behavioral health care carve outs and integrated health plans would experience under risk adjustment if they enrolled beneficiaries with a history of MH/SA problems. Models included basic demographic adjustment, Adjusted Diagnostic Groups, Hierarchical Condition Categories, and specifications designed for behavioral health. PRINCIPAL FINDINGS: Differences in predictive ability among risk adjustment models were small and generally insignificant. Specifications based on relatively few MH/SA diagnostic categories did as well as or better than models controlling for additional variables such as medical diagnoses at predicting MH/SA expenditures among adults. Simulation analyses revealed that among both adults and minors considerable scope remained for behavioral health care carve outs to make profits or losses after risk adjustment based on differential enrollment of severely ill patients. Similarly, integrated health plans have strong financial incentives to avoid MH/SA users even after adjustment. CONCLUSIONS: Current risk adjustment methodologies do not eliminate the financial incentives for integrated health plans and behavioral health care carve-out plans to avoid high-utilizing patients with psychiatric disorders. PMID:11508640

  15. 7 CFR 1580.101 - General statement.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 7 Agriculture 10 2011-01-01 2011-01-01 false General statement. 1580.101 Section 1580.101 Agriculture Regulations of the Department of Agriculture (Continued) FOREIGN AGRICULTURAL SERVICE, DEPARTMENT OF AGRICULTURE TRADE ADJUSTMENT ASSISTANCE FOR FARMERS § 1580.101 General statement. This part...

  16. Modeling Linguistic Variables With Regression Models: Addressing Non-Gaussian Distributions, Non-independent Observations, and Non-linear Predictors With Random Effects and Generalized Additive Models for Location, Scale, and Shape

    PubMed Central

    Coupé, Christophe

    2018-01-01

    As statistical approaches are getting increasingly used in linguistics, attention must be paid to the choice of methods and algorithms used. This is especially true since they require assumptions to be satisfied to provide valid results, and because scientific articles still often fall short of reporting whether such assumptions are met. Progress is being, however, made in various directions, one of them being the introduction of techniques able to model data that cannot be properly analyzed with simpler linear regression models. We report recent advances in statistical modeling in linguistics. We first describe linear mixed-effects regression models (LMM), which address grouping of observations, and generalized linear mixed-effects models (GLMM), which offer a family of distributions for the dependent variable. Generalized additive models (GAM) are then introduced, which allow modeling non-linear parametric or non-parametric relationships between the dependent variable and the predictors. We then highlight the possibilities offered by generalized additive models for location, scale, and shape (GAMLSS). We explain how they make it possible to go beyond common distributions, such as Gaussian or Poisson, and offer the appropriate inferential framework to account for ‘difficult’ variables such as count data with strong overdispersion. We also demonstrate how they offer interesting perspectives on data when not only the mean of the dependent variable is modeled, but also its variance, skewness, and kurtosis. As an illustration, the case of phonemic inventory size is analyzed throughout the article. For over 1,500 languages, we consider as predictors the number of speakers, the distance from Africa, an estimation of the intensity of language contact, and linguistic relationships. We discuss the use of random effects to account for genealogical relationships, the choice of appropriate distributions to model count data, and non-linear relationships. Relying on GAMLSS

  17. Modeling Linguistic Variables With Regression Models: Addressing Non-Gaussian Distributions, Non-independent Observations, and Non-linear Predictors With Random Effects and Generalized Additive Models for Location, Scale, and Shape.

    PubMed

    Coupé, Christophe

    2018-01-01

    As statistical approaches are getting increasingly used in linguistics, attention must be paid to the choice of methods and algorithms used. This is especially true since they require assumptions to be satisfied to provide valid results, and because scientific articles still often fall short of reporting whether such assumptions are met. Progress is being, however, made in various directions, one of them being the introduction of techniques able to model data that cannot be properly analyzed with simpler linear regression models. We report recent advances in statistical modeling in linguistics. We first describe linear mixed-effects regression models (LMM), which address grouping of observations, and generalized linear mixed-effects models (GLMM), which offer a family of distributions for the dependent variable. Generalized additive models (GAM) are then introduced, which allow modeling non-linear parametric or non-parametric relationships between the dependent variable and the predictors. We then highlight the possibilities offered by generalized additive models for location, scale, and shape (GAMLSS). We explain how they make it possible to go beyond common distributions, such as Gaussian or Poisson, and offer the appropriate inferential framework to account for 'difficult' variables such as count data with strong overdispersion. We also demonstrate how they offer interesting perspectives on data when not only the mean of the dependent variable is modeled, but also its variance, skewness, and kurtosis. As an illustration, the case of phonemic inventory size is analyzed throughout the article. For over 1,500 languages, we consider as predictors the number of speakers, the distance from Africa, an estimation of the intensity of language contact, and linguistic relationships. We discuss the use of random effects to account for genealogical relationships, the choice of appropriate distributions to model count data, and non-linear relationships. Relying on GAMLSS, we
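
    A much-simplified sketch of one ingredient discussed above is modeling an overdispersed count outcome (such as phonemic inventory size) with a negative binomial rather than a Poisson GLM. The data and the single predictor below are synthetic, and the full GAMLSS machinery (smooth terms, random effects, modeling of scale and shape) is not shown.

    import numpy as np
    import statsmodels.api as sm

    # Synthetic stand-in: log number of speakers as a predictor of an overdispersed count.
    rng = np.random.default_rng(5)
    log_speakers = rng.normal(loc=8, scale=2, size=500)
    mu = np.exp(2.5 + 0.08 * log_speakers)
    counts = rng.negative_binomial(n=5, p=5 / (5 + mu))   # overdispersed counts with mean mu

    X = sm.add_constant(log_speakers)
    poisson_fit = sm.GLM(counts, X, family=sm.families.Poisson()).fit()
    negbin_fit = sm.GLM(counts, X, family=sm.families.NegativeBinomial()).fit()

    # The negative binomial family absorbs the extra-Poisson variance; comparing
    # information criteria indicates which distributional assumption is more adequate.
    print("Poisson AIC:", poisson_fit.aic, " NegBin AIC:", negbin_fit.aic)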

  18. Stratification for the propensity score compared with linear regression techniques to assess the effect of treatment or exposure.

    PubMed

    Senn, Stephen; Graf, Erika; Caputo, Angelika

    2007-12-30

    Stratifying and matching by the propensity score are increasingly popular approaches to deal with confounding in medical studies investigating effects of a treatment or exposure. A more traditional alternative technique is the direct adjustment for confounding in regression models. This paper discusses fundamental differences between the two approaches, with a focus on linear regression and propensity score stratification, and identifies points to be considered for an adequate comparison. The treatment estimators are examined for unbiasedness and efficiency. This is illustrated in an application to real data and supplemented by an investigation on properties of the estimators for a range of underlying linear models. We demonstrate that in specific circumstances the propensity score estimator is identical to the effect estimated from a full linear model, even if it is built on coarser covariate strata than the linear model. As a consequence, the coarsening property of the propensity score (adjustment for a one-dimensional confounder instead of a high-dimensional covariate) may be viewed as a way to implement a pre-specified, richly parametrized linear model. We conclude that the propensity score estimator inherits the potential for overfitting and that care should be taken to restrict covariates to those relevant for outcome. Copyright (c) 2007 John Wiley & Sons, Ltd.
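
    A hedged sketch contrasting the two approaches on synthetic data (not the paper's data): direct covariate adjustment in a linear model versus stratification on an estimated propensity score.

    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(6)
    n = 5000
    x = rng.normal(size=n)                               # single confounder
    p_treat = 1 / (1 + np.exp(-0.8 * x))                 # treatment assignment depends on x
    t = rng.binomial(1, p_treat)
    y = 2.0 * t + 1.5 * x + rng.normal(size=n)           # true treatment effect = 2.0

    # (1) Direct covariate adjustment in a linear regression model.
    ols = sm.OLS(y, sm.add_constant(np.column_stack([t, x]))).fit()
    print("regression-adjusted effect:", ols.params[1])

    # (2) Propensity score from a logistic regression, stratified into quintiles; the
    #     overall effect averages the within-stratum treated-vs-control differences.
    ps = sm.Logit(t, sm.add_constant(x)).fit(disp=0).predict(sm.add_constant(x))
    strata = np.digitize(ps, np.quantile(ps, [0.2, 0.4, 0.6, 0.8]))
    effects = [y[(strata == s) & (t == 1)].mean() - y[(strata == s) & (t == 0)].mean()
               for s in range(5)]
    print("propensity-stratified effect:", np.mean(effects))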

  19. Interpersonal ambivalence, perceived relationship adjustment, and conjugal loss.

    PubMed

    Bonanno, G A; Notarius, C I; Gunzerath, L; Keltner, D; Horowitz, M J

    1998-12-01

    Ambivalence is widely assumed to prolong grief. To examine this hypothesis, the authors developed a measure of ambivalence based on an algorithmic combination of separate positive and negative evaluations of one's spouse. Preliminary construct validity was evidenced in relation to emotional difficulties and to facial expressions of emotion. Bereaved participants, relative to a nonbereaved comparison sample, recollected their relationships as better adjusted but were more ambivalent. Ambivalence about spouses was generally associated with increased distress and poorer perceived health but did not predict long-term grief outcome once initial outcome was controlled. In contrast, initial grief and distress predicted increased ambivalence and decreased Dyadic Adjustment Scale scores at 14 months postloss, regardless of initial scores on these measures. Limitations and implications of the findings are discussed.

  20. On the solution of the generalized wave and generalized sine-Gordon equations

    NASA Technical Reports Server (NTRS)

    Ablowitz, M. J.; Beals, R.; Tenenblat, K.

    1986-01-01

    The generalized wave equation and generalized sine-Gordon equations are known to be natural multidimensional differential geometric generalizations of the classical two-dimensional versions. In this paper, a system of linear differential equations is associated with these equations, and it is shown how the direct and inverse problems can be solved for appropriately decaying data on suitable lines. An initial-boundary value problem is solved for these equations.

  1. Systems of Inhomogeneous Linear Equations

    NASA Astrophysics Data System (ADS)

    Scherer, Philipp O. J.

    Many problems in physics and especially computational physics involve systems of linear equations which arise e.g. from linearization of a general nonlinear problem or from discretization of differential equations. If the dimension of the system is not too large standard methods like Gaussian elimination or QR decomposition are sufficient. Systems with a tridiagonal matrix are important for cubic spline interpolation and numerical second derivatives. They can be solved very efficiently with a specialized Gaussian elimination method. Practical applications often involve very large dimensions and require iterative methods. Convergence of Jacobi and Gauss-Seidel methods is slow and can be improved by relaxation or over-relaxation. An alternative for large systems is the method of conjugate gradients.
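
    For the tridiagonal case mentioned above, the specialized Gaussian elimination (the Thomas algorithm) takes only a few lines; a minimal sketch follows, assuming a diagonally dominant system so that pivoting is unnecessary. The array names are illustrative.

```python
import numpy as np

def thomas_solve(lower, diag, upper, rhs):
    """Solve a tridiagonal system by specialized Gaussian elimination.

    lower[i] multiplies x[i-1] in row i (lower[0] is unused),
    diag[i]  multiplies x[i],
    upper[i] multiplies x[i+1] in row i (upper[-1] is unused).
    Assumes the matrix is diagonally dominant, so no pivoting is required.
    """
    n = len(diag)
    b = np.asarray(diag, dtype=float).copy()
    c = np.asarray(upper, dtype=float).copy()
    d = np.asarray(rhs, dtype=float).copy()

    # Forward sweep: eliminate the sub-diagonal.
    for i in range(1, n):
        m = lower[i] / b[i - 1]
        b[i] -= m * c[i - 1]
        d[i] -= m * d[i - 1]

    # Back substitution.
    x = np.empty(n)
    x[-1] = d[-1] / b[-1]
    for i in range(n - 2, -1, -1):
        x[i] = (d[i] - c[i] * x[i + 1]) / b[i]
    return x

# Example: the 1-D second-difference matrix used for cubic splines and numerical second derivatives.
n = 5
lower = np.full(n, -1.0); diag = np.full(n, 2.0); upper = np.full(n, -1.0)
print(thomas_solve(lower, diag, upper, np.ones(n)))
```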

  2. Uranium Associations with Kidney Outcomes Vary by Urine Concentration Adjustment Method

    PubMed Central

    Shelley, Rebecca; Kim, Nam-Soo; Parsons, Patrick J.; Lee, Byung-Kook; Agnew, Jacqueline; Jaar, Bernard G.; Steuerwald, Amy J.; Matanoski, Genevieve; Fadrowski, Jeffrey; Schwartz, Brian S.; Todd, Andrew C.; Simon, David; Weaver, Virginia M.

    2017-01-01

    Uranium is a ubiquitous metal that is nephrotoxic at high doses. Few epidemiologic studies have examined the kidney filtration impact of chronic environmental exposure. In 684 lead workers environmentally exposed to uranium, multiple linear regression was used to examine associations of uranium measured in a four-hour urine collection with measured creatinine clearance, serum creatinine- and cystatin-C-based estimated glomerular filtration rates, and N-acetyl-β-D-glucosaminidase (NAG). Three methods were utilized, in separate models, to adjust uranium levels for urine concentration - μg uranium/g creatinine; μg uranium/L and urine creatinine as separate covariates; and μg uranium/4 hr. Median urine uranium levels were 0.07 μg/g creatinine and 0.02 μg/4 hr and were highly correlated (rs =0.95). After adjustment, higher ln-urine uranium was associated with lower measured creatinine clearance and higher NAG in models that used urine creatinine to adjust for urine concentration but not in models that used total uranium excreted (μg/4 hr). These results suggest that, in some instances, associations between urine toxicants and kidney outcomes may be statistical, due to the use of urine creatinine in both exposure and outcome metrics, rather than nephrotoxic. These findings support consideration of non-creatinine-based methods of adjustment for urine concentration in nephrotoxicant research. PMID:23591699
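
    The three urine-concentration adjustments described above correspond to three separate regression specifications; a hedged sketch with invented variable names and simulated data follows.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical analysis dataset; column names and values are illustrative only.
rng = np.random.default_rng(0)
n = 200
df = pd.DataFrame({
    "ccl": rng.normal(100, 15, n),            # measured creatinine clearance
    "u_per_g": rng.lognormal(-2.5, 0.8, n),   # ug uranium / g creatinine
    "u_per_l": rng.lognormal(-3.0, 0.8, n),   # ug uranium / L urine
    "ucr": rng.normal(1.0, 0.3, n),           # urine creatinine (g/L)
    "u_4hr": rng.lognormal(-3.5, 0.8, n),     # ug uranium / 4 hr collection
    "age": rng.normal(45, 10, n),
})

# Three ways of adjusting the exposure for urine concentration, in separate models:
m1 = smf.ols("ccl ~ np.log(u_per_g) + age", data=df).fit()        # creatinine-standardized exposure
m2 = smf.ols("ccl ~ np.log(u_per_l) + ucr + age", data=df).fit()  # urine creatinine as a separate covariate
m3 = smf.ols("ccl ~ np.log(u_4hr) + age", data=df).fit()          # total excreted, no creatinine
for m in (m1, m2, m3):
    print(m.params)
```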

  3. Longitudinal associations between sibling relationship quality, parental differential treatment, and children's adjustment.

    PubMed

    Richmond, Melissa K; Stocker, Clare M; Rienks, Shauna L

    2005-12-01

    This study examined associations between changes in sibling relationships and changes in parental differential treatment and corresponding changes in children's adjustment. One hundred thirty-three families were assessed at 3 time points. Parents rated children's externalizing problems, and children reported on sibling relationship quality, parental differential treatment, and depressive symptoms. On average, older siblings were 10, 12, and 16 years old, and younger siblings were 8, 10, and 14 years old at Waves 1, 2, and 3, respectively. Results from hierarchical linear modeling indicated that as sibling relationships improved over time, children's depressive symptoms decreased over time. In addition, as children were less favored over their siblings over time, children's externalizing problems increased over time. Findings highlight the developmental interplay between the sibling context and children's adjustment. Copyright 2006 APA, all rights reserved.

  4. Magnetically adjustable intraocular lens.

    PubMed

    Matthews, Michael Wayne; Eggleston, Harry Conrad; Pekarek, Steven D; Hilmas, Greg Eugene

    2003-11-01

    To provide a noninvasive, magnetic adjustment mechanism to the repeatedly and reversibly adjustable, variable-focus intraocular lens (IOL). University of Missouri-Rolla, Rolla, and Eggleston Adjustable Lens, St. Louis, Missouri, USA. Mechanically adjustable IOLs have been fabricated and tested. Samarium and cobalt rare-earth magnets have been incorporated into the poly(methyl methacrylate) (PMMA) optic of these adjustable lenses. The stability of samarium and cobalt in the PMMA matrix was examined with leaching studies. Operational force testing of the magnetic optics with emphasis on the rotational forces of adjustment was done. Prototype optics incorporating rare-earth magnetic inserts were consistently produced. After 32 days in solution, samarium and cobalt concentrations reached a maximum of 5 ppm. Operational force measurements indicate that successful adjustments of this lens can be made using external magnetic fields, with rotational torques in excess of 0.6 ounce-inch produced. Actual lenses were remotely adjusted using magnetic fields. The magnetically adjustable version of this IOL is a viable and promising means of handling the common issues of postoperative refractive errors without the requirement of additional surgery. The repeatedly adjustable mechanism of this lens also holds promise for the developing eyes of pediatric patients and the changing needs of all patients.

  5. Linear Mixed Models: Gum and Beyond

    NASA Astrophysics Data System (ADS)

    Arendacká, Barbora; Täubner, Angelika; Eichstädt, Sascha; Bruns, Thomas; Elster, Clemens

    2014-04-01

    In Annex H.5, the Guide to the Evaluation of Uncertainty in Measurement (GUM) [1] recognizes the necessity to analyze certain types of experiments by applying random effects ANOVA models. These belong to the more general family of linear mixed models that we focus on in the current paper. Extending the short introduction provided by the GUM, our aim is to show that the more general, linear mixed models cover a wider range of situations occurring in practice and can be beneficial when employed in data analysis of long-term repeated experiments. Namely, we point out their potential as an aid in establishing an uncertainty budget and as means for gaining more insight into the measurement process. We also comment on computational issues and to make the explanations less abstract, we illustrate all the concepts with the help of a measurement campaign conducted in order to challenge the uncertainty budget in calibration of accelerometers.
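
    A minimal sketch of the kind of random-effects model discussed above (repeated readings grouped by measurement day), fitted as a linear mixed model; the data are simulated, and the estimated variance components are what would feed an uncertainty budget.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical calibration campaign: several readings per day over many days.
rng = np.random.default_rng(1)
days = np.repeat(np.arange(20), 5)
day_effect = rng.normal(0.0, 0.05, 20)[days]               # between-day variability
y = 9.81 + day_effect + rng.normal(0.0, 0.02, days.size)   # within-day noise
df = pd.DataFrame({"y": y, "day": days})

# One-way random-effects model: y_ij = mu + b_i + e_ij, with b_i ~ N(0, s_between^2).
m = smf.mixedlm("y ~ 1", data=df, groups=df["day"]).fit()
print(m.summary())
# The between-day (random intercept) variance and the residual variance together
# determine the uncertainty attached to the reported mean value.
```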

  6. The Effect of Common Rearing on Adolescent Adjustment: Evidence from a U.S. Adoption Cohort.

    ERIC Educational Resources Information Center

    McGue, Matt; And Others

    1996-01-01

    Examined the influence of environmental factors on adolescent adjustment in a sample of 667 adoptive families. Found that correlations between parental ratings of family functioning and offspring ratings of psychological adjustment were generally higher for the birth than the adoptive offspring sample, and that the correlation in the adjustment…

  7. Parenting and adolescents' psychological adjustment: Longitudinal moderation by adolescents' genetic sensitivity.

    PubMed

    Stocker, Clare M; Masarik, April S; Widaman, Keith F; Reeb, Ben T; Boardman, Jason D; Smolen, Andrew; Neppl, Tricia K; Conger, Katherine J

    2017-10-01

    We examined whether adolescents' genetic sensitivity, measured by a polygenic index score, moderated the longitudinal associations between parenting and adolescents' psychological adjustment. The sample included 323 mothers, fathers, and adolescents (177 female, 146 male; Time 1 [T1] average age = 12.61 years, SD = 0.54 years; Time 2 [T2] average age = 13.59 years, SD = 0.59 years). Parents' warmth and hostility were rated by trained, independent observers using videotapes of family discussions. Adolescents reported their symptoms of anxiety, depressed mood, and hostility at T1 and T2. The results from autoregressive linear regression models showed that adolescents' genetic sensitivity moderated associations between observations of both mothers' and fathers' T1 parenting and adolescents' T2 composite maladjustment, depression, anxiety, and hostility. Compared to adolescents with low genetic sensitivity, adolescents with high genetic sensitivity had worse adjustment outcomes when parenting was low on warmth and high on hostility. When parenting was characterized by high warmth and low hostility, adolescents with high genetic sensitivity had better adjustment outcomes than their counterparts with low genetic sensitivity. The results support the differential susceptibility model and highlight the complex ways that genes and environment interact to influence development.

  8. The coefficient of determination R2 and intra-class correlation coefficient from generalized linear mixed-effects models revisited and expanded.

    PubMed

    Nakagawa, Shinichi; Johnson, Paul C D; Schielzeth, Holger

    2017-09-01

    The coefficient of determination R2 quantifies the proportion of variance explained by a statistical model and is an important summary statistic of biological interest. However, estimating R2 for generalized linear mixed models (GLMMs) remains challenging. We have previously introduced a version of R2 that we called [Formula: see text] for Poisson and binomial GLMMs, but not for other distributional families. Similarly, we earlier discussed how to estimate intra-class correlation coefficients (ICCs) using Poisson and binomial GLMMs. In this paper, we generalize our methods to all other non-Gaussian distributions, in particular to negative binomial and gamma distributions that are commonly used for modelling biological data. While expanding our approach, we highlight two useful concepts for biologists, Jensen's inequality and the delta method, both of which help us in understanding the properties of GLMMs. Jensen's inequality has important implications for biologically meaningful interpretation of GLMMs, whereas the delta method allows a general derivation of variance associated with non-Gaussian distributions. We also discuss some special considerations for binomial GLMMs with binary or proportion data. We illustrate the implementation of our extension by worked examples from the field of ecology and evolution in the R environment. However, our method can be used across disciplines and regardless of statistical environments. © 2017 The Author(s).
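
    Given variance components on the link (latent) scale, marginal and conditional R2 follow from a simple variance partition; the sketch below also shows the delta-method and lognormal approximations of the observation-level variance for a log-link Poisson GLMM. The numbers are illustrative and not taken from the paper.

```python
import math

def r2_glmm(var_fixed, var_random, var_dist):
    """Marginal and conditional R^2 from latent-scale variance components.

    var_fixed  : variance of the fixed-effect predictions (link scale)
    var_random : sum of the random-effect variances (link scale)
    var_dist   : distribution-specific (observation-level) variance
    """
    total = var_fixed + var_random + var_dist
    return var_fixed / total, (var_fixed + var_random) / total

def poisson_obs_variance(mean_count, method="lognormal"):
    """Observation-level variance for a log-link Poisson GLMM.

    'delta'     -> 1 / lambda_bar          (delta method)
    'lognormal' -> ln(1 + 1 / lambda_bar)  (lognormal approximation)
    """
    if method == "delta":
        return 1.0 / mean_count
    return math.log(1.0 + 1.0 / mean_count)

# Illustrative variance components and mean count:
var_f, var_r, lam = 0.30, 0.20, 4.0
print(r2_glmm(var_f, var_r, poisson_obs_variance(lam)))
```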

  9. Humidity and Gravimetric Equivalency Adjustments for Nephelometer-Based Particulate Matter Measurements of Emissions from Solid Biomass Fuel Use in Cookstoves

    PubMed Central

    Soneja, Sutyajeet; Chen, Chen; Tielsch, James M.; Katz, Joanne; Zeger, Scott L.; Checkley, William; Curriero, Frank C.; Breysse, Patrick N.

    2014-01-01

    Great uncertainty exists around indoor biomass burning exposure-disease relationships due to lack of detailed exposure data in large health outcome studies. Passive nephelometers can be used to estimate high particulate matter (PM) concentrations during cooking in low resource environments. Since passive nephelometers do not have a collection filter, they are not subject to sampler overload. Nephelometric concentration readings can be biased due to particle growth in high-humidity environments and differences in compositional and size dependent aerosol characteristics. This paper explores relative humidity (RH) and gravimetric equivalency adjustment approaches for the pDR-1000 nephelometer used to assess indoor PM concentrations for a cookstove intervention trial in Nepal. Three approaches to humidity adjustment performed equivalently (similar root mean squared error). For gravimetric conversion, the new linear regression equation with log-transformed variables performed better than the traditional linear equation. In addition, gravimetric conversion equations utilizing a spline or quadratic term were examined. We propose a humidity adjustment equation encompassing the entire RH range instead of adjusting for RH above an arbitrary 60% threshold. Furthermore, we propose new integrated RH and gravimetric conversion methods because they have one response variable (gravimetric PM2.5 concentration), do not contain an RH threshold, and are straightforward. PMID:24950062

  10. Humidity and gravimetric equivalency adjustments for nephelometer-based particulate matter measurements of emissions from solid biomass fuel use in cookstoves.

    PubMed

    Soneja, Sutyajeet; Chen, Chen; Tielsch, James M; Katz, Joanne; Zeger, Scott L; Checkley, William; Curriero, Frank C; Breysse, Patrick N

    2014-06-19

    Great uncertainty exists around indoor biomass burning exposure-disease relationships due to lack of detailed exposure data in large health outcome studies. Passive nephelometers can be used to estimate high particulate matter (PM) concentrations during cooking in low resource environments. Since passive nephelometers do not have a collection filter, they are not subject to sampler overload. Nephelometric concentration readings can be biased due to particle growth in high-humidity environments and differences in compositional and size dependent aerosol characteristics. This paper explores relative humidity (RH) and gravimetric equivalency adjustment approaches for the pDR-1000 nephelometer used to assess indoor PM concentrations for a cookstove intervention trial in Nepal. Three approaches to humidity adjustment performed equivalently (similar root mean squared error). For gravimetric conversion, the new linear regression equation with log-transformed variables performed better than the traditional linear equation. In addition, gravimetric conversion equations utilizing a spline or quadratic term were examined. We propose a humidity adjustment equation encompassing the entire RH range instead of adjusting for RH above an arbitrary 60% threshold. Furthermore, we propose new integrated RH and gravimetric conversion methods because they have one response variable (gravimetric PM2.5 concentration), do not contain an RH threshold, and are straightforward.
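
    A hedged sketch of the integrated conversion idea from the two records above: a single log-transformed regression with the gravimetric concentration as the only response and an RH term covering the whole range. The co-location data and the RH functional form are invented for illustration.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical co-location data: nephelometer reading, RH, and gravimetric PM2.5.
rng = np.random.default_rng(2)
n = 150
rh = rng.uniform(20, 95, n)                       # relative humidity (%)
neph = rng.lognormal(5.0, 0.6, n)                 # nephelometer PM (ug/m3)
grav = neph / (1 + 0.25 * (rh / 100) ** 2) * rng.lognormal(0, 0.1, n)
df = pd.DataFrame({"grav": grav, "neph": neph, "rh": rh})

# Integrated conversion: one response (gravimetric PM2.5), log-transformed
# variables, and an RH term covering the whole range (no 60% threshold).
m = smf.ols("np.log(grav) ~ np.log(neph) + I((rh/100)**2)", data=df).fit()
print(m.params)

# Predicted gravimetric-equivalent concentration for a new reading:
pred = np.exp(m.predict(pd.DataFrame({"neph": [300.0], "rh": [80.0]})))
print(pred)
```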

  11. Role of Osmotic Adjustment in Plant Productivity

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gebre, G.M.

    2001-01-11

    poplar clones (P. trichocarpa Torr. & Gray x P. deltoides Bartr., TD, and P. deltoides x P. nigra L., DN), we determined that the TD clone, which was more productive during the first three years, had a slightly lower osmotic potential than the DN clone and showed a small osmotic adjustment compared with the DN hybrid. However, the productivity differences were negligible by the fifth growing season. In a separate study with several P. deltoides clones, we did not observe a consistent relationship between growth and osmotic adjustment. Some clones that had low osmotic potential and osmotic adjustment were as productive as another clone that had high osmotic potential. The least productive clone also had low osmotic potential and osmotic adjustment. The absence of a correlation may have been partly due to the fact that all clones were capable of osmotic adjustment and had low osmotic potential. In a study involving an inbred three-generation TD F2 pedigree (family 331), we did not observe a correlation between relative growth rate and osmotic potential or osmotic adjustment. However, when clones that exhibited osmotic adjustment were analyzed, there was a negative correlation between growth and osmotic potential, indicating that clones with lower osmotic potential were more productive. This was observed only in clones that were exposed to drought. Although the absolute osmotic potential varied by growing environment, the relative ranking among progenies remained generally the same, suggesting that osmotic potential is genetically controlled. We have identified a quantitative trait locus for osmotic potential in another three-generation TD F2 pedigree (family 822). Unlike the many studies in agricultural crops, most of the forest tree studies were not based on plants exposed to severe stress to determine the role of osmotic adjustment. Future studies should consider using clones that are known to be productive but have contrasting osmotic adjustment capability as

  12. New Design for an Adjustable Cise Space Maintainer

    PubMed Central

    2018-01-01

    Objective The aim of this study is to present a new adjustable Cise space maintainer for preventive orthodontic applications. Methods The new stainless-steel-based design consists of six main components. To characterize the major displacement and stress fields, a structural analysis of the design was carried out using the finite element method. Results Like the major displacement along the y-axis, the critical stresses σx and τxy increase linearly. Additionally, strain energy density (SED) plays an important role in determining the critical biting load capacity. Conclusion The structural analysis shows that the space maintainer is stable and can be used for maintaining and/or regaining the space lost through early loss of a molar tooth. PMID:29854764

  13. Outer Solutions for General Linear Turning Point Problems.

    DTIC Science & Technology

    1977-02-01

    differential equations near a pole with respect to a parameter. For general investigations such differential equat... analytic functions Ar(x) are allowed to have poles at x = 0. This theory can easily be extended to slightly more involved types of singularit... (4) means that the order of the poles of A(x) can grow, at worst, linearly with r. This restraining inequality is the stronger the larger K is

  14. Population decoding of motor cortical activity using a generalized linear model with hidden states.

    PubMed

    Lawhern, Vernon; Wu, Wei; Hatsopoulos, Nicholas; Paninski, Liam

    2010-06-15

    Generalized linear models (GLMs) have been developed for modeling and decoding population neuronal spiking activity in the motor cortex. These models provide reasonable characterizations between neural activity and motor behavior. However, they lack a description of movement-related terms which are not observed directly in these experiments, such as muscular activation, the subject's level of attention, and other internal or external states. Here we propose to include a multi-dimensional hidden state to address these states in a GLM framework where the spike count at each time is described as a function of the hand state (position, velocity, and acceleration), truncated spike history, and the hidden state. The model can be identified by an Expectation-Maximization algorithm. We tested this new method in two datasets where spikes were simultaneously recorded using a multi-electrode array in the primary motor cortex of two monkeys. It was found that this method significantly improves the model-fitting over the classical GLM, for hidden dimensions varying from 1 to 4. This method also provides more accurate decoding of hand state (reducing the mean square error by up to 29% in some cases), while retaining real-time computational efficiency. These improvements on representation and decoding over the classical GLM model suggest that this new approach could contribute as a useful tool to motor cortical decoding and prosthetic applications. Copyright (c) 2010 Elsevier B.V. All rights reserved.
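
    As a point of reference for the record above, a classical Poisson GLM on simulated spike counts with kinematic and spike-history covariates can be fitted as below; the hidden-state extension fitted by Expectation-Maximization in the paper is not reproduced here, and all data are synthetic.

```python
import numpy as np
import statsmodels.api as sm

# Hypothetical spike-count data for one neuron: counts per time bin driven by
# hand kinematics; a one-lag spike-history regressor is added to the design.
rng = np.random.default_rng(3)
T = 2000
hand = rng.normal(size=(T, 6))                    # x/y position, velocity, acceleration (standardized)
beta = np.array([0.3, -0.2, 0.4, 0.1, -0.1, 0.2])
rate = np.exp(-1.0 + hand @ beta)
counts = rng.poisson(rate)

# Design matrix: kinematics plus one lag of truncated spike history.
hist = np.concatenate([[0], counts[:-1]])
X = sm.add_constant(np.column_stack([hand, hist]))

# Classical Poisson GLM; the paper augments this with a multi-dimensional hidden state.
glm = sm.GLM(counts, X, family=sm.families.Poisson()).fit()
print(glm.params)
```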

  15. Engineering multiphoton states for linear optics computation

    NASA Astrophysics Data System (ADS)

    Aniello, P.; Lupo, C.; Napolitano, M.; Paris, M. G. A.

    2007-03-01

    Transformations achievable by linear optical components allow one to generate the whole unitary group only when restricted to the one-photon subspace of a multimode Fock space. In this paper, we address the more general problem of encoding quantum information in multiphoton states and processing it via ancillary extensions, linear optical passive devices and photodetection. Our scheme arises naturally from the mathematical structures underlying the physics of linear optical passive devices. In particular, we analyze an economical procedure for mapping a fiducial 2-photon 2-mode state into an arbitrary 2-photon 2-mode state using ancillary resources and linear optical passive N-ports assisted by post-selection. We find that adding a single ancilla mode is enough to generate any desired target state. The effect of imperfect photodetection in post-selection is considered and a simple trade-off between success probability and fidelity is derived.

  16. An improved artifact removal in exposure fusion with local linear constraints

    NASA Astrophysics Data System (ADS)

    Zhang, Hai; Yu, Mali

    2018-04-01

    In exposure fusion, removing artifacts caused by camera motion and moving objects in the scene is challenging. This paper proposes an improved artifact removal method that performs local linear adjustment during artifact removal. After determining a reference image, we first perform high-dynamic-range (HDR) deghosting to generate an intermediate image stack from the input image stack. Then, a linear Intensity Mapping Function (IMF) is extracted in each window based on the intensities of the intermediate and reference images and on the intensity mean and variance of the reference image. Finally, with the extracted local linear constraints, we reconstruct a target image stack that can be used directly to fuse a single HDR-like image. Experiments demonstrate that the proposed method is robust and effective in removing artifacts, especially in the saturated regions of the reference image.
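
    A rough sketch of a per-window linear intensity mapping in the spirit of the method described above, written as a guided-filter-style estimator; the paper's exact IMF extraction may differ, and the images here are synthetic.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def local_linear_map(I, R, win=15, eps=1e-4):
    """Map intermediate image I toward reference image R via local R ~ a*I + b."""
    mean_I = uniform_filter(I, win)
    mean_R = uniform_filter(R, win)
    corr_IR = uniform_filter(I * R, win)
    corr_II = uniform_filter(I * I, win)
    var_I = corr_II - mean_I ** 2
    cov_IR = corr_IR - mean_I * mean_R
    a = cov_IR / (var_I + eps)          # local slope from window statistics
    b = mean_R - a * mean_I             # local intercept
    # Smooth the coefficients before applying them, as in guided filtering.
    return uniform_filter(a, win) * I + uniform_filter(b, win)

I = np.random.default_rng(10).random((64, 64))
R = 0.8 * I + 0.1
print(np.abs(local_linear_map(I, R) - R).max())
```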

  17. Feeling Good, Feeling Bad: Influences of Maternal Perceptions of the Child and Marital Adjustment on Well-Being in Mothers of Children with an Autism Spectrum Disorder

    ERIC Educational Resources Information Center

    Lickenbrock, Diane M.; Ekas, Naomi V.; Whitman, Thomas L.

    2011-01-01

    Mothers of children with an autism spectrum disorder (n = 49) participated in a 30-day diary study which examined associations between mothers' positive and negative perceptions of their children, marital adjustment, and maternal well-being. Hierarchical linear modeling results revealed that marital adjustment mediated associations between…

  18. A prospective study of adjustment to hemodialysis.

    PubMed

    Lev, E L; Owen, S V

    1998-10-01

    To examine (a) changes in subjects' self-care self-efficacy over time and (b) the relationship of subjects' self-care self-efficacy with adjustment to hemodialysis. A longitudinal design was used to study changes in self-care self-efficacy and associations between self-care self-efficacy and measures of adjustment: health status, mood distress, symptom distress, dialysis stress, and perceived adherence to fluid restriction. Subjects were recruited from 8 settings in the Northeast where outpatient hemodialysis treatment was administered. Sixty-four subjects were recruited to the study. Twenty-eight subjects completed 3 occasions of data collection. Data were collected on three occasions: (a) baseline, within 100 days of beginning treatment; (b) 4 months after beginning treatment; and (c) 8 months after beginning treatment. Eta-squared, a measure of practical significance, is reported for four factors of the self-care self-efficacy measure on each of the three occasions. Associations between self-care self-efficacy and measures of adjustment were examined by means of Pearson correlations. Eta-squared estimates showed generally positive changes occurring over time in subjects' self-care self-efficacy, health status, mood distress, symptom distress, dialysis stress, and perceived adherence to fluid restriction. Changes were more positive at 4-months than at 8-months after enrollment. Significant correlations (p < .05) occurred between self-care self-efficacy and mood states, health status, symptom distress, and perceived adherence to fluid restrictions. Correlations occurred more frequently between self-care self-efficacy and mood states than between self-care self-efficacy and other measures of adjustment. The study provided pilot data suggesting that hemodialysis patients' self-care self-efficacy and measures of adjustment change over time. Increased confidence in self-care strategies (self-efficacy) was associated with more positive mood

  19. A Longitudinal Study of Perceived Family Adjustment and Emotional Adjustment in Early Adolescence.

    ERIC Educational Resources Information Center

    Ohannessian, Christine McCauley; And Others

    1994-01-01

    Examined the predictive relationship between family adjustment and emotional adjustment during early adolescence and the influence of adolescents' levels of self-worth, peer support, and coping abilities. Found that family adjustment and emotional adjustment are reciprocally related and that high levels of self-worth, peer support, and coping…

  20. Adolescent RSA Responses during an Anger Discussion Task: Relations to Emotion Regulation and Adjustment

    PubMed Central

    Cui, Lixian; Morris, Amanda Sheffield; Harrist, Amanda W.; Larzelere, Robert E.; Criss, Michael M.; Houltberg, Benjamin J.

    2015-01-01

    The current study examined associations between adolescent respiratory sinus arrhythmia (RSA) during an angry event discussion task and adolescents’ emotion regulation and adjustment. Data were collected from 206 adolescents (10–18 years old, M age = 13.37). Electrocardiogram (ECG) and respiration data were collected from adolescents, and RSA values and respiration rates were computed. Adolescents reported on their own emotion regulation, prosocial behavior, and aggressive behavior. Multi-level latent growth modeling was employed to capture RSA responses across time (i.e., linear and quadratic changes; time course approach), and adolescent emotion regulation and adjustment variables were included in the model to test their links to RSA responses. Results indicated that high RSA baseline was associated with more adolescent prosocial behavior. A pattern of initial RSA decreases (RSA suppression) in response to angry event recall and subsequent RSA increases (RSA rebound) were related to better anger and sadness regulation and more prosocial behavior. However, RSA was not significantly linked to adolescent aggressive behavior. We also compared the time course approach with the conventional linear approach and found that the time course approach provided more meaningful and rich information. The implications of adaptive RSA change patterns are discussed. PMID:25642723

  1. Generalized concurrence in boson sampling.

    PubMed

    Chin, Seungbeom; Huh, Joonsuk

    2018-04-17

    A fundamental question in linear optical quantum computing is to understand the origin of the quantum supremacy in the physical system. It is found that the multimode linear optical transition amplitudes are calculated through the permanents of transition operator matrices, which is a hard problem for classical simulations (boson sampling problem). We can understand this problem by considering a quantum measure that directly determines the runtime for computing the transition amplitudes. In this paper, we suggest a quantum measure named "Fock state concurrence sum" CS, which is the summation over all the members of "the generalized Fock state concurrence" (a measure analogous to the generalized concurrences of entanglement and coherence). By introducing generalized algorithms for computing the transition amplitudes of the Fock state boson sampling with an arbitrary number of photons per mode, we show that the minimal classical runtime for all the known algorithms directly depends on CS. Therefore, we can state that the Fock state concurrence sum CS behaves as a collective measure that controls the computational complexity of Fock state BS. We expect that our observation on the role of the Fock state concurrence in the generalized algorithm for permanents would provide a unified viewpoint to interpret the quantum computing power of linear optics.
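
    Since the transition amplitudes reduce to matrix permanents, a direct classical computation already illustrates the exponential cost; below is a standard Ryser-formula sketch (not the generalized algorithms of the paper).

```python
from itertools import combinations

def permanent(a):
    """Permanent of a square matrix via Ryser's formula, O(2^n * n^2).

    This exponential cost is what makes exact boson-sampling amplitudes hard
    to compute classically.
    """
    n = len(a)
    total = 0.0
    for k in range(1, n + 1):
        for cols in combinations(range(n), k):
            prod = 1.0
            for row in a:
                prod *= sum(row[j] for j in cols)
            total += (-1) ** k * prod
    return (-1) ** n * total

# Sanity check: perm([[a, b], [c, d]]) = a*d + b*c
print(permanent([[1.0, 2.0], [3.0, 4.0]]))  # 1*4 + 2*3 = 10
```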

  2. Quantum algorithm for linear regression

    NASA Astrophysics Data System (ADS)

    Wang, Guoming

    2017-07-01

    We present a quantum algorithm for fitting a linear regression model to a given data set using the least-squares approach. Unlike previous algorithms, which yield a quantum state encoding the optimal parameters, our algorithm outputs these numbers in classical form. So by running it once, one completely determines the fitted model and then can use it to make predictions on new data at little cost. Moreover, our algorithm works in the standard oracle model, and can handle data sets with nonsparse design matrices. It runs in time poly(log2(N), d, κ, 1/ε), where N is the size of the data set, d is the number of adjustable parameters, κ is the condition number of the design matrix, and ε is the desired precision in the output. We also show that the polynomial dependence on d and κ is necessary. Thus, our algorithm cannot be significantly improved. Furthermore, we also give a quantum algorithm that estimates the quality of the least-squares fit (without computing its parameters explicitly). This algorithm runs faster than the one for finding this fit, and can be used to check whether the given data set qualifies for linear regression in the first place.
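
    For orientation, the classical least-squares problem the quantum algorithm targets, together with the condition number κ of the design matrix that enters its runtime bound, can be set up as follows (illustrative data only).

```python
import numpy as np

# Classical counterpart of the fit: ordinary least squares on a simulated data set.
rng = np.random.default_rng(4)
N, d = 1000, 5                       # data set size and number of adjustable parameters
X = rng.normal(size=(N, d))
theta_true = rng.normal(size=d)
y = X @ theta_true + 0.1 * rng.normal(size=N)

theta_hat, residuals, rank, sing_vals = np.linalg.lstsq(X, y, rcond=None)
kappa = sing_vals[0] / sing_vals[-1]  # condition number of the design matrix
print(theta_hat, kappa)
```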

  3. Aspects of general higher-order gravities

    NASA Astrophysics Data System (ADS)

    Bueno, Pablo; Cano, Pablo A.; Min, Vincent S.; Visser, Manus R.

    2017-02-01

    We study several aspects of higher-order gravities constructed from general contractions of the Riemann tensor and the metric in arbitrary dimensions. First, we use the fast-linearization procedure presented in [P. Bueno and P. A. Cano, arXiv:1607.06463] to obtain the equations satisfied by the metric perturbation modes on a maximally symmetric background in the presence of matter and to classify L (Riemann ) theories according to their spectrum. Then, we linearize all theories up to quartic order in curvature and use this result to construct quartic versions of Einsteinian cubic gravity. In addition, we show that the most general cubic gravity constructed in a dimension-independent way and which does not propagate the ghostlike spin-2 mode (but can propagate the scalar) is a linear combination of f (Lovelock ) invariants, plus the Einsteinian cubic gravity term, plus a new ghost-free gravity term. Next, we construct the generalized Newton potential and the post-Newtonian parameter γ for general L (Riemann ) gravities in arbitrary dimensions, unveiling some interesting differences with respect to the four-dimensional case. We also study the emission and propagation of gravitational radiation from sources for these theories in four dimensions, providing a generalized formula for the power emitted. Finally, we review Wald's formalism for general L (Riemann ) theories and construct new explicit expressions for the relevant quantities involved. Many examples illustrate our calculations.

  4. Variable selection for marginal longitudinal generalized linear models.

    PubMed

    Cantoni, Eva; Flemming, Joanna Mills; Ronchetti, Elvezio

    2005-06-01

    Variable selection is an essential part of any statistical analysis and yet has been somewhat neglected in the context of longitudinal data analysis. In this article, we propose a generalized version of Mallows's C(p) (GC(p)) suitable for use with both parametric and nonparametric models. GC(p) provides an estimate of a measure of a model's adequacy for prediction. We examine its performance with popular marginal longitudinal models (fitted using GEE) and contrast results with what is typically done in practice: variable selection based on Wald-type or score-type tests. An application to real data further demonstrates the merits of our approach while at the same time emphasizing some important robust features inherent to GC(p).
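
    A minimal sketch of fitting the marginal longitudinal models referred to above with GEE on simulated data. The GC(p) criterion itself is not implemented here; QIC is shown only as a commonly used, related selection criterion.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Hypothetical longitudinal data: repeated binary outcomes within subjects,
# where only x1 truly affects the response.
rng = np.random.default_rng(5)
n_subj, n_obs = 100, 4
subj = np.repeat(np.arange(n_subj), n_obs)
x1 = rng.normal(size=subj.size)
x2 = rng.normal(size=subj.size)
p = 1 / (1 + np.exp(-(-0.5 + 0.8 * x1)))
y = rng.binomial(1, p)
df = pd.DataFrame({"y": y, "x1": x1, "x2": x2, "subj": subj})

# Marginal models fitted by GEE with an exchangeable working correlation.
m_full = smf.gee("y ~ x1 + x2", groups="subj", data=df,
                 family=sm.families.Binomial(),
                 cov_struct=sm.cov_struct.Exchangeable()).fit()
m_red = smf.gee("y ~ x1", groups="subj", data=df,
                family=sm.families.Binomial(),
                cov_struct=sm.cov_struct.Exchangeable()).fit()
print(m_full.qic(), m_red.qic())
```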

  5. Linear analysis of auto-organization in Hebbian neural networks.

    PubMed

    Carlos Letelier, J; Mpodozis, J

    1995-01-01

    The self-organization of neurotopies where neural connections follow Hebbian dynamics is framed in terms of linear operator theory. A general and exact equation describing the time evolution of the overall synaptic strength connecting two neural laminae is derived. This linear matricial equation, which is similar to the equations used to describe oscillating systems in physics, is modified by the introduction of non-linear terms, in order to capture self-organizing (or auto-organizing) processes. The behavior of a simple and small system, that contains a non-linearity that mimics a metabolic constraint, is analyzed by computer simulations. The emergence of a simple "order" (or degree of organization) in this low-dimensionality model system is discussed.
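
    The record frames Hebbian self-organization as a linear operator equation modified by a non-linear constraint; the sketch below uses an Oja-style decay term as a stand-in for the metabolic constraint (the paper's exact non-linearity may differ), with all sizes and inputs invented.

```python
import numpy as np

# Hebbian growth of the synaptic matrix W connecting an input lamina x to an
# output lamina y = W x, bounded by an Oja-style decay term.
rng = np.random.default_rng(6)
n_in, n_out, eta = 8, 3, 0.01
W = 0.01 * rng.normal(size=(n_out, n_in))

# Inputs with correlated structure, so a dominant direction can be learned.
C = np.eye(n_in) + 0.5
for _ in range(5000):
    x = rng.multivariate_normal(np.zeros(n_in), C)
    y = W @ x
    # Hebbian term y x^T, kept bounded by the -y^2 W decay (Oja's rule).
    W += eta * (np.outer(y, x) - (y ** 2)[:, None] * W)

print(np.linalg.norm(W, axis=1))   # row norms stay bounded near 1
```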

  6. Double Arm Linkage precision Linear motion (DALL) Carriage, a simplified, rugged, high performance linear motion stage for the moving mirror of a Fourier Transform Spectrometer or other system requiring precision linear motion

    NASA Astrophysics Data System (ADS)

    Johnson, Kendall B.; Hopkins, Greg

    2017-08-01

    The Double Arm Linkage precision Linear motion (DALL) carriage has been developed as a simplified, rugged, high performance linear motion stage. Initially conceived as the moving-mirror stage of a Fourier Transform Spectrometer (FTS), it is applicable to any system requiring high performance linear motion. It is based on rigid double arm linkages connecting a base to a moving carriage through flexures. It is a monolithic design. The system is fabricated from one piece of material including the flexural elements, using high precision machining. The monolithic design has many advantages. There are no joints to slip or creep and there are no CTE (coefficient of thermal expansion) issues. This provides a stable, robust design, both mechanically and thermally, and is expected to provide a wide operating temperature range, including cryogenic temperatures, and high tolerance to vibration and shock. Furthermore, it provides simplicity and ease of implementation, as there is no assembly or alignment of the mechanism. It comes out of the machining operation aligned and there are no adjustments. A prototype has been fabricated and tested, showing superb shear performance and very promising tilt performance. This makes it applicable to both corner cube and flat mirror FTS systems respectively.

  7. SAS macro programs for geographically weighted generalized linear modeling with spatial point data: applications to health research.

    PubMed

    Chen, Vivian Yi-Ju; Yang, Tse-Chuan

    2012-08-01

    An increasing interest in exploring spatial non-stationarity has generated several specialized analytic software programs; however, few of these programs can be integrated natively into a well-developed statistical environment such as SAS. We not only developed a set of SAS macro programs to fill this gap, but also expanded the geographically weighted generalized linear modeling (GWGLM) by integrating the strengths of SAS into the GWGLM framework. Three features distinguish our work. First, the macro programs of this study provide more kernel weighting functions than the existing programs. Second, with our codes the users are able to better specify the bandwidth selection process compared to the capabilities of existing programs. Third, the development of the macro programs is fully embedded in the SAS environment, providing great potential for future exploration of complicated spatially varying coefficient models in other disciplines. We provided three empirical examples to illustrate the use of the SAS macro programs and demonstrated the advantages explained above. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.
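
    The record describes SAS macro programs; purely as a generic illustration of the GWGLM idea (not the authors' implementation), the sketch below fits a kernel-weighted Poisson GLM at a focal location in Python, with simulated spatial data and an assumed Gaussian kernel and bandwidth.

```python
import numpy as np
import statsmodels.api as sm

# Simulated spatial point data with a spatially varying coefficient:
# the effect of x on the Poisson mean grows from west to east.
rng = np.random.default_rng(7)
n = 300
coords = rng.uniform(0, 10, size=(n, 2))
x = rng.normal(size=n)
beta_x = 0.2 + 0.1 * coords[:, 0]
y = rng.poisson(np.exp(0.5 + beta_x * x))
X = sm.add_constant(x)

def gwglm_at(focal, bandwidth=2.0):
    """Refit a weighted Poisson GLM at one focal point (Gaussian kernel weights)."""
    d = np.linalg.norm(coords - focal, axis=1)
    w = np.exp(-0.5 * (d / bandwidth) ** 2)
    res = sm.GLM(y, X, family=sm.families.Poisson(), var_weights=w).fit()
    return res.params

print(gwglm_at(np.array([1.0, 5.0])))   # western focal point: smaller slope
print(gwglm_at(np.array([9.0, 5.0])))   # eastern focal point: larger slope
```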

  8. Tutorial on Biostatistics: Linear Regression Analysis of Continuous Correlated Eye Data.

    PubMed

    Ying, Gui-Shuang; Maguire, Maureen G; Glynn, Robert; Rosner, Bernard

    2017-04-01

    To describe and demonstrate appropriate linear regression methods for analyzing correlated continuous eye data. We describe several approaches to regression analysis involving both eyes, including mixed effects and marginal models under various covariance structures to account for inter-eye correlation. We demonstrate, with SAS statistical software, applications in a study comparing baseline refractive error between one eye with choroidal neovascularization (CNV) and the unaffected fellow eye, and in a study determining factors associated with visual field in the elderly. When refractive error from both eyes was analyzed with standard linear regression without accounting for inter-eye correlation (adjusting for demographic and ocular covariates), the difference between eyes with CNV and fellow eyes was 0.15 diopters (D; 95% confidence interval, CI -0.03 to 0.32D, p = 0.10). Using a mixed effects model or a marginal model, the estimated difference was the same but with narrower 95% CI (0.01 to 0.28D, p = 0.03). Standard regression for visual field data from both eyes provided biased estimates of standard error (generally underestimated) and smaller p-values, while analysis of the worse eye provided larger p-values than mixed effects models and marginal models. In research involving both eyes, ignoring inter-eye correlation can lead to invalid inferences. Analysis using only right or left eyes is valid, but decreases power. Worse-eye analysis can provide less power and biased estimates of effect. Mixed effects or marginal models using the eye as the unit of analysis should be used to appropriately account for inter-eye correlation and maximize power and precision.
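
    A minimal sketch of the mixed-effects approach recommended above, with the eye as the unit of analysis and a random intercept per patient to absorb the inter-eye correlation; the simulated variables are illustrative, not the study's data.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical paired-eye data: two eyes per patient, one eye affected (CNV).
rng = np.random.default_rng(8)
n_pat = 120
pat = np.repeat(np.arange(n_pat), 2)              # two eyes per patient
cnv = np.tile([1, 0], n_pat)                      # CNV eye vs fellow eye
pat_eff = rng.normal(0, 1.0, n_pat)[pat]          # shared between-eye component
refraction = 0.15 * cnv + pat_eff + rng.normal(0, 0.8, pat.size)
df = pd.DataFrame({"refraction": refraction, "cnv": cnv, "patient": pat})

# Random intercept per patient accounts for the correlation between fellow eyes.
m = smf.mixedlm("refraction ~ cnv", data=df, groups=df["patient"]).fit()
print(m.summary())   # the cnv coefficient estimates the between-eye difference
```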

  9. Phylogenetic mixtures and linear invariants for equal input models.

    PubMed

    Casanellas, Marta; Steel, Mike

    2017-04-01

    The reconstruction of phylogenetic trees from molecular sequence data relies on modelling site substitutions by a Markov process, or a mixture of such processes. In general, allowing mixed processes can result in different tree topologies becoming indistinguishable from the data, even for infinitely long sequences. However, when the underlying Markov process supports linear phylogenetic invariants, then provided these are sufficiently informative, the identifiability of the tree topology can be restored. In this paper, we investigate a class of processes that support linear invariants once the stationary distribution is fixed, the 'equal input model'. This model generalizes the 'Felsenstein 1981' model (and thereby the Jukes-Cantor model) from four states to an arbitrary number of states (finite or infinite), and it can also be described by a 'random cluster' process. We describe the structure and dimension of the vector spaces of phylogenetic mixtures and of linear invariants for any fixed phylogenetic tree (and for all trees-the so called 'model invariants'), on any number n of leaves. We also provide a precise description of the space of mixtures and linear invariants for the special case of [Formula: see text] leaves. By combining techniques from discrete random processes and (multi-) linear algebra, our results build on a classic result that was first established by James Lake (Mol Biol Evol 4:167-191, 1987).

  10. Analysis of multivariate longitudinal kidney function outcomes using generalized linear mixed models.

    PubMed

    Jaffa, Miran A; Gebregziabher, Mulugeta; Jaffa, Ayad A

    2015-06-14

    Renal transplant patients are mandated to have continuous assessment of their kidney function over time to monitor disease progression determined by changes in blood urea nitrogen (BUN), serum creatinine (Cr), and estimated glomerular filtration rate (eGFR). Multivariate analysis of these outcomes that aims at identifying the differential factors that affect disease progression is of great clinical significance. Thus our study aims at demonstrating the application of different joint modeling approaches with random coefficients on a cohort of renal transplant patients and presenting a comparison of their performance through a pseudo-simulation study. The objective of this comparison is to identify the model with best performance and to determine whether accuracy compensates for complexity in the different multivariate joint models. We propose a novel application of multivariate Generalized Linear Mixed Models (mGLMM) to analyze multiple longitudinal kidney function outcomes collected over 3 years on a cohort of 110 renal transplantation patients. The correlated outcomes BUN, Cr, and eGFR and the effect of various covariates such as patient gender, age, and race on these markers were determined holistically using different mGLMMs. The performance of the various mGLMMs that encompass shared random intercept (SHRI), shared random intercept and slope (SHRIS), separate random intercept (SPRI) and separate random intercept and slope (SPRIS) was assessed to identify the one that has the best fit and most accurate estimates. A bootstrap pseudo-simulation study was conducted to gauge the tradeoff between the complexity and accuracy of the models. Accuracy was determined using two measures; the mean of the differences between the estimates of the bootstrapped datasets and the true beta obtained from the application of each model on the renal dataset, and the mean of the square of these differences. The results showed that SPRI provided most accurate estimates and did not exhibit

  11. Linear regression techniques for use in the EC tracer method of secondary organic aerosol estimation

    NASA Astrophysics Data System (ADS)

    Saylor, Rick D.; Edgerton, Eric S.; Hartsell, Benjamin E.

    A variety of linear regression techniques and simple slope estimators are evaluated for use in the elemental carbon (EC) tracer method of secondary organic carbon (OC) estimation. Linear regression techniques based on ordinary least squares are not suitable for situations where measurement uncertainties exist in both regressed variables. In the past, regression based on the method of Deming [1943. Statistical Adjustment of Data. Wiley, London] has been the preferred choice for EC tracer method parameter estimation. In agreement with Chu [2005. Stable estimate of primary OC/EC ratios in the EC tracer method. Atmospheric Environment 39, 1383-1392], we find that in the limited case where primary non-combustion OC (OC non-comb) is assumed to be zero, the ratio of averages (ROA) approach provides a stable and reliable estimate of the primary OC-EC ratio, (OC/EC) pri. In contrast with Chu [2005. Stable estimate of primary OC/EC ratios in the EC tracer method. Atmospheric Environment 39, 1383-1392], however, we find that the optimal use of Deming regression (and the more general York et al. [2004. Unified equations for the slope, intercept, and standard errors of the best straight line. American Journal of Physics 72, 367-375] regression) provides excellent results as well. For the more typical case where OC non-comb is allowed to obtain a non-zero value, we find that regression based on the method of York is the preferred choice for EC tracer method parameter estimation. In the York regression technique, detailed information on uncertainties in the measurement of OC and EC is used to improve the linear best fit to the given data. If only limited information is available on the relative uncertainties of OC and EC, then Deming regression should be used. On the other hand, use of ROA in the estimation of secondary OC, and thus the assumption of a zero OC non-comb value, generally leads to an overestimation of the contribution of secondary OC to total measured OC.
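
    A minimal sketch of the Deming slope estimator discussed above, assuming a known ratio delta of the OC to EC measurement-error variances; the simulated EC/OC values are invented for illustration, and the York refinement (point-by-point uncertainties) is not reproduced.

```python
import numpy as np

def deming_fit(x, y, delta=1.0):
    """Deming regression slope and intercept.

    delta is the ratio of the error variance in y (OC) to the error variance
    in x (EC); delta = 1 corresponds to orthogonal regression.
    """
    x = np.asarray(x, float); y = np.asarray(y, float)
    sxx = np.var(x, ddof=1)
    syy = np.var(y, ddof=1)
    sxy = np.cov(x, y, ddof=1)[0, 1]
    b1 = (syy - delta * sxx + np.sqrt((syy - delta * sxx) ** 2
                                      + 4 * delta * sxy ** 2)) / (2 * sxy)
    b0 = y.mean() - b1 * x.mean()
    return b1, b0

# Illustrative EC/OC data (ug/m3); in the EC tracer method the slope estimates
# (OC/EC)_pri and the intercept estimates OC_non-comb.
rng = np.random.default_rng(9)
ec_true = rng.uniform(0.5, 5.0, 100)
ec = ec_true + rng.normal(0, 0.2, 100)
oc = 2.0 * ec_true + 1.0 + rng.normal(0, 0.4, 100)
print(deming_fit(ec, oc, delta=(0.4 / 0.2) ** 2))
```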

  12. Linear and Non-linear Information Flows In Rainfall Field

    NASA Astrophysics Data System (ADS)

    Molini, A.; La Barbera, P.; Lanza, L. G.

    The rainfall process is the result of a complex framework of non-linear dynamical interactions between the different components of the atmosphere. It preserves the complexity and the intermittent features of the generating system in space and time as well as the strong dependence of these properties on the scale of observations. The understanding and quantification of how the non-linearity of the generating process comes to influence the single rain events constitute relevant research issues in the field of hydro-meteorology, especially in those applications where a timely and effective forecasting of heavy rain events is able to reduce the risk of failure. This work focuses on the characterization of the non-linear properties of the observed rain process and on the influence of these features on hydrological models. Among the goals of such a survey is the research of regular structures of the rainfall phenomenon and the study of the information flows within the rain field. The research focuses on three basic evolution directions for the system: in time, in space and between the different scales. In fact, the information flows that force the system to evolve represent in general a connection between the different locations in space, the different instants in time and, unless assuming the hypothesis of scale invariance is verified "a priori", the different characteristic scales. A first phase of the analysis is carried out by means of classic statistical methods, then a survey of the information flows within the field is developed by means of techniques borrowed from the Information Theory, and finally an analysis of the rain signal in the time and frequency domains is performed, with particular reference to its intermittent structure. The methods adopted in this last part of the work are both the classic techniques of statistical inference and a few procedures for the detection of non-linear and non-stationary features within the process starting from

  13. Fast wavelet based algorithms for linear evolution equations

    NASA Technical Reports Server (NTRS)

    Engquist, Bjorn; Osher, Stanley; Zhong, Sifen

    1992-01-01

    A class was devised of fast wavelet based algorithms for linear evolution equations whose coefficients are time independent. The method draws on the work of Beylkin, Coifman, and Rokhlin which they applied to general Calderon-Zygmund type integral operators. A modification of their idea is applied to linear hyperbolic and parabolic equations, with spatially varying coefficients. A significant speedup over standard methods is obtained when applied to hyperbolic equations in one space dimension and parabolic equations in multidimensions.

  14. 26 CFR 1.56-0 - Table of contents to § 1.56-1, adjustment for book income of corporations.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 26 Internal Revenue 1 2011-04-01 2009-04-01 true Table of contents to § 1.56-1, adjustment for...) Definitions. (i) Historic practice. (ii) Accounting literature. (3) Adjustments for certain taxes. (i) In... the accounting literature. (ii) Equity adjustments. (A) In general. (B) Definition of equity...

  15. A Study on the Relationships among Surface Variables to Adjust the Height of Surface Temperature for Data Assimilation.

    NASA Astrophysics Data System (ADS)

    Kang, J. H.; Song, H. J.; Han, H. J.; Ha, J. H.

    2016-12-01

    The observation processing system KPOP (KIAPS - Korea Institute of Atmospheric Prediction Systems - Package for Observation Processing) has been developed to provide optimal observations to the data assimilation system for the KIAPS Integrated Model (KIM). Currently, KPOP is capable of processing almost all observations used by the KMA (Korea Meteorological Administration) operational global data assimilation system. Height adjustment of SURFACE observations is essential for quality control because of the difference in height between the observation station and the model topography. For SURFACE observations, it is usual to adjust the height using a lapse rate or the hypsometric equation, which determines the adjustment mainly from the height difference. We question whether the height can be properly adjusted by a linear or exponential relationship that depends solely on the height difference while disregarding the atmospheric conditions. In this study, we first analyse the change of surface variables such as temperature (T2m), pressure (Psfc), humidity (RH2m and Q2m), and wind components (U and V) according to the height difference. Additionally, we look further into the relationships among the surface variables. The pressure difference shows a strong linear relationship with the height difference, but the temperature difference shows a stronger correlation with the difference in relative humidity than with the height difference. A reliable model for the height adjustment of surface temperature is being developed based on these preliminary results.
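
    For the conventional adjustments mentioned above (a constant lapse rate for temperature, the hypsometric equation for pressure), a minimal sketch follows; the constants are standard-atmosphere values and the station and model heights are invented.

```python
import numpy as np

GAMMA = 0.0065        # standard lapse rate (K/m)
RD, G = 287.05, 9.80665

def adjust_temperature(t_obs_k, z_obs, z_model):
    """Shift 2 m temperature from station height to model terrain height with a constant lapse rate."""
    return t_obs_k + GAMMA * (z_obs - z_model)

def adjust_pressure(p_obs_hpa, t_obs_k, z_obs, z_model):
    """Hypsometric adjustment of surface pressure using a layer-mean temperature."""
    t_mean = t_obs_k + 0.5 * GAMMA * (z_obs - z_model)
    return p_obs_hpa * np.exp(G * (z_obs - z_model) / (RD * t_mean))

# Station 150 m below the model terrain: temperature cools, pressure drops at model height.
print(adjust_temperature(288.15, z_obs=350.0, z_model=500.0))
print(adjust_pressure(1005.0, 288.15, z_obs=350.0, z_model=500.0))
```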

  16. Accounting for time- and space-varying changes in the gravity field to improve the network adjustment of relative-gravity data

    USGS Publications Warehouse

    Kennedy, Jeffrey R.; Ferre, Ty P.A.

    2015-01-01

    The relative gravimeter is the primary terrestrial instrument for measuring spatially and temporally varying gravitational fields. The background noise of the instrument—that is, non-linear drift and random tares—typically requires some form of least-squares network adjustment to integrate data collected during a campaign that may take several days to weeks. Here, we present an approach to remove the change in the observed relative-gravity differences caused by hydrologic or other transient processes during a single campaign, so that the adjusted gravity values can be referenced to a single epoch. The conceptual approach is an example of coupled hydrogeophysical inversion, by which a hydrologic model is used to inform and constrain the geophysical forward model. The hydrologic model simulates the spatial variation of the rate of change of gravity as either a linear function of distance from an infiltration source, or using a 3-D numerical groundwater model. The linear function can be included in and solved for as part of the network adjustment. Alternatively, the groundwater model is used to predict the change of gravity at each station through time, from which the accumulated gravity change is calculated and removed from the data prior to the network adjustment. Data from a field experiment conducted at an artificial-recharge facility are used to verify our approach. Maximum gravity change due to hydrology (observed using a superconducting gravimeter) during the relative-gravity field campaigns was up to 2.6 μGal d−1, each campaign was between 4 and 6 d and one month elapsed between campaigns. The maximum absolute difference in the estimated gravity change between two campaigns, two months apart, using the standard network adjustment method and the new approach, was 5.5 μGal. The maximum gravity change between the same two campaigns was 148 μGal, and spatial variation in gravity change revealed zones of preferential infiltration and areas of relatively
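
    The network adjustment described above integrates relative-gravity differences while estimating instrument drift; the sketch below shows the basic least-squares formulation with invented station names, times, and readings. The hydrologic rate term from the paper would enter as an extra column in the design matrix or as a pre-correction of the observations.

```python
import numpy as np

# Minimal network-adjustment sketch: each observed difference is modeled as
# g[to] - g[from] + drift_rate * (t_to - t_from), one station is fixed as the
# datum, and the unknowns are solved by least squares.
stations = ["A", "B", "C"]
idx = {s: k for k, s in enumerate(stations)}
# (from, to, t_from, t_to in days, observed difference in microGal)
obs = [("A", "B", 0.00, 0.05, 120.0),
       ("B", "C", 0.10, 0.15, -45.0),
       ("A", "C", 0.20, 0.26, 76.0),
       ("C", "A", 0.30, 0.36, -74.0)]

A = np.zeros((len(obs), len(stations) + 1))   # last column: linear drift rate (uGal/day)
y = np.zeros(len(obs))
for r, (s1, s2, t1, t2, dg) in enumerate(obs):
    A[r, idx[s2]] += 1.0
    A[r, idx[s1]] -= 1.0
    A[r, -1] = t2 - t1
    y[r] = dg

# Fix station A as the datum (g_A = 0) by dropping its column.
A_red = np.delete(A, idx["A"], axis=1)
params, *_ = np.linalg.lstsq(A_red, y, rcond=None)
print(dict(zip(["g_B", "g_C", "drift_rate"], params)))
```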

  17. Linear Least Squares for Correlated Data

    NASA Technical Reports Server (NTRS)

    Dean, Edwin B.

    1988-01-01

    Throughout the literature authors have consistently discussed the suspicion that regression results were less than satisfactory when the independent variables were correlated. Camm, Gulledge, and Womer, and Womer and Marcotte provide excellent applied examples of these concerns. Many authors have obtained partial solutions for this problem as discussed by Womer and Marcotte and Wonnacott and Wonnacott, which result in generalized least squares algorithms to solve restrictive cases. This paper presents a simple but relatively general multivariate method for obtaining linear least squares coefficients which are free of the statistical distortion created by correlated independent variables.

  18. A general solution for the registration of optical multispectral scanners

    NASA Technical Reports Server (NTRS)

    Rader, M. L.

    1974-01-01

    The paper documents a general theory for registration (mapping) of data sets gathered by optical scanners such as the ERTS satellite MSS and the Skylab S-192 MSS. This solution is generally applicable to scanners which have rotating optics. Navigation data and ground control points are used in a statistically weighted adjustment based on a mathematical model of the dynamics of the spacecraft and the scanner system. This adjustment is very similar to the well known photogrammetric adjustments used in aerial mapping. Actual tests have been completed on NASA aircraft 24 channel MSS data, and the results are very encouraging.

  19. Aging effect on step adjustments and stability control in visually perturbed gait initiation.

    PubMed

    Sun, Ruopeng; Cui, Chuyi; Shea, John B

    2017-10-01

    Gait adaptability is essential for fall avoidance during locomotion. It requires the ability to rapidly inhibit original motor planning, select and execute alternative motor commands, while also maintaining the stability of locomotion. This study investigated the aging effect on gait adaptability and dynamic stability control during a visually perturbed gait initiation task. A novel approach was used such that the anticipatory postural adjustment (APA) during gait initiation was used to trigger the unpredictable relocation of a foot-size stepping target. Participants (10 young adults and 10 older adults) completed visually perturbed gait initiation in three adjustment timing conditions (early, intermediate, late; all extracted from the stereotypical APA pattern) and two adjustment direction conditions (medial, lateral). Stepping accuracy, foot rotation at landing, and Margin of Dynamic Stability (MDS) were analyzed and compared across test conditions and groups using a linear mixed model. Stepping accuracy decreased as a function of adjustment timing as well as stepping direction, with older subjects exhibiting a significantly greater undershoot in foot placement for late lateral stepping. Late adjustment also elicited a reaching-like movement (i.e. foot rotation prior to landing in order to step on the target), regardless of stepping direction. MDS measures in the medial-lateral and anterior-posterior direction revealed that both young and older adults exhibited reduced stability in the adjustment step and subsequent steps. However, young adults returned to stable gait faster than older adults. These findings could be useful for future studies screening for deficits in gait adaptability and preventing falls. Copyright © 2017 Elsevier B.V. All rights reserved.

  20. A Superstrong Adjustable Permanent Magnet for the Final Focus Quadrupole in a Linear Collider

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mihara, T.

    A super-strong permanent magnet quadrupole (PMQ) was fabricated and tested. It has an integrated strength of 28.5 T with an overall length of 10 cm and a 7 mm bore radius. The final focus quadrupole of a linear collider needs a variable focal length. This can be obtained by slicing the magnet into pieces along the beamline direction and rotating these slices. However, this technique may lead to movement of the magnetic center and introduction of a skew quadrupole component when the strength is varied. A "double ring structure" can ease these effects. A second prototype PMQ, containing thermal compensation materials and with a double ring structure, has been fabricated. A worm gear was selected as the mechanical rotation scheme because the double ring structure requires a large torque to rotate the magnets. The structure of the second prototype PMQ is shown.