Linearly Adjustable International Portfolios
NASA Astrophysics Data System (ADS)
Fonseca, R. J.; Kuhn, D.; Rustem, B.
2010-09-01
We present an approach to multi-stage international portfolio optimization based on the imposition of a linear structure on the recourse decisions. Multiperiod decision problems are traditionally formulated as stochastic programs. Scenario tree based solutions however can become intractable as the number of stages increases. By restricting the space of decision policies to linear rules, we obtain a conservative tractable approximation to the original problem. Local asset prices and foreign exchange rates are modelled separately, which allows for a direct measure of their impact on the final portfolio value.
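The linear (affine) decision-rule restriction described in this abstract can be sketched in a few lines: the recourse decision is forced to be an affine function of the observed uncertainty, which turns an intractable multi-stage program into a tractable one. All names, dimensions, and data below are illustrative, not taken from the paper.

```python
import numpy as np

# Sketch: a linear (affine) decision rule for a two-stage problem.
# The second-stage allocation is restricted to an affine function of
# the observed return/exchange-rate vector xi: w2(xi) = W0 + W1 @ xi.
# In the full approach, W0 and W1 become the optimization variables.

rng = np.random.default_rng(0)
n_assets, n_factors = 3, 2

W0 = rng.standard_normal(n_assets)               # fixed part of the rule
W1 = rng.standard_normal((n_assets, n_factors))  # linear response to xi

def recourse(xi):
    """Second-stage portfolio implied by the linear decision rule."""
    return W0 + W1 @ xi

xi = rng.standard_normal(n_factors)              # one observed scenario
w2 = recourse(xi)
assert w2.shape == (n_assets,)
```

Because the rule is affine in xi, expectations and worst-case values over xi reduce to expressions in (W0, W1), which is what makes the approximation conservative but tractable.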
General linear chirplet transform
NASA Astrophysics Data System (ADS)
Yu, Gang; Zhou, Yiqi
2016-03-01
Time-frequency analysis (TFA) is an effective tool for characterizing the time-varying features of a signal and has attracted considerable attention over a long period. With the development of TFA, many advanced methods have been proposed that provide more precise TF results; however, each inevitably introduces some restrictions. In this paper, we introduce a novel TFA method, termed the general linear chirplet transform (GLCT), which overcomes some limitations of current TFA methods. In numerical and experimental validations, comparison with current TFA methods demonstrates several advantages of GLCT: it characterizes well multi-component signals with distinct non-linear features, is independent of the mathematical model and the initial TFA method, allows reconstruction of the component of interest, and is insensitive to noise.
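The chirplet idea can be illustrated with a minimal sketch: a windowed Fourier transform whose analysis atom carries an extra chirp-rate term exp(-j*pi*c*(t-tau)^2). This is a single-chirp-rate toy, not the paper's GLCT (which generalizes over chirp rates); the function name, window, and parameters are illustrative assumptions.

```python
import numpy as np

# Sketch of a fixed-rate linear chirplet transform: demodulate each
# windowed segment by a chirp of rate c, then take a local Fourier
# transform. A tone that chirps at rate c becomes locally stationary.

def linear_chirplet(x, fs, chirp_rate, win_len=64):
    n = len(x)
    half = win_len // 2
    freqs = np.fft.fftfreq(win_len, 1 / fs)
    tf = np.zeros((n, win_len))
    window = np.hanning(win_len)
    tau = (np.arange(win_len) - half) / fs          # local time axis
    chirp = np.exp(-1j * np.pi * chirp_rate * tau**2)
    for i in range(half, n - half):
        seg = x[i - half:i + half]
        tf[i] = np.abs(np.fft.fft(seg * chirp * window))
    return freqs, tf

fs = 1000.0
t = np.arange(0, 1, 1 / fs)
x = np.cos(2 * np.pi * (50 * t + 40 * t**2))   # linear chirp, 80 Hz/s
freqs, tf = linear_chirplet(x, fs, chirp_rate=80.0)
```

With the chirp rate matched to the signal, the magnitude concentrates around the instantaneous frequency; a mismatched rate smears it, which is the effect GLCT-type methods exploit and adapt.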
Generalized Linear Covariance Analysis
NASA Astrophysics Data System (ADS)
Markley, F. Landis; Carpenter, J. Russell
2009-01-01
This paper presents a comprehensive approach to filter modeling for generalized covariance analysis of both batch least-squares and sequential estimators. We review and extend in two directions the results of prior work that allowed for partitioning of the state space into "solve-for" and "consider" parameters, accounted for differences between the formal values and the true values of the measurement noise, process noise, and a priori solve-for and consider covariances, and explicitly partitioned the errors into subspaces containing only the influence of the measurement noise, process noise, and a priori solve-for and consider covariances. In this work, we explicitly add sensitivity analysis to this prior work, and relax an implicit assumption that the batch estimator's epoch time occurs prior to the definitive span. We also apply the method to an integrated orbit and attitude problem, in which gyro and accelerometer errors, though not estimated, influence the orbit determination performance. We illustrate our results using two graphical presentations, which we call the "variance sandpile" and the "sensitivity mosaic," and we compare the linear covariance results to confidence intervals associated with ensemble statistics from a Monte Carlo analysis.
Generalized Linear Covariance Analysis
NASA Technical Reports Server (NTRS)
Carpenter, James R.; Markley, F. Landis
2014-01-01
This talk presents a comprehensive approach to filter modeling for generalized covariance analysis of both batch least-squares and sequential estimators. We review and extend in two directions the results of prior work that allowed for partitioning of the state space into "solve-for" and "consider" parameters, accounted for differences between the formal values and the true values of the measurement noise, process noise, and a priori solve-for and consider covariances, and explicitly partitioned the errors into subspaces containing only the influence of the measurement noise, process noise, and solve-for and consider covariances. In this work, we explicitly add sensitivity analysis to this prior work, and relax an implicit assumption that the batch estimator's epoch time occurs prior to the definitive span. We also apply the method to an integrated orbit and attitude problem, in which gyro and accelerometer errors, though not estimated, influence the orbit determination performance. We illustrate our results using two graphical presentations, which we call the "variance sandpile" and the "sensitivity mosaic," and we compare the linear covariance results to confidence intervals associated with ensemble statistics from a Monte Carlo analysis.
Generalized Linear Covariance Analysis
NASA Technical Reports Server (NTRS)
Carpenter, J. Russell; Markley, F. Landis
2008-01-01
We review and extend in two directions the results of prior work on generalized covariance analysis methods. This prior work allowed for partitioning of the state space into "solve-for" and "consider" parameters, allowed for differences between the formal values and the true values of the measurement noise, process noise, and a priori solve-for and consider covariances, and explicitly partitioned the errors into subspaces containing only the influence of the measurement noise, process noise, and a priori solve-for and consider covariances. In this work, we explicitly add sensitivity analysis to this prior work, and relax an implicit assumption that the batch estimator's anchor time occurs prior to the definitive span. We also apply the method to an integrated orbit and attitude problem, in which gyro and accelerometer errors, though not estimated, influence the orbit determination performance. We illustrate our results using two graphical presentations, which we call the "variance sandpile" and the "sensitivity mosaic," and we compare the linear covariance results to confidence intervals associated with ensemble statistics from a Monte Carlo analysis.
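The final comparison these abstracts describe, linear covariance results versus Monte Carlo ensemble statistics, can be sketched for a toy linear map. The transition matrix and covariances below are illustrative, not from the papers.

```python
import numpy as np

# Sketch: linear covariance propagation P1 = A P0 A^T versus the
# sample covariance of a propagated Monte Carlo ensemble, the kind
# of cross-check the abstract describes.

rng = np.random.default_rng(1)
A = np.array([[1.0, 0.1],      # illustrative discrete-time transition
              [0.0, 1.0]])
P0 = np.diag([0.04, 0.01])     # a priori covariance

# linear covariance analysis
P1 = A @ P0 @ A.T

# Monte Carlo: propagate an ensemble, take the sample covariance
samples = rng.multivariate_normal(np.zeros(2), P0, size=200_000)
P1_mc = np.cov((samples @ A.T).T)

assert np.allclose(P1, P1_mc, atol=1e-3)
```

For a truly linear system the two agree to sampling error; the papers' point is that the linear analysis additionally decomposes P1 into contributions from each error source, which the ensemble alone does not reveal.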
Quantization of general linear electrodynamics
Rivera, Sergio; Schuller, Frederic P.
2011-03-15
General linear electrodynamics allow for an arbitrary linear constitutive relation between the field strength 2-form and induction 2-form density if crucial hyperbolicity and energy conditions are satisfied, which render the theory predictive and physically interpretable. Taking into account the higher-order polynomial dispersion relation and associated causal structure of general linear electrodynamics, we carefully develop its Hamiltonian formulation from first principles. Canonical quantization of the resulting constrained system then results in a quantum vacuum which is sensitive to the constitutive tensor of the classical theory. As an application we calculate the Casimir effect in a birefringent linear optical medium.
Generalized adjustment by least squares (GALS).
Elassal, A.A.
1983-01-01
The least-squares principle is universally accepted as the basis for adjustment procedures in the allied fields of geodesy, photogrammetry and surveying. A prototype software package for Generalized Adjustment by Least Squares (GALS) is described. The package is designed to perform all least-squares-related functions in a typical adjustment program. GALS is capable of supporting development of adjustment programs of any size or degree of complexity. -Author
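The core adjustment step that a package like GALS generalizes is the solution of the weighted normal equations. A minimal sketch, with an illustrative design matrix and observations (not taken from the report):

```python
import numpy as np

# Sketch of the basic least-squares adjustment: given a design matrix A,
# observations l, and weight matrix P, solve the normal equations
#   (A^T P A) x = A^T P l
# for the parameter corrections x.

A = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [1.0, 2.0],
              [1.0, 3.0]])       # design matrix (line fit)
l = np.array([0.1, 1.1, 1.9, 3.1])   # observations
P = np.eye(4)                    # equal weights

N = A.T @ P @ A                  # normal matrix
x = np.linalg.solve(N, A.T @ P @ l)
residuals = l - A @ x
```

A generalized package adds, around this step, observation-equation assembly, sparse or partitioned normal-matrix handling, and iteration for non-linear models; the algebraic kernel stays the same.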
A mechanism for precise linear and angular adjustment utilizing flexures
NASA Technical Reports Server (NTRS)
Ellis, J. R.
1986-01-01
The design and development of a mechanism for precise linear and angular adjustment is described. This work was in support of the development of a mechanical extensometer for biaxial strain measurement. A compact mechanism was required which would allow angular adjustments about perpendicular axes with better than 0.001 degree resolution. The approach adopted was first to develop a means of precise linear adjustment. To this end, a mechanism based on the toggle principle was built with inexpensive and easily manufactured parts. A detailed evaluation showed that the resolution of the mechanism was better than 1 micron and that adjustments made by using the device were repeatable. In the second stage of this work, the linear adjustment mechanisms were used in conjunction with a simple arrangement of flexural pivots and attachment blocks to provide the required angular adjustments. Attempts to use the mechanism in conjunction with the biaxial extensometer under development proved unsuccessful. Any form of in situ adjustment was found to cause erratic changes in instrument output. These changes were due to problems with the suspension system. However, the subject mechanism performed well in its own right and appeared to have potential for use in other applications.
Linear equality constraints in the general linear mixed model.
Edwards, L J; Stewart, P W; Muller, K E; Helms, R W
2001-12-01
Scientists may wish to analyze correlated outcome data with constraints among the responses. For example, piecewise linear regression in a longitudinal data analysis can require use of a general linear mixed model combined with linear parameter constraints. Although well developed for standard univariate models, there are no general results that allow a data analyst to specify a mixed model equation in conjunction with a set of constraints on the parameters. We resolve the difficulty by precisely describing conditions that allow specifying linear parameter constraints that ensure the validity of estimates and tests in a general linear mixed model. The recommended approach requires only straightforward and noniterative calculations to implement. We illustrate the convenience and advantages of the methods with a comparison of cognitive developmental patterns in a study of children from low-income families followed from infancy to early adulthood.
Generalized Linear Models in Family Studies
ERIC Educational Resources Information Center
Wu, Zheng
2005-01-01
Generalized linear models (GLMs), as defined by J. A. Nelder and R. W. M. Wedderburn (1972), unify a class of regression models for categorical, discrete, and continuous response variables. As an extension of classical linear models, GLMs provide a common body of theory and methodology for some seemingly unrelated models and procedures, such as…
Adjustable permanent quadrupoles for the next linear collider
James T. Volk et al.
2001-06-22
The proposed Next Linear Collider (NLC) will require over 1400 adjustable quadrupoles between the main linacs' accelerator structures. These 12.7 mm bore quadrupoles will have a range of integrated strength from 0.6 to 138 Tesla, with a maximum gradient of 141 Tesla per meter, an adjustment range of +0 to -20% and effective lengths from 324 mm to 972 mm. The magnetic center must remain stable to within 1 micron during the 20% adjustment. In an effort to reduce costs and increase reliability, several designs using hybrid permanent magnets have been developed. Four different prototypes have been built. All magnets have iron poles and use Samarium Cobalt to provide the magnetic fields. Two use rotating permanent magnetic material to vary the gradient, one uses a sliding shunt to vary the gradient and the fourth uses counter rotating magnets. Preliminary data on gradient strength, temperature stability, and magnetic center position stability are presented. These data are compared to an equivalent electromagnetic prototype.
Extended Generalized Linear Latent and Mixed Model
ERIC Educational Resources Information Center
Segawa, Eisuke; Emery, Sherry; Curry, Susan J.
2008-01-01
The generalized linear latent and mixed modeling (GLLAMM) framework includes many models, such as hierarchical and structural equation models. However, GLLAMM cannot currently accommodate some models because it does not allow some parameters to be random. GLLAMM is extended to overcome the limitation by adding a submodel that specifies a…
Identification of general linear mechanical systems
NASA Technical Reports Server (NTRS)
Sirlin, S. W.; Longman, R. W.; Juang, J. N.
1983-01-01
Previous work in identification theory has been concerned with the general first-order time-derivative form. Linear mechanical systems, a large and important class, naturally have a second-order form. This paper utilizes this additional structural information for the purpose of identification. A realization is obtained from input-output data, and then knowledge of the system input, output, and inertia matrices is used to determine a set of linear equations whereby we identify the remaining unknown system matrices. Necessary and sufficient conditions on the number, type, and placement of sensors and actuators are given which guarantee identifiability, and less stringent conditions are given which guarantee generic identifiability. Both a priori identifiability and a posteriori identifiability are considered, i.e., identifiability being ensured prior to obtaining data, and identifiability being assured with a given data set.
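The step the abstract describes, using the second-order structure plus a known inertia matrix to reduce identification to linear equations, can be sketched with synthetic data. The system matrices and sampling below are illustrative assumptions, not the paper's example.

```python
import numpy as np

# Sketch: for a mechanical system  M q'' + C q' + K q = 0, once M is
# known, samples of (q, q', q'') give linear equations in the entries
# of C and K, solvable by least squares.

rng = np.random.default_rng(2)
M = np.diag([2.0, 1.0])                       # known inertia matrix
C_true = np.array([[0.4, -0.1], [-0.1, 0.3]]) # "unknown" damping
K_true = np.array([[5.0, -2.0], [-2.0, 4.0]]) # "unknown" stiffness

n_samples = 50
q = rng.standard_normal((n_samples, 2))
qd = rng.standard_normal((n_samples, 2))
# accelerations consistent with the model (rows are samples)
qdd = -(qd @ C_true.T + q @ K_true.T) @ np.linalg.inv(M).T

# stack the linear equations  [qd q] [C^T; K^T] = -(qdd M^T)
X = np.hstack([qd, q])             # (n_samples, 4)
Y = -(qdd @ M.T)                   # (n_samples, 2)
theta, *_ = np.linalg.lstsq(X, Y, rcond=None)
C_est, K_est = theta[:2].T, theta[2:].T
```

With noise-free data the recovery is exact; the paper's contribution is characterizing when the sensor/actuator configuration makes such a recovery possible at all.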
Permutation inference for the general linear model
Winkler, Anderson M.; Ridgway, Gerard R.; Webster, Matthew A.; Smith, Stephen M.; Nichols, Thomas E.
2014-01-01
Permutation methods can provide exact control of false positives and allow the use of non-standard statistics, making only weak assumptions about the data. With the availability of fast and inexpensive computing, their main limitation would be some lack of flexibility to work with arbitrary experimental designs. In this paper we report results on approximate permutation methods that are more flexible with respect to the experimental design and nuisance variables, and conduct detailed simulations to identify the best method for settings that are typical for imaging research scenarios. We present a generic framework for permutation inference for complex general linear models (GLMs) when the errors are exchangeable and/or have a symmetric distribution, and show that, even in the presence of nuisance effects, these permutation inferences are powerful while providing excellent control of false positives in a wide range of common and relevant imaging research scenarios. We also demonstrate how the inference on GLM parameters, originally intended for independent data, can be used in certain special but useful cases in which independence is violated. Detailed examples of common neuroimaging applications are provided, as well as a complete algorithm, the "randomise" algorithm, for permutation inference with the GLM. PMID:24530839
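The basic mechanism of permutation inference for one GLM contrast can be sketched in a few lines. This is a bare one-regressor case without nuisance orthogonalization, so it is an illustration of the principle only, not the paper's full "randomise" algorithm; the data are synthetic.

```python
import numpy as np

# Sketch: permutation test for a regression slope. Under the null the
# pairing of y with x is exchangeable, so we permute y, refit, and
# compare the observed statistic to the permutation distribution.

rng = np.random.default_rng(3)
n = 40
x = rng.standard_normal(n)                 # regressor of interest
y = 0.5 * x + rng.standard_normal(n)       # data with a real effect

def slope(x, y):
    X = np.column_stack([np.ones_like(x), x])
    return np.linalg.lstsq(X, y, rcond=None)[0][1]

t_obs = slope(x, y)
n_perm = 999
null = np.array([slope(x, rng.permutation(y)) for _ in range(n_perm)])

# two-sided permutation p-value, observed statistic included
p = (1 + np.sum(np.abs(null) >= abs(t_obs))) / (n_perm + 1)
```

With nuisance regressors present, the permutation must respect them (e.g. permuting residuals under a reduced model), which is where the approximate methods the paper compares come in.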
GENERALIZED PARTIALLY LINEAR MIXED-EFFECTS MODELS INCORPORATING MISMEASURED COVARIATES
Liang, Hua
2009-01-01
In this article we consider a semiparametric generalized mixed-effects model, and propose combining local linear regression, and penalized quasilikelihood and local quasilikelihood techniques to estimate both population and individual parameters and nonparametric curves. The proposed estimators take into account the local correlation structure of the longitudinal data. We establish normality for the estimators of the parameter and asymptotic expansion for the estimators of the nonparametric part. For practical implementation, we propose an appropriate algorithm. We also consider the measurement error problem in covariates in our model, and suggest a strategy for adjusting the effects of measurement errors. We apply the proposed models and methods to study the relation between virologic and immunologic responses in AIDS clinical trials, in which virologic response is classified into binary variables. A dataset from an AIDS clinical study is analyzed. PMID:20160899
Irreducible Characters of General Linear Superalgebra and Super Duality
NASA Astrophysics Data System (ADS)
Cheng, Shun-Jen; Lam, Ngau
2010-09-01
We develop a new method to solve the irreducible character problem for a wide class of modules over the general linear superalgebra, including all the finite-dimensional modules, by directly relating the problem to the classical Kazhdan-Lusztig theory. Furthermore, we prove that certain parabolic BGG categories over the general linear algebra and over the general linear superalgebra are equivalent. We also verify a parabolic version of a conjecture of Brundan on the irreducible characters in the BGG category of the general linear superalgebra.
ERIC Educational Resources Information Center
Cheong, Yuk Fai; Kamata, Akihito
2013-01-01
In this article, we discuss and illustrate two centering and anchoring options available in differential item functioning (DIF) detection studies based on the hierarchical generalized linear and generalized linear mixed modeling frameworks. We compared and contrasted the assumptions of the two options, and examined the properties of their DIF…
Linear stability of general magnetically insulated electron flow
NASA Astrophysics Data System (ADS)
Swegle, J. A.; Mendel, C. W., Jr.; Seidel, D. B.; Quintenz, J. P.
1984-03-01
A linear stability theory for magnetically insulated systems was formulated by linearizing the general 3-D, time-dependent theory of Mendel, Seidel, and Slutz. It is found that, in the case of electron trajectories which are nearly laminar, with only small transverse motion, several suggestive simplifications occur in the eigenvalue equations.
Linear stability of general magnetically insulated electron flow
Swegle, J.A.; Mendel, C.W. Jr.; Seidel, D.B.; Quintenz, J.P.
1984-01-01
We have formulated a linear stability theory for magnetically insulated systems by linearizing the general 3-D, time-dependent theory of Mendel, Seidel, and Slutz. In the physically interesting case of electron trajectories which are nearly laminar, with only small transverse motion, we have found that several suggestive simplifications occur in the eigenvalue equations.
The General Linear Model and Direct Standardization: A Comparison.
ERIC Educational Resources Information Center
Little, Roderick J. A.; Pullum, Thomas W.
1979-01-01
Two methods of analyzing nonorthogonal (uneven cell sizes) cross-classified data sets are compared. The methods are direct standardization and the general linear model. The authors illustrate when direct standardization may be a desirable method of analysis. (JKS)
Adaptive Error Estimation in Linearized Ocean General Circulation Models
NASA Technical Reports Server (NTRS)
Chechelnitsky, Michael Y.
1999-01-01
representation error, i.e. the dominance of the mesoscale eddies in the T/P signal, which are not part of the 2° by 1° GCM. Therefore, the impact of the observations on the assimilation is very small even after the adjustment of the error statistics. This work demonstrates that simultaneous estimation of the model and measurement error statistics for data assimilation with global ocean data sets and linearized GCMs is possible. However, the error covariance estimation problem is in general highly underdetermined, much more so than the state estimation problem. In other words, there exist a very large number of statistical models that can be made consistent with the available data. Therefore, methods for obtaining quantitative error estimates, powerful though they may be, cannot replace physical insight. Used in the right context, as a tool for guiding the choice of a small number of model error parameters, covariance matching can be a useful addition to the repertory of tools available to oceanographers.
From linear to generalized linear mixed models: A case study in repeated measures
Technology Transfer Automated Retrieval System (TEKTRAN)
Compared to traditional linear mixed models, generalized linear mixed models (GLMMs) can offer better correspondence between response variables and explanatory models, yielding more efficient estimates and tests in the analysis of data from designed experiments. Using proportion data from a designed...
Adjustable Permanent Quadrupoles Using Rotating Magnet Material Rods for the Next Linear Collider
James T Volk et al.
2001-09-24
The proposed Next Linear Collider (NLC) will require over 1400 adjustable quadrupoles between the main linacs' accelerator structures. These 12.7 mm bore quadrupoles will have a range of integrated strength from 0.6 to 132 Tesla, with a maximum gradient of 135 Tesla per meter, an adjustment range of +0-20% and effective lengths from 324 mm to 972 mm. The magnetic center must remain stable to within 1 micrometer during the 20% adjustment. In an effort to reduce estimated costs and increase reliability, several designs using hybrid permanent magnets have been developed. All magnets have iron poles and use either Samarium Cobalt or Neodymium Iron to provide the magnetic fields. Two prototypes use rotating rods containing permanent magnetic material to vary the gradient. Gradient changes of 20% and center shifts of less than 20 microns have been measured. These data are compared to an equivalent electromagnet prototype.
Linear equations in general purpose codes for stiff ODEs
Shampine, L. F.
1980-02-01
It is noted that it is possible to significantly improve the handling of linear problems in a general-purpose code with very little trouble to the user or change to the code. In such situations, analytical evaluation of the Jacobian is much cheaper than numerical differencing. A slight change in the point at which the Jacobian is evaluated results in a more accurate Jacobian for linear problems. (RWR)
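The observation is concrete: for a linear system y' = A y + b, the Jacobian of the right-hand side is exactly the constant matrix A, so a stiff solver can be handed it directly instead of rebuilding it by finite differences every time. A minimal sketch with an illustrative stiff system:

```python
import numpy as np

# For y' = A y + b the Jacobian df/dy is exactly A: constant, exact,
# and free, whereas finite differencing costs extra f-evaluations and
# introduces truncation error.

A = np.array([[-100.0, 1.0],
              [0.0,   -0.1]])    # illustrative stiff linear system
b = np.zeros(2)

def rhs(t, y):
    return A @ y + b

def jac(t, y):
    return A                     # analytical: no differencing needed

def jac_fd(t, y, eps=1e-6):
    """Finite-difference Jacobian, for comparison."""
    f0 = rhs(t, y)
    J = np.zeros((2, 2))
    for j in range(2):
        yp = y.copy()
        yp[j] += eps
        J[:, j] = (rhs(t, yp) - f0) / eps
    return J

y = np.array([1.0, 1.0])
assert np.allclose(jac(0, y), jac_fd(0, y), atol=1e-3)
```

In a stiff integrator the analytical `jac` would be passed as the Jacobian callback; for linear problems it is also exact, so the Newton iteration inside the solver converges in one step.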
Optimal explicit strong-stability-preserving general linear methods.
Constantinescu, E.; Sandu, A.
2010-07-01
This paper constructs strong-stability-preserving general linear time-stepping methods that are well suited for hyperbolic PDEs discretized by the method of lines. These methods generalize both Runge-Kutta (RK) and linear multistep schemes. They have high stage orders and hence are less susceptible than RK methods to order reduction from source terms or nonhomogeneous boundary conditions. A global optimization strategy is used to find the most efficient schemes that have low storage requirements. Numerical results illustrate the theoretical findings.
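The general linear methods in this abstract generalize simpler SSP schemes. As a point of reference, a minimal sketch of the classical two-stage SSP Runge-Kutta method (Shu-Osher form, a convex combination of forward Euler steps) applied to a scalar linear test problem; the step size and problem are illustrative.

```python
import numpy as np

# SSPRK(2,2) in Shu-Osher form: each stage is a forward Euler step,
# and the update is a convex combination, which is what preserves
# strong stability properties of the Euler step.

def ssprk2(f, y, dt):
    y1 = y + dt * f(y)                        # forward Euler stage
    return 0.5 * y + 0.5 * (y1 + dt * f(y1))  # convex combination

lam = -1.0
f = lambda y: lam * y     # test problem y' = -y, exact solution e^{-t}

y, dt = 1.0, 0.1
for _ in range(10):       # integrate to t = 1
    y = ssprk2(f, y, dt)
```

The optimized general linear methods of the paper keep this convex-combination structure but carry more internal stages and steps, buying higher stage order than an RK scheme of the same accuracy.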
A review of some extensions to generalized linear models.
Lindsey, J K
Although generalized linear models are reasonably well known, they are not as widely used in medical statistics as might be appropriate, with the exception of logistic, log-linear, and some survival models. At the same time, the generalized linear modelling methodology is decidedly outdated in that more powerful methods, involving wider classes of distributions, non-linear regression, censoring and dependence among responses, are required. Limitations of the generalized linear modelling approach include the need for the iterated weighted least squares (IWLS) procedure for estimation and deviances for inferences; these restrict the class of models that can be used and do not allow direct comparisons among models from different distributions. Powerful non-linear optimization routines are now available and comparisons can more fruitfully be made using the complete likelihood function. The link function is an artefact, necessary for IWLS to function with linear models, but that disappears once the class is extended to truly non-linear models. Restricting comparisons of responses under different treatments to differences in means can be extremely misleading if the shape of the distribution is changing. This may involve changes in dispersion, or of other shape-related parameters such as the skewness in a stable distribution, with the treatments or covariates. Any exact likelihood function, defined as the probability of the observed data, takes into account the fact that all observable data are interval censored, thus directly encompassing the various types of censoring possible with duration-type data. In most situations this can now be as easily used as the traditional approximate likelihood based on densities. Finally, methods are required for incorporating dependencies among responses in models including conditioning on previous history and on random effects. One important procedure for constructing such likelihoods is based on Kalman filtering. PMID:10474135
Beam envelope calculations in general linear coupled lattices
Chung, Moses; Qin, Hong; Groening, Lars; Xiao, Chen; Davidson, Ronald C.
2015-01-15
The envelope equations and Twiss parameters (β and α) provide important bases for uncoupled linear beam dynamics. For sophisticated beam manipulations, however, coupling elements between two transverse planes are intentionally introduced. The recently developed generalized Courant-Snyder theory offers an effective way of describing the linear beam dynamics in such coupled systems with a remarkably similar mathematical structure to the original Courant-Snyder theory. In this work, we present numerical solutions to the symmetrized matrix envelope equation for β which removes the gauge freedom in the matrix envelope equation for w. Furthermore, we construct the transfer and beam matrices in terms of the generalized Twiss parameters, which enables calculation of the beam envelopes in arbitrary linear coupled systems.
Beam envelope calculations in general linear coupled lattices
NASA Astrophysics Data System (ADS)
Chung, Moses; Qin, Hong; Groening, Lars; Davidson, Ronald C.; Xiao, Chen
2015-01-01
The envelope equations and Twiss parameters (β and α) provide important bases for uncoupled linear beam dynamics. For sophisticated beam manipulations, however, coupling elements between two transverse planes are intentionally introduced. The recently developed generalized Courant-Snyder theory offers an effective way of describing the linear beam dynamics in such coupled systems with a remarkably similar mathematical structure to the original Courant-Snyder theory. In this work, we present numerical solutions to the symmetrized matrix envelope equation for β which removes the gauge freedom in the matrix envelope equation for w. Furthermore, we construct the transfer and beam matrices in terms of the generalized Twiss parameters, which enables calculation of the beam envelopes in arbitrary linear coupled systems.
Confidence Intervals for Assessing Heterogeneity in Generalized Linear Mixed Models
ERIC Educational Resources Information Center
Wagler, Amy E.
2014-01-01
Generalized linear mixed models are frequently applied to data with clustered categorical outcomes. The effect of clustering on the response is often difficult to practically assess partly because it is reported on a scale on which comparisons with regression parameters are difficult to make. This article proposes confidence intervals for…
Canonical Correlation Analysis as the General Linear Model.
ERIC Educational Resources Information Center
Vidal, Sherry
The concept of the general linear model (GLM) is illustrated and how canonical correlation analysis is the GLM is explained, using a heuristic data set to demonstrate how canonical correlation analysis subsumes various multivariate and univariate methods. The paper shows how each of these analyses produces a synthetic variable, like the Yhat…
NASA Astrophysics Data System (ADS)
Tian, J. J.; Yao, Y.
2011-03-01
We report an experimental demonstration of a multiwavelength erbium-doped fiber laser with adjustable wavelength number based on a power-symmetric nonlinear optical loop mirror (NOLM) in a linear cavity. The intensity-dependent loss (IDL) induced by the NOLM is used to suppress mode competition and realize stable multiwavelength oscillation. Control of the wavelength number is achieved by adjusting the strength of the IDL, which depends on the pump power. As the pump power increases from 40 to 408 mW, 1-7 lasing lines at fixed wavelengths around 1601 nm are obtained. The output power stability is also investigated: the maximum power fluctuation of a single wavelength is less than 0.9 dB as the wavelength number is increased from 1 to 7.
The generalized sidelobe canceller based on quaternion widely linear processing.
Tao, Jian-wu; Chang, Wen-xiu
2014-01-01
We investigate the problem of quaternion beamforming based on widely linear processing. First, a quaternion model of linear symmetric array with two-component electromagnetic (EM) vector sensors is presented. Based on array's quaternion model, we propose the general expression of a quaternion semiwidely linear (QSWL) beamformer. Unlike the complex widely linear beamformer, the QSWL beamformer is based on the simultaneous operation on the quaternion vector, which is composed of two jointly proper complex vectors, and its involution counterpart. Second, we propose a useful implementation of QSWL beamformer, that is, QSWL generalized sidelobe canceller (GSC), and derive the simple expressions of the weight vectors. The QSWL GSC consists of two-stage beamformers. By designing the weight vectors of two-stage beamformers, the interference is completely canceled in the output of QSWL GSC and the desired signal is not distorted. We derive the array's gain expression and analyze the performance of the QSWL GSC in the presence of one type of interference. The advantage of QSWL GSC is that the main beam can always point to the desired signal's direction and the robustness to DOA mismatch is improved. Finally, simulations are used to verify the performance of the proposed QSWL GSC. PMID:24955425
The Generalized Sidelobe Canceller Based on Quaternion Widely Linear Processing
Tao, Jian-wu; Chang, Wen-xiu
2014-01-01
We investigate the problem of quaternion beamforming based on widely linear processing. First, a quaternion model of linear symmetric array with two-component electromagnetic (EM) vector sensors is presented. Based on array's quaternion model, we propose the general expression of a quaternion semiwidely linear (QSWL) beamformer. Unlike the complex widely linear beamformer, the QSWL beamformer is based on the simultaneous operation on the quaternion vector, which is composed of two jointly proper complex vectors, and its involution counterpart. Second, we propose a useful implementation of QSWL beamformer, that is, QSWL generalized sidelobe canceller (GSC), and derive the simple expressions of the weight vectors. The QSWL GSC consists of two-stage beamformers. By designing the weight vectors of two-stage beamformers, the interference is completely canceled in the output of QSWL GSC and the desired signal is not distorted. We derive the array's gain expression and analyze the performance of the QSWL GSC in the presence of one type of interference. The advantage of QSWL GSC is that the main beam can always point to the desired signal's direction and the robustness to DOA mismatch is improved. Finally, simulations are used to verify the performance of the proposed QSWL GSC. PMID:24955425
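The two-branch structure these abstracts build on can be illustrated with the classical complex-valued GSC: a fixed upper branch steered at the desired signal and an adaptive lower branch fed through a blocking matrix that removes it. This sketch is the standard complex GSC only, not the paper's quaternion semi-widely-linear version; the array size and direction are illustrative.

```python
import numpy as np

# Classical GSC skeleton for a uniform linear array:
#   w = w_q - B @ w_a
# where w_q is the quiescent beamformer, B blocks the desired signal,
# and w_a adapts freely without distorting that signal.

n = 4                                   # sensors, half-wavelength spacing
theta = np.pi / 6                       # desired direction (illustrative)
d = np.exp(1j * np.pi * np.arange(n) * np.sin(theta))   # steering vector

w_q = d / n                             # quiescent weights: w_q^H d = 1
# blocking matrix: orthonormal basis of the complement of span{d}
B = np.linalg.svd(d.reshape(-1, 1), full_matrices=True)[0][:, 1:]
assert np.allclose(B.conj().T @ d, 0, atol=1e-12)

w_a = np.zeros(n - 1, dtype=complex)    # adaptive weights (unadapted here)
w = w_q - B @ w_a
# the desired signal always passes with unit gain, whatever w_a is
assert np.isclose(w.conj() @ d, 1.0)
```

Because B annihilates d, adapting w_a to minimize output power cancels interference without touching the desired signal, which is the distortionless property the QSWL GSC carries over to the quaternion domain.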
Estimating classification images with generalized linear and additive models.
Knoblauch, Kenneth; Maloney, Laurence T
2008-12-22
Conventional approaches to modeling classification image data can be described in terms of a standard linear model (LM). We show how the problem can be characterized as a Generalized Linear Model (GLM) with a Bernoulli distribution. We demonstrate via simulation that this approach is more accurate in estimating the underlying template in the absence of internal noise. With increasing internal noise, however, the advantage of the GLM over the LM decreases and GLM is no more accurate than LM. We then introduce the Generalized Additive Model (GAM), an extension of GLM that can be used to estimate smooth classification images adaptively. We show that this approach is more robust to the presence of internal noise, and finally, we demonstrate that GAM is readily adapted to estimation of higher order (nonlinear) classification images and to testing their significance.
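The GLM formulation in this abstract can be sketched end to end: the observer's binary response is modeled as Bernoulli with a logit that is linear in the noise field, and the fitted weights estimate the template. The simulation below uses plain gradient ascent on the Bernoulli log-likelihood; the template, trial counts, and learning rate are illustrative.

```python
import numpy as np

# Sketch: classification-image estimation as a Bernoulli GLM.
# Responses are generated from a known template, then recovered by
# maximizing the logistic log-likelihood.

rng = np.random.default_rng(4)
n_trials, n_pix = 2000, 16
template = np.zeros(n_pix)
template[5:9] = 1.0                    # "true" observer template

noise = rng.standard_normal((n_trials, n_pix))   # stimulus noise fields
p = 1 / (1 + np.exp(-(noise @ template)))        # Bernoulli GLM
resp = (rng.random(n_trials) < p).astype(float)  # simulated responses

w = np.zeros(n_pix)
for _ in range(500):                   # gradient ascent on log-likelihood
    pred = 1 / (1 + np.exp(-(noise @ w)))
    w += 0.1 / n_trials * noise.T @ (resp - pred)

# recovered weights should correlate with the generating template
r = np.corrcoef(w, template)[0, 1]
```

The conventional LM estimate would instead average noise fields by response class; the GLM fit above uses the Bernoulli likelihood directly, which is the accuracy advantage the paper demonstrates in the low-internal-noise regime.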
Credibility analysis of risk classes by generalized linear model
NASA Astrophysics Data System (ADS)
Erdemir, Ovgucan Karadag; Sucu, Meral
2016-06-01
In this paper, the generalized linear model (GLM) and credibility theory, which are frequently used in non-life insurance pricing, are combined for credibility analysis. Using the full credibility standard, the GLM is associated with the limited fluctuation credibility approach. Comparison criteria such as asymptotic variance and credibility probability are used to analyze the credibility of risk classes. An application is performed using one-year claim frequency data from a Turkish insurance company, and the results for credible risk classes are interpreted.
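The limited fluctuation ingredient the abstract combines with the GLM can be sketched directly: the class estimate is blended with the portfolio mean using a credibility factor Z = min(1, sqrt(n / n_full)). The frequencies and exposure counts below are illustrative; 1082 is a commonly used full-credibility standard for claim counts (90% probability of being within 5%).

```python
import numpy as np

# Sketch of limited-fluctuation (classical) credibility weighting:
#   premium = Z * class_estimate + (1 - Z) * portfolio_mean
# with Z growing as sqrt(exposure) until full credibility is reached.

def credibility_factor(n, n_full):
    return min(1.0, np.sqrt(n / n_full))

n_full = 1082        # common full-credibility standard for frequencies
mu = 0.12            # portfolio-wide expected claim frequency
x_class = 0.18       # observed frequency in one risk class
n = 400              # exposures in that class

Z = credibility_factor(n, n_full)
premium = Z * x_class + (1 - Z) * mu
```

In the paper's setting, the class estimate comes from a fitted GLM rather than a raw class mean, and the comparison criteria (asymptotic variance, credibility probability) judge how reliable each class's Z-weighted estimate is.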
Generalization of continuous-variable quantum cloning with linear optics
NASA Astrophysics Data System (ADS)
Zhai, Zehui; Guo, Juan; Gao, Jiangrui
2006-05-01
We propose an asymmetric quantum cloning scheme. Based on the proposal and experiment by Andersen et al. [Phys. Rev. Lett. 94, 240503 (2005)], we generalize it to two asymmetric cases: quantum cloning with asymmetry between output clones and between quadrature variables. These optical implementations also employ linear elements and homodyne detection only. Finally, we also compare the utility of symmetric and asymmetric cloning in an analysis of a squeezed-state quantum key distribution protocol and find that the asymmetric one is more advantageous.
Linear spin-2 fields in most general backgrounds
NASA Astrophysics Data System (ADS)
Bernard, Laura; Deffayet, Cédric; Schmidt-May, Angnis; von Strauss, Mikael
2016-04-01
We derive the full perturbative equations of motion for the most general background solutions in ghost-free bimetric theory in its metric formulation. Clever field redefinitions at the level of fluctuations enable us to circumvent the problem of varying a square-root matrix appearing in the theory. This greatly simplifies the expressions for the linear variation of the bimetric interaction terms. We show that these field redefinitions exist and are uniquely invertible if and only if the variation of the square-root matrix itself has a unique solution, which is a requirement for the linearized theory to be well defined. As an application of our results we examine the constraint structure of ghost-free bimetric theory at the level of linear equations of motion for the first time. We identify a scalar combination of equations which is responsible for the absence of the Boulware-Deser ghost mode in the theory. The bimetric scalar constraint is in general not manifestly covariant in its nature. However, in the massive gravity limit the constraint assumes a covariant form when one of the interaction parameters is set to zero. For that case our analysis provides an alternative and almost trivial proof of the absence of the Boulware-Deser ghost. Our findings generalize previous results in the metric formulation of massive gravity and also agree with studies of its vielbein version.
NASA Astrophysics Data System (ADS)
Nordtvedt, K.
2015-11-01
A local system of bodies in General Relativity whose exterior metric field asymptotically approaches the Minkowski metric effaces any effects of the matter distribution exterior to its Minkowski boundary condition. To enforce this property of gravity, which appears to hold in nature, to all orders, a method using linear algebraic scaling equations is developed which generates, by an iterative process, an N-body Lagrangian expansion for gravity's motion-independent potentials that fulfills exterior effacement, along with the needed metric potential expansions. Additional properties of gravity - interior effacement and Lorentz time dilation and spatial contraction - then produce further iterative, linear algebraic equations for obtaining the full non-linear and motion-dependent N-body gravity Lagrangian potentials.
Comparative Study of Algorithms for Automated Generalization of Linear Objects
NASA Astrophysics Data System (ADS)
Azimjon, S.; Gupta, P. K.; Sukhmani, R. S. G. S.
2014-11-01
Automated generalization, rooted in conventional cartography, has become an increasing concern in both the geographic information system (GIS) and mapping fields. All geographic phenomena and processes are bound to scale, as it is impossible for human beings to observe the Earth and the processes within it without reducing its scale. To obtain optimal results, cartographers and map-making agencies develop sets of rules and constraints; however, these rules remain under discussion and have been the topic of much research up to the present day. Developing automated map generalization algorithms can reduce map production time and add objectivity (McMaster and Shea, 1988). Scale modification is traditionally a manual process that requires the knowledge of an expert cartographer and depends on the experience of the user, which makes the process highly subjective, as different users may generate different maps from the same requirements. Automating generalization based on cartographic rules and constraints, by contrast, can give consistent results; moreover, an automated system for map generation is a demand of this rapidly changing world. The research we have conducted considers only the generalization of roads, as they are one of the indispensable parts of a map. Dehradun city, in the Uttarakhand state of India, was selected as the study area. We carried out a comparative study of the currently available generalization software, operations and algorithms, also considering the advantages and drawbacks of the existing software used worldwide. The research concludes with the development of a road network generalization tool and a final generalized road map of the study area, which explores the use of the open-source Python programming language and compares different road network generalization algorithms. Thus, the paper discusses alternative solutions for the automated generalization of linear objects using GIS technologies. Research made on automated road network
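As one concrete example of the kind of algorithm such a tool compares, the classic Douglas-Peucker line simplification (a standard choice for road generalization; the paper's actual algorithm set is not specified here) can be sketched as:

```python
import numpy as np

def douglas_peucker(points, tol):
    """Simplify a polyline (N x 2 array) to within perpendicular tolerance tol."""
    points = np.asarray(points, dtype=float)
    if len(points) < 3:
        return points
    start, end = points[0], points[-1]
    seg = end - start
    seg_len = np.hypot(seg[0], seg[1])
    if seg_len == 0.0:
        dists = np.linalg.norm(points - start, axis=1)
    else:
        # perpendicular distance of every vertex to the chord start-end
        rel = points - start
        dists = np.abs(seg[0] * rel[:, 1] - seg[1] * rel[:, 0]) / seg_len
    idx = int(np.argmax(dists))
    if dists[idx] > tol:
        # keep the farthest vertex and recurse on both halves
        left = douglas_peucker(points[: idx + 1], tol)
        right = douglas_peucker(points[idx:], tol)
        return np.vstack([left[:-1], right])
    return np.vstack([start, end])

# a wiggly "road": a densely sampled sine curve, simplified at 0.1 tolerance
x = np.linspace(0.0, 10.0, 200)
road = np.column_stack([x, np.sin(x)])
simplified = douglas_peucker(road, 0.1)
```

The tolerance plays the role of a cartographic constraint: every discarded vertex lies within the stated distance of the simplified line.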
Parametrizing linear generalized Langevin dynamics from explicit molecular dynamics simulations
Gottwald, Fabian; Karsten, Sven; Ivanov, Sergei D.; Kühn, Oliver
2015-06-28
Fundamental understanding of complex dynamics in many-particle systems on the atomistic level is of utmost importance. Often the systems of interest are of macroscopic size but can be partitioned into a few important degrees of freedom, which are treated most accurately, and others which constitute a thermal bath. Particular attention in this respect is attracted by the linear generalized Langevin equation, which can be rigorously derived by means of a linear projection technique. Within this framework, a complicated interaction with the bath can be reduced to a single memory kernel. This memory kernel in turn is parametrized for the particular system studied, usually by means of time-domain methods based on explicit molecular dynamics data. Here, we argue that this task is more naturally achieved in the frequency domain and develop a Fourier-based parametrization method that outperforms its time-domain analogues. Surprisingly, the widely used rigid bond method turns out to be inappropriate in general. Importantly, we show that the rigid bond approach leads to a systematic overestimation of relaxation times, unless the system under study consists of a harmonic bath bi-linearly coupled to the relevant degrees of freedom.
Elastic capsule deformation in general irrotational linear flows
Szatmary, Alex C.; Eggleton, Charles D.
2012-01-01
Knowledge of the response of elastic capsules to imposed fluid flow is necessary for predicting deformation and motion of biological cells and synthetic capsules in microfluidic devices and in the microcirculation. Capsules have been studied in shear, planar extensional, and axisymmetric extensional flows. Here, the flow gradient matrix of a general irrotational linear flow is characterized by two parameters, its strain rate, defined as the maximum of the principal strain rates, and by a new term, q, the difference in the two lesser principal strain rates, scaled by the maximum principal strain rate; this characterization is valid for ellipsoids in irrotational linear flow, and it gives good results for spheres in general linear flows at low capillary numbers. We demonstrate that deformable non-spherical particles align with the principal axes of an imposed irrotational flow. Thus, it is most practical to model deformation of non-spherical particles already aligned with the flow, rather than considering each arbitrary orientation. Capsule deformation was modeled for a sphere, a prolate spheroid, and an oblate spheroid, subjected to combinations of uniaxial, biaxial, and planar extensional flows; modeling was performed using the immersed boundary method. The time response of each capsule to each flow was found, as were the steady-state deformation factor, mean strain energy, and surface area. For a given capillary number, planar flows led to more deformation than uniaxial or biaxial extensional flows. Capsule behavior in all cases was bounded by the response of capsules to uniaxial, biaxial, and planar extensional flow. PMID:23426110
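One plausible reading of the two-parameter characterization above (the exact convention is the paper's; the construction below assumes only incompressibility and the stated definitions of the strain rate and q):

```python
import numpy as np

def flow_gradient(strain_rate, q):
    """Velocity-gradient matrix of an incompressible irrotational linear flow.

    e1 is the maximum principal strain rate and q = (e2 - e3) / e1 is the
    scaled difference of the two lesser principal strain rates; with
    incompressibility e1 + e2 + e3 = 0 this fixes all three rates.
    """
    e1 = float(strain_rate)
    e2 = e1 * (q - 1.0) / 2.0
    e3 = -e1 * (q + 1.0) / 2.0
    return np.diag([e1, e2, e3])

uniaxial = flow_gradient(1.0, 0.0)   # diag(1, -1/2, -1/2)
planar = flow_gradient(1.0, 1.0)     # diag(1, 0, -1)

# irrotational (symmetric) and incompressible (traceless) by construction
for E in (uniaxial, planar):
    assert np.allclose(E, E.T) and abs(np.trace(E)) < 1e-12
```

Under this parametrization q = 0 recovers uniaxial extension and q = 1 planar extension, so sweeping q interpolates across the family of flows the capsules were subjected to.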
General linear mode conversion coefficient in one dimension
NASA Astrophysics Data System (ADS)
Littlejohn, Robert G.; Flynn, William G.
1993-03-01
A general formula is presented for the mode conversion coefficient for linear mode conversion in one dimension, in terms of an arbitrary 2×2 reduced dispersion matrix describing the coupling of the modes. The mode conversion coefficient has three invariance properties which are discussed, namely, invariance under scaling transformations, canonical transformations, and a certain kind of Lorentz transformation. Formulas for the S matrix of mode conversion are also presented. The example of the conversion of electromagnetic waves to electrostatic waves in the ionosphere is used to illustrate the formulas.
Generalized space and linear momentum operators in quantum mechanics
da Costa, Bruno G.; Borges, Ernesto P.
2014-06-15
We propose a modification of a recently introduced generalized translation operator, by including a q-exponential factor, which implies the definition of a Hermitian deformed linear momentum operator $\hat{p}_q$ and its canonically conjugate deformed position operator $\hat{x}_q$. A canonical transformation leads the Hamiltonian of a position-dependent mass particle to another Hamiltonian of a particle with constant mass in a conservative force field of a deformed phase space. The equation of motion for the classical phase space may be expressed in terms of the generalized dual q-derivative. A position-dependent mass confined in an infinite square potential well is shown as an instance. Uncertainty and correspondence principles are analyzed.
The rotational feedback on linear-momentum balance in glacial isostatic adjustment
NASA Astrophysics Data System (ADS)
Martinec, Zdenek; Hagedoorn, Jan
2015-04-01
The influence of changes in surface ice-mass redistribution and associated viscoelastic response of the Earth, known as glacial-isostatic adjustment (GIA), on the Earth's rotational dynamics has long been known. Equally important is the effect of the changes in the rotational dynamics on the viscoelastic deformation of the Earth. This signal, known as the rotational feedback, or more precisely, the rotational feedback on the sea-level equation, has been mathematically described by the sea-level equation extended for the term that is proportional to perturbation in the centrifugal potential and the second-degree tidal Love number. The perturbation in the centrifugal force due to changes in the Earth's rotational dynamics enters not only into the sea-level equation, but also into the conservation law of linear momentum such that the internal viscoelastic force, the perturbation in the gravitational force and the perturbation in the centrifugal force are in balance. Adding the centrifugal-force perturbation to the linear-momentum balance creates an additional rotational feedback on the viscoelastic deformations of the Earth. We term this feedback mechanism as the rotational feedback on the linear-momentum balance. We extend both the time-domain method for modelling the GIA response of laterally heterogeneous earth models and the traditional Laplace-domain method for modelling the GIA-induced rotational response to surface loading by considering the rotational feedback on linear-momentum balance. The correctness of the mathematical extensions of the methods is validated numerically by comparing the polar motion response to the GIA process and the rotationally-induced degree 2 and order 1 spherical harmonic component of the surface vertical displacement and gravity field. We present the difference between the case where the rotational feedback on linear-momentum balance is considered against that where it is not. Numerical simulations show that the resulting difference
General mirror pairs for gauged linear sigma models
NASA Astrophysics Data System (ADS)
Aspinwall, Paul S.; Plesser, M. Ronen
2015-11-01
We carefully analyze the conditions for an abelian gauged linear σ-model to exhibit nontrivial IR behavior described by a nonsingular superconformal field theory determining a superstring vacuum. This is done without reference to a geometric phase, by associating singular behavior to a noncompact space of (semi-)classical vacua. We find that models determined by reflexive combinatorial data are nonsingular for generic values of their parameters. This condition has the pleasant feature that the mirror of a nonsingular gauged linear σ-model is another such model, but it is clearly too strong and we provide an example of a non-reflexive mirror pair. We discuss a weaker condition inspired by considering extremal transitions, which is also mirror symmetric and which we conjecture to be sufficient. We apply these ideas to extremal transitions and to understanding the way in which both Berglund-Hübsch mirror symmetry and the Vafa-Witten mirror orbifold with discrete torsion can be seen as special cases of the general combinatorial duality of gauged linear σ-models. In the former case we encounter an example showing that our weaker condition is still not necessary.
NASA Astrophysics Data System (ADS)
Brix, H.; Menemenlis, D.; Hill, C.; Dutkiewicz, S.; Jahn, O.; Wang, D.; Bowman, K.; Zhang, H.
2015-11-01
The NASA Carbon Monitoring System (CMS) Flux Project aims to attribute changes in the atmospheric accumulation of carbon dioxide to spatially resolved fluxes by utilizing the full suite of NASA data, models, and assimilation capabilities. For the oceanic part of this project, we introduce ECCO2-Darwin, a new ocean biogeochemistry general circulation model based on combining the following pre-existing components: (i) a full-depth, eddying, global-ocean configuration of the Massachusetts Institute of Technology general circulation model (MITgcm), (ii) an adjoint-method-based estimate of ocean circulation from the Estimating the Circulation and Climate of the Ocean, Phase II (ECCO2) project, (iii) the MIT ecosystem model "Darwin", and (iv) a marine carbon chemistry model. Air-sea gas exchange coefficients and initial conditions of dissolved inorganic carbon, alkalinity, and oxygen are adjusted using a Green's Functions approach in order to optimize modeled air-sea CO2 fluxes. Data constraints include observations of carbon dioxide partial pressure (pCO2) for 2009-2010, global air-sea CO2 flux estimates, and the seasonal cycle of the Takahashi et al. (2009) Atlas. The model sensitivity experiments (or Green's Functions) include simulations that start from different initial conditions as well as experiments that perturb air-sea gas exchange parameters and the ratio of particulate inorganic to organic carbon. The Green's Functions approach yields a linear combination of these sensitivity experiments that minimizes model-data differences. The resulting initial conditions and gas exchange coefficients are then used to integrate the ECCO2-Darwin model forward. Despite the small number (six) of control parameters, the adjusted simulation is significantly closer to the data constraints (37% cost function reduction, i.e., reduction in the model-data difference, relative to the baseline simulation) and to independent observations (e.g., alkalinity). The adjusted air-sea gas
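The Green's functions step described above reduces to a small linear least-squares problem; a hypothetical sketch with synthetic sensitivity experiments:

```python
import numpy as np

rng = np.random.default_rng(2)

# Each column of G holds one sensitivity experiment's response (relative to
# the baseline run) at the observation points; d is the observed-minus-
# baseline misfit. Dimensions and values are synthetic stand-ins.
n_obs, n_exp = 500, 6
G = rng.standard_normal((n_obs, n_exp))
eta_true = np.array([0.8, -0.3, 0.1, 0.5, -0.2, 0.05])
d = G @ eta_true + 0.1 * rng.standard_normal(n_obs)

# Green's functions weights: the linear combination of sensitivity
# experiments minimizing the model-data misfit in the least-squares sense
eta, *_ = np.linalg.lstsq(G, d, rcond=None)

baseline_cost = d @ d
adjusted_cost = float(np.sum((G @ eta - d) ** 2))
reduction = 1.0 - adjusted_cost / baseline_cost    # cost-function reduction
```

The recovered weights then define the adjusted initial conditions and gas exchange parameters used for the forward integration.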
Diagnostic Measures for Generalized Linear Models with Missing Covariates
ZHU, HONGTU; IBRAHIM, JOSEPH G.; SHI, XIAOYAN
2009-01-01
In this paper, we carry out an in-depth investigation of diagnostic measures for assessing the influence of observations and model misspecification in the presence of missing covariate data for generalized linear models. Our diagnostic measures include case-deletion measures and conditional residuals. We use the conditional residuals to construct goodness-of-fit statistics for testing possible misspecifications in model assumptions, including the sampling distribution. We develop specific strategies for incorporating missing data into goodness-of-fit statistics in order to increase the power of detecting model misspecification. A resampling method is proposed to approximate the p-value of the goodness-of-fit statistics. Simulation studies are conducted to evaluate our methods and a real data set is analysed to illustrate the use of our various diagnostic measures. PMID:20037674
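A case-deletion measure of the kind described can be sketched for a complete-data Poisson GLM (the missing-covariate machinery of the paper is omitted; the model and planted outlier are illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)

def fit_poisson(X, y, iters=50):
    # log-link Poisson GLM fitted by Fisher scoring
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        mu = np.exp(X @ beta)
        z = X @ beta + (y - mu) / mu               # working response
        beta = np.linalg.solve(X.T @ (mu[:, None] * X), X.T @ (mu * z))
    return beta

n = 100
X = np.column_stack([np.ones(n), rng.standard_normal(n)])
y = rng.poisson(np.exp(0.5 + 0.8 * X[:, 1])).astype(float)
X[0, 1], y[0] = 0.0, 60.0                          # plant one gross outlier

beta_full = fit_poisson(X, y)
info = X.T @ (np.exp(X @ beta_full)[:, None] * X)  # Fisher information

# case-deletion measure: scaled change in beta when observation i is dropped
cd = np.empty(n)
for i in range(n):
    mask = np.arange(n) != i
    diff = beta_full - fit_poisson(X[mask], y[mask])
    cd[i] = diff @ info @ diff / X.shape[1]

influential = int(np.argmax(cd))                   # flags the planted outlier
```

Refitting with each case removed is the brute-force version of the one-step approximations usually used; the paper's conditional residuals additionally account for the missing-covariate distribution.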
Optimization in generalized linear models: A case study
NASA Astrophysics Data System (ADS)
Silva, Eliana Costa e.; Correia, Aldina; Lopes, Isabel Cristina
2016-06-01
The maximum likelihood method is usually chosen to estimate the regression parameters of Generalized Linear Models (GLM) and also for hypothesis testing and goodness-of-fit tests. The classical method for estimating GLM parameters is Fisher scoring. In this work we propose to compute the estimates of the parameters with two alternative methods: a derivative-based optimization method, namely the BFGS method, which is one of the most popular quasi-Newton algorithms, and the PSwarm derivative-free optimization method, which combines features of a pattern search optimization method with a global Particle Swarm scheme. As a case study we use a dataset of biological parameters (phytoplankton) and chemical and environmental parameters of the water column of a Portuguese reservoir. The results show that, for this dataset, the BFGS and PSwarm methods provided a better fit than the Fisher scoring method and can be good alternatives for finding the estimates of the parameters of a GLM.
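A sketch of the derivative-based alternative for a logistic GLM (synthetic data; scipy's BFGS stands in for the authors' implementation, and a derivative-free method such as Nelder-Mead could stand in for PSwarm):

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(4)

# synthetic logistic-regression data
n = 400
X = np.column_stack([np.ones(n), rng.standard_normal((n, 2))])
beta_true = np.array([-0.5, 1.0, -2.0])
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-X @ beta_true))).astype(float)

def nll(beta):
    # negative Bernoulli log-likelihood, written stably with logaddexp
    eta = X @ beta
    return float(np.sum(np.logaddexp(0.0, eta) - y * eta))

def grad(beta):
    mu = 1.0 / (1.0 + np.exp(-(X @ beta)))
    return X.T @ (mu - y)

# quasi-Newton BFGS maximum-likelihood fit, an alternative to Fisher scoring
res = minimize(nll, np.zeros(3), jac=grad, method="BFGS")
beta_hat = res.x
```

Because the Bernoulli log-likelihood is concave, BFGS and Fisher scoring converge to the same maximum-likelihood estimate here; the comparison in the paper concerns speed and robustness on real data.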
Using parallel banded linear system solvers in generalized eigenvalue problems
NASA Technical Reports Server (NTRS)
Zhang, Hong; Moss, William F.
1994-01-01
Subspace iteration is a reliable and cost effective method for solving positive definite banded symmetric generalized eigenproblems, especially in the case of large scale problems. This paper discusses an algorithm that makes use of two parallel banded solvers in subspace iteration. A shift is introduced to decompose the banded linear systems into relatively independent subsystems and to accelerate the iterations. With this shift, an eigenproblem is mapped efficiently into the memories of a multiprocessor and a high speedup is obtained for parallel implementations. An optimal shift is a shift that balances total computation and communication costs. Under certain conditions, we show how to estimate an optimal shift analytically using the decay rate for the inverse of a banded matrix, and how to improve this estimate. Computational results on iPSC/2 and iPSC/860 multiprocessors are presented.
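A dense-algebra sketch of shifted subspace iteration for Ax = λBx; the paper's point is to perform the inner solves with parallel banded solvers, for which np.linalg.solve stands in here:

```python
import numpy as np

rng = np.random.default_rng(5)

def subspace_iteration(A, B, p=4, shift=0.0, iters=100):
    """Smallest eigenpairs of A x = lambda B x by shifted inverse subspace iteration."""
    n = A.shape[0]
    Q = np.linalg.qr(rng.standard_normal((n, p)))[0]
    K = A - shift * B        # in the paper this operator is factored by banded solvers
    for _ in range(iters):
        # one inverse-iteration step on the whole subspace, then re-orthogonalize
        Q = np.linalg.qr(np.linalg.solve(K, B @ Q))[0]
    # Rayleigh-Ritz projection onto the converged subspace
    Ar, Br = Q.T @ A @ Q, Q.T @ B @ Q
    Linv = np.linalg.inv(np.linalg.cholesky(Br))
    vals, W = np.linalg.eigh(Linv @ Ar @ Linv.T)
    return vals, Q @ Linv.T @ W

# banded SPD test pencil: 1-D stiffness and mass matrices (stored dense here)
n = 50
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
B = (4.0 * np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)) / 6.0

vals, vecs = subspace_iteration(A, B, p=4)
ref = np.sort(np.linalg.eigvals(np.linalg.solve(B, A)).real)[:4]
```

A nonzero shift moves the factored operator closer to the wanted eigenvalues, accelerating convergence and, in the parallel setting of the paper, decoupling the banded subsystems across processors.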
Adjusting for Health Status in Non-Linear Models of Health Care Disparities
Cook, Benjamin L.; McGuire, Thomas G.; Meara, Ellen; Zaslavsky, Alan M.
2009-01-01
This article compared conceptual and empirical strengths of alternative methods for estimating racial disparities using non-linear models of health care access. Three methods were presented (propensity score, rank and replace, and a combined method) that adjust for health status while allowing SES variables to mediate the relationship between race and access to care. Applying these methods to a nationally representative sample of blacks and non-Hispanic whites surveyed in the 2003 and 2004 Medical Expenditure Panel Surveys (MEPS), we assessed the concordance of each of these methods with the Institute of Medicine (IOM) definition of racial disparities, and empirically compared the methods' predicted disparity estimates, the variance of the estimates, and the sensitivity of the estimates to limitations of available data. The rank and replace and combined methods (but not the propensity score method) are concordant with the IOM definition of racial disparities in that each creates a comparison group with the appropriate marginal distributions of health status and SES variables. Predicted disparities and prediction variances were similar for the rank and replace and combined methods, but the rank and replace method was sensitive to limitations on SES information. For all methods, limiting health status information significantly reduced estimates of disparities compared to a more comprehensive dataset. We conclude that the two IOM-concordant methods were similar enough that either could be considered in disparity predictions. In datasets with limited SES information, the combined method is the better choice. PMID:20352070
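The rank-and-replace idea can be sketched on synthetic health-status indices (the distributions and group sizes are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(6)

# synthetic health-status indices for a majority and a minority group;
# the minority distribution is shifted downward (worse average health)
n_w, n_b = 4000, 1000
health_w = rng.normal(0.0, 1.0, n_w)
health_b = rng.normal(-0.4, 1.0, n_b)

# rank-and-replace: rank minority members on health status, then replace each
# value with the equivalently ranked value of the majority distribution,
# leaving SES variables (not modeled here) untouched
ranks = health_b.argsort().argsort()          # 0..n_b-1 rank of each member
quantiles = (ranks + 0.5) / n_b
health_b_adj = np.quantile(health_w, quantiles)

gap_before = health_w.mean() - health_b.mean()      # roughly the 0.4 shift
gap_after = health_w.mean() - health_b_adj.mean()   # roughly zero
```

After the transformation the groups share the same marginal health-status distribution, so remaining differences in predicted access flow through race and the untouched SES mediators, consistent with the IOM definition.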
Kuroda, S.; Okugi, T.; Tauchi, T.; Fujisawa, H.; Ichikawa, M.; Iwashita, Y.; Tajima, Y.; Kumada, M.; Spencer, Cherrill M.; /SLAC
2008-01-18
An adjustable permanent magnet quadrupole has been developed for the final focus (FF) in a linear collider. Recent activities include a newly fabricated inner ring to demonstrate the strongest field gradient at a smaller bore diameter of 15 mm and a magnetic field measurement system with a new rotating coil. The prospects of the R&D will be discussed.
Models for cultural inheritance: a general linear model.
Feldman, M W; Cavalli-Sforza, L L
1975-07-01
A theory of cultural evolution is proposed based on a general linear model of cultural transmission. The trait of an individual is assumed to depend on the values of the same trait in other individuals of the same, the previous, or earlier generations. The transmission matrix W has as its elements the proportional contributions of each individual (i) of one generation to each individual (j) of another. In addition, there is random variation (copy error or innovation) for each individual. Means and variances of a group of N individuals change with time and will stabilize asymptotically if the matrix W is irreducible and aperiodic. The rate of convergence is geometric and is governed by the largest non-unit eigenvalue of W. Groups fragment and evolve independently if W is reducible. The means of independent groups vary at random at a predicted rate, a phenomenon termed "random cultural drift". Variances within a group tend to be small, implying cultural homogeneity. Transmission matrices of the teacher/leader type and of the parental type have been specifically considered, as well as social hierarchies. Various limitations, extensions, and possible applications are discussed.
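The dynamics described can be simulated directly; the matrix, noise level, and group size below are arbitrary illustrations:

```python
import numpy as np

rng = np.random.default_rng(7)

N = 20
# transmission matrix W: row j holds the proportional contributions of each
# generation-t individual to individual j at t+1 (rows sum to one, so W is
# row-stochastic; a dense random W is irreducible and aperiodic)
W = rng.random((N, N))
W /= W.sum(axis=1, keepdims=True)

sigma = 0.05                        # copy-error / innovation standard deviation
x = rng.standard_normal(N)          # initial trait values

for _ in range(500):
    x = W @ x + sigma * rng.standard_normal(N)

# convergence is geometric, governed by the largest non-unit eigenvalue of W
eig_mags = np.sort(np.abs(np.linalg.eigvals(W)))
rate = eig_mags[-2]
within_var = x.var()                # small: within-group cultural homogeneity
```

The group mean keeps wandering under the injected innovation (random cultural drift), while the within-group variance settles at a small stationary level set by sigma and the subdominant eigenvalue.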
Bayesian Inference for Generalized Linear Models for Spiking Neurons
Gerwinn, Sebastian; Macke, Jakob H.; Bethge, Matthias
2010-01-01
Generalized Linear Models (GLMs) are commonly used statistical methods for modelling the relationship between neural population activity and presented stimuli. When the dimension of the parameter space is large, strong regularization has to be used in order to fit GLMs to datasets of realistic size without overfitting. By imposing properly chosen priors over parameters, Bayesian inference provides an effective and principled approach for achieving regularization. Here we show how the posterior distribution over model parameters of GLMs can be approximated by a Gaussian using the Expectation Propagation algorithm. In this way, we obtain an estimate of the posterior mean and posterior covariance, allowing us to calculate Bayesian confidence intervals that characterize the uncertainty about the optimal solution. From the posterior we also obtain a different point estimate, namely the posterior mean as opposed to the commonly used maximum a posteriori estimate. We systematically compare the different inference techniques on simulated as well as on multi-electrode recordings of retinal ganglion cells, and explore the effects of the chosen prior and the performance measure used. We find that good performance can be achieved by choosing a Laplace prior together with the posterior mean estimate. PMID:20577627
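In place of Expectation Propagation, a simpler Laplace approximation conveys the same idea of a Gaussian posterior with a mean and covariance (synthetic data; this is a stand-in sketch, not the paper's EP algorithm or its Laplace prior):

```python
import numpy as np

rng = np.random.default_rng(8)

# simulated "neuron": Bernoulli spiking driven by a linear stimulus filter
d, n = 5, 1000
S = rng.standard_normal((n, d))                    # stimuli
w_true = rng.standard_normal(d)                    # true filter
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-S @ w_true))).astype(float)

tau2 = 1.0                                         # Gaussian prior variance

# MAP estimate by Newton's method on the (concave) log posterior
w = np.zeros(d)
for _ in range(50):
    mu = 1.0 / (1.0 + np.exp(-(S @ w)))
    g = S.T @ (y - mu) - w / tau2                  # gradient of log posterior
    H = -(S.T @ ((mu * (1.0 - mu))[:, None] * S)) - np.eye(d) / tau2
    w = w - np.linalg.solve(H, g)

# Laplace approximation: Gaussian posterior with mean w, covariance -H^{-1};
# its diagonal yields Bayesian credible intervals for the filter weights
post_cov = np.linalg.inv(-H)
ci_half = 1.96 * np.sqrt(np.diag(post_cov))
```

For a Gaussian prior the Laplace mode coincides with the MAP estimate; the paper's point is that EP instead targets the posterior mean, which can behave better as a point estimate.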
A general protocol to afford enantioenriched linear homoprenylic amines.
Bosque, Irene; Foubelo, Francisco; Gonzalez-Gomez, Jose C
2013-11-21
The reaction of a readily obtained chiral branched homoprenylammonium salt with a range of aldehydes, including aliphatic substrates, affords the corresponding linear isomers in good yields and enantioselectivities.
Technology Transfer Automated Retrieval System (TEKTRAN)
A stochastic/linear program Excel workbook was developed consisting of two worksheets illustrating linear and stochastic program approaches. Both approaches used the Excel Solver add-in. A published linear program problem served as an example for the ingredients, nutrients and costs and as a benchma...
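A feed-mix linear program of the kind the workbook illustrates, solved with scipy rather than the Excel Solver add-in (the ingredient data are invented):

```python
import numpy as np
from scipy.optimize import linprog

# two-ingredient feed mix: minimize cost per kg of feed subject to minimum
# protein and energy requirements, with ingredient fractions summing to one
cost = np.array([0.30, 0.45])            # $/kg: corn, soybean meal
nutrients = np.array([[0.09, 0.44],      # protein fraction per kg
                      [3.4, 2.5]])       # energy (Mcal) per kg
required = np.array([0.18, 2.9])         # minima per kg of finished feed

res = linprog(
    c=cost,
    A_ub=-nutrients, b_ub=-required,     # ">=" constraints flipped to "<="
    A_eq=[[1.0, 1.0]], b_eq=[1.0],       # fractions sum to one
    bounds=[(0.0, 1.0)] * 2,
    method="highs",
)
mix = res.x                              # optimal ingredient fractions
```

The stochastic variant of the worksheet would replace the fixed nutrient contents with sampled scenarios and re-solve, which is exactly what makes a spreadsheet formulation attractive for teaching.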
Nonparametric Covariate-Adjusted Association Tests Based on the Generalized Kendall’s Tau*
Zhu, Wensheng; Jiang, Yuan; Zhang, Heping
2012-01-01
Identifying the risk factors for comorbidity is important in psychiatric research. Empirically, studies have shown that testing multiple, correlated traits simultaneously is more powerful than testing a single trait at a time in association analysis. Furthermore, for complex diseases, especially mental illnesses and behavioral disorders, the traits are often recorded in different scales such as dichotomous, ordinal and quantitative. In the absence of covariates, nonparametric association tests have been developed for multiple complex traits to study comorbidity. However, genetic studies generally contain measurements of some covariates that may affect the relationship between the risk factors of major interest (such as genes) and the outcomes. While it is relatively easy to adjust these covariates in a parametric model for quantitative traits, it is challenging for multiple complex traits with possibly different scales. In this article, we propose a nonparametric test for multiple complex traits that can adjust for covariate effects. The test aims to achieve an optimal scheme of adjustment by using a maximum statistic calculated from multiple adjusted test statistics. We derive the asymptotic null distribution of the maximum test statistic, and also propose a resampling approach, both of which can be used to assess the significance of our test. Simulations are conducted to compare the type I error and power of the nonparametric adjusted test to the unadjusted test and other existing adjusted tests. The empirical results suggest that our proposed test increases the power through adjustment for covariates when there exist environmental effects, and is more robust to model misspecifications than some existing parametric adjusted tests. We further demonstrate the advantage of our test by analyzing a data set on genetics of alcoholism. PMID:22745516
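The maximum-statistic idea with resampling-based significance can be sketched in a simplified form. The code below replaces the covariate-adjusted statistics of the paper with plain trait-genotype correlations and calibrates their maximum by permutation; it is a toy illustration of the scheme, not the proposed test.

```python
import numpy as np

def max_stat_test(G, Y, n_perm=2000, seed=0):
    """Maximum-statistic association test: take the largest absolute
    trait-genotype correlation across k traits and calibrate it by
    permuting the genotype vector (a simplified stand-in for the
    maximum over covariate-adjusted statistics)."""
    rng = np.random.default_rng(seed)

    def stat(g):
        gz = (g - g.mean()) / g.std()
        Yz = (Y - Y.mean(0)) / Y.std(0)
        return np.abs(gz @ Yz / len(g)).max()   # max |Pearson r| over traits

    observed = stat(G)
    perm = np.array([stat(rng.permutation(G)) for _ in range(n_perm)])
    p_value = (1 + (perm >= observed).sum()) / (1 + n_perm)
    return observed, p_value

rng = np.random.default_rng(1)
G = rng.normal(size=300)
# one truly associated trait, one null trait
Y = np.stack([2.0 * G + rng.normal(size=300), rng.normal(size=300)], axis=1)
obs, p = max_stat_test(G, Y)
```

The permutation calibration plays the role of the paper's resampling approach; the asymptotic null distribution of the maximum statistic derived in the paper would replace it in a real analysis.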
GENERAL: Linear Optical Scheme for Implementing Optimal Real State Cloning
NASA Astrophysics Data System (ADS)
Wan, Hong-Bo; Ye, Liu
2010-06-01
We propose an experimental scheme for implementing the optimal 1 → 3 real state cloning via linear optical elements. This method relies on one polarized qubit and two location qubits and is feasible with current experimental technology.
Generalizing a Categorization of Students' Interpretations of Linear Kinematics Graphs
ERIC Educational Resources Information Center
Bollen, Laurens; De Cock, Mieke; Zuza, Kristina; Guisasola, Jenaro; van Kampen, Paul
2016-01-01
We have investigated whether and how a categorization of responses to questions on linear distance-time graphs, based on a study of Irish students enrolled in an algebra-based course, could be adopted and adapted to responses from students enrolled in calculus-based physics courses at universities in Flanders, Belgium (KU Leuven) and the Basque…
PYESSENCE: Generalized Coupled Quintessence Linear Perturbation Python Code
NASA Astrophysics Data System (ADS)
Leithes, Alexander
2016-09-01
PYESSENCE evolves linearly perturbed coupled quintessence models with multiple cold dark matter (CDM) fluid species and multiple dark energy (DE) scalar fields, and can be used to generate quantities such as the growth factor of large scale structure for any coupled quintessence model with an arbitrary number of fields and fluids and arbitrary couplings.
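PYESSENCE itself is not reproduced here, but the kind of output it produces can be illustrated. The sketch below integrates the linear growth-factor ODE for a plain, uncoupled flat LambdaCDM model, a drastic simplification of the coupled-quintessence systems the code handles; the function name and parameter values are ours.

```python
import numpy as np
from scipy.integrate import solve_ivp

def growth_factor(om=0.3, a_init=1e-3):
    """Integrate the linear growth-factor ODE for uncoupled flat LambdaCDM,
    D'' + (2 + dlnH/dlna) D' = (3/2) Omega_m(a) D, in x = ln a,
    starting from the matter-dominated solution D ~ a."""
    ol = 1.0 - om

    def rhs(x, yv):
        a = np.exp(x)
        E2 = om * a**-3 + ol
        om_a = om * a**-3 / E2            # Omega_m(a)
        dlnH = -1.5 * om_a                # d ln H / d ln a for flat LCDM
        D, Dp = yv
        return [Dp, -(2.0 + dlnH) * Dp + 1.5 * om_a * D]

    return solve_ivp(rhs, [np.log(a_init), 0.0], [a_init, a_init], rtol=1e-8)

sol = growth_factor()
D_today = sol.y[0, -1]   # suppressed relative to the Einstein-de Sitter value D = a = 1
```

In a coupled model the CDM fluids and scalar fields would add source terms and extra equations to this system; here dark energy only enters through the background expansion.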
A General Linear Method for Equating with Small Samples
ERIC Educational Resources Information Center
Albano, Anthony D.
2015-01-01
Research on equating with small samples has shown that methods with stronger assumptions and fewer statistical estimates can lead to decreased error in the estimated equating function. This article introduces a new approach to linear observed-score equating, one which provides flexible control over how form difficulty is assumed versus estimated…
Guisan, A.; Edwards, T.C.; Hastie, T.
2002-01-01
An important statistical development of the last 30 years has been the advance in regression analysis provided by generalized linear models (GLMs) and generalized additive models (GAMs). Here we introduce a series of papers prepared within the framework of an international workshop entitled: Advances in GLMs/GAMs modeling: from species distribution to environmental management, held in Riederalp, Switzerland, 6-11 August 2001. We first discuss some general uses of statistical models in ecology, as well as provide a short review of several key examples of the use of GLMs and GAMs in ecological modeling efforts. We next present an overview of GLMs and GAMs, and discuss some of their related statistics used for predictor selection, model diagnostics, and evaluation. Included is a discussion of several new approaches applicable to GLMs and GAMs, such as ridge regression, an alternative to stepwise selection of predictors, and methods for the identification of interactions by a combined use of regression trees and several other approaches. We close with an overview of the papers and how we feel they advance our understanding of their application to ecological modeling. © 2002 Elsevier Science B.V. All rights reserved.
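One of the approaches mentioned above, ridge regression as an alternative to stepwise predictor selection, has a one-line closed form in the Gaussian case. A minimal sketch (function name is ours; real GLM ridge fitting would penalize within an iteratively reweighted least-squares loop):

```python
import numpy as np

def ridge_fit(X, y, lam=1.0):
    """Ridge-penalized least squares: shrinks all coefficients toward zero
    instead of discarding predictors, solving (X'X + lam I) beta = X'y."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2))
y = X @ np.array([1.0, 2.0])          # noiseless linear response
beta = ridge_fit(X, y, lam=1e-8)      # tiny penalty -> essentially OLS
```

As lam grows, coefficients shrink smoothly, which is the behavior that makes ridge a continuous alternative to all-or-nothing stepwise selection.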
Rossi, D J; Kress, D D; Tess, M W; Burfening, P J
1992-05-01
Standard linear adjustment of weaning weight to a constant age has been shown to introduce bias in the adjusted weight due to nonlinear growth from birth to weaning of beef calves. Ten years of field records from the five strains of Beefbooster Cattle Alberta Ltd. seed stock herds were used to investigate the use of correction factors to adjust standard 180-d weight (WT180) for this bias. Statistical analyses were performed within strain and followed three steps: 1) the full data set was split into an estimation set (ES) and a validation set (VS), 2) WT180 from the ES was used to develop estimates of correction factors using a model including herd (H), year (YR), age of dam (DA), sex of calf (S), all two- and three-way interactions, and any significant linear and quadratic covariates of calf age at weaning deviated from 180 d (DEVCA) and interactions between DEVCA and DA, S or DA x S, and 3) significant DEVCA coefficients were used to correct WT180 from the VS, then WT180 and the corrected weight (WTCOR) from the VS were analyzed with the same model as in Step 2 and the significance of DEVCA terms was compared. Two types of data splitting were used. Adjusted R2 was calculated to describe the proportion of total variation of DEVCA terms explained for WT180 from the ES. The DEVCA terms explained 0.08 to 1.54% of the total variation for the five strains. Linear and quadratic correction factors were both positive and negative. Bias in WT180 from the ES within 180 +/- 35 d of age ranged from 2.8 to 21.7 kg. (ABSTRACT TRUNCATED AT 250 WORDS) PMID:1526901
Generalizing a categorization of students' interpretations of linear kinematics graphs
NASA Astrophysics Data System (ADS)
Bollen, Laurens; De Cock, Mieke; Zuza, Kristina; Guisasola, Jenaro; van Kampen, Paul
2016-06-01
We have investigated whether and how a categorization of responses to questions on linear distance-time graphs, based on a study of Irish students enrolled in an algebra-based course, could be adopted and adapted to responses from students enrolled in calculus-based physics courses at universities in Flanders, Belgium (KU Leuven) and the Basque Country, Spain (University of the Basque Country). We discuss how we adapted the categorization to accommodate a much more diverse student cohort and explain how the prior knowledge of students may account for many differences in the prevalence of approaches and success rates. Although calculus-based physics students make fewer mistakes than algebra-based physics students, they encounter similar difficulties that are often related to incorrectly dividing two coordinates. We verified that a qualitative understanding of kinematics is an important but not sufficient condition for students to determine a correct value for the speed. When comparing responses to questions on linear distance-time graphs with responses to isomorphic questions on linear water level versus time graphs, we observed that the context of a question influences the approach students use. Neither qualitative understanding nor an ability to find the slope of a context-free graph proved to be a reliable predictor for the approach students use when they determine the instantaneous speed.
The general RF tuning for IH-DTL linear accelerators
NASA Astrophysics Data System (ADS)
Lu, Y. R.; Ratzinger, U.; Schlitt, B.; Tiede, R.
2007-11-01
RF tuning is essential for achieving the design resonant frequency and a flat electric field distribution along the axis of RF accelerating structures. Six different tuning concepts, and their impacts on the longitudinal field distribution, are discussed in detail in the context of the RF tuning of a 1:2 scale model of the compact 20.85 MV IH-DTL cavity, which was designed to accelerate protons, helium, oxygen or C4+ ions from 400 keV/u to 7 MeV/u and to serve as the linear injector of a 430 MeV/u synchrotron [Y.R. Lu, S. Minaev, U. Ratzinger, B. Schlitt, R. Tiede, The Compact 20 MV IH-DTL for the Heidelberg Therapy Facility, in: Proceedings of the LINAC Conference, Luebeck, Germany, 2004; Y.R. Lu, Frankfurt University Dissertation, 2005].
Jian, Shih-Jie; Kou, Chwung-Shan; Hwang, Jennchang; Lee, Chein-Dhau; Lin, Wei-Cheng
2013-06-15
A method for controlling the pretilt angles of liquid crystals (LC) was developed. Hexamethyldisiloxane polymer films were first deposited on indium tin oxide coated glass plates using a linear atmospheric pressure plasma source. The films were subsequently treated with the rubbing method for LC alignment. Fourier transform infrared spectroscopy and X-ray photoelectron spectroscopy measurements were used to characterize the film composition, which could be varied to control the surface energy by adjusting the monomer feed rate and input power. The results of LC alignment experiments showed that the pretilt angle continuously increased from 0° to 90° with decreasing film surface energy.
Item Purification in Differential Item Functioning Using Generalized Linear Mixed Models
ERIC Educational Resources Information Center
Liu, Qian
2011-01-01
For this dissertation, four item purification procedures were implemented onto the generalized linear mixed model for differential item functioning (DIF) analysis, and the performance of these item purification procedures was investigated through a series of simulations. Among the four procedures, forward and generalized linear mixed model (GLMM)…
NASA Astrophysics Data System (ADS)
Rudy, Ashley C. A.; Lamoureux, Scott F.; Treitz, Paul; van Ewijk, Karin Y.
2016-07-01
To effectively assess and mitigate the risk of permafrost disturbance, disturbance-prone areas can be predicted through the application of susceptibility models. In this study we developed regional susceptibility models for permafrost disturbances using a field disturbance inventory to test the transferability of the model to a broader region in the Canadian High Arctic. Resulting maps of susceptibility were then used to explore the effect of terrain variables on the occurrence of disturbances within this region. To account for a large range of landscape characteristics, the model was calibrated using two locations: Sabine Peninsula, Melville Island, NU, and Fosheim Peninsula, Ellesmere Island, NU. Spatial patterns of disturbance were predicted with a generalized linear model (GLM) and a generalized additive model (GAM), each calibrated using disturbed and randomized undisturbed locations from both sites and GIS-derived terrain predictor variables including slope, potential incoming solar radiation, wetness index, topographic position index, elevation, and distance to water. Each model was validated for the Sabine and Fosheim Peninsulas using independent data sets, while the transferability of the model to an independent site was assessed at Cape Bounty, Melville Island, NU. The regional GLM and GAM validated well for both calibration sites (Sabine and Fosheim), with areas under the receiver operating curve (AUROC) > 0.79. Both models were applied directly to Cape Bounty without calibration and validated equally, with AUROCs of 0.76; however, each model predicted disturbed and undisturbed samples differently. Additionally, the sensitivity of the transferred model was assessed using data sets with different sample sizes. Results indicated that models based on larger sample sizes transferred more consistently and captured the variability within the terrain attributes in the respective study areas. Terrain attributes associated with the initiation of disturbances were
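The validation metric used throughout the study, the area under the receiver operating curve (AUROC), can be computed directly from the Mann-Whitney rank-sum identity. A minimal sketch (function name is ours; it assumes no tied scores):

```python
import numpy as np

def auroc(scores, labels):
    """Area under the ROC curve via the Mann-Whitney rank-sum identity:
    the probability that a randomly chosen disturbed site receives a
    higher susceptibility score than a randomly chosen undisturbed one."""
    order = np.argsort(scores)
    ranks = np.empty(len(scores))
    ranks[order] = np.arange(1, len(scores) + 1)
    n_pos = labels.sum()
    n_neg = len(labels) - n_pos
    return (ranks[labels == 1].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

# perfectly separating susceptibility scores give AUROC = 1
scores = np.array([0.1, 0.2, 0.8, 0.9])
labels = np.array([0, 0, 1, 1])
```

An AUROC of 0.5 corresponds to chance-level prediction, which is why values above roughly 0.76-0.79, as reported above, indicate useful transferability.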
Computer analysis of general linear networks using digraphs.
NASA Technical Reports Server (NTRS)
Mcclenahan, J. O.; Chan, S.-P.
1972-01-01
Investigation of the application of digraphs in analyzing general electronic networks, and development of a computer program based on a particular digraph method developed by Chen. The Chen digraph method is a topological method for the solution of networks and serves as a shortcut when hand calculations are required. The advantage offered by this method of analysis is that the results are in symbolic form. It is limited, however, by the size of network that may be handled. Hand calculations usually become too tedious for networks larger than about five nodes, depending on how many elements the network contains, and direct determinant expansion for a five-node network is likewise laborious.
Sigurdson, J F; Wallander, J; Sund, A M
2014-10-01
The aim was to examine prospectively associations between bullying involvement at 14-15 years of age and self-reported general health and psychosocial adjustment in young adulthood, at 26-27 years of age. A large representative sample (N=2,464) was recruited and assessed in two counties in Mid-Norway in 1998 (T1) and 1999/2000 (T2) when the respondents had a mean age of 13.7 and 14.9, respectively, leading to classification as being bullied, bully-victim, being aggressive toward others or non-involved. Information about general health and psychosocial adjustment was gathered at a follow-up in 2012 (T4) (N=1,266) with a respondent mean age of 27.2. Logistic regression and ANOVA analyses showed that groups involved in bullying of any type in adolescence had increased risk for lower education as young adults compared to those non-involved. The group aggressive toward others also had a higher risk of being unemployed and receiving any kind of social help. Compared with the non-involved, those being bullied and bully-victims had increased risk of poor general health and high levels of pain. Bully-victims and those aggressive toward others during adolescence subsequently had increased risk of tobacco use and lower job functioning than non-involved. Further, those being bullied and aggressive toward others had increased risk of illegal drug use. Relations to live-in spouse/partner were poorer among those being bullied. Involvement in bullying, either as victim or perpetrator, has significant social costs even 12 years after the bullying experience. Accordingly, it will be important to provide early intervention for those involved in bullying in adolescence. PMID:24972719
NASA Astrophysics Data System (ADS)
Saltogianni, Vasso; Stiros, Stathis
2012-11-01
The adjustment of systems of highly non-linear, redundant equations, derived from observations of certain geophysical processes and geodetic data, cannot be based on conventional least-squares techniques and instead relies on various numerical inversion techniques. Still, these techniques lead to solutions trapped in local minima, to correlated estimates and to solutions with poor error control. To overcome these problems, we propose an alternative numerical-topological approach inspired by lighthouse beacon navigation, usually used in 2-D, low-accuracy applications. In our approach, an m-dimensional grid G of points around the real solution (an m-dimensional vector) is first specified. Then, for each equation an uncertainty is assigned to the corresponding measurement, and the set of grid points consistent with that measurement to within its uncertainty is detected. This process is repeated for all equations, and the intersection A of these sets of grid points is formed. From this set of grid points, which defines a space containing the real solution, we compute its center of weight, which corresponds to an estimate of the solution, and its variance-covariance matrix. An optimal solution can be obtained through optimization of the uncertainty in each observation. The efficiency of the overall process was assessed in comparison with conventional least-squares adjustment.
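The grid procedure described above can be sketched generically: for each equation, keep the grid points whose predicted observation falls within the assigned uncertainty, intersect the sets, and summarize the surviving region by its centroid and covariance. The function name and the toy two-beacon geometry below are illustrative assumptions.

```python
import numpy as np

def grid_inversion(equations, observations, sigmas, grid_axes):
    """Keep every grid point consistent with each observation to within its
    assigned uncertainty, then return the centroid (solution estimate) and
    the variance-covariance matrix of the surviving set."""
    mesh = np.meshgrid(*grid_axes, indexing="ij")
    pts = np.stack([m.ravel() for m in mesh], axis=1)   # (n_points, m)
    mask = np.ones(len(pts), dtype=bool)
    for f, obs, s in zip(equations, observations, sigmas):
        mask &= np.abs(f(pts) - obs) <= s               # one set per equation
    accepted = pts[mask]                                # intersection A
    return accepted.mean(axis=0), np.cov(accepted, rowvar=False), accepted

# toy 2-D "beacon" example with the true solution at (1, 2)
f1 = lambda p: np.hypot(p[:, 0], p[:, 1])          # distance to beacon (0, 0)
f2 = lambda p: np.hypot(p[:, 0] - 3.0, p[:, 1])    # distance to beacon (3, 0)
axes = [np.linspace(0.0, 3.0, 61), np.linspace(0.0, 3.0, 61)]
center, covm, accepted = grid_inversion(
    [f1, f2], [np.hypot(1.0, 2.0), np.hypot(2.0, 2.0)], [0.1, 0.1], axes)
```

Because the whole grid is tested, the method cannot be trapped in a local minimum the way iterative inversion can, at the cost of exponential growth of the grid with the dimension m.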
The Generalized Logit-Linear Item Response Model for Binary-Designed Items
ERIC Educational Resources Information Center
Revuelta, Javier
2008-01-01
This paper introduces the generalized logit-linear item response model (GLLIRM), which represents the item-solving process as a series of dichotomous operations or steps. The GLLIRM assumes that the probability function of the item response is a logistic function of a linear composite of basic parameters which describe the operations, and the…
NASA Astrophysics Data System (ADS)
Fan, Ya-Jing; Cao, Huai-Xin; Meng, Hui-Xian; Chen, Liang
2016-09-01
The uncertainty principle in quantum mechanics is a fundamental relation with different forms, including Heisenberg's uncertainty relation and Schrödinger's uncertainty relation. In this paper, we prove a Schrödinger-type uncertainty relation in terms of generalized metric adjusted skew information and correlation measure by using operator monotone functions, which reads $U_\rho^{(g,f)}(A)\,U_\rho^{(g,f)}(B) \ge \frac{f(0)^2 l}{k}\,\bigl|\mathrm{Corr}_\rho^{s(g,f)}(A,B)\bigr|^2$ for some operator monotone functions $f$ and $g$, all $n$-dimensional observables $A$, $B$ and a non-singular density matrix $\rho$. As applications, we derive some new uncertainty relations for Wigner-Yanase skew information and Wigner-Yanase-Dyson skew information.
Generalized Functional Linear Models for Gene-based Case-Control Association Studies
Mills, James L.; Carter, Tonia C.; Lobach, Iryna; Wilson, Alexander F.; Bailey-Wilson, Joan E.; Weeks, Daniel E.; Xiong, Momiao
2014-01-01
By using functional data analysis techniques, we developed generalized functional linear models for testing association between a dichotomous trait and multiple genetic variants in a genetic region while adjusting for covariates. Both fixed and mixed effect models are developed and compared. Extensive simulations show that Rao's efficient score tests of the fixed effect models are very conservative since they generate lower type I errors than nominal levels, and global tests of the mixed effect models generate accurate type I errors. Furthermore, we found that the Rao's efficient score test statistics of the fixed effect models have higher power than the sequence kernel association test (SKAT) and its optimal unified version (SKAT-O) in most cases when the causal variants are both rare and common. When the causal variants are all rare (i.e., minor allele frequencies less than 0.03), the Rao's efficient score test statistics and the global tests have similar or slightly lower power than SKAT and SKAT-O. In practice, it is not known whether rare variants or common variants in a gene are disease-related. All we can assume is that a combination of rare and common variants influences disease susceptibility. Thus, the improved performance of our models when the causal variants are both rare and common shows that the proposed models can be very useful in dissecting complex traits. We compare the performance of our methods with SKAT and SKAT-O on real neural tube defects and Hirschsprung's disease data sets. The Rao's efficient score test statistics and the global tests are more sensitive than SKAT and SKAT-O in the real data analysis. Our methods can be used in either gene-disease genome-wide/exome-wide association studies or candidate gene analyses. PMID:25203683
Code of Federal Regulations, 2011 CFR
2011-07-01
... default rates and for appealing their consequences. 668.208 Section 668.208 Education Regulations of the... EDUCATION STUDENT ASSISTANCE GENERAL PROVISIONS Cohort Default Rates § 668.208 General requirements for adjusting official cohort default rates and for appealing their consequences. (a) Remaining eligible. You...
Code of Federal Regulations, 2012 CFR
2012-07-01
... default rates and for appealing their consequences. 668.189 Section 668.189 Education Regulations of the... EDUCATION STUDENT ASSISTANCE GENERAL PROVISIONS Two Year Cohort Default Rates § 668.189 General requirements for adjusting official cohort default rates and for appealing their consequences. (a)...
Code of Federal Regulations, 2014 CFR
2014-07-01
... default rates and for appealing their consequences. 668.208 Section 668.208 Education Regulations of the... EDUCATION STUDENT ASSISTANCE GENERAL PROVISIONS Cohort Default Rates § 668.208 General requirements for adjusting official cohort default rates and for appealing their consequences. (a) Remaining eligible. You...
Code of Federal Regulations, 2012 CFR
2012-07-01
... default rates and for appealing their consequences. 668.208 Section 668.208 Education Regulations of the... EDUCATION STUDENT ASSISTANCE GENERAL PROVISIONS Cohort Default Rates § 668.208 General requirements for adjusting official cohort default rates and for appealing their consequences. (a) Remaining eligible. You...
Code of Federal Regulations, 2013 CFR
2013-07-01
... default rates and for appealing their consequences. 668.208 Section 668.208 Education Regulations of the... EDUCATION STUDENT ASSISTANCE GENERAL PROVISIONS Cohort Default Rates § 668.208 General requirements for adjusting official cohort default rates and for appealing their consequences. (a) Remaining eligible. You...
Code of Federal Regulations, 2010 CFR
2010-07-01
... default rates and for appealing their consequences. 668.208 Section 668.208 Education Regulations of the... EDUCATION STUDENT ASSISTANCE GENERAL PROVISIONS Cohort Default Rates § 668.208 General requirements for adjusting official cohort default rates and for appealing their consequences. (a) Remaining eligible. You...
Code of Federal Regulations, 2014 CFR
2014-07-01
... default rates and for appealing their consequences. 668.189 Section 668.189 Education Regulations of the... EDUCATION STUDENT ASSISTANCE GENERAL PROVISIONS Two Year Cohort Default Rates § 668.189 General requirements for adjusting official cohort default rates and for appealing their consequences. (a)...
Code of Federal Regulations, 2013 CFR
2013-07-01
... default rates and for appealing their consequences. 668.189 Section 668.189 Education Regulations of the... EDUCATION STUDENT ASSISTANCE GENERAL PROVISIONS Two Year Cohort Default Rates § 668.189 General requirements for adjusting official cohort default rates and for appealing their consequences. (a)...
Code of Federal Regulations, 2011 CFR
2011-07-01
... default rates and for appealing their consequences. 668.189 Section 668.189 Education Regulations of the... EDUCATION STUDENT ASSISTANCE GENERAL PROVISIONS Two Year Cohort Default Rates § 668.189 General requirements for adjusting official cohort default rates and for appealing their consequences. (a)...
Code of Federal Regulations, 2010 CFR
2010-07-01
... default rates and for appealing their consequences. 668.189 Section 668.189 Education Regulations of the... EDUCATION STUDENT ASSISTANCE GENERAL PROVISIONS Two Year Cohort Default Rates § 668.189 General requirements for adjusting official cohort default rates and for appealing their consequences. (a)...
Zhao, Yingfeng; Liu, Sanyang
2016-01-01
We present a practical branch and bound algorithm for globally solving the generalized linear multiplicative programming problem with multiplicative constraints. To solve the problem, a relaxation programming problem, which is equivalent to a linear program, is constructed by utilizing a new two-phase relaxation technique. In the algorithm, lower and upper bounds are simultaneously obtained by solving some linear relaxation programming problems. Global convergence has been proved, and results on some sample examples and a small random experiment show that the proposed algorithm is feasible and efficient. PMID:27547676
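The branch-and-bound skeleton, lower bounds from a relaxation plus an incumbent upper bound with bisection branching, can be sketched on a toy instance. The code below minimizes a product of two linear functions over a box using interval bounds rather than the paper's two-phase linear relaxation, so it illustrates the algorithmic structure only; all names are ours.

```python
import numpy as np

def linear_range(c, d, lo, hi):
    """Exact range of the linear form c.x + d over the box [lo, hi]."""
    mn = d + np.where(c > 0, c * lo, c * hi).sum()
    mx = d + np.where(c > 0, c * hi, c * lo).sum()
    return mn, mx

def branch_and_bound(c1, d1, c2, d2, lo, hi, tol=1e-4):
    """Globally minimize (c1.x + d1)(c2.x + d2) over a box: interval lower
    bounds for pruning, box centres as upper bounds, bisection of the
    widest coordinate for branching."""
    def value(x):
        return (c1 @ x + d1) * (c2 @ x + d2)

    best_x = (lo + hi) / 2.0
    best = value(best_x)
    boxes = [(lo.copy(), hi.copy())]
    while boxes:
        l, u = boxes.pop()
        r1, r2 = linear_range(c1, d1, l, u), linear_range(c2, d2, l, u)
        lower = min(a * b for a in r1 for b in r2)   # interval product bound
        if lower >= best - tol:
            continue                                 # prune: cannot improve
        x = (l + u) / 2.0
        if value(x) < best:
            best, best_x = value(x), x               # update incumbent
        k = int(np.argmax(u - l))                    # branch on widest coordinate
        mid = (l[k] + u[k]) / 2.0
        u1, l2 = u.copy(), l.copy()
        u1[k], l2[k] = mid, mid
        boxes += [(l, u1), (l2, u)]
    return best, best_x

# toy instance: minimize (x1 + x2)(x1 - x2 + 3) over [0, 1]^2, minimum 0 at (0, 0)
best, best_x = branch_and_bound(np.array([1.0, 1.0]), 0.0,
                                np.array([1.0, -1.0]), 3.0,
                                np.zeros(2), np.ones(2))
```

A tighter relaxation, such as the linear program of the paper, prunes boxes sooner and is what makes the method practical beyond toy dimensions.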
Generalized Signaling for Control: Evidence from Postconflict and Posterror Performance Adjustments
ERIC Educational Resources Information Center
Cho, Raymond Y.; Orr, Joseph M.; Cohen, Jonathan D.; Carter, Cameron S.
2009-01-01
Goal-directed behavior requires cognitive control to effect online adjustments in response to ongoing processing demands. How signaling for these adjustments occurs has been a question of much interest. A basic question regarding the architecture of the cognitive control system is whether such signaling for control is specific to task context or…
NASA Technical Reports Server (NTRS)
Rankin, C. C.
1988-01-01
A consistent linearization is provided for the element-dependent corotational formulation, providing the proper first and second variation of the strain energy. As a result, the warping problem that has plagued flat elements has been overcome, with beneficial effects carried over to linear solutions. True Newton quadratic convergence has been restored to the Structural Analysis of General Shells (STAGS) code for conservative loading using the full corotational implementation. Some implications for general finite element analysis are discussed, including what effect the automatic frame invariance provided by this work might have on the development of new, improved elements.
Optimal explicit strong-stability-preserving general linear methods: complete results.
Constantinescu, E. M.; Sandu, A.; Mathematics and Computer Science; Virginia Polytechnic Inst. and State Univ.
2009-03-03
This paper constructs strong-stability-preserving general linear time-stepping methods that are well suited for hyperbolic PDEs discretized by the method of lines. These methods generalize both Runge-Kutta (RK) and linear multistep schemes. They have high stage orders and hence are less susceptible than RK methods to order reduction from source terms or nonhomogeneous boundary conditions. A global optimization strategy is used to find the most efficient schemes that have low storage requirements. Numerical results illustrate the theoretical findings.
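The optimized general linear methods of the paper are not reproduced here, but the classical third-order strong-stability-preserving Runge-Kutta scheme (Shu-Osher form), a special case of the class they generalize, is compact enough to sketch:

```python
import numpy as np

def ssp_rk3_step(f, u, t, dt):
    """One step of the classical third-order strong-stability-preserving
    Runge-Kutta scheme written as convex combinations of forward Euler
    stages (the property that preserves strong stability)."""
    u1 = u + dt * f(t, u)                                # Euler stage
    u2 = 0.75 * u + 0.25 * (u1 + dt * f(t + dt, u1))     # convex combination
    return u / 3.0 + (2.0 / 3.0) * (u2 + dt * f(t + 0.5 * dt, u2))

# integrate u' = -u from u(0) = 1 to t = 1
f = lambda t, u: -u
u, t, dt = 1.0, 0.0, 0.01
for _ in range(100):
    u = ssp_rk3_step(f, u, t, dt)
    t += dt
```

Because every stage is a convex combination of forward Euler steps, any monotonicity or total-variation bound satisfied by forward Euler carries over, which is the design goal the paper's general linear methods extend with higher stage orders.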
Carrasco, Josep L
2010-09-01
The classical concordance correlation coefficient (CCC) for measuring agreement among a set of observers assumes normally distributed data and a linear relationship between the mean and the subject and observer effects. Here, the CCC is generalized to accommodate any distribution from the exponential family by means of generalized linear mixed model (GLMM) theory and applied to the case of overdispersed count data. An example of CD34+ cell count data is provided to show the applicability of the procedure. In the latter case, different CCCs are defined and applied to the data by changing the GLMM that fits the data. A simulation study is carried out to explore the behavior of the procedure with small and moderate sample sizes.
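The classical normal-linear CCC that the paper generalizes has a simple closed form, Lin's coefficient; a minimal sketch (function name is ours):

```python
import numpy as np

def concordance_cc(x, y):
    """Lin's classical concordance correlation coefficient: penalizes both
    lack of correlation and systematic location/scale differences between
    two raters' measurements."""
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()            # population (1/n) variances
    sxy = ((x - mx) * (y - my)).mean()
    return 2.0 * sxy / (vx + vy + (mx - my) ** 2)

x = np.array([10.0, 12.0, 14.0, 16.0])
shifted = x + 2.0                        # perfectly correlated but biased
```

Perfect agreement gives a CCC of 1, while a constant bias between observers lowers it even though the Pearson correlation stays at 1; the GLMM generalization replaces the normal variance components in this formula with ones appropriate to the assumed exponential-family distribution.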
Tsai, Miao-Yu
2015-03-01
The problem of variable selection in the generalized linear-mixed models (GLMMs) is pervasive in statistical practice. For the purpose of variable selection, many methodologies for determining the best subset of explanatory variables currently exist according to the model complexity and differences between applications. In this paper, we develop a "higher posterior probability model with bootstrap" (HPMB) approach to select explanatory variables without fitting all possible GLMMs involving a small or moderate number of explanatory variables. Furthermore, to save computational load, we propose an efficient approximation approach with Laplace's method and Taylor's expansion to approximate intractable integrals in GLMMs. Simulation studies and an application of HapMap data provide evidence that this selection approach is computationally feasible and reliable for exploring true candidate genes and gene-gene associations, after adjusting for complex structures among clusters.
NASA Astrophysics Data System (ADS)
Volk, Wolfram; Suh, Joungsik
2013-12-01
The prediction of formability is one of the most important tasks in sheet metal forming simulation. The common criterion in industrial applications is the Forming Limit Curve (FLC). The big advantage of FLCs is the easy interpretation of simulation or measurement data, in combination with an ISO standard for their experimental determination. However, conventional FLCs are limited to almost linear and unbroken strain paths; deformation histories with non-linear strain increments often lead to large deviations from the FLC prediction. In this paper a phenomenological approach, the so-called Generalized Forming Limit Concept (GFLC), is introduced to predict localized necking for arbitrary deformation histories with an unlimited number of non-linear strain increments. The GFLC combines the conventional FLC with a manageable number of experiments with bi-linear deformation histories. Using the newly defined "Principle of Equivalent Pre-Forming", every deformation state built up of two linear strain increments can be transformed into a pure linear strain path with the same used formability of the material, and this procedure can be repeated as often as necessary. It therefore allows a robust and cost-effective analysis of incipient instability in Finite Element Analysis (FEA) for arbitrary deformation histories. In addition, the GFLC is fully downward compatible with the established FLC for pure linear strain paths.
A BGG-Type Resolution for Tensor Modules over General Linear Superalgebra
NASA Astrophysics Data System (ADS)
Cheng, Shun-Jen; Kwon, Jae-Hoon; Lam, Ngau
2008-04-01
We construct a Bernstein-Gelfand-Gelfand (BGG) type resolution in terms of direct sums of Kac modules for the finite-dimensional irreducible tensor representations of the general linear superalgebra. As a consequence it follows that the unique maximal submodule of a corresponding reducible Kac module is generated by its proper singular vector.
Time series models based on generalized linear models: some further results.
Li, W K
1994-06-01
This paper considers the problem of extending the classical moving average models to time series with conditional distributions given by generalized linear models. These models have the advantage of easy construction and estimation. Statistical modelling techniques are also proposed. Simulation results and a worked example are reported to illustrate the methodology. The models have potential applications in longitudinal data analysis. PMID:8068850
ERIC Educational Resources Information Center
Battauz, Michela; Bellio, Ruggero
2011-01-01
This paper proposes a structural analysis for generalized linear models when some explanatory variables are measured with error and the measurement error variance is a function of the true variables. The focus is on latent variables investigated on the basis of questionnaires and estimated using item response theory models. Latent variable…
Estimation of Complex Generalized Linear Mixed Models for Measurement and Growth
ERIC Educational Resources Information Center
Jeon, Minjeong
2012-01-01
Maximum likelihood (ML) estimation of generalized linear mixed models (GLMMs) is technically challenging because of the intractable likelihoods that involve high dimensional integrations over random effects. The problem is magnified when the random effects have a crossed design and thus the data cannot be reduced to small independent clusters. A…
Regression Is a Univariate General Linear Model Subsuming Other Parametric Methods as Special Cases.
ERIC Educational Resources Information Center
Vidal, Sherry
Although the concept of the general linear model (GLM) has existed since the 1960s, other univariate analyses such as the t-test and the analysis of variance models have remained popular. The GLM produces an equation that minimizes the sum of squared differences between observed and predicted values of the dependent variable, expressed as a linear function of the independent variables. From a computer printout…
Meta-analysis of Complex Diseases at Gene Level with Generalized Functional Linear Models.
Fan, Ruzong; Wang, Yifan; Chiu, Chi-Yang; Chen, Wei; Ren, Haobo; Li, Yun; Boehnke, Michael; Amos, Christopher I; Moore, Jason H; Xiong, Momiao
2016-02-01
We developed generalized functional linear models (GFLMs) to perform a meta-analysis of multiple case-control studies to evaluate the relationship of genetic data to dichotomous traits adjusting for covariates. Unlike the previously developed meta-analysis for sequence kernel association tests (MetaSKATs), which are based on mixed-effects models that make the contributions of major gene loci random, GFLMs are fixed-effects models; i.e., the genetic effects of multiple genetic variants are fixed. Based on GFLMs, we developed chi-squared-distributed Rao efficient score test and likelihood-ratio test (LRT) statistics to test for an association between a complex dichotomous trait and multiple genetic variants. We then performed extensive simulations to evaluate the empirical type I error rates and power performance of the proposed tests. The Rao efficient score test statistics of GFLMs are very conservative and have higher power than MetaSKATs when some causal variants are rare and some are common. When the causal variants are all rare [i.e., minor allele frequencies (MAF) < 0.03], the Rao efficient score test statistics have similar or slightly lower power than MetaSKATs. The LRT statistics generate accurate type I error rates for homogeneous genetic-effect models and may inflate type I error rates for heterogeneous genetic-effect models, owing to the large numbers of degrees of freedom, and have similar or slightly higher power than the Rao efficient score test statistics. GFLMs were applied to analyze genetic data of 22 gene regions of type 2 diabetes data from a meta-analysis of eight European studies and detected significant association for 18 genes (P < 3.10 × 10^-6), tentative association for 2 genes (HHEX and HMGA2; P ≈ 10^-5), and no association for 2 genes, while MetaSKATs detected none. In addition, the traditional additive-effect model detects association at gene HHEX. GFLMs and related tests can analyze rare or common variants or a combination of the two and
Generalized model of double random phase encoding based on linear algebra
NASA Astrophysics Data System (ADS)
Nakano, Kazuya; Takeda, Masafumi; Suzuki, Hiroyuki; Yamaguchi, Masahiro
2013-01-01
We propose a generalized model for double random phase encoding (DRPE) based on linear algebra. We define the DRPE procedure in six steps: the first three steps form the encryption procedure, while the latter three make up the decryption procedure. We note that the first (mapping) and second (transform) steps can be generalized. As an example of this generalization, we use 3D mapping and a transform matrix that combines a discrete cosine transform with two permutation matrices. Finally, we investigate the sensitivity of the proposed model to errors in the decryption key.
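The linear-algebraic view of classical 2D DRPE (the non-generalized baseline this abstract starts from) can be sketched with unitary FFTs: every step is linear, so decryption simply applies the conjugate keys in reverse order. The image size, masks, and seed below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
img = rng.random((32, 32))                        # plaintext image
phi1 = np.exp(2j * np.pi * rng.random((32, 32)))  # random phase mask, input plane
phi2 = np.exp(2j * np.pi * rng.random((32, 32)))  # random phase mask, Fourier plane

# Encryption: mask, Fourier transform, mask again, inverse transform.
cipher = np.fft.ifft2(np.fft.fft2(img * phi1) * phi2)

# Decryption: each step is linear and unitary (|phi| = 1), so it is
# inverted by the conjugate key applied in the reverse order.
decoded = np.conj(phi1) * np.fft.ifft2(np.fft.fft2(cipher) * np.conj(phi2))
```

With the correct keys the plaintext is recovered exactly (up to floating-point error); perturbing either mask destroys the reconstruction, which is the key-sensitivity property the paper analyzes.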
Implementing general quantum measurements on linear optical and solid-state qubits
NASA Astrophysics Data System (ADS)
Ota, Yukihiro; Ashhab, Sahel; Nori, Franco
2013-03-01
We present a systematic construction for implementing general measurements on a single qubit, including both strong (or projective) and weak measurements. We mainly focus on linear optical qubits. The present approach is composed of simple and feasible elements, i.e., beam splitters, wave plates, and polarizing beam splitters. We show how the parameters characterizing the measurement operators are controlled by the linear optical elements. We also propose a method for implementing general measurements in solid-state qubits. Furthermore, we show an interesting application of the general measurements, namely entanglement amplification. YO is partially supported by the SPDR Program, RIKEN. SA and FN acknowledge ARO, NSF grant No. 0726909, JSPS-RFBR contract No. 12-02-92100, Grant-in-Aid for Scientific Research (S), MEXT Kakenhi on Quantum Cybernetics, and the JSPS via its FIRST program.
Shen, Mouquan; Park, Ju H
2016-07-01
This paper addresses the H∞ filtering of continuous Markov jump linear systems with general transition probabilities and output quantization. The S-procedure is employed to handle the adverse influence of the quantization, and a new approach is developed to overcome the nonlinearity induced by uncertain and unknown transition probabilities. Sufficient conditions are then presented to ensure that the filtering error system is stochastically stable with the prescribed performance requirement. Without a specified structure imposed on the introduced slack variables, a flexible filter design method is established in terms of linear matrix inequalities. The effectiveness of the proposed method is validated by a numerical example. PMID:27129765
Lai, Zhi-Hui; Leng, Yong-Gang
2015-01-01
A two-dimensional Duffing oscillator which can produce stochastic resonance (SR) is studied in this paper. We introduce its SR mechanism and present a generalized parameter-adjusted SR (GPASR) model of this oscillator to address the necessity of parameter adjustment. The Kramers rate is chosen as the theoretical basis to establish a function for judging the occurrence of SR in this model, and to analyze and summarize the parameter-adjustment rules under unmatched signal amplitude, frequency, and/or noise intensity. Furthermore, we propose a weak-signal detection approach based on this GPASR model. Finally, we employ two practical examples to demonstrate the feasibility of the proposed approach in practical engineering applications. PMID:26343671
ERIC Educational Resources Information Center
Canivez, Gary L.
2006-01-01
Replication of the core syndrome factor structure of the "Adjustment Scales for Children and Adolescents" (ASCA; P.A. McDermott, N.C. Marston, & D.H. Stott, 1993) is reported for a sample of 183 Native American Indian (Ojibwe) children and adolescents from North Central Minnesota. The six ASCA core syndromes produced an identical two-factor…
ERIC Educational Resources Information Center
Favez, N.; Reicherts, M.
2008-01-01
The aim of this research is to assess the relative influence of mothers' coping strategies in everyday life and mothers' specific coping acts on toddlers' adjustment behavior to pain and distress during a routine immunization. The population is 41 mothers with toddlers (23 girls, 18 boys; mean age, 22.7 months) undergoing a routine immunization in…
MGMRES: A generalization of GMRES for solving large sparse nonsymmetric linear systems
Young, D.M.; Chen, Jen Yuan
1996-11-01
This paper is concerned with the solution of the linear system Au = b, where A is a real square nonsingular matrix which is large, sparse and nonsymmetric. We consider the use of Krylov subspace methods. We first choose an initial approximation u^(0) to the solution ū = A^(-1)b. The GMRES (Generalized Minimal Residual Algorithm for Solving Nonsymmetric Linear Systems) method was developed by Saad and Schultz (1986) and has been used extensively for many years for sparse systems. This paper considers a generalization of GMRES; it is similar to GMRES except that we let Z = A^T Y, where Y is a nonsingular matrix which is symmetric but not necessarily SPD.
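For orientation, a minimal un-restarted GMRES can be written in a few lines: build an Arnoldi basis of the Krylov space K_m(A, r0), then minimize the residual over that basis via a small least-squares problem. This is the standard Saad-Schultz algorithm, not the MGMRES variant with Z = A^T Y; the dense test matrix is illustrative only:

```python
import numpy as np

def gmres_basic(A, b, m=30, tol=1e-10):
    """Minimal (un-restarted) GMRES sketch for a nonsymmetric system."""
    n = len(b)
    u0 = np.zeros(n)
    r0 = b - A @ u0
    beta = np.linalg.norm(r0)
    Q = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    Q[:, 0] = r0 / beta
    for j in range(m):
        w = A @ Q[:, j]
        for i in range(j + 1):            # modified Gram-Schmidt (Arnoldi)
            H[i, j] = Q[:, i] @ w
            w -= H[i, j] * Q[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        # Small least-squares problem: min_y || beta*e1 - H y ||
        e1 = np.zeros(j + 2)
        e1[0] = beta
        y, *_ = np.linalg.lstsq(H[:j + 2, :j + 1], e1, rcond=None)
        resid = np.linalg.norm(e1 - H[:j + 2, :j + 1] @ y)
        if H[j + 1, j] < tol or resid < tol:   # breakdown or convergence
            return u0 + Q[:, :j + 1] @ y
        Q[:, j + 1] = w / H[j + 1, j]
    return u0 + Q[:, :m] @ y

rng = np.random.default_rng(1)
A = np.eye(20) + 0.1 * rng.standard_normal((20, 20))   # nonsymmetric
b = rng.standard_normal(20)
u = gmres_basic(A, b)
```

Production codes replace the explicit least-squares solve with incrementally updated Givens rotations and add restarting and preconditioning, but the Krylov structure above is the same one the MGMRES generalization modifies.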
Local influence to detect influential data structures for generalized linear mixed models.
Ouwens, M J; Tan, F E; Berger, M P
2001-12-01
This article discusses the generalization of the local influence measures for normally distributed responses to local influence measures for generalized linear models with random effects. For these models, it is shown that the subject-oriented influence measure is a special case of the proposed observation-oriented influence measure. A two-step diagnostic procedure is proposed. The first step is to search for influential subjects. A search for influential observations is proposed as the second step. An illustration of a two-treatment, multiple-period crossover trial demonstrates the practical importance of the detection of influential observations in addition to the detection of influential subjects.
Wang, Chi; Dominici, Francesca; Parmigiani, Giovanni; Zigler, Corwin Matthew
2015-09-01
Confounder selection and adjustment are essential elements of assessing the causal effect of an exposure or treatment in observational studies. Building upon work by Wang et al. (2012, Biometrics 68, 661-671) and Lefebvre et al. (2014, Statistics in Medicine 33, 2797-2813), we propose and evaluate a Bayesian method to estimate average causal effects in studies with a large number of potential confounders, relatively few observations, likely interactions between confounders and the exposure of interest, and uncertainty on which confounders and interaction terms should be included. Our method is applicable across all exposures and outcomes that can be handled through generalized linear models. In this general setting, estimation of the average causal effect is different from estimation of the exposure coefficient in the outcome model due to noncollapsibility. We implement a Bayesian bootstrap procedure to integrate over the distribution of potential confounders and to estimate the causal effect. Our method permits estimation of both the overall population causal effect and effects in specified subpopulations, providing clear characterization of heterogeneous exposure effects that may vary considerably across different covariate profiles. Simulation studies demonstrate that the proposed method performs well in small sample size situations with 100-150 observations and 50 covariates. The method is applied to data on 15,060 US Medicare beneficiaries diagnosed with a malignant brain tumor between 2000 and 2009 to evaluate whether surgery reduces hospital readmissions within 30 days of diagnosis.
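The Bayesian bootstrap ingredient of the method above can be illustrated in isolation with Dirichlet weights over units. This toy example has a single binary exposure and no confounders, unlike the paper's full confounder-adjustment machinery; all data are simulated:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 200
x = rng.binomial(1, 0.5, n)               # binary exposure
y = 1.0 * x + rng.standard_normal(n)      # outcome; true causal effect = 1

# Bayesian bootstrap: each posterior draw reweights the n units with
# Dirichlet(1, ..., 1) weights and recomputes the effect estimate.
draws = np.empty(2000)
for i in range(draws.size):
    w = rng.dirichlet(np.ones(n))
    mu1 = np.sum(w * x * y) / np.sum(w * x)          # weighted exposed mean
    mu0 = np.sum(w * (1 - x) * y) / np.sum(w * (1 - x))  # weighted unexposed mean
    draws[i] = mu1 - mu0

est = draws.mean()
lo, hi = np.percentile(draws, [2.5, 97.5])
```

The spread of `draws` plays the role of a posterior over the causal effect; in the paper this reweighting is combined with model averaging over confounder and interaction sets.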
Fitting host-parasitoid models with CV2 > 1 using hierarchical generalized linear models.
Perry, J N; Noh, M S; Lee, Y; Alston, R D; Norowi, H M; Powell, W; Rennolls, K
2000-01-01
The powerful general Pacala-Hassell host-parasitoid model for a patchy environment, which allows host density-dependent heterogeneity (HDD) to be distinguished from between-patch, host density-independent heterogeneity (HDI), is reformulated within the class of the generalized linear model (GLM) family. This improves accessibility through the provision of general software within well-known statistical systems, and allows a rich variety of models to be formulated. Covariates such as age class, host density and abiotic factors may be included easily. For the case where there is no HDI, the formulation is a simple GLM. When there is HDI in addition to HDD, the formulation is a hierarchical generalized linear model. Two forms of HDI model are considered, both with between-patch variability: one has binomial variation within patches and one has extra-binomial, overdispersed variation within patches. Examples are given demonstrating parameter estimation with standard errors, and hypothesis testing. For one example given, the extra-binomial component of the HDI heterogeneity in parasitism is itself shown to be strongly density dependent. PMID:11416907
A general theory of linear cosmological perturbations: scalar-tensor and vector-tensor theories
NASA Astrophysics Data System (ADS)
Lagos, Macarena; Baker, Tessa; Ferreira, Pedro G.; Noller, Johannes
2016-08-01
We present a method for parametrizing linear cosmological perturbations of theories of gravity, around homogeneous and isotropic backgrounds. The method is sufficiently general and systematic that it can be applied to theories with any degrees of freedom (DoFs) and arbitrary gauge symmetries. In this paper, we focus on scalar-tensor and vector-tensor theories, invariant under linear coordinate transformations. In the case of scalar-tensor theories, we use our framework to recover the simple parametrizations of linearized Horndeski and ``Beyond Horndeski'' theories, and also find higher-derivative corrections. In the case of vector-tensor theories, we first construct the most general quadratic action for perturbations that leads to second-order equations of motion, which propagates two scalar DoFs. Then we specialize to the case in which the vector field is time-like (à la Einstein-Aether gravity), where the theory only propagates one scalar DoF. As a result, we identify the complete forms of the quadratic actions for perturbations, and the number of free parameters that need to be defined, to cosmologically characterize these two broad classes of theories.
Normality of raw data in general linear models: The most widespread myth in statistics
Kery, Marc; Hatfield, Jeff S.
2003-01-01
In years of statistical consulting for ecologists and wildlife biologists, by far the most common misconception we have come across has been the one about normality in general linear models. These comprise a very large part of the statistical models used in ecology and include t tests, simple and multiple linear regression, polynomial regression, and analysis of variance (ANOVA) and covariance (ANCOVA). There is a widely held belief that the normality assumption pertains to the raw data rather than to the model residuals. We suspect that this error may also occur in countless published studies, whenever the normality assumption is tested prior to analysis. This may lead to the use of nonparametric alternatives (if there are any), when parametric tests would indeed be appropriate, or to use of transformations of raw data, which may introduce hidden assumptions such as multiplicative effects on the natural scale in the case of log-transformed data. Our aim here is to dispel this myth. We very briefly describe relevant theory for two cases of general linear models to show that the residuals need to be normally distributed if tests requiring normality are to be used, such as t and F tests. We then give two examples demonstrating that the distribution of the response variable may be nonnormal, and yet the residuals are well behaved. We do not go into the issue of how to test normality; instead we display the distributions of response variables and residuals graphically.
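The residuals-versus-raw-data point can be demonstrated with a two-group "ANOVA" whose raw response is grossly bimodal while the residuals are textbook normal. The simulation below is a minimal sketch, using excess kurtosis as a crude normality gauge (a symmetric bimodal mixture has excess kurtosis near -2, a normal sample near 0):

```python
import numpy as np

rng = np.random.default_rng(7)
# Two-group design: group means differ so strongly that the raw
# response is clearly bimodal (and would "fail" a naive normality test)...
g = np.repeat([0, 1], 500)
y = 10.0 * g + rng.standard_normal(1000)   # within-group sd = 1

# ...yet the model residuals (response minus fitted group mean) are
# exactly the normal noise the t/F machinery assumes.
group_means = np.array([y[g == k].mean() for k in (0, 1)])
resid = y - group_means[g]

def excess_kurtosis(z):
    z = (z - z.mean()) / z.std()
    return np.mean(z**4) - 3.0

k_raw = excess_kurtosis(y)        # strongly negative: bimodal raw data
k_resid = excess_kurtosis(resid)  # near zero: well-behaved residuals
```

Testing normality on `y` would wrongly push an analyst toward nonparametric alternatives or transformations; testing it on `resid` gives the correct answer.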
Random generalized linear model: a highly accurate and interpretable ensemble predictor
2013-01-01
Background Ensemble predictors such as the random forest are known to have superior accuracy but their black-box predictions are difficult to interpret. In contrast, a generalized linear model (GLM) is very interpretable, especially when forward feature selection is used to construct the model. However, forward feature selection tends to overfit the data and leads to low predictive accuracy. Therefore, it remains an important research goal to combine the advantages of ensemble predictors (high accuracy) with the advantages of forward regression modeling (interpretability). To address this goal several articles have explored GLM-based ensemble predictors. Since limited evaluations suggested that these ensemble predictors were less accurate than alternative predictors, they have received little attention in the literature. Results Comprehensive evaluations involving hundreds of genomic data sets, the UCI machine learning benchmark data, and simulations are used to give GLM-based ensemble predictors a new and careful look. A novel bootstrap aggregated (bagged) GLM predictor that incorporates several elements of randomness and instability (random subspace method, optional interaction terms, forward variable selection) often outperforms a host of alternative prediction methods including random forests and penalized regression models (ridge regression, elastic net, lasso). This random generalized linear model (RGLM) predictor provides variable importance measures that can be used to define a "thinned" ensemble predictor (involving few features) that retains excellent predictive accuracy. Conclusion RGLM is a state-of-the-art predictor that shares the advantages of a random forest (excellent predictive accuracy, feature importance measures, out-of-bag estimates of accuracy) with those of a forward-selected generalized linear model (interpretability). These methods are implemented in the freely available R software package randomGLM. PMID:23323760
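A bare-bones version of the bagged-GLM idea (bootstrap rows plus a random feature subspace per ensemble member) can be sketched as follows. This is far simpler than the published RGLM and does not use the randomGLM package; the logistic fitter, data, and parameters are all illustrative:

```python
import numpy as np

def fit_logistic(X, y, iters=300, lr=0.5):
    """Plain logistic regression by gradient ascent (no penalty)."""
    w = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-X @ w))
        w += lr * X.T @ (y - p) / len(y)
    return w

def rglm_like_fit(X, y, n_models=25, subspace=0.5, seed=0):
    """Bagged-GLM sketch: each member sees a bootstrap sample of rows
    and a random subset of the columns (random subspace method)."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    members = []
    for _ in range(n_models):
        rows = rng.integers(0, n, n)                          # bootstrap
        cols = rng.choice(d, max(1, int(subspace * d)), replace=False)
        members.append((cols, fit_logistic(X[rows][:, cols], y[rows])))
    return members

def rglm_like_predict(members, X):
    """Average the member probabilities (bagging)."""
    probs = [1.0 / (1.0 + np.exp(-X[:, c] @ w)) for c, w in members]
    return np.mean(probs, axis=0)

rng = np.random.default_rng(3)
X = rng.standard_normal((400, 10))
y = (X[:, 0] + 0.5 * X[:, 1] + 0.3 * rng.standard_normal(400) > 0).astype(float)
ens = rglm_like_fit(X[:300], y[:300])
acc = np.mean((rglm_like_predict(ens, X[300:]) > 0.5) == y[300:])
```

The published method additionally uses forward variable selection inside each member and derives feature-importance and out-of-bag accuracy measures from the ensemble.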
Huang, Peng; Tilley, Barbara C.; Woolson, Robert F.; Lipsitz, Stuart
2010-01-01
O'Brien (1984, Biometrics 40, 1079–1087) introduced a simple nonparametric test procedure for testing whether multiple outcomes in one treatment group have consistently larger values than outcomes in the other treatment group. We first explore the theoretical properties of O'Brien's test. We then extend it to the general nonparametric Behrens–Fisher hypothesis problem when no assumption is made regarding the shape of the distributions. We provide conditions under which O'Brien's test controls its error probability asymptotically and when it fails. We also provide adjusted tests when the conditions do not hold. Throughout this article, we do not assume that all outcomes are continuous. Simulations are performed to compare the adjusted tests to O'Brien's test. The difference is also illustrated using data from a Parkinson's disease clinical trial. PMID:16011701
Capelli bitableaux and Z-forms of general linear Lie superalgebras.
Brini, A; Teolis, A G
1990-01-01
The combinatorics of the enveloping algebra U_Q(pl(L)) of the general linear Lie superalgebra of a finite-dimensional Z2-graded Q-vector space is studied. Three non-equivalent Z-forms of U_Q(pl(L)) are introduced: one of these Z-forms is a version of the Kostant Z-form and the others are Lie algebra analogs of Rota and Stein's straightening formulae for the supersymmetric algebra Super[L P] and for its dual Super[L* P*]. The method is based on an extension of Capelli's technique of variabili ausiliarie (auxiliary variables) to algebras containing positively and negatively signed elements. PMID:11607048
Lo, Steson; Andrews, Sally
2015-01-01
Linear mixed-effects models (LMMs) are increasingly widely used in psychology to analyse multi-level research designs. Because they do not average across individual responses, LMMs can address some of the problems identified by Speelman and McGann (2013) about the use of mean data. However, recent guidelines for using LMMs to analyse skewed reaction time (RT) data collected in many cognitive psychological studies recommend the application of non-linear transformations to satisfy assumptions of normality. Uncritical adoption of this recommendation has important theoretical implications which can yield misleading conclusions. For example, Balota et al. (2013) showed that analyses of raw RT produced additive effects of word frequency and stimulus quality on word identification, which conflicted with the interactive effects observed in analyses of transformed RT. Generalized linear mixed-effects models (GLMMs) provide a solution to this problem by satisfying normality assumptions without the need for transformation. This allows differences between individuals to be properly assessed, using the metric most appropriate to the researcher's theoretical context. We outline the major theoretical decisions involved in specifying a GLMM, and illustrate them by reanalysing Balota et al.'s datasets. We then consider the broader benefits of using GLMMs to investigate individual differences. PMID:26300841
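The raw-versus-transformed issue can be reproduced with a hypothetical simulation in which the true RT effects are multiplicative, hence exactly additive on the log scale. Analyzing the raw scale then manufactures a spurious interaction; the factor names, multipliers, and Gamma noise below are illustrative, not Balota et al.'s data:

```python
import numpy as np

rng = np.random.default_rng(5)
n = 4000
freq = rng.integers(0, 2, n)        # word frequency: 0 = high, 1 = low
qual = rng.integers(0, 2, n)        # stimulus quality: 0 = clear, 1 = degraded

# True RT model is multiplicative (Gamma-like), i.e. additive in log(RT)
# with NO interaction between the two factors.
mu = 500.0 * 1.5**freq * 1.4**qual
rt = mu * rng.gamma(shape=50.0, scale=1.0 / 50.0, size=n)

def interaction(y):
    """Difference-of-differences estimate of the 2x2 interaction."""
    cell = lambda f, q: y[(freq == f) & (qual == q)].mean()
    return (cell(1, 1) - cell(1, 0)) - (cell(0, 1) - cell(0, 0))

ix_raw = interaction(rt)          # large: raw-scale analysis sees an interaction
ix_log = interaction(np.log(rt))  # ~0: the generative (log) scale is additive
```

A GLMM with an appropriate link and error distribution lets the researcher analyze RT on whichever scale the theory specifies, rather than letting a normality-driven transformation decide the question.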
Digit Span is (mostly) related linearly to general intelligence: Every extra bit of span counts.
Gignac, Gilles E; Weiss, Lawrence G
2015-12-01
Historically, Digit Span has been regarded as a relatively poor indicator of general intellectual functioning (g). In fact, Wechsler (1958) contended that beyond an average level of Digit Span performance, there was little benefit to possessing a greater memory span. Although Wechsler's position does not appear to have ever been tested empirically, it does appear to have become clinical lore. Consequently, the purpose of this investigation was to test Wechsler's contention on the Wechsler Adult Intelligence Scale-Fourth Edition normative sample (N = 1,800; ages: 16 - 69). Based on linear and nonlinear contrast analyses of means, as well as linear and nonlinear bifactor model analyses, all 3 Digit Span indicators (LDSF, LDSB, and LDSS) were found to exhibit primarily linear associations with FSIQ/g. Thus, the commonly held position that Digit Span performance beyond an average level is not indicative of greater intellectual functioning was not supported. The results are discussed in light of the increasing evidence across multiple domains that memory span plays an important role in intellectual functioning.
ERIC Educational Resources Information Center
Vos, Hans J.
1994-01-01
Describes the construction of a model of computer-assisted instruction using a qualitative block diagram based on general systems theory (GST) as a framework. Subject matter representation is discussed, and appendices include system variables and system equations of the GST model, as well as an example of developing flexible courseware. (Contains…
Uga, Minako; Dan, Ippeita; Sano, Toshifumi; Dan, Haruka; Watanabe, Eiju
2014-01-01
An increasing number of functional near-infrared spectroscopy (fNIRS) studies utilize a general linear model (GLM) approach, which serves as a standard statistical method for functional magnetic resonance imaging (fMRI) data analysis. While fMRI solely measures the blood oxygen level dependent (BOLD) signal, fNIRS measures the changes of oxy-hemoglobin (oxy-Hb) and deoxy-hemoglobin (deoxy-Hb) signals at a temporal resolution severalfold higher. This suggests the necessity of adjusting the temporal parameters of a GLM for fNIRS signals. Thus, we devised a GLM-based method utilizing an adaptive hemodynamic response function (HRF). We sought the optimum temporal parameters to best explain the observed time series data during verbal fluency and naming tasks. The peak delay of the HRF was systematically changed to achieve the best-fit model for the observed oxy- and deoxy-Hb time series data. The optimized peak delay showed different values for each Hb signal and task. When the optimized peak delays were adopted, the deoxy-Hb data yielded comparable activations with similar statistical power and spatial patterns to oxy-Hb data. The adaptive HRF method could suitably explain the behaviors of both Hb parameters during tasks with the different cognitive loads during a time course, and thus would serve as an objective method to fully utilize the temporal structures of all fNIRS data. PMID:26157973
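The peak-delay optimization can be sketched with a single-gamma HRF and a grid search over candidate delays, picking the regressor whose least-squares GLM fit leaves the smallest residual. This is a simplified stand-in for the authors' adaptive HRF method (which uses real oxy-/deoxy-Hb data); the HRF shape, event timing, and noise level below are synthetic:

```python
import numpy as np

def hrf(t, peak):
    """Single-gamma HRF with adjustable peak delay (a simplified
    stand-in for the two-gamma HRFs used in GLM packages)."""
    shape = 6.0
    scale = peak / (shape - 1)              # gamma mode = (shape-1)*scale
    h = t**(shape - 1) * np.exp(-t / scale)
    return h / h.max()

t = np.arange(0, 30, 0.1)                   # 10 Hz sampling, 30 s
onsets = np.zeros_like(t)
onsets[(t % 15) < 0.1] = 1                  # events at t = 0 s and 15 s

def regressor(peak):
    return np.convolve(onsets, hrf(t[:100], peak))[:len(t)]

# Synthetic "oxy-Hb" series generated with a true 5 s peak delay
rng = np.random.default_rng(0)
signal = 2.0 * regressor(5.0) + 0.2 * rng.standard_normal(len(t))

def rss(peak):
    """Residual sum of squares of a one-regressor least-squares fit."""
    x = regressor(peak)
    beta = (x @ signal) / (x @ x)
    return np.sum((signal - beta * x)**2)

grid = np.arange(3.0, 9.1, 0.5)
best = grid[np.argmin([rss(p) for p in grid])]
```

Running the same search separately on oxy- and deoxy-Hb channels is what lets each signal receive its own best-fit temporal parameters.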
Fomichev, S. V.; Becker, W.
2010-06-15
Both linear and nonlinear scattering and absorption of a laser pulse by spherical nanoclusters with free electrons and a diffuse surface are considered in the collisionless hydrodynamics approximation. The developed model of forced collective motion of electrons confined to a cluster permits one to introduce consistently into the theory all the sources of nonlinearity, as well as the inhomogeneity of the cluster near its boundary. Two different perturbation theories corresponding to different laser intensity ranges are developed in this context, and both cold metal clusters and hot laser-heated or -ionized clusters are considered within the same approach. In the present article, after developing the full nonlinear model, the linear response to the laser field of a free-electron cluster with a diffuse surface is investigated in detail, especially the properties of the linear Mie resonance (width and position). Under certain conditions, depending on the various cluster parameters, secondary resonances are found. The properties of resonance-enhanced third-order harmonic generation and nonlinear laser absorption, and their dependence on the shape of the diffuse surface, will be presented separately.
Feng, Danqi; Xie, Heng; Qian, Lifen; Bai, Qinhong; Sun, Junqiang
2015-06-29
We experimentally demonstrate a novel approach to microwave frequency measurement utilizing the birefringence effect in a highly non-linear fiber (HNLF). A detailed theoretical analysis is presented to implement an adjustable measurement range and resolution. By stimulating a complementary polarization-domain interferometer pair in the HNLF, a mathematical expression relating the microwave frequency to an amplitude comparison function is developed. We carry out a proof-of-concept experiment. A frequency measurement range of 2.5-30 GHz with a measurement error within 0.5 GHz is achieved, except for 16-17.5 GHz. This method is all-optical and requires no high-speed electronic components. PMID:26191769
Suldo, Shannon M; Shaunessy, Elizabeth; Thalji, Amanda; Michalowski, Jessica; Shaffer, Emily
2009-01-01
Navigating puberty while developing independent living skills may render adolescents particularly vulnerable to stress, which may ultimately contribute to mental health problems (Compas, Orosan, & Grant, 1993; Elgar, Arlett, & Groves, 2003). The academic transition to high school presents additional challenges as youth are required to interact with a new and larger peer group and manage greater academic expectations. For students enrolled in academically rigorous college preparatory programs, such as the International Baccalaureate (IB) program, the amount of stress perceived may be greater than typical (Suldo, Shaunessy, & Hardesty, 2008). This study investigated the environmental stressors and psychological adjustment of 162 students participating in the IB program and a comparison sample of 157 students in general education. Factor analysis indicated students experience 7 primary categories of stressors, which were examined in relation to students' adjustment specific to academic and psychological functioning. The primary source of stress experienced by IB students was related to academic requirements. In contrast, students in the general education program indicated higher levels of stressors associated with parent-child relations, academic struggles, conflict within family, and peer relations, as well as role transitions and societal problems. Comparisons of correlations between categories of stressors and students' adjustment by curriculum group reveal that students in the IB program reported more symptoms of psychopathology and reduced academic functioning as they experienced higher levels of stress, particularly stressors associated with academic requirements, transitions and societal problems, academic struggles, and extra-curricular activities. Applied implications stem from findings suggesting that students in college preparatory programs are more likely to (a) experience elevated stress related to academic demands as opposed to more typical adolescent
The heritability of general cognitive ability increases linearly from childhood to young adulthood.
Haworth, C M A; Wright, M J; Luciano, M; Martin, N G; de Geus, E J C; van Beijsterveldt, C E M; Bartels, M; Posthuma, D; Boomsma, D I; Davis, O S P; Kovas, Y; Corley, R P; Defries, J C; Hewitt, J K; Olson, R K; Rhea, S-A; Wadsworth, S J; Iacono, W G; McGue, M; Thompson, L A; Hart, S A; Petrill, S A; Lubinski, D; Plomin, R
2010-11-01
Although common sense suggests that environmental influences increasingly account for individual differences in behavior as experiences accumulate during the course of life, this hypothesis has not previously been tested, in part because of the large sample sizes needed for an adequately powered analysis. Here we show for general cognitive ability that, to the contrary, genetic influence increases with age. The heritability of general cognitive ability increases significantly and linearly from 41% in childhood (9 years) to 55% in adolescence (12 years) and to 66% in young adulthood (17 years) in a sample of 11 000 pairs of twins from four countries, a larger sample than all previous studies combined. In addition to its far-reaching implications for neuroscience and molecular genetics, this finding suggests new ways of thinking about the interface between nature and nurture during the school years. Why, despite life's 'slings and arrows of outrageous fortune', do genetically driven differences increasingly account for differences in general cognitive ability? We suggest that the answer lies with genotype-environment correlation: as children grow up, they increasingly select, modify and even create their own experiences in part based on their genetic propensities. PMID:19488046
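The twin-design logic behind the reported heritability estimates can be sketched with Falconer's classic formula, h² = 2(r_MZ − r_DZ). This is a simplified illustration, not the authors' multivariate model; the twin correlations below are hypothetical values chosen only to reproduce the reported 41%/55%/66% trend.

```python
# Falconer's formula: a simplified twin-design estimate of heritability
# from monozygotic (MZ) and dizygotic (DZ) twin correlations.
# The correlation values below are hypothetical, chosen to mirror the
# reported trend across ages; they are not the study's data.

def falconer_h2(r_mz, r_dz):
    """Heritability estimate h^2 = 2 * (r_MZ - r_DZ)."""
    return 2.0 * (r_mz - r_dz)

# Hypothetical twin correlations at three ages
ages = {
    "childhood (9 y)":        (0.72, 0.515),
    "adolescence (12 y)":     (0.76, 0.485),
    "young adulthood (17 y)": (0.78, 0.450),
}

for age, (r_mz, r_dz) in ages.items():
    print(f"{age}: h2 = {falconer_h2(r_mz, r_dz):.2f}")
```

With these inputs the estimates come out at 0.41, 0.55, and 0.66, matching the linear increase described in the abstract.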
Linear stability of a generalized multi-anticipative car following model with time delays
NASA Astrophysics Data System (ADS)
Ngoduy, D.
2015-05-01
In traffic flow, multi-anticipative driving behavior describes the reaction of a vehicle to the driving behavior of many vehicles in front, whereas time delay is defined as a physiological parameter reflecting the period of time between perceiving a stimulus from leading vehicles and performing a relevant action such as acceleration or deceleration. A lot of effort has been undertaken to understand the effects of either multi-anticipative driving behavior or time delays on traffic flow dynamics. This paper is a first attempt to analytically investigate the dynamics of a generalized class of car-following models with multi-anticipative driving behavior and different time delays associated with such multi-anticipations. To this end, this paper derives the (long-wavelength) linear stability condition of such a car-following model and studies how the combination of different choices of multi-anticipations and time delays affects the instabilities of traffic flow with respect to a small perturbation. It is found that the effects of delays and multi-anticipations are model-dependent; that is, the destabilization effect of delays is suppressed by the stabilization effect of multi-anticipations. Moreover, the linear stability condition of traffic flow is less sensitive to the weight factor reflecting the distribution of the driver's sensing of the relative gaps to leading vehicles than to the weight factor for the relative speeds of those leading vehicles.
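The long-wavelength stability analysis can be illustrated on the simplest member of this model class: the single-leader, no-delay optimal velocity model, where uniform flow with headway h is linearly stable when V'(h) < a/2 (a being the driver sensitivity). This is a minimal sketch with an illustrative choice of V, not the paper's generalized multi-anticipative condition.

```python
import math

# Long-wavelength linear stability of the single-leader, no-delay
# optimal-velocity car-following model, a special case of the general
# class analysed in the paper: uniform flow with headway h is stable
# when V'(h) < a/2, where a is the driver sensitivity.

def V(h):
    """Bando-type optimal-velocity function (illustrative parameters)."""
    return math.tanh(h - 2.0) + math.tanh(2.0)

def V_prime(h, eps=1e-6):
    """Central-difference derivative of the optimal-velocity function."""
    return (V(h + eps) - V(h - eps)) / (2 * eps)

def is_stable(h, a):
    """Long-wavelength stability criterion V'(h) < a/2."""
    return V_prime(h) < a / 2.0

# V'(h) peaks at h = 2, where V'(2) = 1, so flow there needs a > 2.
print(is_stable(2.0, 2.5), is_stable(2.0, 1.0))
```

Adding time delays tightens this threshold, while anticipating several leaders loosens it, which is the competition the abstract describes.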
Fan, Yurui; Huang, Guohe; Veawab, Amornvadee
2012-01-01
In this study, a generalized fuzzy linear programming (GFLP) method was developed to deal with uncertainties expressed as fuzzy sets that exist in the constraints and objective function. A stepwise interactive algorithm (SIA) was advanced to solve the GFLP model and generate solutions expressed as fuzzy sets. To demonstrate its application, the developed GFLP method was applied to a regional sulfur dioxide (SO2) control planning model to identify effective SO2 mitigation policies with a minimized system performance cost under uncertainty. The results were obtained to represent the amount of SO2 allocated to different control measures from different sources. Compared with the conventional interval-parameter linear programming (ILP) approach, the solutions obtained through GFLP were expressed as fuzzy sets, which can provide intervals for the decision variables and objective function, as well as related possibilities. Therefore, the decision makers can make a tradeoff between model stability and plausibility based on solutions obtained through GFLP and then identify desired policies for SO2-emission control under uncertainty.
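The core idea of carrying fuzzy-set uncertainty through a linear program can be sketched with an alpha-cut: a triangular fuzzy right-hand side is cut at a membership level, and the LP is solved at both interval endpoints, yielding an interval-valued optimal cost. This is a minimal sketch of that idea, not the paper's stepwise interactive algorithm; the cost coefficients and removal requirement are hypothetical.

```python
from scipy.optimize import linprog

# A minimal alpha-cut sketch of fuzzy LP (not the paper's SIA).
# A triangular fuzzy SO2-removal requirement b = (90, 100, 110) is cut
# at level alpha, giving an interval [b_lo, b_hi]; solving the LP at
# each endpoint yields an interval for the minimized cost.

def alpha_cut(lo, mode, hi, alpha):
    """Alpha-cut interval of a triangular fuzzy number (lo, mode, hi)."""
    return (lo + alpha * (mode - lo), hi - alpha * (hi - mode))

c = [2.0, 3.0]            # hypothetical unit costs of two control measures
A_ub = [[-1.0, -1.0]]     # x1 + x2 >= b, written as -x1 - x2 <= -b
b_lo, b_hi = alpha_cut(90.0, 100.0, 110.0, alpha=0.5)   # -> (95, 105)

costs = []
for b in (b_lo, b_hi):
    res = linprog(c, A_ub=A_ub, b_ub=[-b], bounds=[(0, None), (0, None)])
    costs.append(res.fun)

print(costs)  # interval of optimal costs over the alpha-cut
```

Repeating this over several alpha levels recovers a fuzzy-set description of the optimal cost, which is the kind of interval-plus-possibility output the abstract contrasts with ILP.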
NASA Astrophysics Data System (ADS)
Silbey, R.; Munn, R. W.
1980-02-01
An improved general theory of electronic transport in molecular crystals with local linear electron-phonon coupling is presented. It is valid for arbitrary electronic and phonon bandwidths and for arbitrary electron-phonon coupling strength, yielding small-polaron theory for narrow electronic bands and strong coupling, and semiconductor theory for wide electronic bands and weak coupling. Detailed results are derived for electronic excitations fully clothed with phonons and having a bandwidth no larger than the phonon frequency; the electronic and phonon densities of states are taken as Gaussian for simplicity. The dependence of the diffusion coefficient on temperature and on the other parameters is analyzed thoroughly. The calculated behavior provides a rational interpretation of observed trends in the magnitude and temperature dependence of charge-carrier drift mobilities in molecular crystals.
Generalized Linear Models for Identifying Predictors of the Evolutionary Diffusion of Viruses
Beard, Rachel; Magee, Daniel; Suchard, Marc A.; Lemey, Philippe; Scotch, Matthew
2014-01-01
Bioinformatics and phylogeography models use viral sequence data to analyze spread of epidemics and pandemics. However, few of these models have included analytical methods for testing whether certain predictors such as population density, rates of disease migration, and climate are drivers of spatial spread. Understanding the specific factors that drive spatial diffusion of viruses is critical for targeting public health interventions and curbing spread. In this paper we describe the application and evaluation of a model that integrates demographic and environmental predictors with molecular sequence data. The approach parameterizes evolutionary spread of RNA viruses as a generalized linear model (GLM) within a Bayesian inference framework using Markov chain Monte Carlo (MCMC). We evaluate this approach by reconstructing the spread of H5N1 in Egypt while assessing the impact of individual predictors on evolutionary diffusion of the virus. PMID:25717395
Solving the Linear Balance Equation on the Globe as a Generalized Inverse Problem
NASA Technical Reports Server (NTRS)
Lu, Huei-Iin; Robertson, Franklin R.
1999-01-01
A generalized (pseudo) inverse technique was developed to facilitate a better understanding of the numerical effects of tropical singularities inherent in the spectral linear balance equation (LBE). Depending upon the truncation, various levels of determinacy are manifest. The traditional fully-determined (FD) systems give rise to a strong response, while the under-determined (UD) systems yield a weak response to the tropical singularities. The over-determined (OD) systems result in a modest response and a large residual in the tropics. The FD and OD systems can alternatively be solved by the iterative method. Differences in the solutions of a UD system exist between the inverse technique and the iterative method owing to the non-uniqueness of the problem. A realistic balanced wind was obtained by solving the principal components of the spectral LBE in terms of vorticity at an intermediate resolution. Improved solutions were achieved by including the singular-component solutions which best fit the observed wind data.
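The FD/UD/OD behavior described above can be made concrete with the Moore-Penrose pseudo-inverse: for an under-determined system it returns the minimum-norm solution (a "weak response"), while for an over-determined system it minimizes the residual, which need not vanish. A toy sketch with random matrices, not the spectral LBE itself:

```python
import numpy as np

# Pseudo-inverse behavior on under- and over-determined systems.

rng = np.random.default_rng(0)

# Under-determined: 2 equations, 4 unknowns -> minimum-norm solution
A_ud = rng.standard_normal((2, 4))
b_ud = rng.standard_normal(2)
x_ud = np.linalg.pinv(A_ud) @ b_ud
assert np.allclose(A_ud @ x_ud, b_ud)      # consistent: equations hold exactly

# Over-determined: 4 equations, 2 unknowns -> least-squares solution
A_od = rng.standard_normal((4, 2))
b_od = rng.standard_normal(4)
x_od = np.linalg.pinv(A_od) @ b_od
residual = b_od - A_od @ x_od              # generally nonzero, like the
print(np.linalg.norm(residual))            # tropical residual of an OD system
```

The least-squares residual is orthogonal to the column space of the matrix, which is why an OD system absorbs inconsistency into a residual rather than into the solution.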
NASA Astrophysics Data System (ADS)
Zhang, Ya; Tian, Yu-Ping
2010-11-01
This article studies the consensus problem for a group of sampled-data general linear dynamical agents over random communication networks. Dynamic output feedback protocols are applied to solve the consensus problem. When the sampling period is sufficiently small, it is shown that as long as the mean topology has globally reachable nodes, mean square consensus can be achieved by selecting protocol parameters so that n - 1 specified subsystems are simultaneously stabilised. However, when the sampling period is comparatively large, it is revealed that, unlike for low-order integrator multi-agent systems, the consensus problem may be unsolvable. By using hybrid dynamical system theory, an allowable upper bound on the sampling period is further proposed. Two approaches to designing protocols are also provided. Simulations are given to illustrate the validity of the proposed approaches.
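The role of the sampling period can be seen already in the simplest setting: single-integrator agents under the sampled protocol x[k+1] = (I − hL)x[k]. This toy sketch (simple integrators on a fixed path graph, not the general linear dynamics and random networks of the paper) shows a small period reaching consensus and a large one destabilizing the protocol.

```python
import numpy as np

# Sampled-data consensus for single-integrator agents on a path graph.
# Stability requires |1 - h*lambda| < 1 for every nonzero Laplacian
# eigenvalue lambda; here the eigenvalues are 0, 1, 3, so h < 2/3.

L = np.array([[ 1, -1,  0],
              [-1,  2, -1],
              [ 0, -1,  1]], dtype=float)   # path-graph Laplacian

def run(h, steps=200):
    x = np.array([1.0, 5.0, -2.0])          # initial agent states
    for _ in range(steps):
        x = x - h * (L @ x)                 # one sampling period
    return x

x_small = run(h=0.1)    # converges: agents agree on the average
x_large = run(h=0.8)    # diverges: 1 - 0.8*3 = -1.4 lies outside (-1, 1)
print(np.ptp(x_small), np.ptp(x_large))     # spread of agent states
```

The allowable upper bound on h derived in the article plays the same role as the 2/λ_max threshold visible here, but for general agent dynamics and random topologies.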
A Bayesian approach for inducing sparsity in generalized linear models with multi-category response
2015-01-01
Background The dimension and complexity of high-throughput gene expression data create many challenges for downstream analysis. Several approaches exist to reduce the number of variables with respect to small sample sizes. In this study, we utilized the Generalized Double Pareto (GDP) prior to induce sparsity in a Bayesian Generalized Linear Model (GLM) setting. The approach was evaluated using a publicly available microarray dataset containing 99 samples corresponding to four different prostate cancer subtypes. Results A hierarchical Sparse Bayesian GLM using GDP prior (SBGG) was developed to take into account the progressive nature of the response variable. We obtained an average overall classification accuracy between 82.5% and 94%, which was higher than Support Vector Machine, Random Forest or a Sparse Bayesian GLM using double exponential priors. Additionally, SBGG outperforms the other 3 methods in correctly identifying pre-metastatic stages of cancer progression, which can prove extremely valuable for therapeutic and diagnostic purposes. Importantly, using Geneset Cohesion Analysis Tool, we found that the top 100 genes produced by SBGG had an average functional cohesion p-value of 2.0E-4 compared to 0.007 to 0.131 produced by the other methods. Conclusions Using GDP in a Bayesian GLM model applied to cancer progression data results in better subclass prediction. In particular, the method identifies pre-metastatic stages of prostate cancer with substantially better accuracy and produces more functionally relevant gene sets. PMID:26423345
Unification of the general non-linear sigma model and the Virasoro master equation
Boer, J. de; Halpern, M.B. |
1997-06-01
The Virasoro master equation describes a large set of conformal field theories known as the affine-Virasoro constructions, in the operator algebra (affine Lie algebra) of the WZW model, while the Einstein equations of the general non-linear sigma model describe another large set of conformal field theories. This talk summarizes recent work which unifies these two sets of conformal field theories, together with a presumably large class of new conformal field theories. The basic idea is to consider spin-two operators of the form L_ij ∂x^i ∂x^j in the background of a general sigma model. The requirement that these operators satisfy the Virasoro algebra leads to a set of equations called the unified Einstein-Virasoro master equation, in which the spin-two spacetime field L_ij couples to the usual spacetime fields of the sigma model. The one-loop form of this unified system is presented, and some of its algebraic and geometric properties are discussed.
On a general theory for compressing process and aeroacoustics: linear analysis
NASA Astrophysics Data System (ADS)
Mao, F.; Shi, Y. P.; Wu, J. Z.
2010-06-01
Of the three mutually coupled fundamental processes (shearing, compressing, and thermal) in a general fluid motion, only the general formulation for the compressing process and a subprocess of it, the subject of aeroacoustics, as well as their physical coupling with the shearing and thermal processes, have so far not reached a consensus. This situation has caused difficulties for in-depth diagnosis of complex multiprocess flows, optimal configuration design, and flow/noise control. As a first step toward the desired formulation in the fully nonlinear regime, this paper employs the operator factorization method to revisit the analytic linear theories of the fundamental processes and their decomposition, especially the further splitting of the compressing process into acoustic and entropy modes, developed from the 1940s to the 1980s. The flow treated here consists of small disturbances of a compressible, viscous, and heat-conducting polytropic gas in an unbounded domain with an arbitrary source of mass, external body force, and heat addition. Previous results are thereby revised and extended to a complete and unified theory. The theory provides a necessary basis and valuable guidance for developing the corresponding nonlinear theory by clarifying certain basic issues, such as the proper choice of characteristic variables of the compressing process and the features of their governing equations.
Generalized linear transport theory in dilute neutral gases and dispersion relation of sound waves.
Bendib, A; Bendib-Kalache, K; Gombert, M M; Imadouchene, N
2006-10-01
The transport processes in dilute neutral gases are studied by using the kinetic equation with a collision relaxation model that meets all conservation requirements. The kinetic equation is solved keeping the whole anisotropic part of the distribution function through the use of continued fractions. The conservation laws of the collision operator are taken into account with projection operator techniques. The generalized heat flux and stress tensor are calculated in the linear approximation, as functions of the lower moments, i.e., the density, the flow velocity and the temperature. The results obtained are valid for arbitrary collision frequency ν with respect to kv_t and the characteristic frequency ω, where k⁻¹ is the characteristic length scale of the system and v_t is the thermal velocity. The transport coefficients constitute accurate closure relations for the generalized hydrodynamic equations. An application to the dispersion and attenuation of sound waves in the whole collisionality regime is presented. The results obtained are in very good agreement with the experimental data. PMID:17155048
Canary, Jana D; Blizzard, Leigh; Barry, Ronald P; Hosmer, David W; Quinn, Stephen J
2016-05-01
Generalized linear models (GLM) with a canonical logit link function are the primary modeling technique used to relate a binary outcome to predictor variables. However, noncanonical links can offer more flexibility, producing convenient analytical quantities (e.g., probit GLMs in toxicology) and desired measures of effect (e.g., relative risk from log GLMs). Many summary goodness-of-fit (GOF) statistics exist for logistic GLM. Their properties make the development of GOF statistics relatively straightforward, but it can be more difficult under noncanonical links. Although GOF tests for logistic GLM with continuous covariates (GLMCC) have been applied to GLMCCs with log links, we know of no GOF tests in the literature specifically developed for GLMCCs that can be applied regardless of the link function chosen. We generalize the Tsiatis GOF statistic (TG), originally developed for logistic GLMCCs, so that it can be applied under any link function. Further, we show that the algebraically related Hosmer-Lemeshow (HL) and Pigeon-Heyse (J²) statistics can be applied directly. In a simulation study, TG, HL, and J² were used to evaluate the fit of probit, log-log, complementary log-log, and log models, all calculated with a common grouping method. The TG statistic consistently maintained Type I error rates, while those of HL and J² were often lower than expected if terms with little influence were included. Generally, the statistics had similar power to detect an incorrect model. An exception occurred when a log GLMCC was incorrectly fit to data generated from a logistic GLMCC. In this case, TG had more power than HL or J². PMID:26584470
Validity of tests under covariate-adaptive biased coin randomization and generalized linear models.
Shao, Jun; Yu, Xinxin
2013-12-01
Some covariate-adaptive randomization methods have been used in clinical trials for a long time, but little theoretical work had been done about testing hypotheses under covariate-adaptive randomization until Shao et al. (2010), who provided a theory with detailed discussion for responses under linear models. In this article, we establish some asymptotic results for covariate-adaptive biased coin randomization under generalized linear models with possibly unknown link functions. We show that the simple t-test without using any covariate is conservative under covariate-adaptive biased coin randomization in terms of its Type I error rate, and that a valid test using the bootstrap can be constructed. This bootstrap test, utilizing covariates in the randomization scheme, is shown to be asymptotically as efficient as Wald's test correctly using covariates in the analysis. Thus, the efficiency loss due to not using covariates in the analysis can be recovered by utilizing covariates in covariate-adaptive biased coin randomization. Our theory is illustrated with the two most popular types of discrete outcomes, binary responses and event counts under the Poisson model, and with exponentially distributed continuous responses. We also show that an alternative simple test without using any covariate under the Poisson model has an inflated Type I error rate under simple randomization, but is valid under covariate-adaptive biased coin randomization. Effects on the validity of tests due to model misspecification are also discussed. Simulation studies about the Type I errors and powers of several tests are presented for both discrete and continuous responses. PMID:23848580
Power Calculations for General Linear Multivariate Models Including Repeated Measures Applications.
Muller, Keith E; Lavange, Lisa M; Ramey, Sharon Landesman; Ramey, Craig T
1992-12-01
Recently developed methods for power analysis expand the options available for study design. We demonstrate how easily the methods can be applied by (1) reviewing their formulation and (2) describing their application in the preparation of a particular grant proposal. The focus is a complex but ubiquitous setting: repeated measures in a longitudinal study. Describing the development of the research proposal allows demonstrating the steps needed to conduct an effective power analysis. Discussion of the example also highlights issues that typically must be considered in designing a study. First, we discuss the motivation for using detailed power calculations, focusing on multivariate methods in particular. Second, we survey available methods for the general linear multivariate model (GLMM) with Gaussian errors and recommend those based on F approximations. The treatment includes coverage of the multivariate and univariate approaches to repeated measures, MANOVA, ANOVA, multivariate regression, and univariate regression. Third, we describe the design of the power analysis for the example, a longitudinal study of a child's intellectual performance as a function of mother's estimated verbal intelligence. Fourth, we present the results of the power calculations. Fifth, we evaluate the tradeoffs in using reduced designs and tests to simplify power calculations. Finally, we discuss the benefits and costs of power analysis in the practice of statistics. We make three recommendations: (1) align the design and hypothesis of the power analysis with the planned data analysis, as best as practical; (2) embed any power analysis in a defensible sensitivity analysis; (3) have the extent of the power analysis reflect the ethical, scientific, and monetary costs. We conclude that power analysis catalyzes the interaction of statisticians and subject matter specialists. Using the recent advances for power analysis in linear models can further invigorate the interaction. PMID:24790282
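The F-approximation approach the authors recommend reduces, in the simplest balanced case, to evaluating the noncentral F distribution. A hedged sketch for a one-way ANOVA (the effect size, group count, and sample sizes below are hypothetical, not those of the grant proposal described in the article):

```python
from scipy import stats

# Power of the overall one-way ANOVA F test via the noncentral F
# distribution; f2 is Cohen's f^2 effect size. This is the univariate
# special case of the GLMM power methods surveyed in the article.

def anova_power(f2, n_per_group, k_groups, alpha=0.05):
    df1 = k_groups - 1
    df2 = k_groups * (n_per_group - 1)
    nc = f2 * k_groups * n_per_group              # noncentrality parameter
    f_crit = stats.f.ppf(1 - alpha, df1, df2)     # central-F critical value
    return 1 - stats.ncf.cdf(f_crit, df1, df2, nc)

# Power rises with sample size for a fixed (medium) effect size
for n in (10, 20, 40):
    print(n, round(anova_power(f2=0.0625, n_per_group=n, k_groups=3), 3))
```

Embedding such a calculation in a loop over plausible effect sizes gives exactly the "defensible sensitivity analysis" the authors recommend.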
Development and validation of a general purpose linearization program for rigid aircraft models
NASA Technical Reports Server (NTRS)
Duke, E. L.; Antoniewicz, R. F.
1985-01-01
A FORTRAN program that provides the user with a powerful and flexible tool for the linearization of aircraft models is discussed. The program LINEAR numerically determines a linear systems model using nonlinear equations of motion and a user-supplied, nonlinear aerodynamic model. The system model determined by LINEAR consists of matrices for both the state and observation equations. The program has been designed to allow easy selection and definition of the state, control, and observation variables to be used in a particular model. Also, included in the report is a comparison of linear and nonlinear models for a high performance aircraft.
MGMRES: A generalization of GMRES for solving large sparse nonsymmetric linear systems
Young, D.M.; Chen, J.Y.
1994-12-31
The authors are concerned with the solution of the linear system (1): Au = b, where A is a real square nonsingular matrix which is large, sparse and nonsymmetric. They consider the use of Krylov subspace methods. They first choose an initial approximation u^(0) to the solution ū = A⁻¹b of (1). They also choose an auxiliary matrix Z which is nonsingular. For n = 1, 2, … they determine u^(n) such that u^(n) − u^(0) ∈ K_n(r^(0), A), where K_n(r^(0), A) is the (Krylov) subspace spanned by the Krylov vectors r^(0), Ar^(0), …, A^(n−1)r^(0) and where r^(0) = b − Au^(0). If ZA is SPD they also require that (u^(n) − ū, ZA(u^(n) − ū)) be minimized. If, on the other hand, ZA is not SPD, then they require that the Galerkin condition, (Zr^(n), v) = 0, be satisfied for all v ∈ K_n(r^(0), A), where r^(n) = b − Au^(n). In this paper the authors consider a generalization of GMRES. This generalized method, which they refer to as "MGMRES", is very similar to GMRES except that they let Z = A^T Y, where Y is a nonsingular matrix which is symmetric but not necessarily SPD.
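Plain GMRES, the method being generalized, can be written out in a few lines: an Arnoldi process builds the Krylov basis, and a small least-squares problem on the Hessenberg matrix minimizes the residual norm over that subspace. This sketch is standard GMRES without restarting or preconditioning, not MGMRES itself.

```python
import numpy as np

# Minimal GMRES: Arnoldi iteration plus a least-squares solve on the
# Hessenberg matrix H, minimizing ||b - A x|| over x0 + K_n(r0, A).

def gmres(A, b, x0=None, tol=1e-10, max_iter=None):
    n = len(b)
    max_iter = max_iter or n
    x0 = np.zeros(n) if x0 is None else x0
    r0 = b - A @ x0
    beta = np.linalg.norm(r0)
    if beta < tol:
        return x0
    Q = np.zeros((n, max_iter + 1))          # orthonormal Krylov basis
    H = np.zeros((max_iter + 1, max_iter))   # upper Hessenberg matrix
    Q[:, 0] = r0 / beta
    x = x0
    for j in range(max_iter):
        w = A @ Q[:, j]                      # Arnoldi step
        for i in range(j + 1):
            H[i, j] = Q[:, i] @ w
            w -= H[i, j] * Q[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        e1 = np.zeros(j + 2); e1[0] = beta   # minimize ||beta*e1 - H y||
        y, *_ = np.linalg.lstsq(H[:j + 2, :j + 1], e1, rcond=None)
        x = x0 + Q[:, :j + 1] @ y
        if np.linalg.norm(b - A @ x) < tol:
            return x
        if H[j + 1, j] < 1e-14:              # happy breakdown
            return x
        Q[:, j + 1] = w / H[j + 1, j]
    return x

A = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, -1.0],
              [2.0, 0.0, 5.0]])              # small nonsymmetric test matrix
b = np.array([1.0, 2.0, 3.0])
x = gmres(A, b)
print(np.linalg.norm(b - A @ x))
```

MGMRES replaces the residual-norm minimization with a Z-weighted condition (Z = A^T Y), but the Arnoldi machinery above is shared.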
NASA Astrophysics Data System (ADS)
Elliott, J.; de Souza, R. S.; Krone-Martins, A.; Cameron, E.; Ishida, E. E. O.; Hilbe, J.
2015-04-01
Machine learning techniques offer a precious tool box for use within astronomy to solve problems involving so-called big data. They provide a means to make accurate predictions about a particular system without prior knowledge of the underlying physical processes of the data. In this article, and the companion papers of this series, we present the set of Generalized Linear Models (GLMs) as a fast alternative method for tackling general astronomical problems, including the ones related to the machine learning paradigm. To demonstrate the applicability of GLMs to inherently positive and continuous physical observables, we explore their use in estimating the photometric redshifts of galaxies from their multi-wavelength photometry. Using the gamma family with a log link function we predict redshifts from the PHoto-z Accuracy Testing simulated catalogue and a subset of the Sloan Digital Sky Survey from Data Release 10. We obtain fits that result in catastrophic outlier rates as low as ∼1% for simulated and ∼2% for real data. Moreover, we can easily obtain such levels of precision within a matter of seconds on a normal desktop computer and with training sets that contain merely thousands of galaxies. Our software is made publicly available as a user-friendly package developed in Python, R and via an interactive web application. This software allows users to apply a set of GLMs to their own photometric catalogues and generates publication quality plots with minimum effort. By facilitating their ease of use to the astronomical community, this paper series aims to make GLMs widely known and to encourage their implementation in future large-scale projects, such as the Large Synoptic Survey Telescope.
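The gamma-family, log-link GLM used above has an especially clean fitting loop: with a log link, the gamma IRLS weights are identically one, so each step is an ordinary least-squares fit to the working response z = η + (y − μ)/μ. A bare-bones sketch on synthetic data (not the authors' published package or the PHAT/SDSS catalogues):

```python
import numpy as np

# IRLS for a gamma GLM with log link on synthetic data. Because the
# weights are identically 1 for this family/link pair, each iteration
# is a plain least-squares fit to the working response.

rng = np.random.default_rng(1)
n = 2000
X = np.column_stack([np.ones(n), rng.uniform(-1, 1, n)])  # toy "photometry"
beta_true = np.array([0.5, 1.2])
mu_true = np.exp(X @ beta_true)
y = rng.gamma(shape=20.0, scale=mu_true / 20.0)           # positive response

beta = np.zeros(2)
for _ in range(25):                                       # IRLS iterations
    eta = X @ beta
    mu = np.exp(eta)
    z = eta + (y - mu) / mu                               # working response
    beta, *_ = np.linalg.lstsq(X, z, rcond=None)

print(beta)   # should be close to beta_true
```

The speed the authors report follows from this structure: each IRLS step is a single linear solve, so even large catalogues fit in seconds.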
Geedipally, Srinivas Reddy; Lord, Dominique; Dhavala, Soma Sekhar
2012-03-01
There has been a considerable amount of work devoted by transportation safety analysts to the development and application of new and innovative models for analyzing crash data. One important characteristic about crash data that has been documented in the literature is related to datasets that contain a large amount of zeros and a long or heavy tail (which creates highly dispersed data). For such datasets, the number of sites where no crash is observed is so large that traditional distributions and regression models, such as the Poisson and Poisson-gamma or negative binomial (NB) models, cannot be used efficiently. To overcome this problem, the NB-Lindley (NB-L) distribution has recently been introduced for analyzing count data that are characterized by excess zeros. The objective of this paper is to document the application of a NB generalized linear model with Lindley mixed effects (NB-L GLM) for analyzing traffic crash data. The study objective was accomplished using simulated and observed datasets. The simulated dataset was used to show the general performance of the model. The model was then applied to two datasets based on observed data, one of which was characterized by a large amount of zeros. The NB-L GLM was compared with the NB and zero-inflated models. Overall, the research study shows that the NB-L GLM offers superior performance over the NB and zero-inflated models both when datasets are characterized by a large number of zeros and a long tail and when the crash dataset is highly dispersed. PMID:22269508
Generalized Jeans' Escape of Pick-Up Ions in Quasi-Linear Relaxation
NASA Technical Reports Server (NTRS)
Moore, T. E.; Khazanov, G. V.
2011-01-01
Jeans escape is a well-validated formulation of upper atmospheric escape that we have generalized to estimate plasma escape from ionospheres. It involves the computation of the parts of particle velocity space that are unbound by the gravitational potential at the exobase, followed by a calculation of the flux carried by such unbound particles as they escape from the potential well. To generalize this approach for ions, we superposed an electrostatic ambipolar potential and a centrifugal potential, for motions across and along a divergent magnetic field. We then considered how the presence of superthermal electrons, produced by precipitating auroral primary electrons, controls the ambipolar potential. We also showed that the centrifugal potential plays a small role in controlling the mass escape flux from the terrestrial ionosphere. We then applied the transverse ion velocity distribution produced when ions, picked up by supersonic (i.e., auroral) ionospheric convection, relax via quasi-linear diffusion, as estimated for cometary comas [1]. The results provide a theoretical basis for observed ion escape response to electromagnetic and kinetic energy sources. They also suggest that super-sonic but sub-Alfvenic flow, with ion pick-up, is a unique and important regime of ion-neutral coupling, in which plasma wave-particle interactions are driven by ion-neutral collisions at densities for which the collision frequency falls near or below the gyro-frequency. As another possible illustration of this process, the heliopause ribbon discovered by the IBEX mission involves interactions between the solar wind ions and the interstellar neutral gas, in a regime that may be analogous [2].
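The classical (neutral) Jeans escape flux that this work generalizes is controlled by a single dimensionless number, the Jeans parameter λ = v_esc²/v_mp² = GMm/(kTr) evaluated at the exobase. A sketch with rough, illustrative figures for atomic hydrogen at Earth's exobase (the density, temperature, and exobase radius below are order-of-magnitude assumptions, not values from the paper):

```python
import math

# Classical Jeans escape flux at the exobase: the unbound tail of the
# Maxwellian carries a flux (n v_mp / 2 sqrt(pi)) (1 + lam) exp(-lam),
# where lam is the Jeans parameter. Inputs are illustrative estimates
# for atomic hydrogen at Earth.

G  = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
kB = 1.381e-23        # Boltzmann constant, J/K
M_earth = 5.972e24    # kg
m_H = 1.674e-27       # hydrogen atom mass, kg

def jeans_flux(n_exo, T, r_exo, m, M=M_earth):
    """Return (escape flux in particles m^-2 s^-1, Jeans parameter)."""
    v_mp = math.sqrt(2 * kB * T / m)          # most probable speed
    lam = G * M * m / (kB * T * r_exo)        # Jeans parameter
    flux = n_exo * v_mp / (2 * math.sqrt(math.pi)) * (1 + lam) * math.exp(-lam)
    return flux, lam

flux, lam = jeans_flux(n_exo=1e11, T=1000.0, r_exo=6.871e6, m=m_H)
print(lam, flux)
```

The generalization in the abstract amounts to adding ambipolar and centrifugal terms to the gravitational potential that defines λ, then integrating the pick-up-ion velocity distribution over the unbound region instead of a Maxwellian.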
Fast inference in generalized linear models via expected log-likelihoods.
Ramirez, Alexandro D; Paninski, Liam
2014-04-01
Generalized linear models play an essential role in a wide variety of statistical applications. This paper discusses an approximation of the likelihood in these models that can greatly facilitate computation. The basic idea is to replace a sum that appears in the exact log-likelihood by an expectation over the model covariates; the resulting "expected log-likelihood" can in many cases be computed significantly faster than the exact log-likelihood. In many neuroscience experiments the distribution over model covariates is controlled by the experimenter and the expected log-likelihood approximation becomes particularly useful; for example, estimators based on maximizing this expected log-likelihood (or a penalized version thereof) can often be obtained with orders of magnitude computational savings compared to the exact maximum likelihood estimators. A risk analysis establishes that these maximum EL estimators often come with little cost in accuracy (and in some cases even improved accuracy) compared to standard maximum likelihood estimates. Finally, we find that these methods can significantly decrease the computation time of marginal likelihood calculations for model selection and of Markov chain Monte Carlo methods for sampling from the posterior parameter distribution. We illustrate our results by applying these methods to a computationally-challenging dataset of neural spike trains obtained via large-scale multi-electrode recordings in the primate retina.
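The approximation at the heart of this paper can be checked numerically in the Poisson-GLM case with an exponential link and Gaussian covariates: the data-dependent term Σᵢ exp(xᵢ·β) in the log-likelihood is replaced by N·E[exp(x·β)], which has the closed form N·exp(β·μ + β᾽Σβ/2). A sketch on synthetic covariates (the specific distribution parameters are illustrative):

```python
import numpy as np

# Expected-log-likelihood approximation for a Poisson GLM: replace the
# O(n) sum of exp(x_i . beta) with its O(1) Gaussian expectation
# n * exp(beta . mu + beta' Sigma beta / 2).

rng = np.random.default_rng(2)
n, d = 50000, 3
mu = np.zeros(d)
Sigma = np.eye(d) * 0.25
X = rng.multivariate_normal(mu, Sigma, size=n)     # known covariate law
beta = np.array([0.3, -0.2, 0.1])

exact = np.sum(np.exp(X @ beta))                   # O(n) per evaluation
expected = n * np.exp(beta @ mu + 0.5 * beta @ Sigma @ beta)  # O(1)

print(exact / expected)   # ratio close to 1 for large n
```

Because this term (and its gradient) dominates the cost of each likelihood evaluation, swapping in the closed form yields the orders-of-magnitude savings reported for maximum-EL estimation.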
Garcia, J M; Teodoro, F; Cerdeira, R; Coelho, L M R; Kumar, Prashant; Carvalho, M G
2016-09-01
A methodology to predict PM10 concentrations in urban outdoor environments is developed based on generalized linear models (GLMs). The methodology is based on the relationship developed between atmospheric concentrations of air pollutants (i.e. CO, NO2, NOx, VOCs, SO2) and meteorological variables (i.e. ambient temperature, relative humidity (RH) and wind speed) for the city of Barreiro, Portugal. The model uses air pollution and meteorological data from the Portuguese air quality monitoring station networks. The developed GLM considers PM10 concentrations as the dependent variable, and both the gaseous pollutants and meteorological variables as explanatory independent variables. A logarithmic link function was used with a Poisson probability distribution. Particular attention was given to cases with air temperatures below and above 25°C. The best performance of modelled results against measured data was achieved by the model restricted to air temperatures above 25°C, compared with the model covering all air temperatures and the model restricted to temperatures below 25°C. The model was also tested with similar data from another Portuguese city, Oporto, and the results were found to behave similarly. It is concluded that this model and methodology could be adopted for other cities to predict PM10 concentrations when such data are not available from air quality monitoring stations or other acquisition means. PMID:26839052
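As a sketch of the model class used here, a Poisson GLM with a logarithmic link can be fitted by iteratively reweighted least squares. The numpy-only example below does this on synthetic data; the predictor names and coefficient values are invented for illustration and do not come from the study.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2000
no2  = rng.gamma(2.0, 10.0, n)    # stand-in for a gaseous pollutant
temp = rng.uniform(5.0, 35.0, n)  # stand-in for air temperature
wind = rng.gamma(2.0, 1.5, n)     # stand-in for wind speed
X = np.column_stack([np.ones(n), no2, temp, wind])

beta_true = np.array([2.0, 0.01, 0.02, -0.05])
pm10 = rng.poisson(np.exp(X @ beta_true))  # Poisson counts, log link

def fit_poisson_glm(X, y, n_iter=25):
    """Poisson GLM with log link, fitted by iteratively reweighted least squares."""
    # Initialize from a rough linear fit on the log scale
    beta = np.linalg.lstsq(X, np.log(y + 1.0), rcond=None)[0]
    for _ in range(n_iter):
        mu = np.exp(np.clip(X @ beta, -20.0, 20.0))  # fitted means
        z = X @ beta + (y - mu) / mu                 # working response
        beta = np.linalg.solve(X.T @ (mu[:, None] * X), X.T @ (mu * z))
    return beta

beta_hat = fit_poisson_glm(X, pm10)
```

With enough data the IRLS estimates recover the generating coefficients closely, which is the basic sanity check before applying such a model to measured pollutant data.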
Maximal freedom at minimum cost: linear large-scale structure in general modifications of gravity
Bellini, Emilio; Sawicki, Ignacy E-mail: ignacy.sawicki@outlook.com
2014-07-01
We present a turnkey solution, ready for implementation in numerical codes, for the study of linear structure formation in general scalar-tensor models involving a single universally coupled scalar field. We show that the totality of cosmological information on the gravitational sector can be compressed, without any redundancy, into five independent and arbitrary functions of time and one constant. These describe physical properties of the universe: the observable background expansion history, the fractional matter density today, and four functions of time describing the properties of the dark energy. We show that two of those dark-energy property functions control the existence of anisotropic stress, while the other two control dark-energy clustering, both of which can be scale-dependent. All these properties can in principle be measured, but no information on the underlying theory of acceleration beyond this can be obtained. We present a translation between popular models of late-time acceleration (e.g. perfect fluids, f(R), kinetic gravity braiding, galileons), as well as the effective field theory framework, and our formulation. In this way, implementing this formulation numerically would give a single tool which could consistently test the majority of models of late-time acceleration proposed heretofore.
Use of the generalized linear models in data related to dental caries index.
Javali, S B; Pandit, Parameshwar V
2007-01-01
The aim of this study is to encourage and initiate the application of generalized linear models (GLMs) in the analysis of the covariates of decayed, missing, and filled teeth (DMFT) index data, which are not necessarily normally distributed. GLMs can be fitted under many underlying distributions; here, the Poisson distribution with a log link function and the binomial distribution with logit and probit link functions are considered. The Poisson model is used for modeling the DMFT index data, and the logit and probit models are employed to model the dichotomous outcome of DMFT = 0 versus DMFT > 0 (caries free/caries present). The data comprised 7188 subjects aged 18-30 years from the study on the oral health status of Karnataka state conducted by SDM College of Dental Sciences and Hospital, Dharwad, Karnataka, India. The Poisson model and the binomial (logit and probit) models produced dissimilar results at the 5% level of significance (P < 0.05). The binomial models were a poor fit, whereas the Poisson model showed a good fit for the DMFT index data. Therefore, a suitable modeling approach for DMFT index data is to use a Poisson model for the DMFT response and a binomial model for the caries-free versus caries-present outcome (DMFT = 0 versus DMFT > 0). These GLMs allow separate estimation of those covariates which influence the magnitude of caries. PMID:17938491
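For the dichotomous part of such an analysis (caries free versus caries present), a binomial GLM with a logit link can be fitted by Newton-Raphson. The sketch below uses synthetic data with an invented age effect; it illustrates the model class only, not the study's actual analysis.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 3000
age = rng.uniform(18, 30, n)          # covariate (illustrative)
lam = np.exp(-4.0 + 0.15 * age)       # assumed DMFT mean rising with age
dmft = rng.poisson(lam)               # DMFT counts
y = (dmft > 0).astype(float)          # caries present (1) vs caries free (0)
X = np.column_stack([np.ones(n), age])

def fit_logit(X, y, n_iter=50):
    """Binomial GLM with logit link, fitted by Newton-Raphson."""
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-(X @ beta)))   # fitted probabilities
        W = p * (1.0 - p)                       # Newton weights
        H = X.T @ (W[:, None] * X) + 1e-8 * np.eye(X.shape[1])
        beta += np.linalg.solve(H, X.T @ (y - p))
    return beta

beta_hat = fit_logit(X, y)

def prob(age_value):
    # Fitted probability of caries presence at a given age
    return 1.0 / (1.0 + np.exp(-(beta_hat[0] + beta_hat[1] * age_value)))
```

The fitted logit curve recovers the increasing risk with age built into the simulation, the same qualitative check one would make before interpreting covariate effects on real DMFT data.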
NASA Astrophysics Data System (ADS)
García-Díaz, J. Carlos
2009-11-01
Fault detection and diagnosis is an important problem in process engineering, as process equipment is subject to malfunctions during operation. Galvanized steel is a value-added product, furnishing effective performance by combining the corrosion resistance of zinc with the strength and formability of steel. Fault detection and diagnosis is particularly important in continuous hot dip galvanizing, and the increasingly stringent quality requirements of the automotive industry have also demanded ongoing efforts in process control to make the process more robust. When faults occur, they change the relationships among the observed process variables. This work compares different statistical regression models proposed in the literature for estimating the quality of galvanized steel coils on the basis of short time histories. Data for 26 batches were available. Five variables were selected for monitoring the process: the steel strip velocity, four bath temperatures and the bath level. The entire data set, consisting of 48 galvanized steel coils, was divided into two sets: a first set of 25 conforming coils and a second set of 23 nonconforming coils. Logistic regression is a modeling tool in which the dependent variable is categorical; in most applications, the dependent variable is binary. The results show that logistic generalized linear models provide good estimates of coil quality and can be useful for quality control in the manufacturing process.
A general parallel sparse-blocked matrix multiply for linear scaling SCF theory
NASA Astrophysics Data System (ADS)
Challacombe, Matt
2000-06-01
A general approach to the parallel sparse-blocked matrix-matrix multiply is developed in the context of linear scaling self-consistent-field (SCF) theory. The data-parallel message passing method uses non-blocking communication to overlap computation and communication. The space filling curve heuristic is used to achieve data locality for sparse matrix elements that decay with “separation”. Load balance is achieved by solving the bin packing problem for blocks with variable size. With this new method as the kernel, parallel performance of the simplified density matrix minimization (SDMM) for solution of the SCF equations is investigated for RHF/6-31G** water clusters and RHF/3-21G estane globules. Sustained rates above 5.7 GFLOPS for the SDMM have been achieved for (H2O)200 with 95 Origin 2000 processors. Scalability is found to be limited by load imbalance, which increases with decreasing granularity, due primarily to the inhomogeneous distribution of variable block sizes.
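The sparse-blocked multiply itself (without the parallel messaging layer) can be sketched serially as follows; the dictionary-of-blocks storage and the norm-based screening threshold are simplifications assumed for illustration.

```python
import numpy as np

def sparse_block_multiply(A, B, nblocks, tol=1e-8):
    """Multiply two sparse-blocked matrices stored as {(i, j): dense block},
    dropping product blocks whose Frobenius norm falls below tol - a serial
    stand-in for the decay-based screening used in linear scaling SCF methods."""
    C = {}
    for (i, k), Aik in A.items():
        for j in range(nblocks):
            Bkj = B.get((k, j))
            if Bkj is not None:
                C[(i, j)] = C.get((i, j), 0.0) + Aik @ Bkj
    return {ij: blk for ij, blk in C.items() if np.linalg.norm(blk) > tol}

def to_dense(M, nblocks, bs):
    """Assemble a dict-of-blocks matrix into one dense array (for checking)."""
    D = np.zeros((nblocks * bs, nblocks * bs))
    for (i, j), blk in M.items():
        D[i * bs:(i + 1) * bs, j * bs:(j + 1) * bs] = blk
    return D

# Example: a block-tridiagonal matrix, mimicking elements that decay with separation
rng = np.random.default_rng(3)
nb, bs = 4, 3
A = {(i, j): rng.standard_normal((bs, bs))
     for i in range(nb) for j in range(nb) if abs(i - j) <= 1}
C = sparse_block_multiply(A, A, nb)
```

Only blocks that exist on both sides ever touch the floating-point units, which is where the linear scaling of such kernels comes from when the matrices are sufficiently sparse.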
Master equation solutions in the linear regime of characteristic formulation of general relativity
NASA Astrophysics Data System (ADS)
Cedeño M., C. E.; de Araujo, J. C. N.
2015-12-01
From the field equations in the linear regime of the characteristic formulation of general relativity, Bishop, for a Schwarzschild background, and Mädler, for a Minkowski background, were able to show that it is possible to derive a fourth-order ordinary differential equation, called the master equation, for the J metric variable of the Bondi-Sachs metric. Once β, another Bondi-Sachs potential, is obtained from the field equations, and J is obtained from the master equation, the other metric variables are found by directly integrating the remaining field equations. In the past, the master equation was solved for the first multipolar terms, for both the Minkowski and Schwarzschild backgrounds. Mädler also recently reported a generalisation of the exact solutions to the linearised field equations for a Minkowski background, expressing the family of master-equation solutions for the vacuum in terms of Bessel functions of the first and second kind. Here, we report new solutions to the master equation for any multipolar moment l, with and without matter sources, in terms of Bessel functions of the first kind only for the Minkowski background, and in terms of confluent Heun functions (generalised hypergeometric functions) for the radiative (nonradiative) case in the Schwarzschild background. We particularize our families of solutions to the known cases for l = 2 reported previously in the literature and find complete agreement, showing the robustness of our results.
Assessment of cross-frequency coupling with confidence using generalized linear models
Kramer, M. A.; Eden, U. T.
2013-01-01
Background: Brain voltage activity displays distinct neuronal rhythms spanning a wide frequency range. How rhythms of different frequency interact – and the function of these interactions – remains an active area of research. Many methods have been proposed to assess the interactions between different frequency rhythms, in particular measures that characterize the relationship between the phase of a low frequency rhythm and the amplitude envelope of a high frequency rhythm. However, an optimal analysis method to assess this cross-frequency coupling (CFC) does not yet exist. New Method: Here we describe a new procedure to assess CFC that utilizes the generalized linear modeling (GLM) framework. Results: We illustrate the utility of this procedure in three synthetic examples. The proposed GLM-CFC procedure allows a rapid and principled assessment of CFC with confidence bounds, scales with the intensity of the CFC, and accurately detects biphasic coupling. Comparison with Existing Methods: Compared to existing methods, the proposed GLM-CFC procedure is easily interpretable, possesses confidence intervals that are easy and efficient to compute, and accurately detects biphasic coupling. Conclusions: The GLM-CFC statistic provides a method for accurate and statistically rigorous assessment of CFC. PMID:24012829
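In the GLM-CFC spirit, phase-amplitude coupling can be quantified by regressing the high-frequency amplitude envelope on the sine and cosine of the low-frequency phase. The sketch below uses a known synthetic envelope and an ordinary least-squares (Gaussian) fit; the paper's actual filter choices, GLM family, and confidence-bound construction are not reproduced here, and the frequencies and modulation depth are invented.

```python
import numpy as np

rng = np.random.default_rng(4)
fs = 1000.0
t = np.arange(0.0, 10.0, 1.0 / fs)

phase_low = 2 * np.pi * 6.0 * t                 # phase of a 6 Hz rhythm
true_mod = 0.5                                  # modulation depth (assumed)
env = 1.0 + true_mod * np.cos(phase_low)        # high-frequency amplitude envelope
env = env + 0.05 * rng.standard_normal(t.size)  # observation noise

# Design matrix: intercept plus cos/sin of the low-frequency phase
X = np.column_stack([np.ones_like(t), np.cos(phase_low), np.sin(phase_low)])
beta, *_ = np.linalg.lstsq(X, env, rcond=None)

# Coupling strength: length of the (cos, sin) coefficient vector
cfc = np.hypot(beta[1], beta[2])
```

In practice the envelope would be extracted by band-pass filtering and a Hilbert transform, and the regression coefficients would come with confidence bounds from the GLM machinery.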
Yock, Adam D.; Kudchadker, Rajat J.; Rao, Arvind; Dong, Lei; Beadle, Beth M.; Garden, Adam S.; Court, Laurence E.
2014-05-15
Purpose: The purpose of this work was to develop and evaluate the accuracy of several predictive models of variation in tumor volume throughout the course of radiation therapy. Methods: Nineteen patients with oropharyngeal cancers were imaged daily with CT-on-rails for image-guided alignment per an institutional protocol. The daily volumes of 35 tumors in these 19 patients were determined and used to generate (1) a linear model in which tumor volume changed at a constant rate, (2) a general linear model that utilized the power fit relationship between the daily and initial tumor volumes, and (3) a functional general linear model that identified and exploited the primary modes of variation between time series describing the changing tumor volumes. Primary and nodal tumor volumes were examined separately. The accuracy of these models in predicting daily tumor volumes was compared with that of static and linear reference models using leave-one-out cross-validation. Results: In predicting the daily volume of primary tumors, the general linear model and the functional general linear model were more accurate than the static reference model by 9.9% (range: −11.6%–23.8%) and 14.6% (range: −7.3%–27.5%), respectively, and were more accurate than the linear reference model by 14.2% (range: −6.8%–40.3%) and 13.1% (range: −1.5%–52.5%), respectively. In predicting the daily volume of nodal tumors, only the 14.4% (range: −11.1%–20.5%) improvement in accuracy of the functional general linear model compared to the static reference model was statistically significant. Conclusions: A general linear model and a functional general linear model trained on data from a small population of patients can predict the primary tumor volume throughout the course of radiation therapy with greater accuracy than standard reference models. These more accurate models may increase the prognostic value of information about the tumor garnered from pretreatment computed tomography
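The first two model types can be illustrated on a single synthetic trajectory: a straight-line fit of volume against time versus a power/exponential-type fit obtained on the log scale. The decay rate and noise level below are invented; this is not the patients' data.

```python
import numpy as np

rng = np.random.default_rng(5)
days = np.arange(35, dtype=float)

# Synthetic tumor volume (cm^3): exponential-like shrinkage plus 2% noise
volume = 30.0 * np.exp(-0.03 * days) * (1.0 + 0.02 * rng.standard_normal(days.size))

# (1) Linear model: volume changes at a constant rate
lin = np.polyfit(days, volume, 1)
lin_pred = np.polyval(lin, days)

# (2) Power-fit-style model: linear on the log scale
log_fit = np.polyfit(days, np.log(volume), 1)
exp_pred = np.exp(np.polyval(log_fit, days))

rmse_lin = np.sqrt(np.mean((lin_pred - volume) ** 2))
rmse_exp = np.sqrt(np.mean((exp_pred - volume) ** 2))
```

Because the simulated shrinkage is multiplicative, the log-scale fit tracks the trajectory more closely than the constant-rate line, mirroring the paper's finding that the power-fit general linear model beats the simple linear model.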
NASA Astrophysics Data System (ADS)
Yu, Chih-Jen; Chou, Chien
2011-03-01
An equivalence theory based on a unitary optical system of a generalized elliptical phase retarder was derived, in which the elliptical phase retarder is treated as the equivalent combination of a linear phase retarder and a polarization rotator. Three fundamental parameters, namely the elliptical phase retardation, the azimuth angle, and the ellipticity angle of the fast elliptical eigen-polarization state, were derived. All parameters of a generalized elliptical phase retarder can be determined from the analytical solution for the characteristic parameters of the equivalent optical components: the linear phase retardation and fast-axis angle of the equivalent linear phase retarder, and the rotation angle of the equivalent polarization rotator. In this study, experimental verification was demonstrated by testing a twisted nematic liquid crystal device (TNLCD) treated as a generalized elliptical phase retarder. A dual-frequency heterodyne ellipsometer was set up, and the experimental results demonstrate the capability of the equivalence theory for high-sensitivity elliptical birefringence measurement using the heterodyne technique.
Protein structure validation by generalized linear model root-mean-square deviation prediction.
Bagaria, Anurag; Jaravine, Victor; Huang, Yuanpeng J; Montelione, Gaetano T; Güntert, Peter
2012-02-01
Large-scale initiatives for obtaining spatial protein structures by experimental or computational means have accentuated the need for the critical assessment of protein structure determination and prediction methods. These include blind test projects such as the critical assessment of protein structure prediction (CASP) and the critical assessment of protein structure determination by nuclear magnetic resonance (CASD-NMR). An important aim is to establish structure validation criteria that can reliably assess the accuracy of a new protein structure. Various quality measures derived from the coordinates have been proposed. A universal structural quality assessment method should combine multiple individual scores in a meaningful way, which is challenging because of their different measurement units. Here, we present a method based on a generalized linear model (GLM) that combines diverse protein structure quality scores into a single quantity with intuitive meaning, namely the predicted coordinate root-mean-square deviation (RMSD) value between the present structure and the (unavailable) "true" structure (GLM-RMSD). For two sets of structural models from the CASD-NMR and CASP projects, this GLM-RMSD value was compared with the actual accuracy given by the RMSD value to the corresponding, experimentally determined reference structure from the Protein Data Bank (PDB). The correlation coefficients between actual (model vs. reference from PDB) and predicted (model vs. "true") heavy-atom RMSDs were 0.69 and 0.76, for the two datasets from CASD-NMR and CASP, respectively, which is considerably higher than those for the individual scores (-0.24 to 0.68). The GLM-RMSD can thus predict the accuracy of protein structures more reliably than individual coordinate-based quality scores.
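The core idea of combining heterogeneous quality scores into a single RMSD prediction via a linear model can be sketched with synthetic scores; the score definitions and weights below are invented, and an ordinary least-squares fit stands in for the paper's GLM.

```python
import numpy as np

rng = np.random.default_rng(6)
n = 500

rmsd = rng.gamma(2.0, 1.0, n)  # "true" accuracy of each synthetic model

# Two noisy quality scores on different scales, each only weakly predictive
score1 = 0.8 * rmsd + rng.standard_normal(n)
score2 = 100.0 - 5.0 * rmsd + 4.0 * rng.standard_normal(n)

# Linear combination fitted to predict RMSD from both scores jointly
X = np.column_stack([np.ones(n), score1, score2])
w, *_ = np.linalg.lstsq(X, rmsd, rcond=None)
pred = X @ w

def corr(a, b):
    return np.corrcoef(a, b)[0, 1]
```

By construction the fitted combination correlates with the target at least as well as either score alone, mirroring the observation that the combined GLM-RMSD outperforms individual coordinate-based scores.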
Use of generalized linear models and digital data in a forest inventory of Northern Utah
Moisen, G.G.; Edwards, T.C.
1999-01-01
Forest inventories, like those conducted by the Forest Service's Forest Inventory and Analysis Program (FIA) in the Rocky Mountain Region, are under increased pressure to produce better information at reduced costs. Here we describe our efforts in Utah to merge satellite-based information with forest inventory data for the purposes of reducing the costs of estimates of forest population totals and providing spatial depiction of forest resources. We illustrate how generalized linear models can be used to construct approximately unbiased and efficient estimates of population totals while providing a mechanism for prediction in space for mapping of forest structure. We model forest type and timber volume of five tree species groups as functions of a variety of predictor variables in the northern Utah mountains. Predictor variables include elevation, aspect, slope, geographic coordinates, as well as vegetation cover types based on satellite data from both the Advanced Very High Resolution Radiometer (AVHRR) and Thematic Mapper (TM) platforms. We examine the relative precision of estimates of area by forest type and mean cubic-foot volumes under six different models, including the traditional double sampling for stratification strategy. Only very small gains in precision were realized through the use of expensive photointerpreted or TM-based data for stratification, while models based on topography and spatial coordinates alone were competitive. We also compare the predictive capability of the models through various map accuracy measures. The models including the TM-based vegetation performed best overall, while topography and spatial coordinates alone provided substantial information at very low cost.
Power analysis for generalized linear mixed models in ecology and evolution
Johnson, Paul C D; Barry, Sarah J E; Ferguson, Heather M; Müller, Pie
2015-01-01
‘Will my study answer my research question?’ is the most fundamental question a researcher can ask when designing a study, yet when phrased in statistical terms – ‘What is the power of my study?’ or ‘How precise will my parameter estimate be?’ – few researchers in ecology and evolution (EE) try to answer it, despite the detrimental consequences of performing under- or over-powered research. We suggest that this reluctance is due in large part to the unsuitability of simple methods of power analysis (broadly defined as any attempt to quantify prospectively the ‘informativeness’ of a study) for the complex models commonly used in EE research. With the aim of encouraging the use of power analysis, we present simulation from generalized linear mixed models (GLMMs) as a flexible and accessible approach to power analysis that can account for random effects, overdispersion and diverse response distributions. We illustrate the benefits of simulation-based power analysis in two research scenarios: estimating the precision of a survey to estimate tick burdens on grouse chicks and estimating the power of a trial to compare the efficacy of insecticide-treated nets in malaria mosquito control. We provide a freely available R function, sim.glmm, for simulating from GLMMs. Analysis of simulated data revealed that the effects of accounting for realistic levels of random effects and overdispersion on power and precision estimates were substantial, with correspondingly severe implications for study design in the form of up to fivefold increases in sampling effort. We also show the utility of simulations for identifying scenarios where GLMM-fitting methods can perform poorly. These results illustrate the inadequacy of standard analytical power analysis methods and the flexibility of simulation-based power analysis for GLMMs. The wider use of these methods should contribute to improving the quality of study design in EE. PMID:25893088
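The simulate-then-analyze loop behind such power estimates can be sketched without a full GLMM fitter. The numpy-only example below simulates a two-arm trial with Poisson counts and a log-normal cluster random effect, then analyzes each simulated trial with a t-test on cluster-level summaries; this summary-statistics analysis is a crude stand-in for the GLMM fit that sim.glmm would feed in R, and all design parameters are invented.

```python
import numpy as np

rng = np.random.default_rng(7)

def simulate_power(n_clusters=20, n_per=50, effect=0.5, sd_cluster=0.4,
                   n_sims=200):
    """Estimate power by repeated simulation: Poisson counts with a log-normal
    cluster random effect in two arms, each trial analyzed by a two-sample
    t-test on cluster-level log mean counts."""
    crit = 2.024  # approx. t_{0.975} at 38 df (2 * 20 clusters - 2); hard-coded
    hits = 0
    for _ in range(n_sims):
        arm = np.repeat([0, 1], n_clusters)
        u = rng.normal(0.0, sd_cluster, 2 * n_clusters)  # random effects
        lam = np.exp(1.0 - effect * arm + u)             # treated arm lower
        counts = rng.poisson(lam[:, None], (2 * n_clusters, n_per))
        y = np.log(counts.mean(axis=1) + 0.5)            # cluster summaries
        diff = y[arm == 1].mean() - y[arm == 0].mean()
        se = np.sqrt(y[arm == 0].var(ddof=1) / n_clusters +
                     y[arm == 1].var(ddof=1) / n_clusters)
        hits += abs(diff / se) > crit
    return hits / n_sims

power = simulate_power()
```

Setting effect=0.0 recovers the nominal type-I error rate, a useful check that the analysis step is calibrated before trusting the power estimate.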
The linear co-variance between joint muscle torques is not a generalized principle.
Sande de Souza, Luciane Aparecida Pascucci; Dionísio, Valdeci Carlos; Lerena, Mario Adrian Misailidis; Marconi, Nadia Fernanda; Almeida, Gil Lúcio
2009-06-01
In 1996, Gottlieb et al. [Gottlieb GL, Song Q, Hong D, Almeida GL, Corcos DM. Coordinating movement at two joints: A principle of linear covariance. J Neurophysiol 1996;75(4):1760-4] identified a linear co-variance between the joint muscle torques generated at two connected joints. The joint muscle torques changed direction and magnitude in a synchronized and linear fashion, a relationship they called the principle of linear co-variance. Here we show that this principle cannot hold for some classes of movements. Neurologically normal subjects performed multijoint movements involving elbow and shoulder with reversal towards three targets in the sagittal plane without any constraints. The movement kinematics were calculated using the X and Y coordinates of markers positioned over the joints. Inverse dynamics was used to calculate the joint muscle, interaction and net torques. We found that, for the class of voluntary movements analyzed, the joint muscle torques of the elbow and the shoulder were not linearly correlated. The same was observed for the interaction torques. However, the net torques at both joints, i.e., the sums of the interaction and the joint muscle torques, were linearly correlated. We showed that by decoupling the joint muscle torques, while keeping the net torques linearly correlated, the CNS was able to generate fast and accurate movements with straight fingertip paths. The movement paths were typical of those in which the joint muscle torques were linearly correlated.
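The distinction the authors draw can be reproduced on synthetic torque traces: the net torques at the two joints are constructed to be exactly linearly related, while the muscle torques are not. All waveforms and the scaling constant are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(9)
t = np.linspace(0.0, 1.0, 500)

# Elbow: muscle + interaction torque gives the net torque
muscle_elbow = np.sin(2 * np.pi * t) + 0.1 * rng.standard_normal(t.size)
interaction_elbow = 0.5 * np.cos(2 * np.pi * t)
net_elbow = muscle_elbow + interaction_elbow

# Shoulder: net torque co-varies linearly with the elbow net torque,
# but its interaction component decouples the two muscle torques
net_shoulder = -1.3 * net_elbow
interaction_shoulder = np.cos(4 * np.pi * t)
muscle_shoulder = net_shoulder - interaction_shoulder

def corr(a, b):
    return np.corrcoef(a, b)[0, 1]
```

Checking the two correlations shows net torques that are perfectly linearly correlated alongside muscle torques that are not, which is the configuration the paper reports for reversal movements.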
NASA Technical Reports Server (NTRS)
Holdaway, Daniel; Kent, James
2015-01-01
The linearity of a selection of common advection schemes is tested and examined with a view to their use in the tangent linear and adjoint versions of an atmospheric general circulation model. The schemes are tested within a simple offline one-dimensional periodic domain as well as using a simplified and complete configuration of the linearised version of NASA's Goddard Earth Observing System version 5 (GEOS-5). All schemes which prevent the development of negative values and preserve the shape of the solution are confirmed to have nonlinear behaviour. The piecewise parabolic method (PPM) with certain flux limiters, including that used by default in GEOS-5, is found to support linear growth near the shocks. This property can cause the rapid development of unrealistically large perturbations within the tangent linear and adjoint models. It is shown that these schemes with flux limiters should not be used within the linearised version of a transport scheme. The results from tests using GEOS-5 show that the current default scheme (a version of PPM) is not suitable for the tangent linear and adjoint model, and that using a linear third-order scheme for the linearised model produces better behaviour. Using the third-order scheme for the linearised model improves the correlations between the linear and non-linear perturbation trajectories for cloud liquid water and cloud liquid ice in GEOS-5.
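A linearity check of an advection scheme can be made directly: apply the scheme to a linear combination of states and compare with the same combination of the individually advected states. The sketch below contrasts first-order upwind (linear) with a minmod-limited second-order scheme (nonlinear); both are textbook schemes standing in for the ones tested, not the GEOS-5 implementations.

```python
import numpy as np

def upwind_step(u, c):
    """One step of first-order upwind advection on a periodic grid (linear)."""
    return u - c * (u - np.roll(u, 1))

def minmod(a, b):
    return np.where(a * b > 0.0, np.sign(a) * np.minimum(np.abs(a), np.abs(b)), 0.0)

def limited_step(u, c):
    """Second-order upwind step with a minmod flux limiter (nonlinear)."""
    slope = minmod(u - np.roll(u, 1), np.roll(u, -1) - u)  # limited slopes
    face = u + 0.5 * (1.0 - c) * slope                     # upwind face values
    return u - c * (face - np.roll(face, 1))

rng = np.random.default_rng(8)
u1 = rng.standard_normal(64)
u2 = rng.standard_normal(64)
a, b, c = 2.0, -3.0, 0.5

# Linearity residual: S(a*u1 + b*u2) - (a*S(u1) + b*S(u2))
lin_err = np.max(np.abs(upwind_step(a * u1 + b * u2, c)
                        - (a * upwind_step(u1, c) + b * upwind_step(u2, c))))
nonlin_err = np.max(np.abs(limited_step(a * u1 + b * u2, c)
                           - (a * limited_step(u1, c) + b * limited_step(u2, c))))
```

The upwind residual vanishes to rounding error while the limited scheme's residual is order one, which is exactly why flux-limited schemes are problematic inside tangent linear and adjoint models.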
General methods for determining the linear stability of coronal magnetic fields
NASA Technical Reports Server (NTRS)
Craig, I. J. D.; Sneyd, A. D.; Mcclymont, A. N.
1988-01-01
A time integration of a linearized plasma equation of motion has been performed to calculate the ideal linear stability of arbitrary three-dimensional magnetic fields. The convergence rates of the explicit and implicit power methods employed are sped up by using sequences of cyclic shifts. Growth rates are obtained for Gold-Hoyle force-free equilibria, and the corkscrew-kink instability is found to be very weak.
Generalizations of the theorem of minimum entropy production to linear systems involving inertia
NASA Astrophysics Data System (ADS)
Rebhan, E.
1985-07-01
The temporal behavior of the excess entropy production P_ex is investigated in linear electrical networks and in systems which can be described either by the linearized equations of viscous hydrodynamics or of resistive magnetohydrodynamics. As a result of inertial effects, P_ex is an oscillatory quantity. A kinetic potential is constructed which contains P_ex additively. It is an upper bound on P_ex and decreases monotonically in time, enforcing P_ex → 0 as t → ∞.
NASA Astrophysics Data System (ADS)
Begley, Matthew R.; Creton, Costantino; McMeeking, Robert M.
2015-11-01
A general asymptotic plane strain crack tip stress field is constructed for linear versions of neo-Hookean materials, which spans a wide variety of special cases including incompressible Mooney elastomers, the compressible Blatz-Ko elastomer, several cases of the Ogden constitutive law and a new result for a compressible linear neo-Hookean material. The nominal stress field has dominant terms that have a square root singularity with respect to the distance of material points from the crack tip in the undeformed reference configuration. At second order, there is a uniform tension parallel to the crack. The associated displacement field in plane strain at leading order has dependence proportional to the square root of the same coordinate. The relationship between the amplitude of the crack tip singularity (a stress intensity factor) and the plane strain energy release rate is outlined for the general linear material, with simplified relationships presented for notable special cases.
NASA Astrophysics Data System (ADS)
Irmak, Suat; Mutiibwa, Denis
2010-08-01
The 1-D and single layer combination-based energy balance Penman-Monteith (PM) model has limitations in practical application due to the lack of canopy resistance (rc) data for different vegetation surfaces. rc could be estimated by inversion of the PM model if the actual evapotranspiration (ETa) rate is known, but this approach has its own set of issues. Instead, an empirical method of estimating rc is suggested in this study. We investigated the relationships between primary micrometeorological parameters and rc and developed seven models to estimate rc for a nonstressed maize canopy on an hourly time step using a generalized-linear modeling approach. The most complex rc model uses net radiation (Rn), air temperature (Ta), vapor pressure deficit (VPD), relative humidity (RH), wind speed at 3 m (u3), aerodynamic resistance (ra), leaf area index (LAI), and solar zenith angle (Θ). The simplest model requires Rn, Ta, and RH. We present the practical implementation of all models via experimental validation using scaled up rc data obtained from the dynamic diffusion porometer-measured leaf stomatal resistance through an extensive field campaign in 2006. For further validation, we estimated ETa by solving the PM model using the modeled rc from all seven models and compared the PM ETa estimates with the Bowen ratio energy balance system (BREBS)-measured ETa for an independent data set in 2005. The relationships between hourly rc versus Ta, RH, VPD, Rn, incoming shortwave radiation (Rs), u3, wind direction, LAI, Θ, and ra were presented and discussed. We demonstrated the negative impact of exclusion of LAI when modeling rc, whereas exclusion of ra and Θ did not impact the performance of the rc models. Compared to the calibration results, the validation root mean square difference between observed and modeled rc increased by 5 s m-1 for all rc models developed, ranging from 9.9 s m-1 for the most complex model to 22.8 s m-1 for the simplest model, as compared with the
Chen, Gang; Adleman, Nancy E; Saad, Ziad S; Leibenluft, Ellen; Cox, Robert W
2014-10-01
All neuroimaging packages can handle group analysis with t-tests or general linear modeling (GLM). However, they are quite hamstrung when there are multiple within-subject factors or when quantitative covariates are involved in the presence of a within-subject factor. In addition, sphericity is typically assumed for the variance-covariance structure when there are more than two levels in a within-subject factor. To overcome such limitations in the traditional AN(C)OVA and GLM, we adopt a multivariate modeling (MVM) approach to analyzing neuroimaging data at the group level with the following advantages: a) there is no limit on the number of factors as long as sample sizes are deemed appropriate; b) quantitative covariates can be analyzed together with within-subject factors; c) when a within-subject factor is involved, three testing methodologies are provided: traditional univariate testing (UVT) with sphericity assumption (UVT-UC) and with correction when the assumption is violated (UVT-SC), and within-subject multivariate testing (MVT-WS); d) to correct for sphericity violation at the voxel level, we propose a hybrid testing (HT) approach that achieves equal or higher power via combining traditional sphericity correction methods (Greenhouse-Geisser and Huynh-Feldt) with MVT-WS. To validate the MVM methodology, we performed simulations to assess the controllability for false positives and power achievement. A real FMRI dataset was analyzed to demonstrate the capability of the MVM approach. The methodology has been implemented into an open source program 3dMVM in AFNI, and all the statistical tests can be performed through symbolic coding with variable names instead of the tedious process of dummy coding. Our data indicates that the severity of sphericity violation varies substantially across brain regions. The differences among various modeling methodologies were addressed through direct comparisons between the MVM approach and some of the GLM implementations in
Shirokov, M. E.
2013-11-15
The method of complementary channels for analysis of the reversibility (sufficiency) of a quantum channel with respect to families of input states (pure states for the most part) is considered and applied to Bosonic linear (quasi-free) channels, in particular to Bosonic Gaussian channels. The obtained reversibility conditions for Bosonic linear channels have a clear physical interpretation, and their sufficiency is also shown by explicit construction of reversing channels. The complementary channel method makes it possible to prove the necessity of these conditions and to describe all reversed families of pure states in the Schrodinger representation. Some applications in quantum information theory are considered. Conditions for the existence of discrete classical-quantum subchannels and of completely depolarizing subchannels of a Bosonic linear channel are presented.
NASA Astrophysics Data System (ADS)
Carniti, P.; Cassina, L.; Gotti, C.; Maino, M.; Pessina, G.
2016-07-01
In this work we present ALDO, an adjustable low-dropout linear regulator designed in AMS 0.35 μm CMOS technology. It is specifically tailored for use in the upgraded LHCb RICH detector in order to reduce the power supply noise seen by the front-end readout chip (CLARO). ALDO employs radiation-tolerant solutions, such as an all-MOS band-gap voltage reference, and layout techniques that allow it to operate in harsh environments like High Energy Physics accelerators. It is capable of driving up to 200 mA while keeping an adequate power supply filtering capability over a very wide frequency range, from 10 Hz up to 100 MHz. This property allows it to suppress the noise and high-frequency spikes that could be generated, for example, by a DC/DC regulator. ALDO also exhibits a very low noise of 11.6 μV RMS over the same frequency range. Its output is protected with over-current and short-circuit detection circuits for safe integration in tightly packed environments. Design solutions and measurements of the first prototype are presented.
Huppert, Theodore J
2016-01-01
Functional near-infrared spectroscopy (fNIRS) is a noninvasive neuroimaging technique that uses low levels of light to measure changes in cerebral blood oxygenation levels. In the majority of NIRS functional brain studies, analysis of this data is based on a statistical comparison of hemodynamic levels between a baseline and task or between multiple task conditions by means of a linear regression model: the so-called general linear model. Although these methods are similar to their implementation in other fields, particularly for functional magnetic resonance imaging, the specific application of these methods in fNIRS research differs in several key ways related to the sources of noise and artifacts unique to fNIRS. In this brief communication, we discuss the application of linear regression models in fNIRS and the modifications needed to generalize these models in order to deal with structured (colored) noise due to systemic physiology and noise heteroscedasticity due to motion artifacts. The objective of this work is to present an overview of these noise properties in the context of the linear model as it applies to fNIRS data. This work is aimed at explaining these mathematical issues to the general fNIRS experimental researcher but is not intended to be a complete mathematical treatment of these concepts. PMID:26989756
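One way to see the structured (colored) noise issue in the linear model is a single Cochrane-Orcutt-style AR(1) prewhitening step: fit by OLS, estimate the residual autocorrelation, whiten, and refit. This is a generic GLM sketch on simulated data, not the specific estimator used in any fNIRS toolbox (the boxcar design and noise parameters are assumptions):

```python
import numpy as np

def ar1_prewhiten_glm(X, y):
    """OLS fit, AR(1) estimate from residuals, then one prewhitened refit
    (a single Cochrane-Orcutt step)."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    r = y - X @ beta
    rho = (r[:-1] @ r[1:]) / (r[:-1] @ r[:-1])   # lag-1 autocorrelation of residuals
    Xw = X[1:] - rho * X[:-1]                    # prewhitened design
    yw = y[1:] - rho * y[:-1]                    # prewhitened data
    beta_w, *_ = np.linalg.lstsq(Xw, yw, rcond=None)
    return beta_w, rho

rng = np.random.default_rng(2)
n = 300
task = ((np.arange(n) // 30) % 2).astype(float)  # boxcar task regressor
X = np.column_stack([np.ones(n), task])
e = np.zeros(n)
for t in range(1, n):                            # AR(1) "systemic physiology" noise
    e[t] = 0.8 * e[t - 1] + rng.normal(0.0, 0.5)
y = X @ np.array([1.0, 2.0]) + e
beta, rho = ar1_prewhiten_glm(X, y)
print(np.round(beta, 2), round(rho, 2))
```

The refit on whitened data recovers the task effect (true value 2.0) with valid standard errors under the AR(1) assumption; full fNIRS pipelines add robust weighting for motion-induced heteroscedasticity on top of this.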
Recent advances toward a general purpose linear-scaling quantum force field.
Giese, Timothy J; Huang, Ming; Chen, Haoyuan; York, Darrin M
2014-09-16
Conspectus: There is a need in the molecular simulation community to develop new quantum mechanical (QM) methods that can be routinely applied to the simulation of large molecular systems in complex, heterogeneous condensed phase environments. Although conventional methods, such as the hybrid quantum mechanical/molecular mechanical (QM/MM) method, are adequate for many problems, there remain other applications that demand a fully quantum mechanical approach. QM methods are generally required in applications that involve changes in electronic structure, such as when chemical bond formation or cleavage occurs, when molecules respond to one another through polarization or charge transfer, or when matter interacts with electromagnetic fields. A full QM treatment, rather than QM/MM, is necessary when these features present themselves over a wide spatial range that, in some cases, may span the entire system. Specific examples include the study of catalytic events that involve delocalized changes in chemical bonds, charge transfer, or extensive polarization of the macromolecular environment; drug discovery applications, where the wide range of nonstandard residues and protonation states is challenging to model with purely empirical MM force fields; and the interpretation of spectroscopic observables. Unfortunately, the enormous computational cost of conventional QM methods limits their practical application to small systems. Linear-scaling electronic structure methods (LSQMs) make possible the calculation of large systems but are still too computationally intensive to be applied with the degree of configurational sampling often required to make meaningful comparison with experiment. In this work, we present advances in the development of a quantum mechanical force field (QMFF) suitable for application to biological macromolecules and condensed phase simulations. QMFFs leverage the benefits provided by the LSQM and QM/MM approaches to produce a fully QM method that is able to
ERIC Educational Resources Information Center
Yan, Jun; Aseltine, Robert H., Jr.; Harel, Ofer
2013-01-01
Comparing regression coefficients between models when one model is nested within another is of great practical interest when two explanations of a given phenomenon are specified as linear models. The statistical problem is whether the coefficients associated with a given set of covariates change significantly when other covariates are added into…
Linear and Nonlinear Optical Properties in Spherical Quantum Dots: Generalized Hulthén Potential
NASA Astrophysics Data System (ADS)
Onyeaju, M. C.; Idiodi, J. O. A.; Ikot, A. N.; Solaimani, M.; Hassanabadi, H.
2016-09-01
In this work, we studied the optical properties of spherical quantum dots confined in a Hulthén potential with the appropriate centrifugal term included. The approximate bound-state solutions and wave functions were obtained from the Schrödinger wave equation by applying the factorization method. We also used the density matrix formalism to investigate the linear and third-order nonlinear absorption coefficients and refractive index changes.
General-linear-models approach for comparing the response of several species in acute-toxicity tests
Daniels, K.L.; Goyert, J.C.; Farrell, M.P.; Strand, R.H.
1982-01-01
Acute toxicity tests (bioassays) estimate the concentration of a chemical required to produce a response (usually death) in fifty percent of a population (the LC50). Simple comparisons of LC50 values among several species are often inadequate because species can have identical LC50 values while their overall responses to a chemical differ in either the threshold concentration (intercept) or the rate of response (slope). A sequential approach using a general linear model is presented for testing differences among species in their overall response to a chemical. This method tests for equality of slopes, followed by a test for equality of regression lines. The procedure employs the Statistical Analysis System's General Linear Models procedure to conduct a weighted least squares analysis with a covariable.
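The slope-equality step of such a sequential test can be sketched as a nested-model F-test: fit a full model with species-specific slopes and a reduced model with a common slope, then compare residual sums of squares. This toy uses plain linear responses and simulated data, not the paper's weighted least squares probit analysis:

```python
import numpy as np

rng = np.random.default_rng(0)

log_conc = np.tile(np.linspace(0.0, 2.0, 10), 2)      # log concentration levels
species = np.repeat([0.0, 1.0], 10)                   # two species, dummy coded
# Hypothetical linear responses: species 1 responds more steeply (slope +0.6)
y = 0.2 + 1.0 * log_conc + 0.6 * species * log_conc + rng.normal(0.0, 0.1, 20)

def rss(X, y):
    """Residual sum of squares of an OLS fit."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    r = y - X @ beta
    return r @ r

ones = np.ones_like(log_conc)
X_full = np.column_stack([ones, species, log_conc, species * log_conc])  # separate slopes
X_red = np.column_stack([ones, species, log_conc])                       # common slope

rss_f, rss_r = rss(X_full, y), rss(X_red, y)
df1, df2 = 1, len(y) - X_full.shape[1]     # 1 extra parameter; 16 residual df
F = ((rss_r - rss_f) / df1) / (rss_f / df2)
print(F > 4.49)  # True: exceeds the 5% critical value of F(1, 16), so slopes differ
```

If the slopes do not differ, the same machinery with a species-specific intercept term tests for coincident regression lines, mirroring the sequential procedure described above.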
NASA Astrophysics Data System (ADS)
Rukolaine, Sergey A.
2016-05-01
In classical kinetic models the particle free-path distribution is exponential, but this is more likely an exception than a rule. In this paper we derive a generalized linear Boltzmann equation (GLBE) for a general free-path distribution in the framework of Alt's model. In the case that the free-path distribution has at least finite first and second moments, we construct an asymptotic solution to the initial value problem for the GLBE for small mean free paths. In the special case of the one-speed transport problem the asymptotic solution reduces to a diffusion approximation to the GLBE.
Hochi, Y; Kido, T; Nogawa, K; Kito, H; Shaikh, Z A
1995-01-01
To determine the maximum allowable intake limits for chronic dietary exposure to cadmium (Cd), the dose-response relationship between total Cd intake and the prevalence of renal dysfunction was examined using general linear models, with the effect of age considered as a confounder. The target population comprised 1850 Cd-exposed and 294 non-exposed inhabitants of Ishikawa, Japan. They were divided into 96 subgroups by sex, age (four categories), cadmium concentration in rice (three categories) and length of residence (four categories). As indicators of cadmium-induced renal dysfunction, urinary glucose, total protein, amino nitrogen, beta 2-microglobulin and metallothionein were employed. General linear models were fitted to the relationship among the prevalence of renal dysfunction, sex, age and total Cd intake. The prevalence of abnormal urinary findings other than glucosuria had significant associations with total Cd intake. When the total Cd intake corresponding to the mean prevalence of each abnormal urinary finding in the non-exposed subjects was calculated using the general linear models, the total Cd intakes corresponding to glucosuria, proteinuria, aminoaciduria (men only) and proteinuria with glucosuria were determined to be ca. 2.2-3.8 g, and those corresponding to the prevalence of metallothioneinuria were calculated as ca. 1.5-2.6 g. Low-molecular-weight protein in urine was confirmed to be a more sensitive indicator of renal dysfunction, and these total Cd intake values were close to those calculated previously by simple regression analysis, suggesting that they are reasonable values for the maximum allowable intake of Cd.
Koizumi, Hideya; Whitten, William B; Reilly, Pete
2008-12-01
High-resolution real-time particle mass measurements have not been achievable because the enormous amount of kinetic energy imparted to the particles upon expansion into vacuum competes with and overwhelms the forces applied to the charged particles within the mass spectrometer. It is possible to reduce the kinetic energy of a collimated particulate ion beam through collisions with a buffer gas while radially constraining their motion using a quadrupole guide or trap over a limited mass range. Controlling the pressure drop of the final expansion into a quadrupole trap permits a much broader mass range at the cost of sacrificing collimation. To achieve high-resolution mass analysis of massive particulate ions, an efficient trap with a large tolerance for radial divergence of the injected ions was developed that permits trapping a large range of ions for on-demand injection into an awaiting mass analyzer. The design specifications required that frequency of the trapping potential be adjustable to cover a large mass range and the trap radius be increased to increase the tolerance to divergent ion injection. The large-radius linear quadrupole ion trap was demonstrated by trapping singly-charged bovine serum albumin ions for on-demand injection into a mass analyzer. Additionally, this work demonstrates the ability to measure an electrophoretic mobility cross section (or ion mobility) of singly-charged intact proteins in the low-pressure regime. This work represents a large step toward the goal of high-resolution analysis of intact proteins, RNA, DNA, and viruses.
Beynon, R J
1985-01-01
Software for non-linear curve fitting has been written in BASIC to execute on the British Broadcasting Corporation Microcomputer. The program uses the direct-search algorithm pattern search, a robust algorithm with the additional advantage that only the function itself need be specified, without its partial derivatives. Although less efficient than gradient methods, the program can be readily configured to solve the low-dimensional optimization problems normally encountered in the life sciences. In writing the software, emphasis has been placed upon the 'user interface' and on making the most efficient use of the facilities provided by the minimal configuration of this system.
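A pattern-search direct search needs only function evaluations, which is why no partial derivatives are required. A minimal Python sketch of the exploratory-move idea (not the BBC BASIC program; the search variant and the decay-curve data are assumptions) fitting y = a·exp(-k·t) by least squares:

```python
import numpy as np

def pattern_search(f, x0, step=0.5, shrink=0.5, tol=1e-6, max_iter=10_000):
    """Minimal Hooke-Jeeves-style exploratory search: derivative-free,
    trying +/- step along each coordinate and shrinking on failure."""
    x = np.asarray(x0, float)
    fx = f(x)
    for _ in range(max_iter):
        improved = False
        for i in range(x.size):
            for s in (+step, -step):
                trial = x.copy()
                trial[i] += s
                ft = f(trial)
                if ft < fx:
                    x, fx, improved = trial, ft, True
                    break
        if not improved:
            if step < tol:
                break
            step *= shrink
    return x, fx

# Fit y = a * exp(-k * t) to noiseless hypothetical decay data
t = np.linspace(0.0, 4.0, 20)
y = 3.0 * np.exp(-0.7 * t)
sse = lambda q: np.sum((y - q[0] * np.exp(-q[1] * t)) ** 2)
p, err = pattern_search(sse, [1.0, 0.1])
print(np.round(p, 2))
```

Starting from (1.0, 0.1), the fitted parameters should approach the generating values (3.0, 0.7); only calls to `sse` are needed, never its gradient.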
A General Method for Solving Systems of Non-Linear Equations
NASA Technical Reports Server (NTRS)
Nachtsheim, Philip R.; Deiss, Ron (Technical Monitor)
1995-01-01
The method of steepest descent is modified so that accelerated convergence is achieved near a root. It is assumed that the function of interest can be approximated near a root by a quadratic form. An eigenvector of the quadratic form is found by evaluating the function and its gradient at an arbitrary point and at another suitably selected point. The terminal point of the eigenvector is chosen to lie on the line segment joining the two points; the terminal point found lies on an axis of the quadratic form. The selection of a suitable step size at this point leads directly to the root in the direction of steepest descent in a single step. Newton's root-finding method not infrequently diverges if the starting point is far from the root; in these regions, however, the current method merely reverts to the method of steepest descent with an adaptive step size. The current method's performance should match that of the Levenberg-Marquardt root-finding method, since both share the ability to converge from a starting point far from the root and both exhibit quadratic convergence near a root. The Levenberg-Marquardt method requires storage for the coefficients of linear equations; the current method, which does not require the solution of linear equations, instead requires more time for additional function and gradient evaluations. The classic trade-off of time for space separates the two methods.
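The steepest-descent fallback with an adaptive step size can be sketched on the merit function g(x) = ½‖F(x)‖². This is a generic illustration of that fallback behavior only, not the paper's eigenvector-based acceleration; the 2x2 test system is hypothetical:

```python
import numpy as np

def descent_root(F, J, x0, step=1.0, tol=1e-10, max_iter=5000):
    """Steepest descent on the merit function g(x) = 0.5 * ||F(x)||^2 with a
    simple adaptive step: halve on failure, grow slightly on success."""
    x = np.asarray(x0, float)
    g = lambda v: 0.5 * np.sum(F(v) ** 2)
    for _ in range(max_iter):
        grad = J(x).T @ F(x)          # gradient of the merit function
        if np.linalg.norm(grad) < tol:
            break
        trial = x - step * grad
        if g(trial) < g(x):
            x = trial
            step *= 1.2               # accept the step and grow it
        else:
            step *= 0.5               # reject the step and shrink it
    return x

# 2x2 nonlinear system: x^2 + y^2 = 4 and x*y = 1
F = lambda v: np.array([v[0] ** 2 + v[1] ** 2 - 4.0, v[0] * v[1] - 1.0])
J = lambda v: np.array([[2.0 * v[0], 2.0 * v[1]], [v[1], v[0]]])
root = descent_root(F, J, [2.0, 0.0])
print(np.linalg.norm(F(root)))
```

Because the merit function decreases monotonically and the step re-grows after every success, the iteration cannot stall at a point with a nonzero gradient; it only converges slowly, which is exactly the weakness the paper's quadratic-form acceleration targets.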
NASA Technical Reports Server (NTRS)
Nemeth, Michael P.; Schultz, Marc R.
2012-01-01
A detailed exact solution is presented for laminated-composite circular cylinders with general wall construction and that undergo axisymmetric deformations. The overall solution is formulated in a general, systematic way and is based on the solution of a single fourth-order, nonhomogeneous ordinary differential equation with constant coefficients in which the radial displacement is the dependent variable. Moreover, the effects of general anisotropy are included and positive-definiteness of the strain energy is used to define uniquely the form of the basis functions spanning the solution space of the ordinary differential equation. Loading conditions are considered that include axisymmetric edge loads, surface tractions, and temperature fields. Likewise, all possible axisymmetric boundary conditions are considered. Results are presented for five examples that demonstrate a wide range of behavior for specially orthotropic and fully anisotropic cylinders.
A substructure coupling procedure applicable to general linear time-invariant dynamic systems
NASA Technical Reports Server (NTRS)
Howsman, T. G.; Craig, R. R., Jr.
1984-01-01
A substructure synthesis procedure applicable to structural systems containing general nonconservative terms is presented. In their final form, the non-self-adjoint substructure equations of motion are cast in state vector form through the use of a variational principle. A reduced-order model for each substructure is implemented by representing the substructure as a combination of a small number of Ritz vectors. For the method presented, the substructure Ritz vectors are identified as a truncated set of substructure eigenmodes, which are typically complex, along with a set of generalized real attachment modes. The formation of the generalized attachment modes does not require any knowledge of the substructure flexible modes; hence, only the eigenmodes used explicitly as Ritz vectors need to be extracted from the substructure eigenproblem. An example problem is presented to illustrate the method.
Marín-Sanguino, Alberto; Torres, Néstor V
2003-08-01
A new method is proposed for the optimization of biochemical systems. The method, based on the separation of the stoichiometric and kinetic aspects of the system, follows the general approach used in the previously presented indirect optimization method (IOM) developed within biochemical systems theory. It is called GMA-IOM because it makes use of the generalized mass action (GMA) as the model system representation form. The GMA representation avoids flux aggregation and thus prevents possible stoichiometric errors. The optimization of a system is used to illustrate and compare the features, advantages and shortcomings of both versions of the IOM method as a general strategy for designing improved microbial strains of biotechnological interest. Special attention has been paid to practical problems for the actual implementation of the new proposed strategy, such as the total protein content of the engineered strain or the deviation from the original steady state and its influence on cell viability.
ERIC Educational Resources Information Center
Thompson, Bruce
The relationship between analysis of variance (ANOVA) methods and their analogs (analysis of covariance and multiple analyses of variance and covariance--collectively referred to as OVA methods) and the more general analytic case is explored. A small heuristic data set is used, with a hypothetical sample of 20 subjects, randomly assigned to five…
General polynomial factorization-based design of sparse periodic linear arrays.
Mitra, Sanjit K; Mondal, Kalyan; Tchobanou, Mikhail K; Dolecek, Gordana Jovanovic
2010-09-01
We have developed several methods of designing sparse periodic arrays based upon the polynomial factorization method. In these methods, transmit and receive aperture polynomials are selected such that their product results in a polynomial representing the desired combined transmit/receive (T/R) effective aperture function. A desired combined T/R effective aperture is simply an aperture of appropriate width exhibiting a spectrum that corresponds to the desired two-way radiation pattern. At least one of the two aperture functions that constitute the combined T/R effective aperture function will be a sparse polynomial. A measure of sparsity of the designed array is defined in terms of the element reduction factor. We show that the number of elements of a linear array can be reduced, with varying trade-offs between beam mainlobe width and sidelobe reduction properties.
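The central identity, that the combined T/R effective aperture polynomial is the product of the transmit and receive aperture polynomials, is a coefficient convolution. A toy sketch with hypothetical apertures (not a design from the paper):

```python
import numpy as np

# Transmit/receive aperture polynomials: multiplying the polynomials is a
# convolution of their coefficient vectors, giving the combined T/R
# effective aperture. Here a sparse transmit aperture (one element off)
# combines with a dense two-element receive aperture:
transmit = np.array([1, 0, 1])   # (1 + x^2): elements at positions 0 and 2 only
receive = np.array([1, 1])       # (1 + x):   elements at positions 0 and 1
effective = np.convolve(transmit, receive)
print(effective)  # [1 1 1 1]
```

The product (1 + x²)(1 + x) = 1 + x + x² + x³ is a uniform four-element effective aperture, realized with a transmit array that skips its middle element; the element reduction factor grows with larger, sparser factorizations.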
Quasi-Linear Parameter Varying Representation of General Aircraft Dynamics Over Non-Trim Region
NASA Technical Reports Server (NTRS)
Shin, Jong-Yeob
2007-01-01
For applying linear parameter varying (LPV) control synthesis and analysis to a nonlinear system, it is required that a nonlinear system be represented in the form of an LPV model. In this paper, a new representation method is developed to construct an LPV model from a nonlinear mathematical model without the restriction that an operating point must be in the neighborhood of equilibrium points. An LPV model constructed by the new method preserves local stabilities of the original nonlinear system at "frozen" scheduling parameters and also represents the original nonlinear dynamics of a system over a non-trim region. An LPV model of the motion of FASER (Free-flying Aircraft for Subscale Experimental Research) is constructed by the new method.
Microcontroller-based intelligent low-cost-linear-sensor-camera for general edge detection
NASA Astrophysics Data System (ADS)
Hussmann, Stephan; Justen, Detlef
1997-09-01
With this paper we would like to present an intelligent low-cost camera. Intelligent means that a microcontroller does all the controlling and provides several inputs and outputs. The camera is a stand-alone system. Its basic element is a linear sensor consisting of a photodiode array (PDA). In comparison with standard CCD chips, this type of sensor is a low-cost component and its operation is very simple. Furthermore, this paper shows the mechanical, electrical and electro-optical differences between CCD and PDA sensors, so the reader will be able to choose the right sensor for a particular task. Two industrial applications are described at the end of this paper.
Iterative solution of general sparse linear systems on clusters of workstations
Lo, Gen-Ching; Saad, Y.
1996-12-31
Solving sparse, irregularly structured linear systems on parallel platforms poses several challenges. First, sparsity makes it difficult to exploit data locality, whether in a distributed or shared memory environment. A second, perhaps more serious, challenge is to find efficient ways to precondition the system. Preconditioning techniques with a large degree of parallelism, such as multicolor SSOR, often have a slower rate of convergence than their sequential counterparts. Finally, a number of other computational kernels, such as inner products, can erode any gains from parallel speed-ups; this is especially true on workstation clusters, where start-up times may be high. In this paper we discuss these issues and report on our experience with PSPARSLIB, an ongoing project for building a library of parallel iterative sparse matrix solvers.
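The parallelism trade-off can be seen with the simplest fully parallel preconditioner, Jacobi (diagonal) scaling, which needs no sequential triangular solves. A dense NumPy sketch of preconditioned conjugate gradients (illustrative only; PSPARSLIB works with distributed sparse matrices):

```python
import numpy as np

def jacobi_pcg(A, b, tol=1e-10, max_iter=500):
    """Conjugate gradients with a Jacobi (diagonal) preconditioner.

    Applying the preconditioner is an elementwise division, so every entry
    can be updated in parallel, unlike SSOR-style triangular sweeps.
    """
    x = np.zeros_like(b)
    Minv = 1.0 / np.diag(A)
    r = b - A @ x
    z = Minv * r
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = Minv * r
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

# Sparse-structured SPD test matrix: the 1-D Laplacian
n = 50
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)
x = jacobi_pcg(A, b)
print(np.allclose(A @ x, b))  # True
```

Jacobi scaling is maximally parallel but weak; the paper's point is precisely that stronger preconditioners tend to serialize, and that this tension dominates performance on clusters.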
NASA Technical Reports Server (NTRS)
Kaul, Upender K.
2005-01-01
A three-dimensional numerical solver based on finite-difference solution of three-dimensional elastodynamic equations in generalized curvilinear coordinates has been developed and used to generate data such as radial and tangential stresses over various gear component geometries under rotation. The geometries considered are an annulus, a thin annular disk, and a thin solid disk. The solution is based on first principles and does not involve lumped parameter or distributed parameter systems approach. The elastodynamic equations in the velocity-stress formulation that are considered here have been used in the solution of problems of geophysics where non-rotating Cartesian grids are considered. For arbitrary geometries, these equations along with the appropriate boundary conditions have been cast in generalized curvilinear coordinates in the present study.
Generalized linear stability of non-inertial rimming flow in a rotating horizontal cylinder.
Aggarwal, Himanshu; Tiwari, Naveen
2015-10-01
The stability of a thin film of viscous liquid inside a horizontally rotating cylinder is studied using modal and non-modal analysis. The equation governing the film thickness is derived within the lubrication approximation and up to first order in the aspect ratio (average film thickness to cylinder radius). The effects of gravity, viscous stress and capillary pressure are considered in the model. Steady base profiles, uniform in the axial direction, are computed in the parameter space of interest. A linear stability analysis is performed on these base profiles to study their stability to axial perturbations. The destabilizing effect of the aspect ratio and of surface tension is demonstrated and attributed to capillary instability. The transient growth that gives the maximum amplification of any initial disturbance and the pseudospectra of the stability operator are computed. These computations reveal a weak effect of the non-normality of the operator, and the results of the eigenvalue analysis are recovered after a brief transient period. Results from nonlinear simulations are also presented, which confirm the validity of the modal analysis for the flow considered in this study. PMID:26496740
General, database-driven fast-feedback system for the Stanford Linear Collider
Rouse, F.; Allison, S.; Castillo, S.; Gromme, T.; Hall, B.; Hendrickson, L.; Himel, T.; Krauter, K.; Sass, B.; Shoaee, H.
1991-05-01
A new feedback system has been developed for stabilizing the SLC beams at many locations. The feedback loops are designed to sample and correct at the 60 Hz repetition rate of the accelerator. Each loop can be distributed across several of the standard 80386 microprocessors which control the SLC hardware. A new communications system, KISNet, has been implemented to pass signals between the microprocessors at this rate. The software is written in a general fashion using the state space formalism of digital control theory. This allows a new loop to be implemented by just setting up the online database and perhaps installing a communications link. 3 refs., 4 figs.
General dispersion formulae for atomic third-order non-linear optical properties
NASA Astrophysics Data System (ADS)
Bishop, David M.
1988-12-01
Dispersion formulae for the parallel and perpendicular optical hyperpolarizabilities $\gamma_{\parallel}^{\omega} = \gamma_{xxxx}(-\omega_{\sigma};\omega_{1},\omega_{2},\omega_{3})$ and $\gamma_{\perp}^{\omega} = \gamma_{xzzx}(-\omega_{\sigma};\omega_{1},\omega_{2},\omega_{3})$, where $\omega_{\sigma}=\omega_{1}+\omega_{2}+\omega_{3}$, are (for atoms): $\gamma_{\parallel}^{\omega}/\gamma_{\parallel}^{0} = 1 + A\,\omega_{L}^{2} + O(\omega^{4})$, $\gamma_{\perp}^{\omega}/\gamma_{\perp}^{0} = 1 + B\,\omega_{L}^{2} + O(\omega^{4})$, and $\tfrac{1}{3}\,\gamma_{\parallel}^{\omega}/\gamma_{\perp}^{\omega} = 1 + C\,\omega_{L}^{2} + O(\omega^{4})$, where $A$ is independent of the process; $B$ is proportional to $1 + az$, with $z$ independent of the process and $a = (\omega_{\sigma}\omega_{3} - \omega_{1}\omega_{2})/\omega_{L}^{2}$; $C$ is proportional to $1 - 6a$; and $\omega_{L}^{2} = \omega_{\sigma}^{2} + \omega_{1}^{2} + \omega_{2}^{2} + \omega_{3}^{2}$. The coefficients are related by $C = A - B$. These results are more general than those previously reported and are asymptotically exact at low frequencies.
Biohybrid Control of General Linear Systems Using the Adaptive Filter Model of Cerebellum
Wilson, Emma D.; Assaf, Tareq; Pearson, Martin J.; Rossiter, Jonathan M.; Dean, Paul; Anderson, Sean R.; Porrill, John
2015-01-01
The adaptive filter model of the cerebellar microcircuit has been successfully applied to biological motor control problems, such as the vestibulo-ocular reflex (VOR), and to sensory processing problems, such as the adaptive cancelation of reafferent noise. It has also been successfully applied to problems in robotics, such as adaptive camera stabilization and sensor noise cancelation. In previous applications to inverse control problems, the algorithm was applied to the velocity control of a plant dominated by viscous and elastic elements. Naive application of the adaptive filter model to the displacement (as opposed to velocity) control of this plant results in unstable learning and control. To be more generally useful in engineering problems, it is essential to remove this restriction to enable the stable control of plants of any order. We address this problem here by developing a biohybrid model reference adaptive control (MRAC) scheme, which stabilizes the control algorithm for strictly proper plants. We evaluate the performance of this novel cerebellar-inspired algorithm with MRAC scheme in the experimental control of a dielectric electroactive polymer, a class of artificial muscle. The results show that the augmented cerebellar algorithm is able to accurately control the displacement response of the artificial muscle. The proposed solution not only greatly extends the practical applicability of the cerebellar-inspired algorithm, but may also shed light on cerebellar involvement in a wider range of biological control tasks. PMID:26257638
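The adaptive filter at the heart of such models is trained with an LMS-like (least-mean-squares) decorrelation rule. A generic LMS sketch identifying a hypothetical FIR plant (an illustration of the learning rule only, not the cerebellar/MRAC controller of the paper):

```python
import numpy as np

rng = np.random.default_rng(3)
w_true = np.array([0.5, -0.3, 0.2])      # unknown FIR "plant" to identify
x = rng.normal(size=2000)                 # input signal
d = np.convolve(x, w_true)[: len(x)]      # desired signal: plant output

w = np.zeros(3)                           # adaptive filter weights
mu = 0.05                                 # learning rate
for t in range(2, len(x)):
    u = x[t - 2 : t + 1][::-1]            # tap vector [x_t, x_{t-1}, x_{t-2}]
    e = d[t] - w @ u                      # prediction error (the "teaching" signal)
    w += mu * e * u                       # LMS weight update
print(np.round(w, 2))  # ≈ w_true
```

The weights converge to the plant coefficients as the error decorrelates from the inputs; the biohybrid scheme above wraps this kind of learner in a model reference loop so that learning stays stable for displacement (higher-order) control.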
Jiang, Fei; Ma, Yanyuan; Wang, Yuanjia
2015-01-01
We propose a generalized partially linear functional single index risk score model for repeatedly measured outcomes where the index itself is a function of time. We fuse the nonparametric kernel method and the regression spline method, and modify the generalized estimating equation to facilitate estimation and inference. We use a local smoothing kernel to estimate the unspecified coefficient functions of time, and B-splines to estimate the unspecified function of the single index component. The covariance structure is taken into account via a working model, which provides a valid estimation and inference procedure whether or not it captures the true covariance. The estimation method is applicable to both continuous and discrete outcomes. We derive large sample properties of the estimation procedure and show the different convergence rates of each component of the model. The asymptotic properties when the kernel and regression spline methods are combined in a nested fashion have not been studied prior to this work, even in the independent data case. PMID:26283801
NASA Astrophysics Data System (ADS)
Cedeño M, C. E.; de Araujo, J. C. N.
2016-05-01
A study of binary systems composed of two point particles with different masses in the linear regime of the characteristic formulation of general relativity with a Minkowski background is provided. The present paper generalizes a previous study by Bishop et al. The boundary conditions at the world tubes generated by the particles' orbits are explored, where the metric variables are decomposed in spin-weighted spherical harmonics. The power lost by the emission of gravitational waves is computed using the Bondi News function. The power found is the well-known result obtained by Peters and Mathews using a different approach. This agreement validates the approach considered here. Several multipole term contributions to the gravitational radiation field are also shown.
Generalized Uncertainty Quantification for Linear Inverse Problems in X-ray Imaging
Fowler, Michael James
2014-04-25
In industrial and engineering applications, X-ray radiography has attained wide use as a data collection protocol for the assessment of material properties in cases where direct observation is not possible. The direct measurement of nuclear materials, particularly when they are under explosive or implosive loading, is not feasible, and radiography can serve as a useful tool for obtaining indirect measurements. In such experiments, high energy X-rays are pulsed through a scene containing material of interest, and a detector records a radiograph by measuring the radiation that is not attenuated in the scene. One approach to the analysis of these radiographs is to model the imaging system as an operator that acts upon the object being imaged to produce a radiograph. In this model, the goal is to solve an inverse problem to reconstruct the values of interest in the object, which are typically material properties such as density or areal density. The primary objective in this work is to provide quantitative solutions with uncertainty estimates for three separate applications in X-ray radiography: deconvolution, Abel inversion, and radiation spot shape reconstruction. For each problem, we introduce a new hierarchical Bayesian model for determining a posterior distribution on the unknowns and develop efficient Markov chain Monte Carlo (MCMC) methods for sampling from the posterior. A Poisson likelihood, based on a noise model for photon counts at the detector, is combined with a prior tailored to each application: an edge-localizing prior for deconvolution; a smoothing prior with non-negativity constraints for spot reconstruction; and a full covariance sampling prior based on a Wishart hyperprior for Abel inversion. After developing our methods in a general setting, we demonstrate each model on both synthetically generated datasets, including those from a well known radiation transport code, and real high energy radiographs taken at two U. S. Department of Energy
The generalized cross-validation method applied to geophysical linear traveltime tomography
NASA Astrophysics Data System (ADS)
Bassrei, A.; Oliveira, N. P.
2009-12-01
The oil industry is the major user of Applied Geophysics methods for subsurface imaging. Among the different methods, the seismic (exploration seismology) methods are the most important. Tomography was originally developed for medical imaging and was introduced into exploration seismology in the 1980s. There are two main classes of geophysical tomography: those that use only the traveltimes between sources and receivers, a kinematic approach, and those that use the wave amplitude itself, a dynamic approach. Tomography is a kind of inverse problem, and since inverse problems are usually ill-posed, some method must be used to reduce their deficiencies. These difficulties of the inverse procedure are associated with the fact that the matrix involved is ill-conditioned. To compensate for this shortcoming, it is appropriate to use some technique of regularization. In this work we make use of regularization with derivative matrices, also called smoothing. A crucial problem in regularization is the selection of the regularization parameter lambda. We use generalized cross-validation (GCV) as a tool for the selection of lambda. GCV chooses the regularization parameter associated with the best average prediction for all possible omissions of one datum, corresponding to the minimizer of the GCV function. GCV is applied to traveltime tomography, where the objective is to obtain the 2-D velocity distribution from the measured traveltimes between sources and receivers. We present results with synthetic data, using a geological model that simulates different features, such as a fault and a reservoir. The results using GCV are very good, including those with noise-contaminated data and different regularization orders, attesting to the feasibility of this technique.
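As a sketch of the selection rule, the GCV function can be minimized over a grid of lambda values for a small synthetic linear system; the toy matrix, noise level, and first-order derivative regularizer below are illustrative assumptions, not the paper's tomographic setup:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy ill-conditioned system G m = d with noisy data.
n, p = 40, 30
G = rng.normal(size=(n, p)) @ np.diag(1.0 / (1 + np.arange(p)) ** 2)
m_true = np.sin(np.linspace(0, np.pi, p))
d = G @ m_true + rng.normal(0, 0.01, n)

# First-order derivative (smoothing) regularization matrix L.
L = np.diff(np.eye(p), axis=0)

def gcv(lam):
    # Influence matrix A(lam) = G (G^T G + lam L^T L)^{-1} G^T
    A = G @ np.linalg.solve(G.T @ G + lam * L.T @ L, G.T)
    r = (np.eye(n) - A) @ d
    # GCV(lam) = n ||(I - A) d||^2 / tr(I - A)^2
    return n * (r @ r) / np.trace(np.eye(n) - A) ** 2

lams = np.logspace(-8, 2, 60)
scores = [gcv(l) for l in lams]
lam_best = lams[int(np.argmin(scores))]
m_hat = np.linalg.solve(G.T @ G + lam_best * L.T @ L, G.T @ d)
```

Higher-order smoothing simply swaps `L` for a second- or third-difference matrix.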
Iwasaki, Yuichi; Brinkman, Stephen F
2015-04-01
Increased concerns about the toxicity of chemical mixtures have led to greater emphasis on analyzing the interactions among the mixture components based on observed effects. The authors applied a generalized linear mixed model (GLMM) to analyze survival of brown trout (Salmo trutta) acutely exposed to metal mixtures that contained copper and zinc. Compared with dominant conventional approaches based on an assumption of concentration addition and the concentration of a chemical that causes x% effect (ECx), the GLMM approach has 2 major advantages. First, binary response variables such as survival can be modeled without any transformations, and thus sample size can be taken into consideration. Second, the importance of the chemical interaction can be tested in a simple statistical manner. Through this application, the authors investigated whether the estimated concentration of the 2 metals binding to humic acid, which is assumed to be a proxy of nonspecific biotic ligand sites, provided a better prediction of survival effects than dissolved and free-ion concentrations of metals. The results suggest that the estimated concentration of metals binding to humic acid is a better predictor of survival effects, and thus the metal competition at the ligands could be an important mechanism responsible for effects of metal mixtures. Application of the GLMM (and the generalized linear model) presents an alternative or complementary approach to analyzing mixture toxicity. PMID:25524054
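The interaction test the authors describe can be sketched as a binomial GLM (without the random tank effects of a full GLMM) fitted by iteratively reweighted least squares, comparing nested models with and without a Cu×Zn term via their deviance difference; all data and effect sizes below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy acute-toxicity data: tanks with crossed Cu and Zn levels,
# 20 fish per tank; survival generated WITHOUT a true interaction.
cu = np.repeat([0.0, 0.5, 1.0, 1.5], 8)
zn = np.tile([0.0, 0.4, 0.8, 1.2], 8)
n_fish = np.full(32, 20)
p_true = 1 / (1 + np.exp(-(2.0 - 1.2 * cu - 0.9 * zn)))
surv = rng.binomial(n_fish, p_true)

def fit_binomial(X, y, n, iters=50):
    """Binomial GLM (logit link) by IRLS; returns coefficients and deviance."""
    b = np.zeros(X.shape[1])
    for _ in range(iters):
        mu = n / (1 + np.exp(-np.clip(X @ b, -20, 20)))
        W = mu * (1 - mu / n)
        z = X @ b + (y - mu) / W
        b = np.linalg.solve((X.T * W) @ X, (X.T * W) @ z)
    mu = n / (1 + np.exp(-X @ b))
    with np.errstate(divide="ignore", invalid="ignore"):
        t1 = np.where(y > 0, y * np.log(y / mu), 0.0)
        t2 = np.where(n - y > 0, (n - y) * np.log((n - y) / (n - mu)), 0.0)
    return b, 2 * np.sum(t1 + t2)

X_main = np.column_stack([np.ones(32), cu, zn])
X_int = np.column_stack([X_main, cu * zn])
b_main, dev_main = fit_binomial(X_main, surv, n_fish)
b_int, dev_int = fit_binomial(X_int, surv, n_fish)
lrt = dev_main - dev_int      # compare to chi-square(1) to test the interaction
```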
NASA Astrophysics Data System (ADS)
Xiong, Y. P.; Xing, J. T.; Price, W. G.
2003-10-01
Generalized integrated structure-control dynamical systems consisting of any number of active/passive controllers and three-dimensional rigid/flexible substructures are investigated. The mathematical model developed to assess the behaviour of these complex systems includes descriptions of general boundary conditions, the interaction mechanisms between structures, power flows and control characteristics. Three active control strategies are examined: multiple-channel absolute/relative velocity feedback controllers, their hybrid combination, and an existing passive control system to which the former control systems are attached in order to improve overall control efficiency. From the viewpoint of continuum mechanics, an analytical solution of this generalized structure-control system has been developed, allowing prediction of the dynamic responses at any point on or in substructures of the coupled system. Absolute and relative dynamic responses, receptances, transmissibilities, mobilities and transfer functions have been derived to evaluate complex dynamic interaction mechanisms through various transmission paths. The instantaneous and time-averaged power flows of energy input, transmission and dissipation or absorption within and between the source substructure, control subsystems and controlled substructure are presented. The general theory developed provides an integrated framework to solve various vibration isolation and control problems, and a basis for a general algorithm that may allow the user to build arbitrarily complex linear control models using simple commands and inputs. The proposed approach is applied to a practical example to illustrate and validate the mathematical model, to assess control effectiveness, and to provide guidelines for vibration control designers.
NASA Astrophysics Data System (ADS)
Edwards, C. L.; Edwards, M. L.
2009-05-01
MEMS micro-mirror technology offers the opportunity to replace larger optical actuators with smaller, faster ones for lidar, network switching, and other beam steering applications. Recent developments in modeling and simulation of MEMS two-axis (tip-tilt) mirrors have resulted in closed-form solutions that are expressed in terms of physical, electrical and environmental parameters related to the MEMS device. The closed-form analytical expressions enable dynamic time-domain simulations without excessive computational overhead and are referred to as the Micro-mirror Pointing Model (MPM). Additionally, these first-principle models have been experimentally validated with in-situ static, dynamic, and stochastic measurements illustrating their reliability. These models have assumed that the mirror has a rectangular shape. Because the corners can limit the dynamic operation of a rectangular mirror, it is desirable to shape the mirror, e.g., mitering the corners. Presented in this paper is the formulation of a generalized electrostatic micromirror (GEM) model with an arbitrary convex piecewise linear shape that is readily implemented in MATLAB and SIMULINK for steady-state and dynamic simulations. Additionally, such a model permits an arbitrary shaped mirror to be approximated as a series of linearly tapered segments. Previously, "effective area" arguments were used to model a non-rectangular shaped mirror with an equivalent rectangular one. The GEM model shows the limitations of this approach and provides a pre-fabrication tool for designing mirror shapes.
Lieh Yeh, Tzung; Liang Huang, Chieh; Kuang Yang, Yen; Dar Lee, Yih; Cheng Chen, Chwen; See Chen, Po
2004-08-01
Although generalized anxiety disorder (GAD) is associated with significant occupational disability, it has received little attention with regard to adjustment to illness. Subjects included 102 chronic dialysis (CD) patients, 58 kidney transplant (KT) patients, and 42 GAD patients. The evaluations included the Psychosocial Adjustment to Physical Illness Scale (PAIS), the Hamilton Anxiety Rating Scale (HAM-A) and the Hamilton Depression Rating Scale (HAM-D). Before anxiolytic treatment, GAD patients had the most anxiety and depressive symptoms, followed by CD patients and KT patients. KT patients and anxiolytic-treated GAD patients showed similar anxiety and depressive symptoms, and both groups fared better than CD patients. However, the adjustment to illness of GAD patients after treatment remained worse than that of the other two groups (108.0+/-16.3 (GAD), 102.0+/-14.5 (CD), 81.4+/-22.2 (KT); P<.001). The CD patients had a high rate of psychiatric morbidity and a low rate of psychiatric intervention (3%); however, end-stage renal disease (ESRD) patients received only one assessment while the GAD group received two in this study. In light of the chronicity of GAD, pharmacological treatment is not sufficient by itself. Clinicians should keep this in mind when treating either GAD or ESRD.
Chen, Vivian Yi-Ju; Yang, Tse-Chuan
2012-08-01
An increasing interest in exploring spatial non-stationarity has generated several specialized analytic software programs; however, few of these programs can be integrated natively into a well-developed statistical environment such as SAS. We not only developed a set of SAS macro programs to fill this gap, but also expanded the geographically weighted generalized linear modeling (GWGLM) framework by integrating the strengths of SAS into it. Three features distinguish our work. First, the macro programs of this study provide more kernel weighting functions than the existing programs. Second, with our code the users are able to better specify the bandwidth selection process compared to the capabilities of existing programs. Third, the development of the macro programs is fully embedded in the SAS environment, providing great potential for future exploration of complicated spatially varying coefficient models in other disciplines. We provide three empirical examples to illustrate the use of the SAS macro programs and demonstrate the advantages explained above.
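The kernel-weighting idea at the core of geographically weighted modeling can be sketched in a few lines. The Gaussian and bi-square kernels below are standard choices; the toy data, fixed bandwidth, and Gaussian (rather than generalized) local model are simplifying assumptions, not the macros' actual interface:

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy spatial data with a slope that drifts from west to east.
n = 200
coords = rng.uniform(0, 10, size=(n, 2))
x = rng.normal(size=n)
beta_local = 1.0 + 0.3 * coords[:, 0]           # coefficient varies with longitude
y = beta_local * x + rng.normal(0, 0.2, n)

def kernel(d, bw, kind="gaussian"):
    """Distance-decay weights used to localize the regression."""
    if kind == "gaussian":
        return np.exp(-0.5 * (d / bw) ** 2)
    if kind == "bisquare":
        return np.where(d < bw, (1 - (d / bw) ** 2) ** 2, 0.0)
    raise ValueError(kind)

def local_fit(focal, bw, kind):
    """Weighted least squares at one focal location."""
    d = np.linalg.norm(coords - focal, axis=1)
    w = kernel(d, bw, kind)
    X = np.column_stack([np.ones(n), x])
    XtW = X.T * w
    return np.linalg.solve(XtW @ X, XtW @ y)

west = local_fit(np.array([1.0, 5.0]), 2.0, "bisquare")
east = local_fit(np.array([9.0, 5.0]), 2.0, "bisquare")
```

In practice the bandwidth `bw` would itself be selected, e.g. by cross-validation, which is the step the macros expose to the user.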
Wang, Tao; He, Peng; Ahn, Kwang Woo; Wang, Xujing; Ghosh, Soumitra; Laud, Purushottam
2015-01-01
The generalized linear mixed model (GLMM) is a useful tool for modeling genetic correlation among family data in genetic association studies. However, when dealing with families of varied sizes and diverse genetic relatedness, the GLMM has a special correlation structure which often makes it difficult to be specified using standard statistical software. In this study, we propose a Cholesky decomposition based re-formulation of the GLMM so that the re-formulated GLMM can be specified conveniently via “proc nlmixed” and “proc glimmix” in SAS, or OpenBUGS via R package BRugs. Performances of these procedures in fitting the re-formulated GLMM are examined through simulation studies. We also apply this re-formulated GLMM to analyze a real data set from Type 1 Diabetes Genetics Consortium (T1DGC). PMID:25873936
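The Cholesky idea can be sketched directly: if the familial random effects have covariance sigma^2 K, with K built from kinship coefficients, then writing b = L u with K = L L^T turns the correlated effects b into iid effects u that standard mixed-model software can handle. The nuclear-family matrix below is a textbook example, not the T1DGC structure:

```python
import numpy as np

rng = np.random.default_rng(4)

# Relationship matrix for a nuclear family (two unrelated parents, two kids):
# entries are twice the kinship coefficients.
K = np.array([
    [1.0, 0.0, 0.5, 0.5],
    [0.0, 1.0, 0.5, 0.5],
    [0.5, 0.5, 1.0, 0.5],
    [0.5, 0.5, 0.5, 1.0],
])
L = np.linalg.cholesky(K)

# Re-formulation: correlated effects b = L u with iid u ~ N(0, sigma^2).
sigma = 0.8
u = rng.normal(0, sigma, size=(4, 100000))
b = L @ u
emp_cov = np.cov(b)        # empirically reproduces sigma^2 * K
```

In the re-formulated GLMM the linear predictor uses `L @ u`, so the software only ever sees independent normal random effects `u`.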
ERIC Educational Resources Information Center
Xu, Xueli; von Davier, Matthias
2008-01-01
The general diagnostic model (GDM) utilizes located latent classes for modeling a multidimensional proficiency variable. In this paper, the GDM is extended by employing a log-linear model for multiple populations that assumes constraints on parameters across multiple groups. This constrained model is compared to log-linear models that assume…
NASA Astrophysics Data System (ADS)
Rio, Daniel; Rawlings, Robert; Woltz, Lawrence; Gilman, Jodi; Hommer, Daniel
2009-02-01
The general linear model (GLM) has been extensively applied to fMRI data in the time domain. However, time series data can also be analyzed in the Fourier domain, where the assumptions made about the noise in the signal can be less restrictive and statistical tests are mathematically more rigorous. A complex form of the GLM in the Fourier domain has been applied to the analysis of fMRI (BOLD) data. This methodology has a number of advantages over temporal methods: (1) noise in the fMRI data is modeled more generally and closer to that actually seen in the data; (2) any input function is allowed, regardless of the timing; (3) non-parametric estimation of the transfer function at each voxel is possible; and (4) rigorous statistical inference on single subjects is possible. This is demonstrated in the analysis of an experimental design with random exponentially distributed stimulus inputs (a two-way ANOVA design with input stimuli images of alcohol, non-alcohol beverages and positive or negative images) sampled at 400 milliseconds. Applied to a pair of subjects, this methodology showed precise and interesting results (e.g., alcoholic beverage images attenuated the response to negative images in an alcoholic as compared to a control subject).
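Non-parametric transfer-function estimation in the Fourier domain can be sketched with cross- and auto-spectra averaged over trials; the kernel shape, stimulus density, and noise level below are illustrative assumptions, not the paper's design:

```python
import numpy as np

rng = np.random.default_rng(5)

n, trials = 256, 40
t = np.arange(n)
h_true = (t / 6.0) * np.exp(-t / 6.0)        # toy hemodynamic-like kernel
H_true = np.fft.rfft(h_true)

Sxx = 0.0
Sxy = 0.0
for _ in range(trials):
    x = (rng.uniform(size=n) < 0.1).astype(float)     # random stimulus train
    # Response = circular convolution with the kernel, plus noise.
    y = np.fft.irfft(np.fft.rfft(x) * H_true, n) + rng.normal(0, 0.1, n)
    X, Y = np.fft.rfft(x), np.fft.rfft(y)
    Sxx += np.abs(X) ** 2                             # auto-spectrum
    Sxy += np.conj(X) * Y                             # cross-spectrum

H_hat = Sxy / Sxx                 # per-frequency transfer-function estimate
h_hat = np.fft.irfft(H_hat, n)    # recovered impulse response
```

Because the estimate is formed frequency by frequency, no parametric shape for the hemodynamic response is assumed.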
NASA Astrophysics Data System (ADS)
Choi, Hon-Chit; Wen, Lingfeng; Eberl, Stefan; Feng, Dagan
2006-03-01
Dynamic Single Photon Emission Computed Tomography (SPECT) has the potential to quantitatively estimate physiological parameters by fitting compartment models to the tracer kinetics. The generalized linear least squares method (GLLS) is an efficient method to estimate unbiased kinetic parameters and parametric images. However, due to the low sensitivity of SPECT, noisy data can cause voxel-wise parameter estimation by GLLS to fail. Fuzzy C-Means (FCM) clustering and a modified FCM, which also utilizes information from the immediate neighboring voxels, are proposed to improve the voxel-wise parameter estimation of GLLS. Monte Carlo simulations were performed to generate dynamic SPECT data with different noise levels, which were processed by standard and modified FCM clustering. Parametric images were estimated by Logan and Yokoi graphical analysis and by GLLS. The influx rate (KI) and volume of distribution (Vd) were estimated for the cerebellum, thalamus and frontal cortex. Our results show that (1) FCM reduces the bias and improves the reliability of parameter estimates for noisy data, (2) GLLS provides estimates of micro parameters (KI-k4) as well as macro parameters, such as the volume of distribution (Vd) and binding potential (BPI and BPII), and (3) FCM clustering incorporating neighboring voxel information does not improve the parameter estimates, but reduces noise in the parametric images. These findings indicate that pre-segmentation with traditional FCM clustering is desirable for generating voxel-wise parametric images with GLLS from dynamic SPECT data.
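Standard FCM, the starting point of the proposed pre-segmentation, alternates membership and centroid updates until the fuzzy partition stabilizes. The toy time-activity curves below stand in for dynamic SPECT voxels and are not drawn from the paper's simulations:

```python
import numpy as np

rng = np.random.default_rng(6)

# Toy time-activity curves from two kinetic classes, with noise.
t = np.linspace(0.1, 30, 20)
class_a = 1 - np.exp(-0.5 * t)
class_b = 1 - np.exp(-0.05 * t)
data = np.vstack([class_a + rng.normal(0, 0.05, (50, 20)),
                  class_b + rng.normal(0, 0.05, (50, 20))])

def fcm(X, c=2, m=2.0, iters=100):
    """Standard fuzzy c-means: alternate membership and centroid updates."""
    n = len(X)
    U = rng.dirichlet(np.ones(c), size=n)          # random initial memberships
    for _ in range(iters):
        W = U ** m
        centers = (W.T @ X) / W.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None], axis=2) + 1e-12
        # u_ik = 1 / sum_j (d_ik / d_ij)^(2/(m-1))
        U = 1.0 / np.sum((d[:, :, None] / d[:, None, :]) ** (2 / (m - 1)), axis=2)
    return U, centers

U, centers = fcm(data)
labels = U.argmax(axis=1)
```

The modified variant discussed in the abstract would additionally average memberships over spatially neighboring voxels before the centroid update.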
NASA Astrophysics Data System (ADS)
Lebovka, Nikolai I.; Tarasevich, Yuri Yu.; Dubinin, Dmitri O.; Laptev, Valeri V.; Vygornitskii, Nikolai V.
2015-12-01
The jamming and percolation for two generalized models of random sequential adsorption (RSA) of linear k-mers (particles occupying k adjacent sites) on a square lattice are studied by means of Monte Carlo simulation. The classical RSA model assumes the absence of overlapping of the new incoming particle with the previously deposited ones. The first model is a generalized variant of the RSA model for both k-mers and a lattice with defects. Some of the occupying k adjacent sites are considered as insulating and some of the lattice sites are occupied by defects (impurities). For this model even a small concentration of defects can inhibit percolation for relatively long k-mers. The second model is the cooperative sequential adsorption one where, for each new k-mer, only a restricted number of lateral contacts z with previously deposited k-mers is allowed. Deposition occurs in the case when z ≤ (1-d)z_m, where z_m = 2(k+1) is the maximum number of contacts of a k-mer and d is the fraction of forbidden contacts. Percolation is observed only in some interval k_min ≤ k ≤ k_max, where the values k_min and k_max depend upon the fraction of forbidden contacts d. The value k_max decreases as d increases. A logarithmic dependence of the type log10(k_max) = a + bd, where a = 4.04 ± 0.22, b = -4.93 ± 0.57, is obtained.
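The classical RSA baseline that both generalized models build on can be sketched in a few lines: k-mers are dropped at random positions and orientations, rejected on overlap, and the near-jamming coverage is read off the lattice. Lattice size, k, open boundaries, and the attempt budget below are illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(7)

def rsa_kmers(L=64, k=4, attempts=150000):
    """Random sequential adsorption of horizontal/vertical k-mers on an
    L x L lattice with no overlap; returns the final coverage fraction."""
    lat = np.zeros((L, L), dtype=bool)
    for _ in range(attempts):
        i, j = rng.integers(0, L, 2)
        if rng.uniform() < 0.5:                        # horizontal attempt
            if j + k <= L and not lat[i, j:j + k].any():
                lat[i, j:j + k] = True
        else:                                          # vertical attempt
            if i + k <= L and not lat[i:i + k, j].any():
                lat[i:i + k, j] = True
    return lat.mean()

theta = rsa_kmers()
```

The first generalized model would additionally seed the lattice with blocked defect sites before deposition; the second would count lateral contacts of each candidate k-mer and reject it when the allowed fraction is exceeded.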
[Structural adjustment, cultural adjustment?].
Dujardin, B; Dujardin, M; Hermans, I
2003-12-01
Over the last two decades, multiple studies have been conducted and many articles published about Structural Adjustment Programmes (SAPs). These studies mainly describe the characteristics of SAPs and analyse their economic consequences as well as their effects upon a variety of sectors: health, education, agriculture and environment. However, very few focus on the sociological and cultural effects of SAPs. Following a summary of SAPs' content and characteristics, the paper briefly discusses the historical course of SAPs and the different critiques that have been made. The cultural consequences of SAPs are introduced and described on four different levels: political, community, familial, and individual. These levels are analysed through examples from the literature and individual testimonies from people in the Southern Hemisphere. The paper concludes that SAPs, alongside economic globalisation processes, are responsible for an acute breakdown of social and cultural structures in societies in the South. It should be a priority not only to better understand the situation and its determining factors, but also to intervene with strategies that support and reinvest in the social and cultural sectors, which is vital in order to allow individuals and communities in the South to strengthen their autonomy and identity.
NASA Astrophysics Data System (ADS)
Cariolle, D.; Teyssèdre, H.
2007-01-01
This article describes the validation of a linear parameterization of ozone photochemistry for use in upper-tropospheric and stratospheric studies. The present work extends a previously developed scheme by improving the 2D model used to derive the coefficients of the parameterization. The chemical reaction rates are updated from a compilation that includes recent laboratory work. Furthermore, the polar ozone destruction due to heterogeneous reactions at the surface of polar stratospheric clouds is taken into account as a function of the stratospheric temperature and the total chlorine content. Two versions of the parameterization are tested. The first only requires the resolution of a continuity equation for the time evolution of the ozone mixing ratio; the second uses one additional equation for a cold tracer. The parameterization has been introduced into the chemical transport model MOCAGE. The model is integrated with wind and temperature fields from the ECMWF operational analyses over the period 2000-2004. Overall, the results show very good agreement between the modelled ozone distribution and the Total Ozone Mapping Spectrometer (TOMS) satellite data and the "in-situ" vertical soundings. During the course of the integration the model does not show any drift and the biases are generally small. The model also reproduces fairly well the polar ozone variability, notably the formation of "ozone holes" in the southern hemisphere with amplitudes and seasonal evolutions that follow the dynamics and time evolution of the polar vortex. The introduction of the cold tracer further improves the simulation by allowing additional ozone destruction inside air masses exported from high to mid-latitudes, and by maintaining low ozone contents inside the polar vortex of the southern hemisphere over longer periods in springtime. It is concluded that for the study of climatic scenarios or the assimilation of ozone data, the present
Vock, David M; Davidian, Marie; Tsiatis, Anastasios A
2014-01-01
Generalized linear and nonlinear mixed models (GLMMs and NLMMs) are commonly used to represent non-Gaussian or nonlinear longitudinal or clustered data. A common assumption is that the random effects are Gaussian. However, this assumption may be unrealistic in some applications, and misspecification of the random effects density may lead to maximum likelihood parameter estimators that are inconsistent, biased, and inefficient. Because testing if the random effects are Gaussian is difficult, previous research has recommended using a flexible random effects density. However, computational limitations have precluded widespread use of flexible random effects densities for GLMMs and NLMMs. We develop a SAS macro, SNP_NLMM, that overcomes the computational challenges to fit GLMMs and NLMMs where the random effects are assumed to follow a smooth density that can be represented by the seminonparametric formulation proposed by Gallant and Nychka (1987). The macro is flexible enough to allow for any density of the response conditional on the random effects and any nonlinear mean trajectory. We demonstrate the SNP_NLMM macro on a GLMM of the disease progression of toenail infection and on a NLMM of intravenous drug concentration over time.
NASA Astrophysics Data System (ADS)
Asong, Zilefac E.; Khaliq, M. N.; Wheater, H. S.
2016-02-01
Based on the Generalized Linear Model (GLM) framework, a multisite stochastic modelling approach is developed using daily observations of precipitation and minimum and maximum temperatures from 120 sites located across the Canadian Prairie Provinces: Alberta, Saskatchewan and Manitoba. Temperature is modeled using a two-stage normal-heteroscedastic model by fitting mean and variance components separately. Likewise, precipitation occurrence and conditional precipitation intensity processes are modeled separately. The relationship between precipitation and temperature is accounted for by using transformations of precipitation as covariates to predict temperature fields. Large scale atmospheric covariates from the National Center for Environmental Prediction Reanalysis-I, teleconnection indices, geographical site attributes, and observed precipitation and temperature records are used to calibrate these models for the 1971-2000 period. Validation of the developed models is performed on both pre- and post-calibration period data. Results of the study indicate that the developed models are able to capture spatiotemporal characteristics of observed precipitation and temperature fields, such as inter-site and inter-variable correlation structure, and systematic regional variations present in observed sequences. A number of simulated weather statistics ranging from seasonal means to characteristics of temperature and precipitation extremes and some of the commonly used climate indices are also found to be in close agreement with those derived from observed data. This GLM-based modelling approach will be developed further for multisite statistical downscaling of Global Climate Model outputs to explore climate variability and change in this region of Canada.
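The two-part structure (an occurrence model chained to a wet-day intensity model) can be sketched on synthetic data; the seasonal cycles, monthly stratification, and moment-based gamma fit below are simplifications of the covariate-driven GLMs used in the paper:

```python
import numpy as np

rng = np.random.default_rng(8)

# Synthetic daily precipitation for 30 years with a seasonal cycle.
days = np.arange(30 * 365)
doy = days % 365
p_wet = 0.2 + 0.15 * np.sin(2 * np.pi * doy / 365)     # occurrence probability
mean_int = 4.0 + 2.0 * np.sin(2 * np.pi * doy / 365)   # mean wet-day amount (mm)
wet = rng.uniform(size=days.size) < p_wet
amount = np.where(wet, rng.gamma(2.0, mean_int / 2.0), 0.0)

# Two-part calibration by month: occurrence frequency, then wet-day
# gamma intensity summarized by its mean (method of moments).
month = (doy // 30).clip(0, 11)
p_hat = np.array([wet[month == m].mean() for m in range(12)])
mean_hat = np.array([amount[(month == m) & wet].mean() for m in range(12)])

# Weather-generator step: simulate one synthetic year of occurrence.
sim_wet = rng.uniform(size=365) < p_hat[(np.arange(365) // 30).clip(0, 11)]
```

In the paper's GLM framework the monthly stratification is replaced by logistic and gamma regressions on large-scale atmospheric covariates, which is what makes the generator usable for downscaling.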
Monda, D.P.; Galat, D.L.; Finger, S.E.; Kaiser, M.S.
1995-01-01
Toxicity of un-ionized ammonia (NH3-N) to the midge, Chironomus riparius was compared, using laboratory culture (well) water and sewage effluent (≈0.4 mg/L NH3-N) in two 96-h, static-renewal toxicity experiments. A generalized linear model was used for data analysis. For the first and second experiments, respectively, LC50 values were 9.4 mg/L (Test 1A) and 6.6 mg/L (Test 2A) for ammonia in well water, and 7.8 mg/L (Test 1B) and 4.1 mg/L (Test 2B) for ammonia in sewage effluent. Slopes of dose-response curves for Tests 1A and 2A were equal, but mortality occurred at lower NH3-N concentrations in Test 2A (unequal intercepts). Response of C. riparius to NH3 in effluent was not consistent; dose-response curves for Tests 1B and 2B differed in slope and intercept. Nevertheless, C. riparius was more sensitive to ammonia in effluent than in well water in both experiments, indicating a synergistic effect of ammonia in sewage effluent. These results demonstrate the advantages of analyzing the organism's entire range of response, as opposed to generating LC50 values, which represent only one point on the dose-response curve.
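The GLM treatment of the full dose-response curve can be sketched with a logit-link binomial fit on log concentration, from which an LC50 falls out as exp(-b0/b1); the concentrations, group sizes, and true LC50 below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(9)

# Toy 96-h test: 6 concentrations, 30 midges each.
conc = np.array([1.0, 2.0, 4.0, 8.0, 16.0, 32.0])     # mg/L NH3-N
n = np.full(6, 30)
true_lc50 = 7.0
p = 1 / (1 + np.exp(-2.5 * (np.log(conc) - np.log(true_lc50))))
dead = rng.binomial(n, p)

# Binomial GLM (logit link) fit by IRLS on log-concentration.
X = np.column_stack([np.ones(6), np.log(conc)])
b = np.zeros(2)
for _ in range(50):
    mu = n / (1 + np.exp(-np.clip(X @ b, -20, 20)))
    W = mu * (1 - mu / n)
    z = X @ b + (dead - mu) / W
    b = np.linalg.solve((X.T * W) @ X, (X.T * W) @ z)

lc50 = np.exp(-b[0] / b[1])     # concentration giving 50% mortality
```

Comparing two such fits (well water vs. effluent) amounts to testing equality of the slope and intercept, which is the comparison the abstract reports.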
Tian, Fenghua; Liu, Hanli
2013-01-01
One of the main challenges in functional diffuse optical tomography (DOT) is to accurately recover the depth of brain activation, which is even more essential when differentiating true brain signals from task-evoked artifacts in the scalp. Recently, we developed a depth-compensated algorithm (DCA) to minimize the depth localization error in DOT. However, the semi-infinite model that was used in DCA deviated significantly from the realistic human head anatomy. In the present work, we incorporated depth-compensated DOT (DC-DOT) with a standard anatomical atlas of human head. Computer simulations and human measurements of sensorimotor activation were conducted to examine and prove the depth specificity and quantification accuracy of brain atlas-based DC-DOT. In addition, node-wise statistical analysis based on the general linear model (GLM) was also implemented and performed in this study, showing the robustness of DC-DOT that can accurately identify brain activation at the correct depth for functional brain imaging, even when co-existing with superficial artifacts. PMID:23859922
Huang, A.B.; Yortsos, Y.C.
1984-09-01
This paper reports on the continuation of previous work on the linear stability of immiscible, two-phase flow displacement processes in porous media, including continuously changing mobility and capillary effects. In Part I, simple basic-flow profiles that allow exact solutions were investigated. First, the stability of non-capillary flows corresponding to a straight-line fractional flow is examined. Next, the stability of capillary flows for general basic-flow profiles is examined. For values of the viscosity ratio above the critical value, the numerical results show that the displacement is unstable to small disturbances of wavelength larger than a critical value, and stable otherwise. This effect is attributed to the stabilizing action of capillarity. Wavelengths corresponding to the highest rate of growth are determined numerically. It is found that stability is enhanced at lower values of the capillary number and the injection rate. Finally, a limited sensitivity study of the effect on stability of the functional forms of the relative permeability and capillary pressure is carried out.
Mendes, T M; Guimarães-Okamoto, P T C; Machado-de-Avila, R A; Oliveira, D; Melo, M M; Lobato, Z I; Kalapothakis, E; Chávez-Olórtegui, C
2015-06-01
This communication describes the general characteristics of the venom of the Brazilian scorpion Tityus fasciolatus, an endemic species found in central Brazil (states of Goiás and Minas Gerais) and responsible for sting accidents in this area. The soluble venom obtained from this scorpion is toxic to mice, with an LD50 of 2.984 mg/kg (subcutaneously). SDS-PAGE of the soluble venom resolved 10 fractions ranging in size from 6 to 80 kDa. Sheep were employed for anti-T. fasciolatus venom serum production. Western blotting analysis showed that most of these venom proteins are immunogenic. T. fasciolatus anti-venom revealed consistent cross-reactivity with venom antigens from Tityus serrulatus. Using known primers for T. serrulatus toxins, we identified three toxin sequences from T. fasciolatus venom. Linear epitopes of these toxins were localized, and fifty-five overlapping pentadecapeptides covering the complete amino acid sequences of the three toxins were synthesized on cellulose membranes (spot-synthesis technique). The epitopes were located on the 3D structures and some residues important for structure/function were identified.
Xiao, Xun; Geyer, Veikko F; Bowne-Anderson, Hugo; Howard, Jonathon; Sbalzarini, Ivo F
2016-08-01
Biological filaments, such as actin filaments, microtubules, and cilia, are often imaged using different light-microscopy techniques. Reconstructing the filament curve from the acquired images constitutes the filament segmentation problem. Since filaments have lower dimensionality than the image itself, there is an inherent trade-off between tracing the filament with sub-pixel accuracy and avoiding noise artifacts. Here, we present a globally optimal filament segmentation method based on B-spline vector level-sets and a generalized linear model for the pixel intensity statistics. We show that the resulting optimization problem is convex and can hence be solved with global optimality. We introduce a simple and efficient algorithm to compute such optimal filament segmentations, and provide an open-source implementation as an ImageJ/Fiji plugin. We further derive an information-theoretic lower bound on the filament segmentation error, quantifying how well an algorithm could possibly do given the information in the image. We show that our algorithm asymptotically reaches this bound in the spline coefficients. We validate our method in comprehensive benchmarks, compare with other methods, and show applications from fluorescence, phase-contrast, and dark-field microscopy.
Pernet, Cyril R.
2014-01-01
This tutorial presents several misconceptions related to the use of the General Linear Model (GLM) in functional Magnetic Resonance Imaging (fMRI). The goal is not to present mathematical proofs but to educate using examples and computer code (in Matlab). In particular, I address issues related to (1) model parameterization (modeling baseline or null events) and scaling of the design matrix; (2) hemodynamic modeling using basis functions; and (3) computing percentage signal change. Using a simple controlled block design and an alternating block design, I first show why "baseline" should not be modeled (model over-parameterization), and how this affects effect sizes. I also show that, depending on what is tested, over-parameterization does not necessarily impact statistical results. Next, using a simple periodic vs. random event-related design, I show how the hemodynamic model (hemodynamic function only or with derivatives) can affect parameter estimates, and detail the role of orthogonalization. I then relate the above results to the computation of percentage signal change. Finally, I discuss how these issues affect group analyses and give some recommendations. PMID:24478622
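The over-parameterization point above can be illustrated numerically. The sketch below is a NumPy toy example (not the tutorial's Matlab code; the design and effect sizes are made up): adding a baseline regressor to a design that already contains a constant makes the design matrix rank-deficient, yet an estimable task-minus-baseline contrast is unchanged.

```python
import numpy as np

# Hypothetical block design: 20 scans alternating task (1) and rest (0).
task = np.array([1, 1, 1, 1, 1, 0, 0, 0, 0, 0] * 2, dtype=float)
base = 1.0 - task
n = task.size
rng = np.random.default_rng(0)
y = 2.0 * task + 5.0 + 0.1 * rng.standard_normal(n)  # task effect + mean + noise

# Well-parameterized model: task regressor plus a constant (baseline absorbed by the intercept).
X1 = np.column_stack([task, np.ones(n)])
b1, *_ = np.linalg.lstsq(X1, y, rcond=None)

# Over-parameterized model: task + baseline + constant; task + baseline = constant,
# so the design matrix loses rank.
X2 = np.column_stack([task, base, np.ones(n)])
print(np.linalg.matrix_rank(X2))  # 2, not 3

# Individual betas are no longer unique, but the task-minus-baseline contrast still is.
b2, *_ = np.linalg.lstsq(X2, y, rcond=None)  # minimum-norm solution
c = np.array([1.0, -1.0, 0.0])
print(abs(c @ b2 - b1[0]) < 1e-8)  # True
```

This is why, depending on the contrast tested, over-parameterization need not change the statistical result: estimable contrasts stay unique even when the individual parameters do not.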
NASA Astrophysics Data System (ADS)
de Souza, R. S.; Hilbe, J. M.; Buelens, B.; Riggs, J. D.; Cameron, E.; Ishida, E. E. O.; Chies-Santos, A. L.; Killedar, M.
2015-10-01
In this paper, the third in a series illustrating the power of generalized linear models (GLMs) for the astronomical community, we elucidate the potential of the class of GLMs which handles count data. The size of a galaxy's globular cluster (GC) population (NGC) is a long-standing puzzle in the astronomical literature. It falls in the category of count data analysis, yet it is usually modelled as if it were a continuous response variable. We have developed a Bayesian negative binomial regression model to study the connection between NGC and the following galaxy properties: central black hole mass, dynamical bulge mass, bulge velocity dispersion and absolute visual magnitude. The methodology introduced herein naturally accounts for heteroscedasticity, intrinsic scatter, measurement errors in both axes (either discrete or continuous), and allows modelling the population of GCs on its natural scale as a non-negative integer variable. Prediction intervals of 99 per cent around the trend for the expected NGC comfortably envelop the data, notably including the Milky Way, which has hitherto been considered a problematic outlier. Finally, we demonstrate how random intercept models can incorporate information on each galaxy's morphological type. Bayesian variable selection methodology allows for automatically identifying galaxy types with different GC production, suggesting that on average S0 galaxies have a GC population 35 per cent smaller than other types of similar brightness.
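As a minimal illustration of treating such data as counts rather than a continuous response, the sketch below fits a plain Poisson GLM by iteratively reweighted least squares on simulated data (a simpler stand-in for the authors' Bayesian negative binomial model; the coefficients are made up):

```python
import numpy as np

def poisson_irls(X, y, n_iter=25):
    """Fit a Poisson GLM with log link by iteratively reweighted least squares."""
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        eta = X @ beta
        mu = np.exp(eta)
        z = eta + (y - mu) / mu      # working response
        W = mu                       # IRLS weights for the Poisson/log-link case
        WX = X * W[:, None]
        beta = np.linalg.solve(X.T @ WX, X.T @ (W * z))
    return beta

rng = np.random.default_rng(1)
n = 2000
x = rng.standard_normal(n)
X = np.column_stack([np.ones(n), x])
y = rng.poisson(np.exp(0.5 + 0.8 * x))   # true intercept 0.5, slope 0.8
beta = poisson_irls(X, y)
print(beta)  # roughly [0.5, 0.8]
```

A negative binomial model adds a dispersion parameter on top of this, which is what lets it absorb the intrinsic scatter the abstract mentions.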
NASA Astrophysics Data System (ADS)
Asong, Z. E.; Khaliq, M. N.; Wheater, H. S.
2016-08-01
In this study, a multisite multivariate statistical downscaling approach based on the Generalized Linear Model (GLM) framework is developed to downscale daily observations of precipitation and minimum and maximum temperatures from 120 sites located across the Canadian Prairie Provinces: Alberta, Saskatchewan and Manitoba. First, large-scale atmospheric covariates from the National Center for Environmental Prediction (NCEP) Reanalysis-I, teleconnection indices, geographical site attributes, and observed precipitation and temperature records are used to calibrate GLMs for the 1971-2000 period. Then the calibrated models are used to generate daily sequences of precipitation and temperature for the historical period 1962-2005 (conditioned on NCEP predictors) and for the future period 2006-2100, using outputs from five CMIP5 (Coupled Model Intercomparison Project Phase 5) Earth System Models under the Representative Concentration Pathway (RCP) scenarios RCP2.6, RCP4.5, and RCP8.5. The results indicate that the fitted GLMs are able to capture spatiotemporal characteristics of the observed precipitation and temperature fields. According to the downscaled future climate, mean precipitation is projected to increase in summer and decrease in winter, while minimum temperature is expected to warm faster than maximum temperature. Climate extremes are projected to intensify with increased radiative forcing.
Mendes, T M; Guimarães-Okamoto, P T C; Machado-de-Avila, R A; Oliveira, D; Melo, M M; Lobato, Z I; Kalapothakis, E; Chávez-Olórtegui, C
2015-06-01
This communication describes the general characteristics of the venom from the Brazilian scorpion Tityus fasciolatus, an endemic species found in central Brazil (states of Goiás and Minas Gerais) and responsible for sting accidents in this area. The soluble venom obtained from this scorpion is toxic to mice, with an LD50 of 2.984 mg/kg (subcutaneous). SDS-PAGE of the soluble venom resolved 10 fractions ranging in size from 6 to 80 kDa. Sheep were employed for anti-T. fasciolatus venom serum production. Western blotting analysis showed that most of these venom proteins are immunogenic. T. fasciolatus anti-venom revealed consistent cross-reactivity with venom antigens from Tityus serrulatus. Using known primers for T. serrulatus toxins, we identified three toxin sequences from T. fasciolatus venom. Linear epitopes of these toxins were localized, and fifty-five overlapping pentadecapeptides covering the complete amino acid sequences of the three toxins were synthesized on cellulose membranes (spot-synthesis technique). The epitopes were located on the 3D structures, and some residues important for structure/function were identified. PMID:25817000
2010-01-01
Background: Near-infrared spectroscopy (NIRS) is a non-invasive neuroimaging technique that has recently been developed to measure changes in cerebral blood oxygenation associated with brain activity. To date, for functional brain mapping applications, there is no standard on-line method for analysing NIRS data. Methods: In this paper, a novel on-line NIRS data analysis framework taking advantage of both the general linear model (GLM) and the Kalman estimator is devised. The Kalman estimator is used to update the GLM coefficients recursively, and one critical coefficient regarding brain activity is then passed to a t-test. The t-test result is used to update a topographic brain activation map. Meanwhile, a set of high-pass filters is incorporated into the GLM to remove very low-frequency noise, and an autoregressive (AR) model is used to account for the temporal correlation caused by physiological noise in the NIRS time series. A set of data recorded in finger tapping experiments is studied using the proposed framework. Results: The results suggest that the method can effectively track the task-related brain activation areas and limit noise distortion in the estimates while the experiment is running, demonstrating the potential of the proposed method for real-time NIRS-based brain imaging. Conclusions: This paper presents a novel on-line approach for analysing NIRS data for functional brain mapping applications, demonstrating the potential of a real-time-updating topographic brain activation map. PMID:21138595
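The recursive coefficient update described above can be sketched with a scalar-measurement Kalman filter that treats the GLM coefficients as a slowly varying random-walk state (the synthetic regressor and noise levels are assumptions, not the paper's finger-tapping data):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 500
design = np.column_stack([np.ones(n), np.sin(np.linspace(0, 20, n))])  # intercept + task regressor
y = design @ np.array([1.0, 0.5]) + 0.2 * rng.standard_normal(n)       # true betas: 1.0, 0.5

beta = np.zeros(2)
P = 1e3 * np.eye(2)     # large initial coefficient uncertainty
q, r = 1e-6, 0.04       # process and measurement noise variances (assumed)
for t in range(n):
    P = P + q * np.eye(2)              # predict: random-walk coefficients
    x = design[t]
    Px = P @ x
    s = x @ Px + r                     # innovation variance
    k = Px / s                         # Kalman gain
    beta = beta + k * (y[t] - x @ beta)
    P = P - np.outer(k, Px)

t_stat = beta[1] / np.sqrt(P[1, 1])    # approximate t-statistic for the task coefficient
print(beta, t_stat)
```

Each new sample updates the coefficients in O(p^2) work, which is what makes refreshing the activation map while the experiment is running feasible.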
NASA Astrophysics Data System (ADS)
Widyaningsih, Yekti; Saefuddin, Asep; Notodiputro, Khairil A.; Wigena, Aji H.
2012-05-01
The objective of this research is to build a nested generalized linear mixed model using an ordinal response variable with some covariates. There are three main tasks in this paper: the parameter estimation procedure, a simulation, and the implementation of the model for real data. For the parameter estimation procedure, the concepts of thresholds, nested random effects, and the computational algorithm are described. The simulated data are built under 3 conditions to assess the effect of different parameter values of the random effect distributions. The last task is the implementation of the model for data about poverty in 9 districts of Java Island. The districts are Kuningan, Karawang, and Majalengka, chosen randomly in West Java; Temanggung, Boyolali, and Cilacap from Central Java; and Blitar, Ngawi, and Jember from East Java. The covariates in this model are province, number of bad nutrition cases, number of farmer families, and number of health personnel. In this modeling, all covariates are grouped on an ordinal scale. The unit of observation in this research is the sub-district (kecamatan), nested in district, and districts (kabupaten) are nested in province. For the simulation results, the ARB (absolute relative bias) and RRMSE (relative root mean square error) measures are used. They show that the province parameters have the highest bias, but a more stable RRMSE under all conditions. The simulation design needs to be improved by adding other conditions, such as higher correlation between covariates. Furthermore, in the model implementation for the data, only the number of farmer families and the number of health personnel have significant contributions to the level of poverty in Central Java and East Java provinces, and only district 2 (Karawang) of province 1 (West Java) has a random effect different from the others. The source of the data is PODES (Potensi Desa) 2008 from BPS (Badan Pusat Statistik).
NASA Astrophysics Data System (ADS)
Cariolle, D.; Teyssèdre, H.
2007-05-01
This article describes the validation of a linear parameterization of the ozone photochemistry for use in upper tropospheric and stratospheric studies. The present work extends a previously developed scheme by improving the 2-D model used to derive the coefficients of the parameterization. The chemical reaction rates are updated from a compilation that includes recent laboratory work. Furthermore, the polar ozone destruction due to heterogeneous reactions at the surface of the polar stratospheric clouds is taken into account as a function of the stratospheric temperature and the total chlorine content. Two versions of the parameterization are tested. The first one only requires the solution of a continuity equation for the time evolution of the ozone mixing ratio, the second one uses one additional equation for a cold tracer. The parameterization has been introduced into the chemical transport model MOCAGE. The model is integrated with wind and temperature fields from the ECMWF operational analyses over the period 2000-2004. Overall, the results from the two versions show a very good agreement between the modelled ozone distribution and the Total Ozone Mapping Spectrometer (TOMS) satellite data and the "in-situ" vertical soundings. During the course of the integration the model does not show any drift and the biases are generally small, of the order of 10%. The model also reproduces fairly well the polar ozone variability, notably the formation of "ozone holes" in the Southern Hemisphere with amplitudes and a seasonal evolution that follow the dynamics and time evolution of the polar vortex. The introduction of the cold tracer further improves the model simulation by allowing additional ozone destruction inside air masses exported from the high to the mid-latitudes, and by maintaining low ozone content inside the polar vortex of the Southern Hemisphere over longer periods in spring time. It is concluded that for the study of climate scenarios or the assimilation of
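The essence of such a linear scheme is a relaxation of the ozone mixing ratio toward a reference state. The sketch below keeps a single made-up relaxation term (the actual parameterization derives its coefficients from a 2-D photochemical model and includes temperature, column-ozone, and heterogeneous-chemistry terms):

```python
# Cariolle-type linear ozone tendency, reduced to its relaxation core:
#   d(r)/dt = A1 + A2 * (r - A3)
# with A1 the net production at the reference state, A2 the (negative)
# relaxation rate, and A3 the reference mixing ratio. All values are hypothetical.
A1 = 0.0
A2 = -1.0 / (30 * 86400.0)    # 30-day photochemical relaxation timescale
A3 = 3.0e-6                   # reference ozone mixing ratio

dt = 3600.0                   # 1-hour time step
r = 1.0e-6                    # initial mixing ratio, far from the reference
for _ in range(24 * 365):     # integrate one year with forward Euler
    r += dt * (A1 + A2 * (r - A3))
print(r)                      # has relaxed to ~3.0e-6
```

Because the tendency is linear in r, one continuity equation per tracer suffices, which is why such schemes stay cheap enough for long transport-model integrations.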
Heydari, Shahram; Miranda-Moreno, Luis F; Liping, Fu
2014-12-01
In fall 2009, a new speed limit of 40 km/h was introduced on local streets in Montreal (previous speed limit: 50 km/h). This paper proposes a methodology to efficiently estimate the effect of such a reduction on speeding behaviors. We employ a full Bayes before-after approach, which overcomes the limitations of the empirical Bayes method. The proposed methodology allows for the analysis of speed data using hourly observations; therefore, the entire daily profile of speed is considered. Furthermore, it accounts for the entire distribution of speed, in contrast to the traditional approach of considering only a point estimate such as the 85th percentile speed. Different reference speeds were used to examine variations in the treatment effectiveness in terms of speeding rate and frequency. In addition to comparing rates of vehicles exceeding reference speeds of 40 km/h and 50 km/h (speeding), we verified how the implemented treatment affected "excessive speeding" behaviors (exceeding 80 km/h). To model operating speeds, two Bayesian generalized mixed linear models were utilized. These models have the advantage of addressing the heterogeneity problem in observations and efficiently capturing potential intra-site correlations. A variety of site characteristics, temporal variables, and environmental factors were considered. The analyses indicated that variables such as lane width and night hours had an increasing effect on speeding. Conversely, roadside parking had a decreasing effect on speeding. One-way operation and lane width had an increasing effect on excessive speeding, whereas evening hours had a decreasing effect. This study concluded that although the treatment was effective with respect to speed references of 40 km/h and 50 km/h, its effectiveness was not significant with respect to excessive speeding, which carries a great risk to pedestrians and cyclists in urban areas. Therefore, caution must be taken in drawing conclusions about the effectiveness of speed limit reduction. This
ERIC Educational Resources Information Center
Chen, Haiwen
2012-01-01
In this article, linear item response theory (IRT) observed-score equating is compared, under a generalized kernel equating framework, with Levine observed-score equating for the nonequivalent groups with anchor test design. Interestingly, these two equating methods are closely related despite being based on different methodologies. Specifically, when…
ERIC Educational Resources Information Center
Dimitrov, Dimiter M.; Raykov, Tenko; AL-Qataee, Abdullah Ali
2015-01-01
This article is concerned with developing a measure of general academic ability (GAA) for high school graduates who apply to colleges, as well as with the identification of optimal weights of the GAA indicators in a linear combination that yields a composite score with maximal reliability and maximal predictive validity, employing the framework of…
ERIC Educational Resources Information Center
Bashaw, W. L., Ed.; Findley, Warren G., Ed.
This volume contains the five major addresses and subsequent discussion from the Symposium on the General Linear Models Approach to the Analysis of Experimental Data in Educational Research, which was held in 1967 in Athens, Georgia. The symposium was designed to produce systematic information, including new methodology, for dissemination to the…
Lewis, Robert Michael (College of William and Mary, Williamsburg, VA); Torczon, Virginia Joanne (College of William and Mary, Williamsburg, VA); Kolda, Tamara Gibson
2006-08-01
We consider the solution of nonlinear programs in the case where derivatives of the objective function and nonlinear constraints are unavailable. To solve such problems, we propose an adaptation of a method due to Conn, Gould, Sartenaer, and Toint that proceeds by approximately minimizing a succession of linearly constrained augmented Lagrangians. Our modification is to use a derivative-free generating set direct search algorithm to solve the linearly constrained subproblems. The stopping criterion proposed by Conn, Gould, Sartenaer and Toint for the approximate solution of the subproblems requires explicit knowledge of derivatives. Such information is presumed absent in the generating set search method we employ. Instead, we show that stationarity results for linearly constrained generating set search methods provide a derivative-free stopping criterion, based on a step-length control parameter, that is sufficient to preserve the convergence properties of the original augmented Lagrangian algorithm.
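The structure of the method (outer augmented-Lagrangian iterations whose linearly constrained subproblems are solved derivative-free, with the step-length control parameter acting as the stopping criterion) can be sketched as follows. The quadratic test problem and parameters are hypothetical, and the inner solver is a plain compass search rather than a full generating set search:

```python
import numpy as np

def compass_search(f, x, step=0.5, tol=1e-6):
    """Derivative-free coordinate search; the final step length serves as the
    stationarity measure that stops the inner solve."""
    while step > tol:
        for d in np.vstack([np.eye(len(x)), -np.eye(len(x))]):
            y = x + step * d
            if f(y) < f(x):
                x = y
                break
        else:                 # no improving direction: shrink the step
            step *= 0.5
    return x

# Hypothetical problem: minimize (x-2)^2 + (y-1)^2  subject to  x + y = 1.
mu, lam = 10.0, 0.0
x = np.zeros(2)
for _ in range(20):           # outer augmented-Lagrangian iterations
    aug = lambda z, lam=lam: ((z[0] - 2) ** 2 + (z[1] - 1) ** 2
                              + lam * (z[0] + z[1] - 1)
                              + 0.5 * mu * (z[0] + z[1] - 1) ** 2)
    x = compass_search(aug, x)
    lam += mu * (x[0] + x[1] - 1)   # first-order multiplier update
print(x)  # near the constrained optimum (1, 0)
```

The key point mirrored from the abstract: no derivatives are ever evaluated; the shrinking step length plays the role the gradient-based stopping test plays in the original algorithm.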
NASA Technical Reports Server (NTRS)
Utku, S.
1969-01-01
A general purpose digital computer program for the in-core solution of linear equilibrium problems of structural mechanics is documented. The program requires minimum input for the description of the problem. The solution is obtained by means of the displacement method and the finite element technique. Almost any geometry and structure may be handled because of the availability of linear, triangular, quadrilateral, tetrahedral, hexahedral, conical, triangular torus, and quadrilateral torus elements. The assumption of piecewise linear deflection distribution insures monotonic convergence of the deflections from the stiffer side with decreasing mesh size. The stresses are provided by the best-fit strain tensors in the least squares at the mesh points where the deflections are given. The selection of local coordinate systems whenever necessary is automatic. The core memory is used by means of dynamic memory allocation, an optional mesh-point relabelling scheme and imposition of the boundary conditions during the assembly time.
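The displacement method and finite element assembly the program is built on can be illustrated in their simplest form, a 1D axial bar of two-node linear elements (dimensions and load are made up; the actual program offers triangular, quadrilateral, tetrahedral, hexahedral, and torus elements):

```python
import numpy as np

# Bar fixed at x=0, axial load P at x=L, Young's modulus E, area A (assumed values).
E, A, L, P = 200e9, 1e-4, 2.0, 1000.0
n_el = 8
le = L / n_el
k_el = (E * A / le) * np.array([[1.0, -1.0], [-1.0, 1.0]])  # element stiffness matrix

K = np.zeros((n_el + 1, n_el + 1))
for e in range(n_el):              # assemble the global stiffness matrix
    K[e:e + 2, e:e + 2] += k_el
f = np.zeros(n_el + 1)
f[-1] = P                          # point load at the free end

u = np.zeros(n_el + 1)
u[1:] = np.linalg.solve(K[1:, 1:], f[1:])   # impose u(0) = 0, then solve K u = f
print(u[-1], P * L / (E * A))      # tip displacement matches the exact value
```

For this end-loaded bar the piecewise linear displacement assumption is exact at the nodes; in general it gives the monotonic convergence from the stiff side that the abstract describes.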
NASA Astrophysics Data System (ADS)
Sharma, S.; Narayan, A.
2001-06-01
The non-linear oscillation of the inter-connected satellites system about its equilibrium position in the neighbourhood of the main resonance ?? = 1, under the combined effects of solar radiation pressure and dissipative forces of a general nature, has been discussed. It is found that the oscillation of the system is disturbed when the frequency of the natural oscillation approaches the resonance frequency.
7 CFR 251.7 - Formula adjustments.
Code of Federal Regulations, 2010 CFR
2010-01-01
GENERAL REGULATIONS AND POLICIES - FOOD DISTRIBUTION; THE EMERGENCY FOOD ASSISTANCE PROGRAM; § 251.7 Formula adjustments. (a) Commodity adjustments. The Department will make annual adjustments...
Kuhls-Gilcrist, Andrew T.; Gupta, Sandesh K.; Bednarek, Daniel R.; Rudin, Stephen
2010-01-01
The MTF, NNPS, and DQE are standard linear system metrics used to characterize intrinsic detector performance. To evaluate total system performance for actual clinical conditions, generalized linear system metrics (GMTF, GNNPS and GDQE) that include the effect of the focal spot distribution, scattered radiation, and geometric unsharpness are more meaningful and appropriate. In this study, a two-dimensional (2D) generalized linear system analysis was carried out for a standard flat panel detector (FPD) (194-micron pixel pitch and 600-micron thick CsI) and a newly-developed, high-resolution, micro-angiographic fluoroscope (MAF) (35-micron pixel pitch and 300-micron thick CsI). Realistic clinical parameters and x-ray spectra were used. The 2D detector MTFs were calculated using the new Noise Response method and slanted edge method and 2D focal spot distribution measurements were done using a pin-hole assembly. The scatter fraction, generated for a uniform head equivalent phantom, was measured and the scatter MTF was simulated with a theoretical model. Different magnifications and scatter fractions were used to estimate the 2D GMTF, GNNPS and GDQE for both detectors. Results show spatial non-isotropy for the 2D generalized metrics which provide a quantitative description of the performance of the complete imaging system for both detectors. This generalized analysis demonstrated that the MAF and FPD have similar capabilities at lower spatial frequencies, but that the MAF has superior performance over the FPD at higher frequencies even when considering focal spot blurring and scatter. This 2D generalized performance analysis is a valuable tool to evaluate total system capabilities and to enable optimized design for specific imaging tasks. PMID:21243038
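The cascade behind a generalized MTF can be sketched in 1D with Gaussian blur models (the parameters and Gaussian forms are assumptions for illustration; the study uses measured 2D detector MTFs, focal-spot distributions, and scatter):

```python
import numpy as np

def gaussian_mtf(f, sigma_mm):
    return np.exp(-2.0 * (np.pi * sigma_mm * f) ** 2)

# Frequencies at the object plane; geometric magnification m refers detector
# blur back by 1/m and weights focal-spot blur by (m - 1)/m.
f = np.linspace(0.0, 10.0, 201)   # cycles/mm
m = 1.2
mtf_det = gaussian_mtf(f / m, 0.10)           # detector blur (assumed sigma)
mtf_fs = gaussian_mtf(f * (m - 1) / m, 0.30)  # focal-spot blur (assumed sigma)
gmtf = mtf_det * mtf_fs
print(gmtf[0], gmtf[-1] < mtf_det[-1])        # 1.0 True
```

In the same spirit, the GNNPS and GDQE fold the geometric factors and the scatter fraction into the noise metrics, which is what makes them total-system rather than detector-only measures.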
NASA Astrophysics Data System (ADS)
Tsuboi, Zengo
2013-05-01
In [1] (Z. Tsuboi, Nucl. Phys. B 826 (2010) 399, arXiv:0906.2039), we proposed Wronskian-like solutions of the T-system for the [M,N]-hook of the general linear superalgebra gl(M|N). We have generalized these Wronskian-like solutions to the ones for the general T-hook, which is a union of an [M1,N1]-hook and an [M2,N2]-hook (M = M1 + M2, N = N1 + N2). These solutions are related to Weyl-type supercharacter formulas of infinite-dimensional unitarizable modules of gl(M|N). Our solutions also include a Wronskian-like solution discussed in [2] (N. Gromov, V. Kazakov, S. Leurent, Z. Tsuboi, JHEP 1101 (2011) 155, arXiv:1010.2720) in relation to the AdS5/CFT4 spectral problem.
NASA Technical Reports Server (NTRS)
Ustino, Eugene A.
2006-01-01
This slide presentation reviews the observable radiances as functions of atmospheric parameters and of surface parameters; the mathematics of atmospheric weighting functions (WFs) and surface partial derivatives (PDs) are presented; and the equation of the forward radiative transfer (RT) problem is presented. For non-scattering atmospheres this can be done analytically, and all WFs and PDs can be computed analytically using the direct linearization approach. For scattering atmospheres, in the general case, the solution of the forward RT problem can be obtained only numerically, but only two numerical solutions are needed, one of the forward RT problem and one of the adjoint RT problem, to compute all the WFs and PDs of interest. In this presentation we discuss applications of both the linearization and adjoint approaches.
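For the non-scattering case mentioned above, the analytic weighting functions are easy to verify against finite differences. The sketch below uses a Beer-Lambert toy model with assumed layer optical depths (not the presentation's actual RT model):

```python
import numpy as np

# Toy non-scattering model: transmitted nadir radiance R = I0 * exp(-sum(tau)),
# with assumed layer optical depths tau. The weighting function dR/dtau_k then
# follows by direct linearization: dR/dtau_k = -R for every layer k.
I0 = 1.0
tau = np.array([0.1, 0.3, 0.2])

def radiance(tau):
    return I0 * np.exp(-tau.sum())

R = radiance(tau)
wf_analytic = -R * np.ones_like(tau)

eps = 1e-6   # finite-difference check of the linearization
wf_fd = np.array([(radiance(tau + eps * np.eye(3)[k]) - R) / eps for k in range(3)])
print(np.max(np.abs(wf_fd - wf_analytic)))  # small, ~1e-7
```

In a scattering atmosphere this analytic shortcut disappears, which is where solving one forward plus one adjoint RT problem for all WFs and PDs becomes valuable.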
NASA Technical Reports Server (NTRS)
Tal-Ezer, Hillel
1987-01-01
During the process of solving a mathematical model numerically, there is often a need to operate on a vector v by an operator which can be expressed as f(A), where A is an N x N matrix (e.g., exp(A), sin(A), A^(-1)). Except for very simple matrices, it is impractical to construct the matrix f(A) explicitly. Usually an approximation to it is used. In the present research, an algorithm is developed which uses a polynomial approximation to f(A). It is reduced to a problem of approximating f(z) by a polynomial in z, where z belongs to the domain D in the complex plane which includes all the eigenvalues of A. This problem of approximation is approached by interpolating the function f(z) in a certain set of points which is known to have some maximal properties. The approximation thus achieved is almost best. Implementing the algorithm for some practical problems is described. Since the solution to a linear system Ax = b is x = A^(-1)b, an iterative solution to it can be regarded as a polynomial approximation to f(A) = A^(-1). Implementing the algorithm in this case is also described.
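A minimal sketch of the idea, using Chebyshev interpolation points in place of the paper's specific maximal point set, and f = exp for concreteness:

```python
import numpy as np

# Approximate f(A)v by p(A)v, where p interpolates f at Chebyshev points of an
# interval containing the spectrum of a (here symmetric) matrix A.
rng = np.random.default_rng(3)
n = 50
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
eigs = np.linspace(-1.0, 1.0, n)
A = Q @ np.diag(eigs) @ Q.T               # symmetric, spectrum in [-1, 1]
v = rng.standard_normal(n)

deg = 12
k = np.arange(deg + 1)
nodes = np.cos((2 * k + 1) * np.pi / (2 * (deg + 1)))   # Chebyshev points in [-1, 1]
coef = np.polyfit(nodes, np.exp(nodes), deg)            # interpolating polynomial

# Horner's rule in the matrix argument: evaluate p(A)v without forming f(A).
w = np.zeros(n)
for c in coef:
    w = A @ w + c * v

exact = Q @ (np.exp(eigs) * (Q.T @ v))    # f(A)v via the eigendecomposition
print(np.linalg.norm(w - exact))          # tiny interpolation error
```

For f(z) = 1/z the same loop yields a polynomial iterative solver for Ax = b, which is the connection drawn at the end of the abstract.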
NASA Technical Reports Server (NTRS)
Gupta, K. K.; Akyuz, F. A.; Heer, E.
1972-01-01
This program, an extension of the linear equilibrium problem solver ELAS, is an updated and extended version of its earlier form (written in FORTRAN II for the IBM 7094 computer). A synchronized material property concept utilizing incremental time steps and the finite element matrix displacement approach has been adopted for the current analysis. A special option enables employment of constant time steps on the logarithmic scale, thereby reducing the computational effort resulting from accumulative material memory effects. A wide variety of structures with elastic or viscoelastic material properties can be analyzed by VISCEL. The program is written in FORTRAN V for the Univac 1108 computer operating under the EXEC 8 system. Dynamic storage allocation is automatically effected by the program, and the user may request up to 195K of core memory in a 260K Univac 1108/EXEC 8 machine. The program VISCEL, consisting of about 7200 instructions, has four distinct links (segments), and the compiled program occupies a maximum of about 11,700 decimal words of core storage.
Harry, H.H.
1988-03-11
Apparatus and method for the adjustment and alignment of shafts in high power devices. A plurality of adjacent rotatable angled cylinders are positioned between a base and the shaft to be aligned which when rotated introduce an axial offset. The apparatus is electrically conductive and constructed of a structurally rigid material. The angled cylinders allow the shaft such as the center conductor in a pulse line machine to be offset in any desired alignment position within the range of the apparatus. 3 figs.
Harry, Herbert H.
1989-01-01
Apparatus and method for the adjustment and alignment of shafts in high power devices. A plurality of adjacent rotatable angled cylinders are positioned between a base and the shaft to be aligned which when rotated introduce an axial offset. The apparatus is electrically conductive and constructed of a structurally rigid material. The angled cylinders allow the shaft such as the center conductor in a pulse line machine to be offset in any desired alignment position within the range of the apparatus.
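The reachable offsets can be sketched with a planar two-eccentric model (the eccentricity value, and the reduction of the angled cylinders to two stacked eccentrics, are assumptions for illustration):

```python
import numpy as np

# Two stacked eccentric cylinders, each with eccentricity e (mm, assumed),
# rotated to angles t1 and t2. The net lateral offset is the vector sum, so
# any offset with magnitude <= 2e is reachable.
e = 1.5

def offset(t1, t2):
    return e * np.array([np.cos(t1) + np.cos(t2), np.sin(t1) + np.sin(t2)])

def angles_for(target):
    """Two-link 'inverse kinematics': angles that realize a desired offset."""
    d = np.hypot(*target)
    phi = np.arctan2(target[1], target[0])
    half = np.arccos(d / (2 * e))     # requires d <= 2e
    return phi + half, phi - half

t1, t2 = angles_for((1.0, 2.0))
print(offset(t1, t2))  # approximately [1.0, 2.0]
```

Rotating the two elements thus positions the shaft anywhere within a disk of radius 2e, matching the claim that any desired alignment within the range of the apparatus is reachable.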
Lipparini, Filippo; Scalmani, Giovanni; Lagardère, Louis; Stamm, Benjamin; Cancès, Eric; Maday, Yvon; Piquemal, Jean-Philip; Frisch, Michael J; Mennucci, Benedetta
2014-11-14
We present the general theory and implementation of the Conductor-like Screening Model according to the recently developed ddCOSMO paradigm. The various quantities needed to apply ddCOSMO at different levels of theory, including quantum mechanical descriptions, are discussed in detail, with a particular focus on how to compute the integrals needed to evaluate the ddCOSMO solvation energy and its derivatives. The overall computational cost of a ddCOSMO computation is then analyzed and decomposed in the various steps: the different relative weights of such contributions are then discussed for both ddCOSMO and the fastest available alternative discretization to the COSMO equations. Finally, the scaling of the cost of the various steps with respect to the size of the solute is analyzed and discussed, showing how ddCOSMO opens significantly new possibilities when cheap or hybrid molecular mechanics/quantum mechanics methods are used to describe the solute. PMID:25399133
NASA Astrophysics Data System (ADS)
Malekan, Mohammad; Barros, Felicio Bruzzi
2016-07-01
Using a locally-enriched strategy to enrich a small/local part of the problem by the generalized/extended finite element method (G/XFEM) leads to a non-optimal convergence rate and an ill-conditioned system of equations due to the presence of blending elements. The local enrichment can be chosen from polynomial, singular, branch or numerical types. The so-called stable version of the G/XFEM method provides a well-conditioned approach when only singular functions are used in the blending elements. This paper combines numerical enrichment functions obtained from the global-local G/XFEM method with polynomial enrichment, along with a well-conditioned approach, stable G/XFEM, in order to show the robustness and effectiveness of the approach. In global-local G/XFEM, the enrichment functions are constructed numerically from the solution of a local problem. Furthermore, several enrichment strategies are adopted along with the global-local enrichment. The results obtained with these enrichment strategies are discussed in detail, considering the convergence rate in strain energy, the growth rate of the condition number, and computational processing. Numerical experiments show that using geometrical enrichment along with stable G/XFEM for the global-local strategy improves the convergence rate and the conditioning of the problem. In addition, the results show that using polynomial enrichment for the global problem simultaneously with global-local enrichments leads to ill-conditioned system matrices and a bad convergence rate.
NASA Astrophysics Data System (ADS)
Harko, T.; Mak, M. K.
2016-09-01
Obtaining exact solutions of the spherically symmetric general relativistic gravitational field equations describing the interior structure of an isotropic fluid sphere is a long-standing problem in theoretical and mathematical physics. The usual approach to this problem consists mainly in the numerical investigation of the Tolman-Oppenheimer-Volkoff and of the mass continuity equations, which describe the hydrostatic stability of dense stars. In the present paper we introduce an alternative approach for the study of the relativistic fluid sphere, based on the relativistic mass equation, obtained by eliminating the energy density in the Tolman-Oppenheimer-Volkoff equation. Despite its apparent complexity, the relativistic mass equation can be solved exactly by using a power series representation for the mass, and the Cauchy convolution for infinite power series. We obtain exact series solutions for general relativistic dense astrophysical objects described by the linear barotropic and the polytropic equations of state, respectively. For the polytropic case we obtain the exact power series solution corresponding to arbitrary values of the polytropic index n. The explicit form of the solution is presented for the polytropic index n=1, and for the indices n=1/2 and n=1/5, respectively. The case of n=3 is also considered. In each case the exact power series solution is compared with the exact numerical solutions, which are reproduced by the power series solutions truncated to seven terms only. The power series representations of the geometric and physical properties of the linear barotropic and polytropic stars are also obtained.
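The Cauchy convolution the authors rely on is the term-by-term product of two power series; a minimal sketch (not code from the paper, coefficient arrays are illustrative):

```python
# Cauchy product: coefficients of (sum a_k x^k) * (sum b_k x^k),
# c_m = sum_{k=0}^{m} a_k * b_{m-k}, truncated to the shorter input length.
def cauchy_product(a, b):
    n = min(len(a), len(b))
    return [sum(a[k] * b[m - k] for k in range(m + 1)) for m in range(n)]

# (1 + x)^2 = 1 + 2x + x^2
print(cauchy_product([1, 1, 0], [1, 1, 0]))  # [1, 2, 1]
```

Truncating such series to a handful of terms is exactly how the paper reproduces the numerical solutions with seven coefficients.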
19 CFR 201.205 - Salary adjustments.
Code of Federal Regulations, 2011 CFR
2011-04-01
... 19 Customs Duties 3 2011-04-01 2011-04-01 false Salary adjustments. 201.205 Section 201.205 Customs Duties UNITED STATES INTERNATIONAL TRADE COMMISSION GENERAL RULES OF GENERAL APPLICATION Debt Collection § 201.205 Salary adjustments. Any negative adjustment to pay arising out of an employee's...
19 CFR 201.205 - Salary adjustments.
Code of Federal Regulations, 2010 CFR
2010-04-01
... 19 Customs Duties 3 2010-04-01 2010-04-01 false Salary adjustments. 201.205 Section 201.205 Customs Duties UNITED STATES INTERNATIONAL TRADE COMMISSION GENERAL RULES OF GENERAL APPLICATION Debt Collection § 201.205 Salary adjustments. Any negative adjustment to pay arising out of an employee's...
NASA Astrophysics Data System (ADS)
Ramos, Tomás; Rubilar, Guillermo F.; Obukhov, Yuri N.
2015-02-01
We study the problem of the definition of the energy-momentum tensor of light in general moving non-dispersive media with a linear constitutive law. Using the basic principles of classical field theory, we show that for the correct understanding of the problem, one needs to carefully distinguish situations when the material medium is modeled either as a background on which light propagates or as a dynamical part of the total system. In the former case, we prove that the (generalized) Belinfante-Rosenfeld (BR) tensor for the electromagnetic field coincides with the Minkowski tensor. We derive a complete set of balance equations for this open system and show that the symmetries of the background medium are directly related to the conservation of the Minkowski quantities. In particular, for isotropic media, the angular momentum of light is conserved despite the fact that the Minkowski tensor is non-symmetric. For the closed system of light interacting with matter, we model the material medium as a relativistic non-dissipative fluid and we prove that it is always possible to express the total BR tensor of the closed system either in the Abraham or in the Minkowski separation. However, in the case of dynamical media, the balance equations have a particularly convenient form in terms of the Abraham tensor. Our results generalize previous attempts and provide a first-principles basis for a unified understanding of the long-standing Abraham-Minkowski controversy without ad hoc arguments.
NASA Astrophysics Data System (ADS)
Grimault, S.; Lucas, T.; Quellec, S.; Mariette, F.
2004-09-01
MRI thermometry methods are usually based on the temperature dependence of the proton resonance frequency. Unfortunately, these methods are very sensitive to the phase drift induced by the instability of the scanner, which prevents any temperature mapping over long periods of time. A general method based on 3D spatial modelling of the phase drift as a function of time is presented. The MRI temperature measurements were validated on gel samples with uniform and constant temperature and with a linear temperature gradient. In the case of uniform temperature conditions, correction of the phase drift proved to be essential when long periods of acquisition were required, as bias could reach values of up to 200 °C in its absence. The temperature uncertainty measured by MRI was 1.2 °C on average over 290 min. This accuracy is consistent with the requirements of food applications, especially where thermocouples cannot be used.
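The core idea, fitting a smooth drift model in time and subtracting it, can be sketched in a few lines. This is an illustration only: the function name and the (time x voxel) array layout are assumptions, and the paper fits a full 3D spatial model rather than the per-voxel polynomial shown here.

```python
import numpy as np

def remove_phase_drift(phase, t, order=1):
    """Least-squares fit of a low-order polynomial drift in time for each
    voxel (column), then subtraction of that drift from the phase data.
    phase: (n_times, n_voxels); t: (n_times,)."""
    V = np.vander(t, order + 1)                      # design matrix [t^k, ..., 1]
    coeffs, *_ = np.linalg.lstsq(V, phase, rcond=None)
    return phase - V @ coeffs
```

On a voxel whose phase is pure scanner drift, the corrected series is (numerically) zero, which is what makes long acquisitions usable.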
NASA Astrophysics Data System (ADS)
Vossos, Spyridon; Vossos, Elias
2016-08-01
closed LSTT is reduced, if one RIO has small velocity wrt another RIO. Thus, we have an infinite number of closed LSTTs, each one with its corresponding SR theory. In case we relate accelerated observers with a variable metric of spacetime, we have the case of General Relativity (GR). To make this clear, we produce a generalized Schwarzschild metric, which is in accordance with any SR based on this closed complex LSTT and the Einstein equations. The application of this kind of transformation to SR and GR is obvious. But the results may be applied to any linear space of dimension four endowed with a steady or variable metric, whose elements (four-vectors) have a spatial part (vector) with Euclidean metric.
Piccardo, Matteo; Bloino, Julien; Barone, Vincenzo
2015-01-01
Models going beyond the rigid-rotor and the harmonic oscillator levels are mandatory for providing accurate theoretical predictions for several spectroscopic properties. Different strategies have been devised for this purpose. Among them, the treatment by perturbation theory of the molecular Hamiltonian after its expansion in power series of products of vibrational and rotational operators, also referred to as vibrational perturbation theory (VPT), is particularly appealing for its computational efficiency in treating medium-to-large systems. Moreover, generalized (GVPT) strategies combining the use of perturbative and variational formalisms can be adopted to further improve the accuracy of the results, with the first approach used for weakly coupled terms, and the second one to handle tightly coupled ones. In this context, the GVPT formulation for asymmetric, symmetric, and linear tops is revisited and fully generalized to both minima and first-order saddle points of the molecular potential energy surface. The computational strategies and approximations that can be adopted in dealing with GVPT computations are pointed out, with particular attention devoted to the treatment of symmetry and degeneracies. A number of tests and applications are discussed, to show the possibilities of the developments, as regards both the variety of treatable systems and eligible methods. © 2015 Wiley Periodicals, Inc. PMID:26345131
Shevenell, L.A.; Beauchamp, J.J.
1994-11-01
Several waste disposal sites are located on or adjacent to the karstic Maynardville Limestone (Cmn) and the Copper Ridge Dolomite (Ccr) at the Oak Ridge Y-12 Plant. These formations receive contaminants in groundwaters from nearby disposal sites, which can be transported quite rapidly due to the karst flow system. In order to evaluate transport processes through the karst aquifer, the solutional aspects of the formations must be characterized. As one component of this characterization effort, statistical analyses were conducted on the data related to cavities in order to determine if a suitable model could be identified that is capable of predicting the probability of cavity size or distribution in locations for which drilling data are not available. Existing data on the locations (East, North coordinates), depths (and elevations), and sizes of known conduits and other water zones were used in the analyses. Two different models were constructed in the attempt to predict the distribution of cavities in the vicinity of the Y-12 Plant: General Linear Models (GLM), and Logistic Regression Models (LOG). Each of the models attempted was very sensitive to the data set used. Models based on subsets of the full data set were found to do an inadequate job of predicting the behavior of the full data set. The fact that the Ccr and Cmn data sets differ significantly is not surprising, considering that the hydrogeology of the two formations differs. Flow in the Cmn is generally at elevations between 600 and 950 ft and is dominantly strike-parallel through submerged, partially mud-filled cavities with sizes up to 40 ft, but more typically less than 5 ft. Recognized flow in the Ccr is generally above 950 ft elevation, with flow both parallel and perpendicular to geologic strike through conduits, which tend to be larger than those in the Cmn, and are often not fully saturated at the shallower depths.
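The logistic-regression component (modeling presence/absence of a cavity at a location) can be sketched from first principles. This is a hedged illustration, not the authors' statistical setup; the toy one-predictor data are hypothetical.

```python
import numpy as np

def fit_logistic(X, y, lr=0.1, n_iter=2000):
    """Plain gradient-ascent logistic regression: P(cavity) = sigmoid(Xw)."""
    X = np.column_stack([np.ones(len(X)), X])    # prepend intercept column
    w = np.zeros(X.shape[1])
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-X @ w))         # current predicted probabilities
        w += lr * X.T @ (y - p) / len(y)         # log-likelihood gradient step
    return w

def predict_proba(w, X):
    X = np.column_stack([np.ones(len(X)), X])
    return 1.0 / (1.0 + np.exp(-X @ w))
```

A model fitted on a subset of such data can easily fail to generalize, which is the sensitivity to the data set the abstract reports.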
Bennewitz, J; Bögelein, S; Stratz, P; Rodehutscord, M; Piepho, H P; Kjaer, J B; Bessei, W
2014-04-01
Feather pecking and aggressive pecking are well-known problems in egg production. In the present study, genetic parameters for 4 feather-pecking-related traits were estimated using generalized linear mixed models. The traits were bouts of feather pecking delivered (FPD), bouts of feather pecking received (FPR), bouts of aggressive pecking delivered (APD), and bouts of aggressive pecking received (APR). An F2-design was established from 2 divergently selected founder lines. The lines were selected for low or high feather pecking for 10 generations. The number of F2 hens was 910. They were housed in pens with around 40 birds. Each pen was observed in 21 sessions of 20 min, distributed over 3 consecutive days. An animal model was applied that treated the bouts observed within 20 min as repeated observations. An over-dispersed Poisson distribution was assumed for observed counts and the link function was a log link. The model included a random animal effect, a random permanent environment effect, and a random day-by-hen effect. Residual variance was approximated on the link scale by the delta method. The results showed a heritability around 0.10 on the link scale for FPD and APD and of 0.04 for APR. The heritability of FPR was zero. For all behavior traits, substantial permanent environmental effects were observed. The approximate genetic correlation between FPD and APD (FPD and APR) was 0.81 (0.54). Egg production and feather eating records were collected on the same hens as well and were analyzed with a generalized linear mixed model, assuming a binomial distribution and using a probit link function. The heritability on the link scale for egg production was 0.40 and for feather eating 0.57. The approximate genetic correlation between FPD and egg production was 0.50 and between FPD and feather eating 0.73. Selection might help to reduce feather pecking, but this might result in an unfavorable correlated selection response reducing egg production. Feather eating and
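The over-dispersed Poisson assumption is easy to motivate with a variance-to-mean check; a sketch on made-up bout counts, not the study's data:

```python
import numpy as np

def dispersion_ratio(counts):
    """Sample variance over sample mean of bout counts; ~1 under a Poisson
    model, substantially >1 indicates the over-dispersion that motivates an
    over-dispersed Poisson GLMM."""
    counts = np.asarray(counts, dtype=float)
    return counts.var(ddof=1) / counts.mean()

print(dispersion_ratio([0, 0, 1, 2, 0, 7, 0, 12]))  # well above 1 for clumped counts
```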
Larson, Michael J; Clayson, Peter E; Keith, Cierra M; Hunt, Isaac J; Hedges, Dawson W; Nielsen, Brent L; Call, Vaughn R A
2016-03-01
Older adults display alterations in neural reflections of conflict-related processing. We examined response times (RTs), error rates, and event-related potential (ERP; N2 and P3 components) indices of conflict adaptation (i.e., congruency sequence effects), a cognitive control process wherein previous-trial congruency influences current-trial performance, along with post-error slowing, correct-related negativity (CRN), error-related negativity (ERN), and error positivity (Pe) amplitudes in 65 healthy older adults and 94 healthy younger adults. Older adults showed generalized slowing, had decreased post-error slowing, and committed more errors than younger adults. Both older and younger adults showed conflict adaptation effects; the magnitude of conflict adaptation did not differ by age. N2 amplitudes were similar between groups; younger, but not older, adults showed conflict adaptation effects for P3 component amplitudes. CRN and Pe, but not ERN, amplitudes differed between groups. Data support generalized declines in cognitive control processes in older adults without specific deficits in conflict adaptation.
NASA Astrophysics Data System (ADS)
Paynter, Shayne
Many water resources throughout the world are demonstrating changes in historic water levels. Potential reasons for these changes include climate shifts, anthropogenic alterations or basin urbanization. The focus of this research was threefold: (1) to determine the extent of spatio-temporal changes in regional precipitation patterns, (2) to determine the statistical changes that occur in lakes with urbanizing watersheds, and (3) to develop accurate prediction of trends and lake level return frequencies. To investigate rainfall patterns regionally, appropriate distributions, either gamma or generalized extreme value (GEV), were fitted to variables at a number of rainfall gages utilizing maximum likelihood estimation. The spatial distribution of rainfall variables was found to be quite homogeneous within the region in terms of an average annual expectation. Furthermore, the temporal distribution of rainfall variables was found to be stationary, with only one gage evidencing a significant trend. In order to study statistical changes of lake water surface levels in urbanizing watersheds, serial changes in time series parameters, autocorrelation and variance were evaluated and a regression model to estimate weekly lake level fluctuations was developed. The following general conclusions about lakes in urbanizing watersheds were reached: (1) the statistical structure of lake level time series is systematically altered and is related to the extent of urbanization, (2) in the absence of other forcing mechanisms, autocorrelation and baseflow appear to decrease, and (3) the presence of wetlands adjacent to lakes can offset the reduction in baseflow. With regard to the third objective, the direction and magnitude of trends in flood and drought stages were estimated and both long-term and short-term flood and drought stage return frequencies were predicted utilizing the generalized extreme value (GEV) distribution with time and starting stage covariates. All of the lakes
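Once a GEV has been fitted, the T-year return level follows from the standard GEV quantile formula; a sketch (shape xi != 0 branch; the parameter values are illustrative, not from the study):

```python
import math

def gev_return_level(mu, sigma, xi, T):
    """Level exceeded on average once every T years for a GEV(mu, sigma, xi)
    fitted to annual extremes: mu + (sigma/xi) * (y^(-xi) - 1),
    with y = -ln(1 - 1/T) the Gumbel reduced variate."""
    y = -math.log(1.0 - 1.0 / T)
    return mu + (sigma / xi) * (y ** (-xi) - 1.0)

# return levels grow with the return period, as expected
print(gev_return_level(0.0, 1.0, 0.1, 100) > gev_return_level(0.0, 1.0, 0.1, 10))
```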
Bacheler, N.M.; Hightower, J.E.; Burdick, S.M.; Paramore, L.M.; Buckel, J.A.; Pollock, K.H.
2010-01-01
Estimating the selectivity patterns of various fishing gears is a critical component of fisheries stock assessment due to the difficulty in obtaining representative samples from most gears. We used short-term recoveries (n = 3587) of tagged red drum Sciaenops ocellatus to directly estimate age- and length-based selectivity patterns using generalized linear models. The most parsimonious models were selected using AIC, and standard deviations were estimated using simulations. Selectivity of red drum was dependent upon the regulation period in which the fish was caught, the gear used to catch the fish (i.e., hook-and-line, gill nets, pound nets), and the fate of the fish upon recovery (i.e., harvested or released); models including all first-order interactions between main effects outperformed models without interactions. Selectivity of harvested fish was generally dome-shaped and shifted toward larger, older fish in response to regulation changes. Selectivity of caught-and-released red drum was highest on the youngest and smallest fish in the early and middle regulation periods, but increased on larger, legal-sized fish in the late regulation period. These results suggest that catch-and-release mortality has consistently been high for small, young red drum, but has recently become more common in larger, older fish. This method of estimating selectivity from short-term tag recoveries is valuable because it is simpler than full tag-return models, and may be more robust because yearly fishing and natural mortality rates do not need to be modeled and estimated. © 2009 Elsevier B.V.
Rio, Daniel E; Rawlings, Robert R; Woltz, Lawrence A; Gilman, Jodi; Hommer, Daniel W
2013-01-01
A linear time-invariant model based on statistical time series analysis in the Fourier domain for single subjects is further developed and applied to functional MRI (fMRI) blood-oxygen level-dependent (BOLD) multivariate data. This methodology was originally developed to analyze multiple stimulus input evoked response BOLD data. However, to analyze clinical data generated using a repeated measures experimental design, the model has been extended to handle multivariate time series data and demonstrated on control and alcoholic subjects taken from data previously analyzed in the temporal domain. Analysis of BOLD data is typically carried out in the time domain where the data has a high temporal correlation. These analyses generally employ parametric models of the hemodynamic response function (HRF) where prewhitening of the data is attempted using autoregressive (AR) models for the noise. However, this data can be analyzed in the Fourier domain. Here, assumptions made on the noise structure are less restrictive, and hypothesis tests can be constructed based on voxel-specific nonparametric estimates of the hemodynamic transfer function (HRF in the Fourier domain). This is especially important for experimental designs involving multiple states (either stimulus or drug induced) that may alter the form of the response function.
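The nonparametric transfer-function idea (the HRF in the Fourier domain) reduces to a cross-spectrum over auto-spectrum ratio; a single-record periodogram sketch, not the authors' full multivariate estimator:

```python
import numpy as np

def transfer_function(x, y):
    """Nonparametric estimate H(f) = S_xy(f) / S_xx(f) from the periodogram
    cross- and auto-spectra of one input record x and one output record y."""
    X, Y = np.fft.rfft(x), np.fft.rfft(y)
    return (np.conj(X) * Y) / (np.conj(X) * X)

# sanity check on a pure-gain "system": |H(f)| should be 0.5 at every frequency
x = np.random.default_rng(0).standard_normal(256)
H = transfer_function(x, 0.5 * x)
print(np.allclose(np.abs(H), 0.5))
```

In practice spectra are smoothed across neighboring frequencies before the ratio is taken; the point here is only that the frequency-domain estimate needs no parametric HRF or AR prewhitening model.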
NASA Astrophysics Data System (ADS)
Fujii, Y.; Nakano, T.; Usui, N.; Matsumoto, S.; Tsujino, H.; Kamachi, M.
2014-12-01
This study develops a strategy for tracing a target water mass, and applies it to analyzing the pathway of the North Pacific Intermediate Water (NPIW) from the subarctic gyre to the northwestern part of the subtropical gyre south of Japan in a simulation of an ocean general circulation model. This strategy estimates the pathway of the water mass that travels from an origin to a destination area during a specific period using a conservation property concerning tangent linear and adjoint models. In our analysis, a large fraction of the low salinity origin water mass of NPIW initially comes from the Okhotsk or Bering Sea, flows through the southeastern side of the Kuril Islands, and is advected to the Mixed Water Region (MWR) by the Oyashio current. It then enters the Kuroshio Extension (KE) at the first KE ridge, and is advected eastward by the KE current. However, it deviates southward from the KE axis around 158°E over the Shatsky Rise, or around 170°E on the western side of the Emperor Seamount Chain, and enters the subtropical gyre. It is finally transported westward by the recirculation flow. This pathway corresponds well to the shortcut route of NPIW from the MWR to the region south of Japan inferred from analysis of the long-term freshening trend in NPIW observations.
Matas, Marina; Picornell, Antònia; Cifuentes, Carmen; Payeras, Antoni; Bassa, Antoni; Homar, Francesc; González-Candelas, Fernando; López-Labrador, F Xavier; Moya, Andrés; Ramon, Maria M; Castro, José A
2013-01-01
Chronic hepatitis C virus (HCV) infection is the main cause of advanced and end-stage liver disease world-wide, and an important factor of morbidity and mortality in Human Immunodeficiency virus-1 (HIV-1) co-infected individuals. Whereas the genetic variability of HCV has been studied extensively in monoinfected patients, comprehensive analyses of both patient and virus characteristics are still scarce in HCV/HIV co-infection. In order to find correlates for liver damage, we sought to analyze demographic, epidemiological and clinical features of HCV/HIV co-infected patients along with the genetic makeup of HCV (viral subtypes and lineage studied by nucleotide sequencing and phylogenetic analysis of the NS5B region). We used the Generalized Linear Model (GLM) methodology in order to integrate data from the virus and the infected host to find predictors for liver damage. The degree of liver disease was evaluated indirectly by means of two indexes (APRI and FIB-4) and accounting for the time since infection, to estimate fibrosis progression rates. Our analyses identified a reduced number of variables (both from the virus and the host) implicated in liver damage, which included the stage of HIV infection, levels of gamma-glutamil transferase and cholesterol, and some distinct HCV phylogenetic clades. PMID:23174528
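The two fibrosis indexes used here, APRI and FIB-4, have standard published formulas that the abstract does not restate; a sketch with illustrative laboratory values (AST/ALT in IU/L, platelets in 10^9/L):

```python
import math

def apri(ast, ast_uln, platelets):
    """APRI = 100 * (AST / upper limit of normal) / platelet count (10^9/L)."""
    return 100.0 * (ast / ast_uln) / platelets

def fib4(age, ast, alt, platelets):
    """FIB-4 = age * AST / (platelet count (10^9/L) * sqrt(ALT))."""
    return age * ast / (platelets * math.sqrt(alt))

print(apri(80, 40, 100))          # 2.0 with these illustrative values
print(fib4(50, 80, 64, 100))      # 5.0 with these illustrative values
```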
Kobayashi, Etsuko; Suwazono, Yasushi; Honda, Ryumon; Dochi, Mire; Nishijo, Muneko; Kido, Teruhiko; Nakagawa, Hideaki
2009-01-01
A 10-year follow-up study was conducted to investigate the effects of renal handling of calcium (Ca) and phosphorus (P) after the removal of cadmium-polluted soil in rice paddies and replacing it with nonpolluted soil. Using a general linear mixed model, serial changes of Ca and P concentrations in urine and serum (Ca-U/S, P-U/S), fractional excretion of Ca (FECa), and percent tubular reabsorption of P (%TRP) were determined in 37 persons requiring observation in the Cd-polluted Kakehashi River Basin, Japan. Ca-U and Ca-S remained within the normal range in both sexes. FECa in men returned to the normal level within 3.3 years from the completion of soil replacement. Overall, it is suggested that the renal handling of Ca showed no or only a slight change throughout the observation period in both sexes. P-U decreased gradually. P-S showed lower than normal values in the men and values at the lower end of the normal range in women, although the values recovered gradually to normal. %TRP values remained low throughout the observation period and the values did not recover in either sex. However, the results of P-U and P-S suggested that the renal handling of P may recover after the completion of soil replacement.
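FECa and %TRP are standard clearance-ratio quantities; a hedged sketch of the defining formulas (units must be consistent within each ratio; the study's exact laboratory conventions may differ):

```python
def fe_ca(u_ca, s_ca, u_cr, s_cr):
    """Fractional excretion of calcium, percent:
    FECa = 100 * (urine Ca * serum creatinine) / (serum Ca * urine creatinine)."""
    return 100.0 * (u_ca * s_cr) / (s_ca * u_cr)

def trp(u_p, s_p, u_cr, s_cr):
    """Percent tubular reabsorption of phosphate:
    %TRP = 100 * (1 - (urine P * serum creatinine) / (serum P * urine creatinine))."""
    return 100.0 * (1.0 - (u_p * s_cr) / (s_p * u_cr))
```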
Resistors Improve Ramp Linearity
NASA Technical Reports Server (NTRS)
Kleinberg, L. L.
1982-01-01
Simple modification to bootstrap ramp generator gives more linear output over longer sweep times. New circuit adds just two resistors, one of which is adjustable. Modification cancels nonlinearities due to variations in load on charging capacitor and due to changes in charging current as the voltage across capacitor increases.
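The nonlinearity being cancelled is easy to quantify: without bootstrapping, a capacitor charged from a fixed supply follows V = Vs*(1 - exp(-t/RC)) and sags below the ideal straight line Vs*t/RC. A numeric illustration (component values are arbitrary, not from the circuit described):

```python
import numpy as np

Vs, RC = 10.0, 1.0
t = np.linspace(0.0, 0.2 * RC, 100)          # sweep over 20% of one time constant
actual = Vs * (1.0 - np.exp(-t / RC))        # plain RC charging curve
ideal = Vs * t / RC                          # perfectly linear ramp
sag = (ideal - actual).max() / ideal.max()   # fractional end-of-sweep error
print(sag < 0.11)                            # roughly 9% droop even on a short sweep
```

Bootstrapping keeps the voltage across the charging resistor constant, so the charging current, and hence the slope, stays fixed over the sweep.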
Vilar, Lara; Gómez, Israel; Martínez-Vega, Javier; Echavarría, Pilar; Riaño, David; Martín, M Pilar
2016-01-01
The socio-economic factors are of key importance during all phases of wildfire management that include prevention, suppression and restoration. However, modeling these factors, at the proper spatial and temporal scale to understand fire regimes is still challenging. This study analyses socio-economic drivers of wildfire occurrence in central Spain. This site represents a good example of how human activities play a key role over wildfires in the European Mediterranean basin. Generalized Linear Models (GLM) and machine learning Maximum Entropy models (Maxent) predicted wildfire occurrence in the 1980s and also in the 2000s to identify changes between each period in the socio-economic drivers affecting wildfire occurrence. GLM base their estimation on wildfire presence-absence observations whereas Maxent on wildfire presence-only. According to indicators like sensitivity or commission error Maxent outperformed GLM in both periods. It achieved a sensitivity of 38.9% and a commission error of 43.9% for the 1980s, and 67.3% and 17.9% for the 2000s. Instead, GLM obtained 23.33, 64.97, 9.41 and 18.34%, respectively. However GLM performed steadier than Maxent in terms of the overall fit. Both models explained wildfires from predictors such as population density and Wildland Urban Interface (WUI), but differed in their relative contribution. As a result of the urban sprawl and an abandonment of rural areas, predictors like WUI and distance to roads increased their contribution to both models in the 2000s, whereas Forest-Grassland Interface (FGI) influence decreased. This study demonstrates that human component can be modelled with a spatio-temporal dimension to integrate it into wildfire risk assessment. PMID:27557113
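The two accuracy indicators used to compare GLM and Maxent follow directly from a confusion matrix; a sketch with illustrative counts (tp = correctly predicted fire cells, fn = missed fires, fp = false alarms):

```python
def sensitivity(tp, fn):
    """Percent of observed wildfire locations the model predicts as fire."""
    return 100.0 * tp / (tp + fn)

def commission_error(tp, fp):
    """Percent of predicted-fire locations with no observed fire."""
    return 100.0 * fp / (tp + fp)

print(sensitivity(67, 33), round(commission_error(67, 15), 1))
```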
Shirazi, Mohammadali; Lord, Dominique; Dhavala, Soma Sekhar; Geedipally, Srinivas Reddy
2016-06-01
Crash data can often be characterized by over-dispersion, heavy (long) tail and many observations with the value zero. Over the last few years, a small number of researchers have started developing and applying novel and innovative multi-parameter models to analyze such data. These multi-parameter models have been proposed for overcoming the limitations of the traditional negative binomial (NB) model, which cannot handle this kind of data efficiently. The research documented in this paper continues the work related to multi-parameter models. The objective of this paper is to document the development and application of a flexible NB generalized linear model with randomly distributed mixed effects characterized by the Dirichlet process (NB-DP) to model crash data. The objective of the study was accomplished using two datasets. The new model was compared to the NB and the recently introduced model based on the mixture of the NB and Lindley (NB-L) distributions. Overall, the research study shows that the NB-DP model offers a better performance than the NB model once data are over-dispersed and have a heavy tail. The NB-DP performed better than the NB-L when the dataset has a heavy tail, but a smaller percentage of zeros. However, both models performed similarly when the dataset contained a large amount of zeros. In addition to a greater flexibility, the NB-DP provides a clustering by-product that allows the safety analyst to better understand the characteristics of the data, such as the identification of outliers and sources of dispersion. PMID:26945472
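A quick way to see why the plain NB (variance mu + alpha*mu^2) is stretched by such data is a method-of-moments check on the dispersion parameter; a sketch on made-up zero-heavy counts, not the paper's datasets:

```python
import numpy as np

def nb_moment_estimates(counts):
    """Method-of-moments check for NB-type over-dispersion:
    Var = mu + alpha*mu^2 implies alpha = (s^2 - mean) / mean^2;
    alpha > 0 signals over-dispersion relative to Poisson."""
    c = np.asarray(counts, dtype=float)
    m, v = c.mean(), c.var(ddof=1)
    return m, (v - m) / m**2

mean, alpha = nb_moment_estimates([0, 0, 0, 1, 0, 2, 0, 9])  # zero-heavy, long tail
print(alpha > 0)
```

When the tail is heavier still, or zeros dominate, mixing the NB mean with a Dirichlet process (NB-DP) or a Lindley distribution (NB-L), as in the paper, adds the needed flexibility.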
Convective adjustment in baroclinic atmospheres
NASA Technical Reports Server (NTRS)
Emanuel, Kerry A.
1986-01-01
Local convection in planetary atmospheres is generally considered to result from the action of gravity on small regions of anomalous density. It is shown that in rotating baroclinic fluids the total potential energy for small-scale convection contains a centrifugal as well as a gravitational contribution. Convective adjustment in such an atmosphere results in the establishment of near-adiabatic lapse rates of temperature along suitably defined surfaces of constant angular momentum, rather than in the vertical. This leads in general to sub-adiabatic vertical lapse rates. It is shown by example that such an adjustment actually occurs in the Earth's atmosphere, and the magnitude of the effect is estimated for several other planetary atmospheres.
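For contrast, the classical vertical-only convective adjustment can be sketched in a few lines (the paper's point is that the relaxation should act along constant-angular-momentum surfaces instead). Equal layer weights are assumed, and the profile values are illustrative:

```python
import numpy as np

def dry_adjust(T, z, gamma=9.8e-3):
    """Crude dry convective adjustment: wherever the vertical lapse rate
    -dT/dz exceeds the adiabatic value gamma (K/m), relax the adjacent
    levels to the adiabatic lapse rate while conserving their mean
    temperature (equal layer weights assumed). Iterates to convergence."""
    T = T.astype(float).copy()
    for _ in range(1000):
        done = True
        for i in range(len(z) - 1):
            dz = z[i + 1] - z[i]
            if (T[i] - T[i + 1]) / dz > gamma + 1e-12:   # super-adiabatic layer
                mean = 0.5 * (T[i] + T[i + 1])
                T[i] = mean + 0.5 * gamma * dz
                T[i + 1] = mean - 0.5 * gamma * dz
                done = False
        if done:
            break
    return T

Ta = dry_adjust(np.array([300.0, 280.0, 270.0]), np.array([0.0, 1000.0, 2000.0]))
print(np.all(-np.diff(Ta) / 1000.0 <= 9.8e-3 + 1e-6))   # no super-adiabatic layers remain
```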
NASA Technical Reports Server (NTRS)
Gallimore, F. H.
1986-01-01
Adjustable angular drill block accurately transfers hole patterns from mating surfaces not normal to each other. Block applicable to transfer of nonperpendicular holes in mating contoured assemblies in aircraft industry. Also useful in general manufacturing to transfer mating installation holes to irregular and angular surfaces.
Techniques and applications of adjustable sutures.
Fells, P
1987-02-01
The 'rediscovery' of adjustable sutures some 10 years ago has given the ophthalmic surgeon much more confidence in his ability to correct strabismus. Three methods of use are described: during surgery under general anaesthesia with adjustment during the operation using the 'springback' test to centralise the eye; during surgery under general anaesthesia and subsequent adjustment under local anaesthesia using the patient's subjective responses to obtain optimal positioning; and performance of the operation and adjustment under topical local anaesthesia in one procedure. Full details are given of each technique and the indications for their application to particular problems are discussed. PMID:3297111
Sidorin, Anatoly
2010-01-05
In linear accelerators the particles are accelerated by either electrostatic fields or oscillating radio-frequency (RF) fields. Accordingly, linear accelerators are divided into three large groups: electrostatic, induction and RF accelerators. An overview of the different types of accelerators is given. The stability of longitudinal and transverse motion in RF linear accelerators is briefly discussed, and the methods of beam focusing in linacs are described.
Jiang, Honghua; Kulkarni, Pandurang M; Mallinckrodt, Craig H; Shurzinske, Linda; Molenberghs, Geert; Lipkovich, Ilya
2015-01-01
The benefits of adjusting for baseline covariates are not as straightforward with repeated binary responses as with continuous response variables. Therefore, in this study, we compared different methods for analyzing repeated binary data through simulations when the outcome at the study endpoint is of interest. The methods compared included the chi-square test, Fisher's exact test, covariate-adjusted/unadjusted logistic regression (Adj.logit/Unadj.logit), covariate-adjusted/unadjusted generalized estimating equations (Adj.GEE/Unadj.GEE), and the covariate-adjusted/unadjusted generalized linear mixed model (Adj.GLMM/Unadj.GLMM). All of these methods preserved the type I error close to the nominal level. Covariate-adjusted methods improved power compared with the unadjusted methods because of the increased treatment effect estimates, especially when the correlation between the baseline and outcome was strong, even though there was an apparent increase in standard errors. Results of the chi-square test were identical to those for the unadjusted logistic regression. Fisher's exact test was the most conservative test regarding the type I error rate and also had the lowest power. Without missing data, there was no gain in using a repeated measures approach over a simple logistic regression at the final time point. Analysis of results from five phase III diabetes trials of the same compound was consistent with the simulation findings. Therefore, covariate-adjusted analysis is recommended for repeated binary data when the study endpoint is of interest. PMID:25866149
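The power gain from covariate adjustment, driven by larger treatment-effect estimates (the non-collapsibility of the odds ratio), can be sketched with a pure-Python simulation: logistic regression is fitted by gradient ascent with and without a strong baseline covariate. The data-generating values and the crude fitter are illustrative assumptions, not the methods or trial data from the study.

```python
import math
import random

def fit_logistic(X, y, lr=2.0, iters=800):
    """Logistic regression fitted by plain gradient ascent (pure-Python sketch)."""
    p, n = len(X[0]), len(y)
    b = [0.0] * p
    for _ in range(iters):
        grad = [0.0] * p
        for xi, yi in zip(X, y):
            eta = sum(bj * xij for bj, xij in zip(b, xi))
            mu = 1.0 / (1.0 + math.exp(-eta))
            for j in range(p):
                grad[j] += (yi - mu) * xi[j]
        b = [bj + lr * g / n for bj, g in zip(b, grad)]
    return b

rng = random.Random(0)
rows = []
for _ in range(3000):
    t = float(rng.random() < 0.5)       # randomized treatment indicator
    z = rng.gauss(0.0, 1.0)             # strong prognostic baseline covariate
    eta = -0.5 + 0.6 * t + 2.0 * z      # true conditional model (made-up values)
    y = float(rng.random() < 1.0 / (1.0 + math.exp(-eta)))
    rows.append((t, z, y))

y = [r[2] for r in rows]
b_unadj = fit_logistic([[1.0, r[0]] for r in rows], y)
b_adj = fit_logistic([[1.0, r[0], r[1]] for r in rows], y)
print(f"treatment log-odds ratio: unadjusted={b_unadj[1]:.3f}, adjusted={b_adj[1]:.3f}")
```

With a strong baseline covariate the adjusted treatment coefficient comes out larger than the marginal (unadjusted) one, matching the study's observation of increased treatment effect estimates after adjustment.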
Jiménez Blanco, José L; Bootello, Purificación; Ortiz Mellet, Carmen; Gutiérrez Gallego, Ricardo; García Fernández, José M
2004-01-01
A blockwise iterative synthetic strategy for the preparation of linear, dendritic and branched full-carbohydrate architectures has been developed by using sugar azido(carbamate) isothiocyanates as key templates; the presence of intersaccharide thiourea bridges provides anchoring points for hydrogen bond-directed molecular recognition of phosphate esters in water.
ERIC Educational Resources Information Center
Carlson, James E.
2014-01-01
Many aspects of the geometry of linear statistical models and least squares estimation are well known. Discussions of the geometry may be found in many sources. Some aspects of the geometry relating to the partitioning of variation that can be explained using a little-known theorem of Pappus and have not been discussed previously are the topic of…
CALMAR: A New Versatile Code Library for Adjustment from Measurements
NASA Astrophysics Data System (ADS)
Grégoire, G.; Fausser, C.; Destouches, C.; Thiollay, N.
2016-02-01
CALMAR, a new library for adjustment, has been developed. This code performs simultaneous shape and level adjustment of an initial prior spectrum from measured reaction rates of activation foils. It is written in C++ using the ROOT data analysis framework, with all its linear algebra classes. The STAYSL code has also been reimplemented in this library. Use of the code is very flexible: stand-alone, inside a C++ code, or driven by scripts. Validation and test cases are in progress; these cases will be included in the code package that will be made available to the community. Future developments are discussed. The code should support the new Generalized Nuclear Data (GND) format, which has many advantages compared to ENDF.
7 CFR 3.91 - Adjusted civil monetary penalties.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 7 Agriculture 1 2010-01-01 2010-01-01 false Adjusted civil monetary penalties. 3.91 Section 3.91 Agriculture Office of the Secretary of Agriculture DEBT MANAGEMENT Adjusted Civil Monetary Penalties § 3.91 Adjusted civil monetary penalties. (a) In general. (1) The Secretary will adjust the civil...
38 CFR 10.0 - Adjusted service pay entitlements.
Code of Federal Regulations, 2012 CFR
2012-07-01
... 38 Pensions, Bonuses, and Veterans' Relief 1 2012-07-01 2012-07-01 false Adjusted service pay entitlements. 10.0 Section 10.0 Pensions, Bonuses, and Veterans' Relief DEPARTMENT OF VETERANS AFFAIRS ADJUSTED COMPENSATION Adjusted Compensation; General § 10.0 Adjusted service pay entitlements. A veteran entitled...
38 CFR 10.0 - Adjusted service pay entitlements.
Code of Federal Regulations, 2014 CFR
2014-07-01
... 38 Pensions, Bonuses, and Veterans' Relief 1 2014-07-01 2014-07-01 false Adjusted service pay entitlements. 10.0 Section 10.0 Pensions, Bonuses, and Veterans' Relief DEPARTMENT OF VETERANS AFFAIRS ADJUSTED COMPENSATION Adjusted Compensation; General § 10.0 Adjusted service pay entitlements. A veteran entitled...
26 CFR 1.1368-2 - Accumulated adjustments account (AAA).
Code of Federal Regulations, 2010 CFR
2010-04-01
... 26 Internal Revenue 11 2010-04-01 2010-04-01 true Accumulated adjustments account (AAA). 1.1368-2... adjustments account (AAA). (a) Accumulated adjustments account—(1) In general. The accumulated adjustments account is an account of the S corporation and is not apportioned among shareholders. The AAA is...
ADJUSTABLE DOUBLE PULSE GENERATOR
Gratian, J.W.; Gratian, A.C.
1961-08-01
A modulator pulse source having adjustable pulse width and adjustable pulse spacing is described. The generator consists of a cross-coupled multivibrator having adjustable time-constant circuitry in each leg, an adjustable differentiating circuit in the output of each leg, a mixing and rectifying circuit for combining the differentiated pulses and generating in its output a resultant sequence of negative pulses, and a final amplifying circuit for inverting and square-topping the pulses. (AEC)
NASA Astrophysics Data System (ADS)
Yamamoto, Akira; Yokoya, Kaoru
2015-02-01
An overview of linear collider programs is given. The history and technical challenges are described and the pioneering electron-positron linear collider, the SLC, is first introduced. For future energy frontier linear collider projects, the International Linear Collider (ILC) and the Compact Linear Collider (CLIC) are introduced and their technical features are discussed. The ILC is based on superconducting RF technology and the CLIC is based on two-beam acceleration technology. The ILC collaboration completed the Technical Design Report in 2013, and has come to the stage of "Design to Reality." The CLIC collaboration published the Conceptual Design Report in 2012, and the key technology demonstration is in progress. The prospects for further advanced acceleration technology are briefly discussed for possible long-term future linear colliders.
26 CFR 1.9001-2 - Basis adjustments for taxable years beginning on or after 1956 adjustment date.
Code of Federal Regulations, 2011 CFR
2011-04-01
... for depreciation otherwise required by section 1016(a) (2) and (3) of the Code. The adjustments...) Adjustment for depreciation sustained before March 1, 1913—(1) In general. Subsection (d)(1) of the Act requires an adjustment to be made as of the 1956 adjustment date for depreciation sustained before March...
29 CFR 785.42 - Adjusting grievances.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 29 Labor 3 2010-07-01 2010-07-01 false Adjusting grievances. 785.42 Section 785.42 Labor Regulations Relating to Labor (Continued) WAGE AND HOUR DIVISION, DEPARTMENT OF LABOR STATEMENTS OF GENERAL... Adjusting Grievances, Medical Attention, Civic and Charitable Work, and Suggestion Systems §...
ERIC Educational Resources Information Center
Walkiewicz, T. A.; Newby, N. D., Jr.
1972-01-01
A discussion of linear collisions between two or three objects is related to a junior-level course in analytical mechanics. The theoretical discussion uses a geometrical approach that treats elastic and inelastic collisions from a unified point of view. Experiments with a linear air track are described. (Author/TS)
Vietnamese Amerasians: Psychosocial Adjustment and Psychotherapy.
ERIC Educational Resources Information Center
Bemak, Fred; Chung, Rita Chi-Ying
1997-01-01
Reviews the literature on Amerasians and offers suggestions for directions in psychotherapy. Provides a brief chronology of Amerasian emigration and associated psychological issues, followed by a discussion of myths and generalizations about Amerasians, research findings, and adjustment issues. (RJM)
47 CFR 1.1117 - Adjustments to charges.
Code of Federal Regulations, 2012 CFR
2012-10-01
... 47 Telecommunication 1 2012-10-01 2012-10-01 false Adjustments to charges. 1.1117 Section 1.1117 Telecommunication FEDERAL COMMUNICATIONS COMMISSION GENERAL PRACTICE AND PROCEDURE Grants by Random Selection... errors made during an adjustment cycle....
47 CFR 1.1117 - Adjustments to charges.
Code of Federal Regulations, 2013 CFR
2013-10-01
... 47 Telecommunication 1 2013-10-01 2013-10-01 false Adjustments to charges. 1.1117 Section 1.1117 Telecommunication FEDERAL COMMUNICATIONS COMMISSION GENERAL PRACTICE AND PROCEDURE Grants by Random Selection... errors made during an adjustment cycle....
47 CFR 1.1117 - Adjustments to charges.
Code of Federal Regulations, 2014 CFR
2014-10-01
... 47 Telecommunication 1 2014-10-01 2014-10-01 false Adjustments to charges. 1.1117 Section 1.1117 Telecommunication FEDERAL COMMUNICATIONS COMMISSION GENERAL PRACTICE AND PROCEDURE Grants by Random Selection... errors made during an adjustment cycle....
Determining the Goals and Techniques of Adjustment Services
ERIC Educational Resources Information Center
Baker, Richard J.
1972-01-01
This article suggests a structure for determining some specific goals of adjustment services and discusses the definition, objectives, merits, and problems pertaining to six general adjustment techniques that are felt to be appropriate for use in rehabilitation facilities. (Author)
Christofilos, N.C.; Polk, I.J.
1959-02-17
Improvements in linear particle accelerators are described. A drift tube system for a linear ion accelerator reduces gap capacity between adjacent drift tube ends. This is accomplished by reducing the ratio of the diameter of the drift tube to the diameter of the resonant cavity. Concentration of magnetic field intensity at the longitudinal midpoint of the external surface of each drift tube is reduced by increasing the external drift tube diameter at the longitudinal center region.
16 CFR 1.98 - Adjustment of civil monetary penalty amounts.
Code of Federal Regulations, 2014 CFR
2014-01-01
... OF PRACTICE GENERAL PROCEDURES Civil Penalty Adjustments Under the Federal Civil Penalties Inflation... monetary penalty amounts. This section makes inflation adjustments in the dollar amounts of civil...
16 CFR 1.98 - Adjustment of civil monetary penalty amounts.
Code of Federal Regulations, 2012 CFR
2012-01-01
... OF PRACTICE GENERAL PROCEDURES Civil Penalty Adjustments Under the Federal Civil Penalties Inflation... monetary penalty amounts. This section makes inflation adjustments in the dollar amounts of civil...
16 CFR 1.98 - Adjustment of civil monetary penalty amounts.
Code of Federal Regulations, 2010 CFR
2010-01-01
... OF PRACTICE GENERAL PROCEDURES Civil Penalty Adjustments Under the Federal Civil Penalties Inflation... monetary penalty amounts. This section makes inflation adjustments in the dollar amounts of civil...
16 CFR 1.98 - Adjustment of civil monetary penalty amounts.
Code of Federal Regulations, 2013 CFR
2013-01-01
... OF PRACTICE GENERAL PROCEDURES Civil Penalty Adjustments Under the Federal Civil Penalties Inflation... monetary penalty amounts. This section makes inflation adjustments in the dollar amounts of civil...
González-Díaz, Humberto; Arrasate, Sonia; Gómez-SanJuan, Asier; Sotomayor, Nuria; Lete, Esther; Besada-Porto, Lina; Ruso, Juan M
2013-01-01
In general, perturbation methods start with a known exact solution of a problem and add "small" variation terms in order to approach a solution for a related problem without a known exact solution. Perturbation theory has been widely used in almost all areas of science. Bohr's quantum model, Heisenberg's matrix mechanics, Feynman diagrams, and Poincaré's chaos model or "butterfly effect" in complex systems are examples of perturbation theories. On the other hand, the study of Quantitative Structure-Property Relationships (QSPR) in molecular complex systems is an ideal area for the application of perturbation theory. There are several problems with exact experimental solutions (new chemical reactions, physicochemical properties, drug activity and distribution, metabolic networks, etc.) in public databases like CHEMBL. However, in all these cases, we have an even larger list of related problems without known solutions. We need to know the change in all these properties after a perturbation of the initial boundary conditions, that is, when we test large sets of similar, but different, compounds and/or chemical reactions under slightly different conditions (temperature, time, solvents, enzymes, assays, protein targets, tissues, partition systems, organisms, etc.). However, to the best of our knowledge, there is no general-purpose QSPR perturbation theory to solve this problem. In this work, we first review general aspects and applications of both perturbation theory and QSPR models. Second, we formulate a general-purpose perturbation theory for multiple-boundary QSPR problems. Last, we develop three new QSPR-perturbation theory models. The first model correctly classifies >100,000 pairs of intra-molecular carbolithiations with 75-95% Accuracy (Ac), Sensitivity (Sn), and Specificity (Sp). The model predicts probabilities of variations in the yield and enantiomeric excess of reactions due to at least one perturbation in boundary conditions (solvent, temperature
NASA Astrophysics Data System (ADS)
Nickel, Stefan; Hertel, Anne; Pesch, Roland; Schröder, Winfried; Steinnes, Eiliv; Uggerud, Hilde Thelle
2014-12-01
Objective. This study explores the statistical relations between the accumulation of heavy metals in moss and natural surface soil and potential influencing factors such as atmospheric deposition, by use of multivariate regression-kriging and generalized linear models. Based on data collected in 1995, 2000, 2005 and 2010 throughout Norway, the statistical correlation of a set of potential predictors (elevation, precipitation, density of different land uses, population density, physical properties of soil) with concentrations of cadmium (Cd), mercury and lead in moss and natural surface soil (response variables) was evaluated. Spatio-temporal trends were estimated by applying generalized linear models and geostatistics to spatial data covering Norway. The resulting maps were used to investigate to what extent the heavy metal concentrations in moss and natural surface soil are correlated. Results. From a set of ten potential predictor variables, the modelled atmospheric deposition showed the highest correlation with heavy metal concentrations in moss and natural surface soil. The density of various land uses within a 5 km radius reveals significant correlations with lead and cadmium concentrations in moss and mercury concentration in natural surface soil. Elevation also appeared to be a relevant factor for the accumulation of lead and mercury in moss and of cadmium in natural surface soil, respectively. Precipitation was found to be a significant factor for cadmium in moss and mercury in natural surface soil. The integrated use of multivariate generalized linear models and kriging interpolation enabled creating heavy metal maps at a high level of spatial resolution. The spatial patterns of cadmium and lead concentrations in moss and natural surface soil in 1995 and 2005 are similar. The heavy metal concentrations in moss and natural surface soil are correlated significantly, with high coefficients for lead, medium for cadmium and moderate for mercury. From 1995 up to 2010 the
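Regression-kriging, as used in the study above, fits a trend on the predictors and then spatially interpolates the trend residuals. The sketch below uses fully synthetic data and replaces the kriging step with simple inverse-distance weighting (no variogram model is fitted), so the field names and all numeric values are illustrative assumptions, not the Norwegian survey data.

```python
import random

def ols_fit(xs, ys):
    """Two-parameter OLS (intercept + slope) in closed form."""
    n = len(ys)
    mx, my = sum(xs) / n, sum(ys) / n
    b1 = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
          / sum((x - mx) ** 2 for x in xs))
    return my - b1 * mx, b1

def idw(pts, vals, q, power=2.0):
    """Inverse-distance weighting of residuals -- a simplified stand-in for
    the kriging step (no variogram model is fitted)."""
    num = den = 0.0
    for (x, y), v in zip(pts, vals):
        d2 = (x - q[0]) ** 2 + (y - q[1]) ** 2
        if d2 == 0.0:
            return v
        w = 1.0 / d2 ** (power / 2.0)
        num += w * v
        den += w
    return num / den

rng = random.Random(1)
# synthetic survey: a deposition-like predictor drives the metal
# concentration, plus a smooth spatial residual and observation noise
pts = [(rng.random(), rng.random()) for _ in range(200)]
dep = [2.0 + 3.0 * x + y for x, y in pts]
conc = [0.5 + 0.8 * d + 0.3 * (x - 0.5) + rng.gauss(0.0, 0.05)
        for d, (x, y) in zip(dep, pts)]

b0, b1 = ols_fit(dep, conc)                      # trend on the predictor
resid = [c - (b0 + b1 * d) for c, d in zip(conc, dep)]
q = (0.25, 0.75)                                 # unsampled location
dep_q = 2.0 + 3.0 * q[0] + q[1]
pred = b0 + b1 * dep_q + idw(pts, resid, q)      # trend + interpolated residual
print(f"trend slope={b1:.2f}, predicted concentration at {q}: {pred:.2f}")
```

The prediction is the sum of the regression trend and the locally interpolated residual, which is the essential structure of the regression-kriging maps described above.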
Article mounting and position adjustment stage
Cutburth, R.W.; Silva, L.L.
1988-05-10
An improved adjustment and mounting stage of the type used for the detection of laser beams is disclosed. A ring sensor holder has locating pins on a first side thereof which are positioned within a linear keyway in a surrounding housing for permitting reciprocal movement of the ring along the keyway. A rotatable ring gear is positioned within the housing on the other side of the ring from the linear keyway and includes an oval keyway which drives the ring along the linear keyway upon rotation of the gear. Motor-driven single-stage and dual (x, y) stage adjustment systems are disclosed which are of compact construction and include a large laser transmission hole. 6 figs.
NASA Astrophysics Data System (ADS)
Jacobs, J. M.; Meagher, W.; Daniel, J.; Linder, E.
2011-12-01
The Intergovernmental Panel on Climate Change attributes the observed pattern of change to the influence of anthropogenic forcing, stating that it is extremely unlikely that the global pattern of warming can be explained without external forcing, and that it is very likely that greenhouse gases caused the warming globally over the last 50 years. Consequently, much effort has been focused on understanding the contribution of road transportation to the emissions of greenhouse gases. Strikingly little research has been conducted to understand the implications of climate change for the performance and design of road networks. When using water and energy balance approaches, climate is an integral part of modeling pavement deterioration processes including rutting, thermal cracking, frost heave, and thaw weakening. The potential of climate change raises the possibility that the frequency, duration, and severity of these deterioration processes may increase. This research explores the value of NARCCAP climate data sets in transportation infrastructure models. Here, we present a general methodology to demonstrate how built infrastructure might benefit from an effort to use various RCM climate scenarios and pavement designs to quantify the climate change impact on pavement performance, using a case study approach. We present challenges and results in using the Regional Climate Model datasets as inputs, through intermediary hydrologic functions, into the Federal Department of Transportation's Mechanistic-Empirical Pavement Design Guide Model.
McKenzie, K.R.
1959-07-01
An electrode support which permits accurate alignment and adjustment of the electrode in a plurality of planes and about a plurality of axes in a calutron is described. The support will align the slits in the electrode with the slits of an ionizing chamber so as to provide for the egress of ions. The support comprises an insulator, a leveling plate carried by the insulator and having diametrically opposed attaching screws screwed to the plate and the insulator and diametrically opposed adjusting screws for bearing against the insulator, and an electrode associated with the plate for adjustment therewith.
Resonance Parameter Adjustment Based on Integral Experiments
Sobes, Vladimir; Leal, Luiz; Arbanas, Goran; Forget, Benoit
2016-06-02
Our project seeks to allow coupling of differential and integral data evaluation in a continuous-energy framework and to use the generalized linear least-squares (GLLS) methodology in the TSURFER module of the SCALE code package to update the parameters of a resolved resonance region evaluation. Recognizing that the GLLS methodology in TSURFER is identical to the mathematical description of a Bayesian update in SAMMY, the SAMINT code was created to use the mathematical machinery of SAMMY to update resolved resonance parameters based on integral data. Traditionally, SAMMY used differential experimental data to adjust nuclear data parameters. Integral experimental data, such as in the International Criticality Safety Benchmark Experiments Project, remain a tool for validation of completed nuclear data evaluations. SAMINT extracts information from integral benchmarks to aid the nuclear data evaluation process. Later, integral data can be used to resolve any remaining ambiguity between differential data sets, highlight troublesome energy regions, determine key nuclear data parameters for integral benchmark calculations, and improve the nuclear data covariance matrix evaluation. Moreover, SAMINT is not intended to bias nuclear data toward specific integral experiments but should be used to supplement the evaluation of differential experimental data. Using GLLS ensures proper weight is given to the differential data.
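For a single integral response, the GLLS adjustment described above reduces to a Bayesian linear update of the parameter vector. The sketch below implements that update formula with made-up numbers; the prior parameters, sensitivities, and variances are illustrative, not real nuclear data or SAMMY/TSURFER output.

```python
# Generalized linear least-squares (GLLS) update of parameter vector x with
# prior covariance C, given one integral measurement m with sensitivity row s
# and measurement variance v:
#     x' = x + C s^T (s C s^T + v)^(-1) (m - s.x)
#     C' = C - C s^T (s C s^T + v)^(-1) s C
# All numbers below are made-up illustrative values, not real nuclear data.

def glls_update(x, C, s, m, v):
    n = len(x)
    Cs = [sum(C[i][j] * s[j] for j in range(n)) for i in range(n)]  # C s^T
    denom = sum(s[i] * Cs[i] for i in range(n)) + v                 # s C s^T + v
    gain = [ci / denom for ci in Cs]                                # Kalman-style gain
    r = m - sum(si * xi for si, xi in zip(s, x))                    # residual
    x_new = [xi + g * r for xi, g in zip(x, gain)]
    C_new = [[C[i][j] - gain[i] * Cs[j] for j in range(n)] for i in range(n)]
    return x_new, C_new

x = [1.0, 2.0]                      # prior parameter values
C = [[0.04, 0.0], [0.0, 0.09]]      # prior parameter covariance
s = [0.5, 1.0]                      # sensitivity of the benchmark response
m, v = 2.8, 0.01                    # measured integral response and its variance
x_new, C_new = glls_update(x, C, s, m, v)
print("adjusted parameters:", x_new)
```

The update pulls the predicted response toward the measurement in proportion to the prior uncertainty and shrinks the posterior covariance, which is exactly the behavior TSURFER-style adjustment relies on.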
Remotely Adjustable Hydraulic Pump
NASA Technical Reports Server (NTRS)
Kouns, H. H.; Gardner, L. D.
1987-01-01
Outlet pressure adjusted to match varying loads. Electrohydraulic servo has positioned sleeve in leftmost position, adjusting outlet pressure to maximum value. Sleeve in equilibrium position, with control land covering control port. For lowest pressure setting, sleeve shifted toward right by increased pressure on sleeve shoulder from servovalve. Pump used in aircraft and robots, where hydraulic actuators repeatedly turned on and off, changing pump load frequently and over wide range.
Weighted triangulation adjustment
Anderson, Walter L.
1969-01-01
The variation of coordinates method is employed to perform a weighted least squares adjustment of horizontal survey networks. Geodetic coordinates are required for each fixed and adjustable station. A preliminary inverse geodetic position computation is made for each observed line. Weights associated with each observation equation for direction, azimuth, and distance are applied in the formation of the normal equations in the least squares adjustment. The number of normal equations that may be solved is twice the number of new stations, and must be less than 150. When the normal equations are solved, shifts are produced at adjustable stations. Previously computed correction factors are applied to the shifts, and a most probable geodetic position is found for each adjustable station. Final azimuths and distances are computed. These may be written onto magnetic tape for subsequent computation of state plane or grid coordinates. Input consists of punch cards containing project identification, program options, and position and observation information. Results listed include preliminary and final positions, residuals, observation equations, the solution of the normal equations showing magnitudes of shifts, and a plot of each adjusted and fixed station. During processing, data sets containing irrecoverable errors are rejected and the type of error is listed; the computer then resumes processing of additional data sets. Other conditions cause warning errors to be issued, and processing continues with the current data set.
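The core numerical step described above, forming and solving the normal equations of a weighted least-squares adjustment, can be sketched as follows. The design matrix, misclosures, and weights are made-up illustrative values for a single adjustable station, not the program's punch-card input format.

```python
# Weighted least-squares adjustment via the normal equations, the core step of
# the variation-of-coordinates method: solve (A^T W A) dx = A^T W b for the
# coordinate shifts dx. Toy two-unknown example with hypothetical observations.

def solve2(M, v):
    """Cramer's rule for a 2x2 linear system."""
    det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    return [(v[0] * M[1][1] - M[0][1] * v[1]) / det,
            (M[0][0] * v[1] - M[1][0] * v[0]) / det]

def weighted_ls(A, b, w):
    """Form the normal matrix A^T W A and right-hand side A^T W b, then solve."""
    n = len(A[0])
    N = [[sum(w[k] * A[k][i] * A[k][j] for k in range(len(A))) for j in range(n)]
         for i in range(n)]
    u = [sum(w[k] * A[k][i] * b[k] for k in range(len(A))) for i in range(n)]
    return solve2(N, u)

# three observation equations for the shifts (dx, dy) of one adjustable station
A = [[1.0, 0.0], [0.0, 1.0], [0.7, 0.7]]   # design matrix rows
b = [0.03, -0.02, 0.01]                    # misclosures (metres)
w = [1.0, 1.0, 4.0]                        # distance observation weighted higher
dx, dy = weighted_ls(A, b, w)
resid = [bk - (ak[0] * dx + ak[1] * dy) for ak, bk in zip(A, b)]
print(f"shifts: dx={dx:.4f} dy={dy:.4f}; residuals={[round(r, 4) for r in resid]}")
```

By construction the weighted residuals are orthogonal to every column of the design matrix, which is the defining property of a least-squares solution and a useful self-check for any implementation.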
12 CFR 925.22 - Adjustments in stock holdings.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 12 Banks and Banking 7 2010-01-01 2010-01-01 false Adjustments in stock holdings. 925.22 Section... ASSOCIATES MEMBERS OF THE BANKS Stock Requirements § 925.22 Adjustments in stock holdings. (a) Adjustment in general. A Bank may from time to time increase or decrease the amount of stock any member is required...
49 CFR 1022.3 - Civil monetary penalty inflation adjustment.
Code of Federal Regulations, 2014 CFR
2014-10-01
... 49 Transportation 8 2014-10-01 2014-10-01 false Civil monetary penalty inflation adjustment. 1022... TRANSPORTATION BOARD, DEPARTMENT OF TRANSPORTATION GENERAL RULES AND REGULATIONS CIVIL MONETARY PENALTY INFLATION ADJUSTMENT § 1022.3 Civil monetary penalty inflation adjustment. The Board shall, immediately, and at...
49 CFR 1022.3 - Civil monetary penalty inflation adjustment.
Code of Federal Regulations, 2013 CFR
2013-10-01
... 49 Transportation 8 2013-10-01 2013-10-01 false Civil monetary penalty inflation adjustment. 1022... TRANSPORTATION BOARD, DEPARTMENT OF TRANSPORTATION GENERAL RULES AND REGULATIONS CIVIL MONETARY PENALTY INFLATION ADJUSTMENT § 1022.3 Civil monetary penalty inflation adjustment. The Board shall, immediately, and at...
Colgate, S.A.
1958-05-27
An improvement is presented in linear accelerators for charged particles with respect to the stable focusing of the particle beam. The improvement consists of providing a radial electric field transverse to the accelerating electric fields and angularly introducing the beam of particles into the field. The result of the foregoing is to achieve a beam which spirals about the axis of the acceleration path. The combination of the electric fields and the angular motion of the particles cooperates to provide a stable and focused particle beam.
NASA Technical Reports Server (NTRS)
2006-01-01
Linear Clouds (context image for PIA03667)
These clouds are located near the edge of the south polar region. The cloud tops are the puffy white features in the bottom half of the image.
Image information: VIS instrument. Latitude -80.1N, Longitude 52.1E. 17 meter/pixel resolution.
Note: this THEMIS visual image has not been radiometrically nor geometrically calibrated for this preliminary release. An empirical correction has been performed to remove instrumental effects. A linear shift has been applied in the cross-track and down-track direction to approximate spacecraft and planetary motion. Fully calibrated and geometrically projected images will be released through the Planetary Data System in accordance with Project policies at a later time.
NASA's Jet Propulsion Laboratory manages the 2001 Mars Odyssey mission for NASA's Office of Space Science, Washington, D.C. The Thermal Emission Imaging System (THEMIS) was developed by Arizona State University, Tempe, in collaboration with Raytheon Santa Barbara Remote Sensing. The THEMIS investigation is led by Dr. Philip Christensen at Arizona State University. Lockheed Martin Astronautics, Denver, is the prime contractor for the Odyssey project, and developed and built the orbiter. Mission operations are conducted jointly from Lockheed Martin and from JPL, a division of the California Institute of Technology in Pasadena.
Linear Programming Problems for Generalized Uncertainty
ERIC Educational Resources Information Center
Thipwiwatpotjana, Phantipa
2010-01-01
Uncertainty occurs when there is more than one realization that can represent an information. This dissertation concerns merely discrete realizations of an uncertainty. Different interpretations of an uncertainty and their relationships are addressed when the uncertainty is not a probability of each realization. A well known model that can handle…
Simple, Internally Adjustable Valve
NASA Technical Reports Server (NTRS)
Burley, Richard K.
1990-01-01
Valve containing simple in-line, adjustable, flow-control orifice made from ordinary plumbing fitting and two allen setscrews. Construction of valve requires only simple drilling, tapping, and grinding. Orifice installed in existing fitting, avoiding changes in rest of plumbing.
NASA Technical Reports Server (NTRS)
1986-01-01
Corning Glass Works' Serengeti Driver sunglasses are unique in that their lenses self-adjust and filter light while suppressing glare. They eliminate more than 99% of the ultraviolet rays in sunlight. The frames are based on the NASA Anthropometric Source Book.
ERIC Educational Resources Information Center
Abramson, Jane A.
Personal interviews with 100 former farm operators living in Saskatoon, Saskatchewan, were conducted in an attempt to understand the nature of the adjustment process caused by migration from rural to urban surroundings. Requirements for inclusion in the study were that respondents had owned or operated a farm for at least 3 years, had left their…
Hunter, Steven L.
2002-01-01
An inclinometer utilizing synchronous demodulation for high resolution and electronic offset adjustment provides a wide dynamic range without any moving components. A device encompassing a tiltmeter and accompanying electronic circuitry provides quasi-leveled tilt sensors that detect highly resolved tilt change without signal saturation.
21 CFR 880.5110 - Hydraulic adjustable hospital bed.
Code of Federal Regulations, 2010 CFR
2010-04-01
... 21 Food and Drugs 8 2010-04-01 2010-04-01 false Hydraulic adjustable hospital bed. 880.5110... (CONTINUED) MEDICAL DEVICES GENERAL HOSPITAL AND PERSONAL USE DEVICES General Hospital and Personal Use Therapeutic Devices § 880.5110 Hydraulic adjustable hospital bed. (a) Identification. A hydraulic...
21 CFR 880.5120 - Manual adjustable hospital bed.
Code of Federal Regulations, 2010 CFR
2010-04-01
... 21 Food and Drugs 8 2010-04-01 2010-04-01 false Manual adjustable hospital bed. 880.5120 Section... (CONTINUED) MEDICAL DEVICES GENERAL HOSPITAL AND PERSONAL USE DEVICES General Hospital and Personal Use Therapeutic Devices § 880.5120 Manual adjustable hospital bed. (a) Identification. A manual...
Drift tube suspension for high intensity linear accelerators
Liska, Donald J.; Schamaun, Roger G.; Clark, Donald C.; Potter, R. Christopher; Frank, Joseph A.
1982-01-01
The disclosure relates to a drift tube suspension for high intensity linear accelerators. The system comprises a series of box-section girders independently and adjustably mounted on a linear accelerator. A plurality of drift tube holding stems are individually and adjustably mounted on each girder.
Drift tube suspension for high intensity linear accelerators
Liska, D.J.; Schamaun, R.G.; Clark, D.C.; Potter, R.C.; Frank, J.A.
1980-03-11
The disclosure relates to a drift tube suspension for high intensity linear accelerators. The system comprises a series of box-section girders independently and adjustably mounted on a linear accelerator. A plurality of drift tube holding stems are individually and adjustably mounted on each girder.
ERIC Educational Resources Information Center
Joseph, Dan; Hartman, Gregory; Gibson, Caleb
2011-01-01
In this article we explore the consequences of modifying the common definition of a parabola by considering the locus of all points equidistant from a focus and (not necessarily linear) directrix. The resulting derived curves, which we call "generalized parabolas," are often quite beautiful and possess many interesting properties. We show that…
Romanticism and Marital Adjustment
ERIC Educational Resources Information Center
Spanier, Graham B.
1972-01-01
It is concluded that romanticism does not appear to be harmful to marriage relationships in particular or the family system in general, and is therefore not generally dysfunctional in our society. (Author)
Cutburth, Ronald W.; Silva, Leonard L.
1988-01-01
An improved mounting stage of the type used for the detection of laser beams is disclosed. A stage center block is mounted on each of two opposite sides by a pair of spaced ball bearing tracks which provide stability as well as simplicity. The use of the spaced ball bearing pairs in conjunction with an adjustment screw which also provides support eliminates extraneous stabilization components and permits maximization of the area of the center block laser transmission hole.
Ducker, W.L.
1982-09-14
A system of rotatably and pivotally mounted, radially extended bent supports for radially extending windmill rotor vanes, in combination with axially movable, radially extended control struts connected to the vanes and with semi-automatic and automatic torque and other sensing and servo units, provides automatic adjustment of the windmill vanes relative to their axes of rotation to produce mechanical output at constant torque or at constant speed, or electrical quantities dependent thereon.
Ducker, W.L.
1980-01-15
A system of rotatably and pivotally mounted, radially extended bent supports for radially extending windmill rotor vanes, in combination with axially movable, radially extended control struts connected to the vanes and with semi-automatic and automatic torque and other sensing and servo units, provides automatic adjustment of the windmill vanes relative to their axes of rotation to produce mechanical output at constant torque or at constant speed, or electrical quantities dependent thereon.
Ducker, W.L.
1982-09-07
A system of rotatably and pivotally mounted, radially extended bent supports for radially extending windmill rotor vanes, in combination with axially movable, radially extended control struts connected to the vanes and with semi-automatic and automatic torque and other sensing and servo units, provides automatic adjustment of the windmill vanes relative to their axes of rotation to produce mechanical output at constant torque or at constant speed, or electrical quantities dependent thereon.
NASA Technical Reports Server (NTRS)
Malin, Jane T.; Schreckenghost, Debra K.
2001-01-01
The Adjustable Autonomy Testbed (AAT) is a simulation-based testbed located in the Intelligent Systems Laboratory in the Automation, Robotics and Simulation Division at NASA Johnson Space Center. The purpose of the testbed is to support evaluation and validation of prototypes of adjustable autonomous agent software for control and fault management for complex systems. The AAT project has developed prototype adjustable autonomous agent software and human interfaces for cooperative fault management. This software builds on current autonomous agent technology by altering the architecture, components and interfaces for effective teamwork between autonomous systems and human experts. Autonomous agents include a planner, flexible executive, low level control and deductive model-based fault isolation. Adjustable autonomy is intended to increase the flexibility and effectiveness of fault management with an autonomous system. The test domain for this work is control of advanced life support systems for habitats for planetary exploration. The CONFIG hybrid discrete event simulation environment provides flexible and dynamically reconfigurable models of the behavior of components and fluids in the life support systems. Both discrete event and continuous (discrete time) simulation are supported, and flows and pressures are computed globally. This provides fast dynamic simulations of interacting hardware systems in closed loops that can be reconfigured during operations scenarios, producing complex cascading effects of operations and failures. Current object-oriented model libraries support modeling of fluid systems, and models have been developed of physico-chemical and biological subsystems for processing advanced life support gases. In FY01, water recovery system models will be developed.
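The discrete-event side of such a simulation can be illustrated with a minimal event-queue loop. This is a generic sketch, not CONFIG itself; the valve-failure event, its cascading alarm, and the `pressure` state variable are all hypothetical.

```python
import heapq

def simulate(initial_events, horizon):
    """Minimal discrete-event loop: pop the earliest event, apply its
    effect to the shared state, and let effects schedule follow-ups,
    producing cascading consequences of operations and failures."""
    queue = list(initial_events)      # (time, label, effect) tuples
    heapq.heapify(queue)
    state, log = {"pressure": 1.0}, []
    while queue:
        time, label, effect = heapq.heappop(queue)
        if time > horizon:
            break
        follow_ups = effect(state) or []
        log.append((time, label, dict(state)))
        for event in follow_ups:
            heapq.heappush(queue, event)
    return log

# Hypothetical scenario: a valve failure at t=2 halves line pressure and
# schedules an alarm event at t=3.
def valve_fail(state):
    state["pressure"] *= 0.5
    return [(3.0, "alarm", lambda s: s.update(alarm=True))]

log = simulate([(2.0, "valve_fail", valve_fail)], horizon=10.0)
```

A real hybrid simulator would interleave such events with discrete-time updates of the continuous flow and pressure equations; only the event scheduling is sketched here.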
Anchoring and adjustment during social inferences.
Tamir, Diana I; Mitchell, Jason P
2013-02-01
Simulation theories of social cognition suggest that people use their own mental states to understand those of others-particularly similar others. However, perceivers cannot rely solely on self-knowledge to understand another person; they must also correct for differences between the self and others. Here we investigated serial adjustment as a mechanism for correction from self-knowledge anchors during social inferences. In 3 studies, participants judged the attitudes of a similar or dissimilar person and reported their own attitudes. For each item, we calculated the discrepancy between responses for the self and other. The adjustment process unfolds serially, so to the extent that individuals indeed anchor on self-knowledge and then adjust away, trials with a large amount of self-other discrepancy should be associated with longer response times, whereas small self-other discrepancy should correspond to shorter response times. Analyses consistently revealed this positive linear relationship between reaction time and self-other discrepancy, evidence of anchoring-and-adjustment, but only during judgments of similar targets. These results suggest that perceivers mentalize about similar others using the cognitive process of anchoring-and-adjustment. PMID:22506753
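The serial-adjustment mechanism described above can be sketched as a process that steps from a self-knowledge anchor toward a target attitude, with the step count standing in for response time. The anchor and target ratings below are hypothetical, not data from the studies.

```python
def adjustment_steps(anchor, target, step=1.0):
    """Serial adjustment: move from the self-knowledge anchor toward the
    target in fixed-size steps; step count is a proxy for response time."""
    position, steps = float(anchor), 0
    while abs(position - target) > step / 2:
        position += step if target > position else -step
        steps += 1
    return steps

# Hypothetical attitude ratings: a larger self-other discrepancy
# requires more adjustment steps, i.e. a longer response time.
small = adjustment_steps(anchor=5.0, target=6.0)
large = adjustment_steps(anchor=5.0, target=9.0)
```

The monotone relation between discrepancy and step count mirrors the positive linear relationship between self-other discrepancy and reaction time reported above.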
45 CFR 84.44 - Academic adjustments.
Code of Federal Regulations, 2013 CFR
2013-10-01
... 45 Public Welfare 1 2013-10-01 2013-10-01 false Academic adjustments. 84.44 Section 84.44 Public Welfare DEPARTMENT OF HEALTH AND HUMAN SERVICES GENERAL ADMINISTRATION NONDISCRIMINATION ON THE BASIS OF... with manual impairments, and other similar services and actions. Recipients need not provide...
14 CFR Appendix - Example of SIFL Adjustment
Code of Federal Regulations, 2010 CFR
2010-01-01
... 14 Aeronautics and Space 4 2010-01-01 2010-01-01 false Example of SIFL Adjustment Aeronautics and Space OFFICE OF THE SECRETARY, DEPARTMENT OF TRANSPORTATION (AVIATION PROCEEDINGS) POLICY STATEMENTS STATEMENTS OF GENERAL POLICY Policies Relating to Rates and Tariffs Treatment of deferred Federal income taxes for rate purposes. Pt. 399, Subpt....
14 CFR Appendix - Example of SIFL Adjustment
Code of Federal Regulations, 2011 CFR
2011-01-01
... 14 Aeronautics and Space 4 2011-01-01 2011-01-01 false Example of SIFL Adjustment Aeronautics and Space OFFICE OF THE SECRETARY, DEPARTMENT OF TRANSPORTATION (AVIATION PROCEEDINGS) POLICY STATEMENTS STATEMENTS OF GENERAL POLICY Policies Relating to Rates and Tariffs Treatment of deferred Federal income taxes for rate purposes. Pt. 399, Subpt....
Parenting Practices, Child Adjustment, and Family Diversity.
ERIC Educational Resources Information Center
Amato, Paul R.; Fowler, Frieda
2002-01-01
Uses data from the National Survey of Families and Households to test the generality of the links between parenting practices and child outcomes. Parents' reports of support, monitoring, and harsh punishment were associated in the expected direction with parents' reports of children's adjustment, school grades, and behavior problems, and with…
Multiple comparisons for survival data with propensity score adjustment
Zhu, Hong; Lu, Bo
2015-01-01
This article considers the practical problem in clinical and observational studies where multiple treatment or prognostic groups are compared and the observed survival data are subject to right censoring. Two possible formulations of multiple comparisons are suggested. Multiple Comparisons with a Control (MCC) compare every other group to a control group with respect to survival outcomes, for determining which groups are associated with lower risk than the control. Multiple Comparisons with the Best (MCB) compare each group to the truly minimum risk group and identify the groups that are either with the minimum risk or the practically minimum risk. To make a causal statement, potential confounding effects need to be adjusted in the comparisons. Propensity score based adjustment is popular in causal inference and can effectively reduce the confounding bias. Based on a propensity-score-stratified Cox proportional hazards model, the approaches of MCC test and MCB simultaneous confidence intervals for general linear models with normal error outcome are extended to survival outcome. This paper specifies the assumptions for causal inference on survival outcomes within a potential outcome framework, develops testing procedures for multiple comparisons and provides simultaneous confidence intervals. The proposed methods are applied to two real data sets from cancer studies for illustration, and a simulation study is also presented. PMID:25663729
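A crude sketch of propensity-score stratification, using hypothetical propensity scores, treatment indicators, and binary event outcomes. It averages within-stratum event-rate differences rather than fitting the propensity-score-stratified Cox proportional hazards model the paper actually develops.

```python
def quintile_strata(scores):
    """Assign each unit to a propensity-score quintile (stratum 0-4)."""
    order = sorted(range(len(scores)), key=lambda i: scores[i])
    strata = [0] * len(scores)
    for rank, i in enumerate(order):
        strata[i] = min(4, rank * 5 // len(scores))
    return strata

def stratified_risk_difference(scores, treated, events):
    """Average the treated-minus-control event-rate difference within
    each propensity stratum (a crude stand-in for a stratified model)."""
    strata = quintile_strata(scores)
    diffs = []
    for s in range(5):
        idx = [i for i, st in enumerate(strata) if st == s]
        t = [events[i] for i in idx if treated[i]]
        c = [events[i] for i in idx if not treated[i]]
        if t and c:  # skip strata lacking treated/control overlap
            diffs.append(sum(t) / len(t) - sum(c) / len(c))
    return sum(diffs) / len(diffs) if diffs else 0.0

# Hypothetical data: 10 units, alternating treatment, events follow treatment.
rd = stratified_risk_difference(
    scores=[0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0],
    treated=[True, False] * 5,
    events=[1, 0] * 5,
)
```

Stratifying on the propensity score compares like with like, which is what reduces confounding bias; the survival-specific machinery (censoring, hazards, MCC/MCB intervals) is beyond this sketch.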
Role of Osmotic Adjustment in Plant Productivity
Gebre, G.M.
2001-01-11
clones (P. trichocarpa Torr. & Gray x P. deltoides Bartr., TD, and P. deltoides x P. nigra L., DN), we determined that the TD clone, which was more productive during the first three years, had slightly lower osmotic potential than the DN clone and also exhibited a small osmotic adjustment compared with the DN hybrid. However, the productivity differences were negligible by the fifth growing season. In a separate study with several P. deltoides clones, we did not observe a consistent relationship between growth and osmotic adjustment. Some clones that had low osmotic potential and osmotic adjustment were as productive as another clone that had high osmotic potential. The least productive clone also had low osmotic potential and osmotic adjustment. The absence of a correlation may have been partly due to the fact that all clones were capable of osmotic adjustment and had low osmotic potential. In a study involving an inbred three-generation TD F{sub 2} pedigree (family 331), we did not observe a correlation between relative growth rate and osmotic potential or osmotic adjustment. However, when clones that exhibited osmotic adjustment were analyzed, there was a negative correlation between growth and osmotic potential, indicating that clones with lower osmotic potential were more productive. This was observed only in clones that were exposed to drought. Although the absolute osmotic potential varied by growing environment, the relative ranking among progenies remained generally the same, suggesting that osmotic potential is genetically controlled. We have identified a quantitative trait locus for osmotic potential in another three-generation TD F{sub 2} pedigree (family 822). Unlike the many studies in agricultural crops, most of the forest tree studies were not based on plants exposed to severe stress to determine the role of osmotic adjustment.
Future studies should consider using clones that are known to be productive but have contrasting osmotic adjustment capability as well as
Vivilaki, Victoria G; Dafermos, Vassilis; Gevorgian, Liana; Dimopoulou, Athanasia; Patelarou, Evridiki; Bick, Debra; Tsopelas, Nicholas D; Lionis, Christos
2012-01-01
The Maternal Adjustment and Maternal Attitudes Scale is a self-administered scale, designed for use in primary care settings to identify postpartum maternal adjustment problems regarding body image, sex, somatic symptoms, and marital relationships. Women were recruited within four weeks of giving birth. Responses to the Maternal Adjustment and Maternal Attitudes Scale were compared for agreement with responses to the Edinburgh Postnatal Depression Scale as a gold standard. Psychometric measurements included: reliability coefficients, explanatory factor analysis, and confirmatory analysis by linear structural relations. A receiver operating characteristic analysis was carried out to evaluate the global functioning of the scale. Of 300 mothers screened, 121 (40.7%) were experiencing difficulties in maternal adjustment and maternal attitudes. Scores on the Maternal Adjustment and Maternal Attitudes Scale correlated well with those on the Edinburgh Postnatal Depression Scale. The internal consistency of the Greek version of the Maternal Adjustment and Maternal Attitudes Scale, tested using Cronbach's alpha coefficient, was 0.859, and the Guttman split-half coefficient was 0.820. Findings confirmed the multidimensionality of the Maternal Adjustment and Maternal Attitudes Scale, demonstrating a six-factor structure. The area under the receiver operating characteristic curve was 0.610, and the logistic estimate for the threshold score of 57/58 fitted the model sensitivity at 68% and model specificity at 64.6%. Data confirmed that the Greek version of the Maternal Adjustment and Maternal Attitudes Scale is a reliable and valid screening tool for both clinical practice and research purposes to detect postpartum adjustment difficulties.
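The internal-consistency statistic reported above can be computed with a direct implementation of Cronbach's alpha; the item scores below are hypothetical, not the study's data.

```python
def cronbach_alpha(items):
    """Cronbach's alpha from item-score columns (one list per item,
    respondents aligned by index), using population variances."""
    def var(v):
        m = sum(v) / len(v)
        return sum((x - m) ** 2 for x in v) / len(v)
    k = len(items)
    totals = [sum(scores) for scores in zip(*items)]
    return k / (k - 1) * (1 - sum(var(col) for col in items) / var(totals))

# Two perfectly correlated hypothetical items give alpha = 1.
alpha = cronbach_alpha([[1, 2, 3, 4], [1, 2, 3, 4]])
```

Real scale data would give values below 1, such as the 0.859 reported for the Greek version; values near 1 indicate that the items measure a common construct.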
Labrada-Martagón, Vanessa; Méndez-Rodríguez, Lia C; Mangel, Marc; Zenteno-Savín, Tania
2013-09-01
Generalized linear models were fitted to evaluate the relationship between 17β-estradiol (E2), testosterone (T) and thyroxine (T4) levels in immature East Pacific green sea turtles (Chelonia mydas) and their body condition, size, mass, blood biochemistry parameters, handling time, year, season and site of capture. According to external (tail size) and morphological (<77.3 straight carapace length) characteristics, 95% of the individuals were juveniles. Hormone levels, assessed on sea turtles subjected to a capture stress protocol, were <34.7 nmol T L(-1), <532.3 pmol E2 L(-1) and <43.8 nmol T4 L(-1). The statistical model explained biologically plausible metabolic relationships between hormone concentrations and blood biochemistry parameters (e.g. glucose, cholesterol) and the potential effect of environmental variables (season and study site). The variables handling time and year did not contribute significantly to explaining hormone levels. Differences in sex steroids between season and study sites found by the models coincided with specific nutritional, physiological and body condition differences related to the specific habitat conditions. The models correctly predicted the median levels of the measured hormones in green sea turtles, which confirms the fitted model's utility. It is suggested that quantitative predictions could be possible when the model is tested with additional data.
Subsea adjustable choke valves
Cyvas, M.K.
1989-08-01
As emphasis on deepwater wells and marginal offshore fields grows, the search for reliable subsea production systems has become a high priority. A reliable subsea adjustable choke is essential to the realization of such a system, and recent advances are producing the degree of reliability required. Technological developments have been primarily in (1) trim material (including polycrystalline diamond), (2) trim configuration, (3) computer programs for trim sizing, (4) component materials, and (5) diver/remote-operated-vehicle (ROV) interfaces. These five facets are overviewed and progress to date is reported. A 15- to 20-year service life for adjustable subsea chokes is now a reality. Another factor vital to efficient use of these technological developments is to involve the choke manufacturer and ROV/diver personnel in initial system conceptualization. In this manner, maximum benefit can be derived from the latest technology. Major areas of development still required and under way are listed, and the paper closes with a tabulation of successful subsea choke installations in recent years.
42 CFR 419.43 - Adjustments to national program payment and beneficiary copayment amounts.
Code of Federal Regulations, 2014 CFR
2014-10-01
...) Payment adjustment for certain cancer hospitals.—(1) General rule. CMS provides for a payment adjustment... (PCR) before the cancer hospital payment adjustment (as determined by the Secretary at cost report...-cost ratio (PCR) before the cancer hospital payment adjustment (as determined by the Secretary at...
Bailit, Jennifer L.; Grobman, William A.; Rice, Madeline Murguia; Spong, Catherine Y.; Wapner, Ronald J.; Varner, Michael W.; Thorp, John M.; Leveno, Kenneth J.; Caritis, Steve N.; Shubert, Phillip J.; Tita, Alan T. N.; Saade, George; Sorokin, Yoram; Rouse, Dwight J.; Blackwell, Sean C.; Tolosa, Jorge E.; Van Dorsten, J. Peter
2014-01-01
Objective Regulatory bodies and insurers evaluate hospital quality using obstetrical outcomes; however, meaningful comparisons should take pre-existing patient characteristics into account. Furthermore, if risk-adjusted outcomes are consistent within a hospital, fewer measures and resources would be needed to assess obstetrical quality. Our objective was to establish risk-adjusted models for five obstetric outcomes and assess hospital performance across these outcomes. Study Design A cohort study of 115,502 women and their neonates born in 25 hospitals in the United States between March 2008 and February 2011. Hospitals were ranked according to their unadjusted and risk-adjusted frequency of venous thromboembolism, postpartum hemorrhage, peripartum infection, severe perineal laceration, and a composite neonatal adverse outcome. Correlations between hospital risk-adjusted outcome frequencies were assessed. Results Venous thromboembolism occurred too infrequently (0.03%, 95% CI 0.02%–0.04%) for meaningful assessment. Other outcomes occurred frequently enough for assessment (postpartum hemorrhage 2.29% (95% CI 2.20–2.38), peripartum infection 5.06% (95% CI 4.93–5.19), severe perineal laceration at spontaneous vaginal delivery 2.16% (95% CI 2.06–2.27), neonatal composite 2.73% (95% CI 2.63–2.84)). Although there was high concordance between unadjusted and adjusted hospital rankings, several individual hospitals had an adjusted rank that was substantially different (as much as 12 rank tiers) from their unadjusted rank. None of the correlations between hospital adjusted outcome frequencies was significant. For example, the hospital with the lowest adjusted frequency of peripartum infection had the highest adjusted frequency of severe perineal laceration. Conclusions Evaluations based on a single risk-adjusted outcome cannot be generalized to overall hospital obstetric performance. PMID:23891630
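Rank concordance between unadjusted and risk-adjusted hospital frequencies of the kind assessed above can be quantified with Spearman's rank correlation. A minimal no-ties implementation follows; the outcome frequencies are hypothetical, not figures from the study.

```python
def spearman_rho(x, y):
    """Spearman rank correlation via the d^2 formula (assumes no ties,
    as with distinct hospital outcome frequencies)."""
    n = len(x)
    rank = lambda values: {v: r for r, v in enumerate(sorted(values))}
    rx, ry = rank(x), rank(y)
    d2 = sum((rx[a] - ry[b]) ** 2 for a, b in zip(x, y))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

# Hypothetical unadjusted vs. risk-adjusted infection frequencies (%):
unadjusted = [1.2, 3.4, 2.8, 5.1, 4.0]
adjusted = [1.5, 3.1, 2.9, 4.8, 4.2]
rho = spearman_rho(unadjusted, adjusted)
```

Here adjustment preserves the ordering, so rho is 1; the study's point is that even with high overall concordance, individual hospitals can shift by many rank tiers once patient mix is accounted for.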
Adolescent Mothers' Adjustment to Parenting.
ERIC Educational Resources Information Center
Samuels, Valerie Jarvis; And Others
1994-01-01
Examined adolescent mothers' adjustment to parenting, self-esteem, social support, and perceptions of baby. Subjects (n=52) responded to questionnaires at two time periods approximately six months apart. Mothers with higher self-esteem at Time 1 had better adjustment at Time 2. Adjustment was predicted by Time 2 variables; contact with baby's…
Adolescent suicide attempts and adult adjustment
Brière, Frédéric N.; Rohde, Paul; Seeley, John R.; Klein, Daniel; Lewinsohn, Peter M.
2014-01-01
Background Adolescent suicide attempts are disproportionately prevalent and frequently of low severity, raising questions regarding their long-term prognostic implications. In this study, we examined whether adolescent attempts were associated with impairments related to suicidality, psychopathology, and psychosocial functioning in adulthood (objective 1) and whether these impairments were better accounted for by concurrent adolescent confounders (objective 2). Method 816 adolescents were assessed using interviews and questionnaires at four time points from adolescence to adulthood. We examined whether lifetime suicide attempts in adolescence (by T2, mean age 17) predicted adult outcomes (by T4, mean age 30) using linear and logistic regressions in unadjusted models (objective 1) and adjusting for sociodemographic background, adolescent psychopathology, and family risk factors (objective 2). Results In unadjusted analyses, adolescent suicide attempts predicted poorer adjustment on all outcomes, except those related to social role status. After adjustment, adolescent attempts remained predictive of axis I and II psychopathology (anxiety disorder, antisocial and borderline personality disorder symptoms), global and social adjustment, risky sex, and psychiatric treatment utilization. However, adolescent attempts no longer predicted most adult outcomes, notably suicide attempts and major depressive disorder. Secondary analyses indicated that associations did not differ by sex and attempt characteristics (intent, lethality, recurrence). Conclusions Adolescent suicide attempters are at high risk of protracted and wide-ranging impairments, regardless of the characteristics of their attempt. Although attempts specifically predict (and possibly influence) several outcomes, results suggest that most impairments reflect the confounding contributions of other individual and family problems or vulnerabilities in adolescent attempters. PMID:25421360
Sparse linear programming subprogram
Hanson, R.J.; Hiebert, K.L.
1981-12-01
This report describes a subprogram, SPLP(), for solving linear programming problems. The package of subprogram units comprising SPLP() is written in Fortran 77. The subprogram SPLP() is intended for problems involving at most a few thousand constraints and variables. The subprograms are written to take advantage of sparsity in the constraint matrix. A very general problem statement is accepted by SPLP(). It allows upper, lower, or no bounds on the variables. Both the primal and dual solutions are returned as output parameters. The package has many optional features. Among them is the ability to save partial results and then use them to continue the computation at a later time.
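SPLP() itself is sparse Fortran 77 aimed at thousands of constraints; for intuition about the problem statement it accepts (linear objective, inequality constraints, variable bounds), a tiny dense Python sketch can solve a two-variable LP by enumerating constraint-boundary intersections. This is the opposite regime from SPLP and purely illustrative.

```python
from itertools import combinations

def solve_lp_2d(c, constraints):
    """Maximize c.x over {x : a.x <= b for each (a, b) in constraints}
    by checking every intersection of two constraint boundaries
    (2 variables only; bounds are encoded as extra inequality rows)."""
    best = None
    for ((a1, a2), r1), ((a3, a4), r2) in combinations(constraints, 2):
        det = a1 * a4 - a3 * a2
        if abs(det) < 1e-12:
            continue  # parallel boundaries, no vertex
        x = ((r1 * a4 - r2 * a2) / det, (a1 * r2 - a3 * r1) / det)
        if all(a[0] * x[0] + a[1] * x[1] <= b + 1e-9 for a, b in constraints):
            value = c[0] * x[0] + c[1] * x[1]
            if best is None or value > best[0]:
                best = (value, x)
    return best  # (optimal value, optimizer)

# maximize x1 + 2*x2  subject to  x1 + x2 <= 4,  x1 >= 0,  x2 >= 0
value, x = solve_lp_2d(
    (1.0, 2.0),
    [((1.0, 1.0), 4.0), ((-1.0, 0.0), 0.0), ((0.0, -1.0), 0.0)],
)
```

The optimum of a feasible bounded LP always lies at a vertex of the feasible polygon, which is why vertex enumeration works here; SPLP's simplex-style machinery visits vertices selectively instead of exhaustively.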
NASA Astrophysics Data System (ADS)
Liu, H.
2005-12-01
A stratified atmosphere can essentially be characterized as a system subject to stochastic forcing and threshold adjustment due to convective and shear instability, with vertical transport approximated by eddy diffusion/viscosity. In the linear limit, the equations can be solved explicitly, the spectra can be determined and are shown to follow power-law distributions, and the eddy transport coefficients are scale independent. The nonlinear equations are also shown to support scale invariance under rather general conditions, and the power-law indices of the spectra are derived from the analysis. These indices, as well as those in the linear limit, are confirmed by numerical simulations. The power-law indices of the "universal" spectra of temperature and horizontal wind versus vertical wavenumber and frequency from previous observations are shown to fall in the range determined by the linear and nonlinear limits. This theory, therefore, provides a possible explanation for the universal vertical wavenumber and frequency spectra and their variability. By relating the universal spectra to the sporadic threshold adjustment due to convective or shear instability, which is ubiquitous in stratified fluid systems, the difficulty of previous theories in associating the time- and location-independent spectral feature with the highly time- and location-dependent gravity waves is avoided. The analysis also suggests that the vertical eddy transport coefficients are scale dependent, and the implication of this scale dependence will be explored.
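A power-law spectral index of the kind discussed above can be estimated from data by a least-squares fit in log-log space. The sketch below uses a synthetic spectrum with an assumed m**-3 slope purely for illustration; that exponent is an assumption here, not a result quoted from the abstract.

```python
import math

def powerlaw_index(wavenumbers, spectrum):
    """Estimate p in S(m) ~ m**p from the least-squares slope of
    log S versus log m."""
    xs = [math.log(m) for m in wavenumbers]
    ys = [math.log(s) for s in spectrum]
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

# Synthetic spectrum with an assumed m**-3 slope:
ms = [2.0 ** k for k in range(1, 8)]
index = powerlaw_index(ms, [m ** -3 for m in ms])
```

Because a power law is a straight line in log-log coordinates, the fitted slope recovers the index directly; observed spectra would scatter around such a line rather than lie on it exactly.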
45 CFR 92.51 - Later disallowances and adjustments.
Code of Federal Regulations, 2010 CFR
2010-10-01
... 45 Public Welfare 1 2010-10-01 2010-10-01 false Later disallowances and adjustments. 92.51 Section 92.51 Public Welfare DEPARTMENT OF HEALTH AND HUMAN SERVICES GENERAL ADMINISTRATION UNIFORM...-the-Grant Requirements § 92.51 Later disallowances and adjustments. The closeout of a grant does...
Gender Identity and Adjustment in Black, Hispanic, and White Preadolescents
ERIC Educational Resources Information Center
Corby, Brooke C.; Hodges, Ernest V. E.; Perry, David G.
2007-01-01
The generality of S. K. Egan and D. G. Perry's (2001) model of gender identity and adjustment was evaluated by examining associations between gender identity (felt gender typicality, felt gender contentedness, and felt pressure for gender conformity) and social adjustment in 863 White, Black, and Hispanic 5th graders (mean age = 11.1 years).…
78 FR 56868 - Adjustment of Indemnification for Inflation
Federal Register 2010, 2011, 2012, 2013, 2014
2013-09-16
... From the Federal Register Online via the Government Publishing Office DEPARTMENT OF ENERGY Adjustment of Indemnification for Inflation AGENCY: Office of General Counsel, U.S. Department of Energy. ACTION: Notice of adjusted indemnification amount. SUMMARY: The Department of Energy (DOE) is...
Preadolescent Friendship and Peer Rejection as Predictors of Adult Adjustment.
ERIC Educational Resources Information Center
Bagwell, Catherine L.; Newcomb, Andrew F.; Bukowski, William M.
1998-01-01
Compared adjustment of 30 young adults who had a stable, reciprocal best friend in fifth grade and 30 who did not. Found that lower peer rejection uniquely predicted overall life status adjustment. Friended preadolescents had higher general self-worth in adulthood, even after controlling for perceived preadolescent competence. Peer rejection and…
37 CFR 1.705 - Patent term adjustment determination.
Code of Federal Regulations, 2012 CFR
2012-07-01
... 37 Patents, Trademarks, and Copyrights 1 2012-07-01 2012-07-01 false Patent term adjustment determination. 1.705 Section 1.705 Patents, Trademarks, and Copyrights UNITED STATES PATENT AND TRADEMARK OFFICE, DEPARTMENT OF COMMERCE GENERAL RULES OF PRACTICE IN PATENT CASES Adjustment and Extension of Patent...
37 CFR 1.705 - Patent term adjustment determination.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 37 Patents, Trademarks, and Copyrights 1 2010-07-01 2010-07-01 false Patent term adjustment determination. 1.705 Section 1.705 Patents, Trademarks, and Copyrights UNITED STATES PATENT AND TRADEMARK OFFICE, DEPARTMENT OF COMMERCE GENERAL RULES OF PRACTICE IN PATENT CASES Adjustment and Extension of Patent...
37 CFR 1.705 - Patent term adjustment determination.
Code of Federal Regulations, 2013 CFR
2013-07-01
... 37 Patents, Trademarks, and Copyrights 1 2013-07-01 2013-07-01 false Patent term adjustment determination. 1.705 Section 1.705 Patents, Trademarks, and Copyrights UNITED STATES PATENT AND TRADEMARK OFFICE, DEPARTMENT OF COMMERCE GENERAL RULES OF PRACTICE IN PATENT CASES Adjustment and Extension of Patent...
37 CFR 1.705 - Patent term adjustment determination.
Code of Federal Regulations, 2011 CFR
2011-07-01
... 37 Patents, Trademarks, and Copyrights 1 2011-07-01 2011-07-01 false Patent term adjustment determination. 1.705 Section 1.705 Patents, Trademarks, and Copyrights UNITED STATES PATENT AND TRADEMARK OFFICE, DEPARTMENT OF COMMERCE GENERAL RULES OF PRACTICE IN PATENT CASES Adjustment and Extension of Patent...
37 CFR 1.705 - Patent term adjustment determination.
Code of Federal Regulations, 2014 CFR
2014-07-01
... 37 Patents, Trademarks, and Copyrights 1 2014-07-01 2014-07-01 false Patent term adjustment determination. 1.705 Section 1.705 Patents, Trademarks, and Copyrights UNITED STATES PATENT AND TRADEMARK OFFICE, DEPARTMENT OF COMMERCE GENERAL RULES OF PRACTICE IN PATENT CASES Adjustment and Extension of Patent...
50 CFR 665.18 - Framework adjustments to management measures.
Code of Federal Regulations, 2014 CFR
2014-10-01
... 50 Wildlife and Fisheries 13 2014-10-01 2014-10-01 false Framework adjustments to management... PACIFIC General § 665.18 Framework adjustments to management measures. Framework measures described below... fishery. The following framework process authorizes the implementation of measures that may affect...
8 CFR 280.53 - Civil monetary penalties inflation adjustment.
Code of Federal Regulations, 2014 CFR
2014-01-01
... 8 Aliens and Nationality 1 2014-01-01 2014-01-01 false Civil monetary penalties inflation... REGULATIONS IMPOSITION AND COLLECTION OF FINES § 280.53 Civil monetary penalties inflation adjustment. (a) In general. In accordance with the requirements of the Federal Civil Penalties Inflation Adjustment Act...
8 CFR 1280.53 - Civil monetary penalties inflation adjustment.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 8 Aliens and Nationality 1 2010-01-01 2010-01-01 false Civil monetary penalties inflation... penalties inflation adjustment. (a) In general. In accordance with the requirements of the Federal Civil Penalties Inflation Adjustment Act of 1990, Pub. L. 101-410, 104 Stat. 890, as amended by the...
8 CFR 280.53 - Civil monetary penalties inflation adjustment.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 8 Aliens and Nationality 1 2011-01-01 2011-01-01 false Civil monetary penalties inflation... REGULATIONS IMPOSITION AND COLLECTION OF FINES § 280.53 Civil monetary penalties inflation adjustment. (a) In general. In accordance with the requirements of the Federal Civil Penalties Inflation Adjustment Act...
8 CFR 280.53 - Civil monetary penalties inflation adjustment.
Code of Federal Regulations, 2013 CFR
2013-01-01
... 8 Aliens and Nationality 1 2013-01-01 2013-01-01 false Civil monetary penalties inflation... REGULATIONS IMPOSITION AND COLLECTION OF FINES § 280.53 Civil monetary penalties inflation adjustment. (a) In general. In accordance with the requirements of the Federal Civil Penalties Inflation Adjustment Act...
8 CFR 280.53 - Civil monetary penalties inflation adjustment.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 8 Aliens and Nationality 1 2010-01-01 2010-01-01 false Civil monetary penalties inflation... REGULATIONS IMPOSITION AND COLLECTION OF FINES § 280.53 Civil monetary penalties inflation adjustment. (a) In general. In accordance with the requirements of the Federal Civil Penalties Inflation Adjustment Act...
8 CFR 1280.53 - Civil monetary penalties inflation adjustment.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 8 Aliens and Nationality 1 2011-01-01 2011-01-01 false Civil monetary penalties inflation... penalties inflation adjustment. (a) In general. In accordance with the requirements of the Federal Civil Penalties Inflation Adjustment Act of 1990, Pub. L. 101-410, 104 Stat. 890, as amended by the...
New schemes in the adjustment of bendable, elliptical mirrors using a long trace profiler
Rah, S.
1997-08-01
The Long Trace Profiler (LTP), an instrument for measuring the slope profile of long X-ray mirrors, has been used for adjusting bendable mirrors. An elliptical profile is often desired for the mirror surface, since many synchrotron applications involve imaging a point source to a point image. Several techniques have been used in the past for adjusting the profile, measured in height or slope, of a bendable mirror. Underwood et al. used collimated X-rays to achieve the desired surface shape for bent glass optics. Nonlinear curve fitting using the simplex algorithm was later used to determine the best-fit ellipse to the surface under test. A more recent method uses a combination of least-squares polynomial fitting to the measured slope function to enable rapid adjustment to the desired shape. The mirror has mechanical adjustments corresponding to the first- and second-order terms of the desired slope polynomial, which correspond to defocus and coma, respectively. The higher-order terms are realized by shaping the width of the mirror to produce the optimal elliptical surface when bent. The difference between desired and measured surface slope profiles allows us to make methodical adjustments to the bendable mirror based on changes in the signs and magnitudes of the polynomial coefficients. This technique gives rapid convergence to the desired shape of the measured surface, even when we have no information about the bender other than the desired shape of the optical surface. Nonlinear curve fitting can be used at the end of the process for fine adjustments, and to determine the overall best-fit parameters of the surface. This technique could be generalized to other shapes such as toroids.
Hernández Suárez, Marcos; Astray Dopazo, Gonzalo; Larios López, Dina; Espinosa, Francisco
2015-01-01
There are a large number of tomato cultivars with a wide range of morphological, chemical, nutritional and sensorial characteristics. Many factors are known to affect the nutrient content of tomato cultivars. A complete understanding of the effect of these factors would require an exhaustive experimental design, a multidisciplinary scientific approach and a suitable statistical method. Some multivariate analytical techniques such as Principal Component Analysis (PCA) or Factor Analysis (FA) have been widely applied in order to search for patterns in the behaviour and to reduce the dimensionality of a data set by means of a new set of uncorrelated latent variables. However, in some cases it is not useful to replace the original variables with these latent variables. In this study, the Automatic Interaction Detection (AID) algorithm and Artificial Neural Network (ANN) models were applied as alternatives to PCA, FA and other multivariate analytical techniques in order to identify the relevant phytochemical constituents for characterization and authentication of tomatoes. To prove the feasibility of the AID algorithm and ANN models for this purpose, both methods were applied to a data set with twenty-five chemical parameters analysed in 167 tomato samples from Tenerife (Spain). Each tomato sample was defined by three factors: cultivar, agricultural practice and harvest date. A tree-structured General Linear Model linked to AID (GLM-AID) was organized into 3 levels according to the number of factors. p-Coumaric acid was the compound that allowed the tomato samples to be distinguished according to the day of harvest. More than one chemical parameter was necessary to distinguish among different agricultural practices and among the tomato cultivars. Several ANN models, with 25 and 10 input variables, for the prediction of cultivar, agricultural practice and harvest date were developed. Finally, the models with 10 input variables were chosen, with goodness of fit between 44 and
NASA Technical Reports Server (NTRS)
Ferencz, Donald C.; Viterna, Larry A.
1991-01-01
ALPS is a computer program which can be used to solve general linear programming (optimization) problems. ALPS was designed for those who have minimal linear programming (LP) knowledge and features a menu-driven scheme to guide the user through the process of creating and solving LP formulations. Once created, the problems can be edited and stored in standard DOS ASCII files to provide portability to various word processors or even other linear programming packages. Unlike many math-oriented LP solvers, ALPS contains an LP parser that reads through the LP formulation and reports several types of errors to the user. ALPS provides a large amount of solution data which is often useful in problem solving. In addition to pure linear programs, ALPS can solve integer, mixed-integer, and binary problems. Pure linear programs are solved with the revised simplex method. Integer or mixed-integer programs are solved initially with the revised simplex method and then completed using the branch-and-bound technique. Binary programs are solved with the method of implicit enumeration. This manual describes how to use ALPS to create, edit, and solve linear programming problems. Instructions for installing ALPS on a PC-compatible computer are included in the appendices, along with a general introduction to linear programming. A programmer's guide is also included for assistance in modifying and maintaining the program.
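ALPS itself is a standalone menu-driven DOS program, but the kind of problem it solves is easy to illustrate. The sketch below solves a small linear program with SciPy's HiGHS-based solver; the problem data are invented for illustration and are not taken from the ALPS manual.

```python
# A tiny LP of the kind ALPS solves, expressed with SciPy
# (illustrative example only; ALPS is a separate DOS program).
from scipy.optimize import linprog

# maximize 3x + 2y  subject to  x + y <= 4,  x + 3y <= 6,  x, y >= 0.
# linprog minimizes, so the objective is negated.
res = linprog(c=[-3, -2],
              A_ub=[[1, 1], [1, 3]],
              b_ub=[4, 6],
              bounds=[(0, None), (0, None)],
              method="highs")
print(res.x)      # optimal point (x, y)
print(-res.fun)   # optimal objective value
```

The optimum sits at the vertex (4, 0) with objective value 12, which a simplex-type method such as the one ALPS uses finds by walking the vertices of the feasible polytope.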
GGOPT: an unconstrained non-linear optimizer.
Bassingthwaighte, J B; Chan, I S; Goldstein, A A; Russak, I B
1988-01-01
GGOPT is a derivative-free non-linear optimizer for smooth functions with added noise. When the function values arise from observations or from extensive computations, this noise can be considerable. GGOPT uses an adjustable mesh together with linear least squares to find smoothed values of the function, gradient, and Hessian at the center of the mesh. These values drive a descent method that estimates optimal parameters. The smoothed values usually result in increased accuracy.
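The core idea, fitting noisy function values on a local mesh by linear least squares and driving a descent method with the smoothed derivatives, can be sketched in one dimension. Everything below (mesh size, noise level, test function) is an invented illustration, not GGOPT's actual implementation.

```python
# Sketch of GGOPT's idea in 1-D: quadratic least-squares fit on a mesh
# gives smoothed value, gradient, and curvature, which drive a
# Newton-style descent (all parameters here are illustrative).
import numpy as np

rng = np.random.default_rng(0)

def noisy_f(x):
    # smooth test function with small added noise
    return (x - 2.0) ** 2 + 0.01 * rng.standard_normal()

def smoothed_derivatives(x0, h=0.5, n=11):
    """Fit f(x) ~ a + b*(x-x0) + c*(x-x0)^2 on a mesh around x0."""
    t = np.linspace(-h, h, n)
    y = np.array([noisy_f(x0 + ti) for ti in t])
    A = np.column_stack([np.ones_like(t), t, t * t])
    a, b, c = np.linalg.lstsq(A, y, rcond=None)[0]
    return a, b, 2.0 * c   # smoothed value, gradient, Hessian

x = 0.0
for _ in range(10):
    _, g, H = smoothed_derivatives(x)
    x -= g / H             # Newton step on the smoothed model
print(x)                   # close to the true minimizer 2.0
```

Because the gradient comes from a least-squares fit over many mesh points rather than a single finite difference, the noise is averaged down, which is the accuracy gain the abstract describes.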
Elliptically polarizing adjustable phase insertion device
Carr, Roger
1995-01-01
An insertion device for extracting polarized electromagnetic energy from a beam of particles is disclosed. The insertion device includes four linear arrays of magnets which are aligned with the particle beam. The magnetic field strength to which the particles are subjected is adjusted by altering the relative alignment of the arrays in a direction parallel to that of the particle beam. Both the energy and polarization of the extracted energy may be varied by moving the relevant arrays parallel to the beam direction. The present invention requires a substantially simpler and more economical superstructure than insertion devices in which the magnetic field strength is altered by changing the gap between arrays of magnets.
Delay Adjusted Incidence Infographic
This Infographic shows the National Cancer Institute SEER Incidence Trends. The graphs show the Average Annual Percent Change (AAPC) 2002-2011. For Men, Thyroid: 5.3*, Liver & IBD: 3.6*, Melanoma: 2.3*, Kidney: 2.0*, Myeloma: 1.9*, Pancreas: 1.2*, Leukemia: 0.9*, Oral Cavity: 0.5, Non-Hodgkin Lymphoma: 0.3*, Esophagus: -0.1, Brain & ONS: -0.2*, Bladder: -0.6*, All Sites: -1.1*, Stomach: -1.7*, Larynx: -1.9*, Prostate: -2.1*, Lung & Bronchus: -2.4*, and Colon & Rectum: -3.0*. For Women, Thyroid: 5.8*, Liver & IBD: 2.9*, Myeloma: 1.8*, Kidney: 1.6*, Melanoma: 1.5, Corpus & Uterus: 1.3*, Pancreas: 1.1*, Leukemia: 0.6*, Brain & ONS: 0, Non-Hodgkin Lymphoma: -0.1, All Sites: -0.1, Breast: -0.3, Stomach: -0.7*, Oral Cavity: -0.7*, Bladder: -0.9*, Ovary: -0.9*, Lung & Bronchus: -1.0*, Cervix: -2.4*, and Colon & Rectum: -2.7*. * AAPC is significantly different from zero (p<.05). Rates were adjusted for reporting delay in the registry. www.cancer.gov Source: Special section of the Annual Report to the Nation on the Status of Cancer, 1975-2011.
Adjustments in Rural Education.
ERIC Educational Resources Information Center
Dawson, Howard A., Ed.
This 1937 compilation of articles covers a wide range of problems within the scope of rural public education. The rural education issues discussed fall under the following general headings: (1) professional leadership; (2) rural school supervision; (3) staff training; (4) rural school district organization; (5) physical plants and equipment; and…
Glosup, J.
1992-07-23
The class of generalized linear models is extended to develop a class of nonparametric regression models known as generalized smooth models. The technique of local scoring is used to estimate a generalized smooth model, and the estimation procedure based on locally weighted regression is shown to produce local likelihood estimates. The asymptotically correct distribution of the deviance difference is derived, and its use in comparing the fits of generalized linear models and generalized smooth models is illustrated. The relationship between generalized smooth models and generalized additive models is also discussed.
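Locally weighted regression, the building block of the local scoring procedure described above, can be sketched as follows. The Gaussian kernel, bandwidth, and test function here are illustrative assumptions, not details from the paper.

```python
# Sketch of locally weighted linear regression (the smoother underlying
# local scoring); kernel, bandwidth, and data are invented examples.
import numpy as np

def loess_point(x0, x, y, bandwidth=0.05):
    """Gaussian-weighted linear fit around x0; returns the smoothed value."""
    w = np.exp(-0.5 * ((x - x0) / bandwidth) ** 2)
    X = np.column_stack([np.ones_like(x), x - x0])
    A = X.T @ (w[:, None] * X)      # weighted normal equations
    b = X.T @ (w * y)
    beta = np.linalg.solve(A, b)
    return beta[0]                  # fitted intercept = value at x0

rng = np.random.default_rng(2)
x = np.linspace(0.0, 1.0, 200)
y = np.sin(2 * np.pi * x) + 0.1 * rng.standard_normal(200)
print(loess_point(0.25, x, y))      # near the true value sin(pi/2) = 1
```

In a generalized smooth model this weighted fit is applied to working responses inside an iteratively reweighted loop, which is what turns the smoother into a local likelihood estimator.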
LRGS: Linear Regression by Gibbs Sampling
NASA Astrophysics Data System (ADS)
Mantz, Adam B.
2016-02-01
LRGS (Linear Regression by Gibbs Sampling) implements a Gibbs sampler to solve the problem of multivariate linear regression with uncertainties in all measured quantities and intrinsic scatter. LRGS extends an algorithm by Kelly (2007) that used Gibbs sampling for performing linear regression in fairly general cases in two ways: generalizing the procedure for multiple response variables, and modeling the prior distribution of covariates using a Dirichlet process.
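A stripped-down Gibbs sampler for ordinary linear regression conveys the flavor of the approach, though LRGS itself additionally models measurement errors in all quantities, intrinsic scatter, and a Dirichlet process prior on the covariates. Everything below is a generic textbook sketch under flat priors, not LRGS code.

```python
# Minimal Gibbs sampler for y = b0 + b1*x + noise: alternate draws of
# the coefficients given the noise variance and vice versa (a generic
# sketch, far simpler than the LRGS model).
import numpy as np

rng = np.random.default_rng(1)

# Synthetic data: y = 1 + 2x + N(0, 0.5^2)
n = 200
x = rng.uniform(-1, 1, n)
y = 1.0 + 2.0 * x + 0.5 * rng.standard_normal(n)
X = np.column_stack([np.ones(n), x])

XtX = X.T @ X
beta_hat = np.linalg.solve(XtX, X.T @ y)   # least-squares center

sigma2 = 1.0
draws = []
for it in range(2000):
    # beta | sigma2 ~ N(beta_hat, sigma2 * (X'X)^-1) under a flat prior
    beta = rng.multivariate_normal(beta_hat, sigma2 * np.linalg.inv(XtX))
    # sigma2 | beta ~ scaled inverse chi-square on the residuals
    resid = y - X @ beta
    sigma2 = (resid @ resid) / rng.chisquare(n)
    if it >= 500:                          # discard burn-in
        draws.append(beta)

post_mean = np.mean(draws, axis=0)
print(post_mean)   # near the true intercept 1 and slope 2
```

The appeal of Gibbs sampling, which LRGS exploits in a far richer model, is that each conditional draw is from a standard distribution even when the joint posterior is not.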
NASA Astrophysics Data System (ADS)
Yamasaki, Tadashi; Houseman, Gregory; Hamling, Ian; Postek, Elek
2010-05-01
We have developed a new parallelized 3-D numerical code, OREGANO_VE, for the solution of the general visco-elastic problem in a rectangular block domain. The mechanical equilibrium equation is solved using the finite element method for a (non-)linear Maxwell visco-elastic rheology. Time-dependent displacement and/or traction boundary conditions can be applied. Matrix assembly is based on a tetrahedral element defined by 4 vertex nodes and 6 nodes located at the midpoints of the edges, within which displacement is described by a quadratic interpolation function. For evaluating viscoelastic relaxation, an explicit time-stepping algorithm (Zienkiewicz and Cormeau, Int. J. Num. Meth. Eng., 8, 821-845, 1974) is employed. We test the implementation of OREGANO_VE by comparing numerical and analytic (or semi-analytic half-space) solutions to different problems in a range of applications: (1) equilibration of stress in a constant-density layer after gravity is switched on at t = 0 tests the implementation of spatially variable viscosity and non-Newtonian viscosity; (2) displacement of the welded interface between two blocks of differing viscosity tests the implementation of viscosity discontinuities; (3) displacement of the upper surface of a layer under applied normal load tests the implementation of time-dependent surface tractions; and (4) the visco-elastic response to dyke intrusion (compared with the solution in a half-space) tests the implementation of all aspects. In each case, the accuracy of the code is validated subject to use of a sufficiently small time step, providing assurance that the OREGANO_VE code can be applied to a range of visco-elastic relaxation processes in three dimensions, including post-seismic deformation and post-glacial uplift. The OREGANO_VE code includes a capability for representation of prescribed fault slip on an internal fault. The surface displacement associated with large earthquakes can be detected by some geodetic observations
Generalized Fibonacci photon sieves.
Ke, Jie; Zhang, Junyong
2015-08-20
We successfully extend the standard Fibonacci zone plates with two on-axis foci to the generalized Fibonacci photon sieves (GFiPS) with multiple on-axis foci. We also propose the direct and inverse design methods based on the characteristic roots of the recursion relation of the generalized Fibonacci sequences. By switching the transparent and opaque zones, according to the generalized Fibonacci sequences, we not only realize adjustable multifocal distances but also fulfill the adjustable compression ratio of focal spots in different directions. PMID:26368763
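The generalized Fibonacci sequences the sieve design builds on satisfy a two-term linear recursion whose characteristic roots, per the abstract, drive the direct and inverse design methods. A minimal sketch of such a sequence and its roots follows; the parameterization F(n) = p·F(n-1) + q·F(n-2) is an assumption chosen for illustration, not necessarily the paper's exact form.

```python
# Generalized Fibonacci recursion F(n) = p*F(n-1) + q*F(n-2) and the
# characteristic roots of x^2 = p*x + q (illustrative parameterization).
import math

def gen_fibonacci(p, q, n, f0=0, f1=1):
    """First n terms of the generalized Fibonacci sequence."""
    seq = [f0, f1]
    for _ in range(n - 2):
        seq.append(p * seq[-1] + q * seq[-2])
    return seq

def characteristic_roots(p, q):
    """Roots of the recursion's characteristic equation x^2 - p*x - q = 0."""
    d = math.sqrt(p * p + 4 * q)
    return (p + d) / 2, (p - d) / 2

print(gen_fibonacci(1, 1, 10))     # p = q = 1 recovers standard Fibonacci
print(characteristic_roots(1, 1))  # golden ratio and its conjugate
```

For p = q = 1 the dominant root is the golden ratio, which governs the two foci of the standard Fibonacci zone plate; varying p and q changes the roots and hence, as the abstract describes, the focal distances.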
Analytical results for resonance and runup in piecewise linear bathymetries
NASA Astrophysics Data System (ADS)
Fuentes, Mauricio; Riquelme, Sebastián; Ruiz, Javier; Campos, Jaime
2015-04-01
A general method of solution for the runup evolution and some analytical results concerning a more general bathymetry than the canonical sloping beach model are presented. We studied theoretically the water wave elevation and runup generated on a continuous piecewise linear bathymetry, by solving analytically the linear shallow water wave equations in the 1+1 dimensional case. Non-horizontal linear segments are assumed, and we develop a specific matrix propagator scheme, similar to the ones used in the propagation of elastic seismic wave fields in layered media, to obtain an exact integral form for the runup. A general closed expression for the maximum runup was computed analytically via Cauchy's residue theorem for an incident solitary wave and an isosceles leading-depression N-wave in the case of n+1 linear segments. It is already known that the maximum runup depends strongly only on the slope closest to the shore, although this has not yet been demonstrated mathematically for arbitrary bathymetries. Analytical and numerical verifications were done to check the validity of the asymptotic maximum runup, and we provide the mathematical and bathymetric conditions that must be satisfied by the model to obtain correct asymptotic solutions. We applied our model to study the runup evolution on a more realistic bathymetry than the canonical sloping beach model. The seabed in a Chilean subduction zone was approximated, from the trench to the shore, by two linear segments adjusted to the continental slope and shelf. Assuming an incident solitary wave, the two-segment bathymetry generates a larger runup than the simple sloping beach model. We also discuss the differences in the runup evolution computed numerically from incident leading-depression and leading-elevation isosceles N-waves. In the latter case, the water elevation at the shore shows a symmetrical behavior in terms of their waveforms. Finally, we applied our solution to study the resonance effects due to
Remote control for anode-cathode adjustment
Roose, Lars D.
1991-01-01
An apparatus for remotely adjusting the anode-cathode gap in a pulse power machine has an electric motor located within a hollow cathode inside the vacuum chamber of the pulse power machine. Input information for controlling the motor for adjusting the anode-cathode gap is fed into the apparatus using optical waveguides. The motor, controlled by the input information, drives a worm gear that moves a cathode tip. When the motor drives in one rotational direction, the cathode is moved toward the anode and the size of the anode-cathode gap is diminished. When the motor drives in the other direction, the cathode is moved away from the anode and the size of the anode-cathode gap is increased. The motor is powered by batteries housed in the hollow cathode. The batteries may be rechargeable, and they may be recharged by a photovoltaic cell in combination with an optical waveguide that receives recharging energy from outside the hollow cathode. Alternatively, the anode-cathode gap can be remotely adjusted by a manually-turned handle connected to mechanical linkage which is connected to a jack assembly. The jack assembly converts rotational motion of the handle and mechanical linkage to linear motion of the cathode moving toward or away from the anode.
Code of Federal Regulations, 2014 CFR
2014-07-01
... document fraud are addressed in 28 CFR 68.52. ... Administration DEPARTMENT OF JUSTICE (CONTINUED) CIVIL MONETARY PENALTIES INFLATION ADJUSTMENT § 85.1 In general. (a) In accordance with the requirements of the Federal Civil Penalties Inflation Adjustment Act...
Code of Federal Regulations, 2012 CFR
2012-07-01
... document fraud are addressed in 28 CFR 68.52. ... Administration DEPARTMENT OF JUSTICE (CONTINUED) CIVIL MONETARY PENALTIES INFLATION ADJUSTMENT § 85.1 In general. (a) In accordance with the requirements of the Federal Civil Penalties Inflation Adjustment Act...
Code of Federal Regulations, 2010 CFR
2010-07-01
... document fraud are addressed in 28 CFR 68.52. ... Administration DEPARTMENT OF JUSTICE (CONTINUED) CIVIL MONETARY PENALTIES INFLATION ADJUSTMENT § 85.1 In general. (a) In accordance with the requirements of the Federal Civil Penalties Inflation Adjustment Act...
Code of Federal Regulations, 2011 CFR
2011-07-01
... document fraud are addressed in 28 CFR 68.52. ... Administration DEPARTMENT OF JUSTICE (CONTINUED) CIVIL MONETARY PENALTIES INFLATION ADJUSTMENT § 85.1 In general. (a) In accordance with the requirements of the Federal Civil Penalties Inflation Adjustment Act...
Code of Federal Regulations, 2013 CFR
2013-07-01
... document fraud are addressed in 28 CFR 68.52. ... Administration DEPARTMENT OF JUSTICE (CONTINUED) CIVIL MONETARY PENALTIES INFLATION ADJUSTMENT § 85.1 In general. (a) In accordance with the requirements of the Federal Civil Penalties Inflation Adjustment Act...
Adjusting to change: linking family structure transitions with parenting and boys' adjustment.
Martinez, Charles R; Forgatch, Marion S
2002-06-01
This study examined links between family structure transitions and children's academic, behavioral, and emotional outcomes in a sample of 238 divorcing mothers and their sons in Grades 1-3. Multiple methods and agents were used in assessing family process variables and child outcomes. Findings suggest that greater accumulations of family transitions were associated with poorer academic functioning, greater acting-out behavior, and worse emotional adjustment for boys. However, in all three cases, these relationships were mediated by parenting practices: Parental academic skill encouragement mediated the relationship between transitions and academic functioning, and a factor of more general effective parenting practices mediated the relationships between transitions and acting out and emotional adjustment.
Mood Adjustment via Mass Communication.
ERIC Educational Resources Information Center
Knobloch, Silvia
2003-01-01
Proposes and experimentally tests mood adjustment approach, complementing mood management theory. Discusses how results regarding self-exposure across time show that patterns of popular music listening among a group of undergraduate students differ with initial mood and anticipation, lending support to mood adjustment hypotheses. Describes how…
Spousal Adjustment to Myocardial Infarction.
ERIC Educational Resources Information Center
Ziglar, Elisa J.
This paper reviews the literature on the stresses and coping strategies of spouses of patients with myocardial infarction (MI). It attempts to identify specific problem areas of adjustment for the spouse and to explore the effects of spousal adjustment on patient recovery. Chapter one provides an overview of the importance in examining the…
Feedback Systems for Linear Colliders
1999-04-12
Feedback systems are essential for stable operation of a linear collider, providing a cost-effective method for relaxing tight tolerances. In the Stanford Linear Collider (SLC), feedback controls beam parameters such as trajectory, energy, and intensity throughout the accelerator. A novel dithering optimization system which adjusts final focus parameters to maximize luminosity contributed to achieving record performance in the 1997-98 run. Performance limitations of the steering feedback have been investigated, and improvements have been made. For the Next Linear Collider (NLC), extensive feedback systems are planned as an integral part of the design. Feedback requirements for JLC (the Japanese Linear Collider) are essentially identical to NLC; some of the TESLA requirements are similar but there are significant differences. For NLC, algorithms which incorporate improvements upon the SLC implementation are being prototyped. Specialized systems for the damping rings, rf and interaction point will operate at high bandwidth and fast response. To correct for the motion of individual bunches within a train, both feedforward and feedback systems are planned. SLC experience has shown that feedback systems are an invaluable operational tool for decoupling systems, allowing precision tuning, and providing pulse-to-pulse diagnostics. Feedback systems for the NLC will incorporate the key SLC features and the benefits of advancing technologies.
Federal Register 2010, 2011, 2012, 2013, 2014
2012-07-16
... Employment and Training Administration General Dynamics Itronix Corporation, a Subsidiary of General Dynamics... Adjustment Assistance (TAA) applicable to workers and former workers of General Dynamics Itronix Corporation, a subsidiary of General Dynamics Corporation, Sunrise, Florida. The determination was issued on...
Parental Divorce and Children's Adjustment.
Lansford, Jennifer E
2009-03-01
This article reviews the research literature on links between parental divorce and children's short-term and long-term adjustment. First, I consider evidence regarding how divorce relates to children's externalizing behaviors, internalizing problems, academic achievement, and social relationships. Second, I examine timing of the divorce, demographic characteristics, children's adjustment prior to the divorce, and stigmatization as moderators of the links between divorce and children's adjustment. Third, I examine income, interparental conflict, parenting, and parents' well-being as mediators of relations between divorce and children's adjustment. Fourth, I note the caveats and limitations of the research literature. Finally, I consider notable policies related to grounds for divorce, child support, and child custody in light of how they might affect children's adjustment to their parents' divorce.
Adjustment versus no adjustment when using adjustable sutures in strabismus surgery
Liebermann, Laura; Hatt, Sarah R.; Leske, David A.; Holmes, Jonathan M.
2013-01-01
Purpose To compare long-term postoperative outcomes when performing an adjustment to achieve a desired immediate postoperative alignment versus simply tying off at the desired immediate postoperative alignment when using adjustable sutures for strabismus surgery. Methods We retrospectively identified 89 consecutive patients who underwent a reoperation for horizontal strabismus using adjustable sutures and also had a 6-week and 1-year outcome examination. In each case, the intent of the surgeon was to tie off and to adjust only if the patient was not within the intended immediate postoperative range. Postoperative success was predefined based on angle of misalignment and diplopia at distance and near. Results Of the 89 patients, 53 (60%) were adjusted and 36 (40%) were tied off. Success rates were similar between patients who were simply tied off immediately after surgery and those who were adjusted. At 6 weeks, the success rate was 64% for the nonadjusted group versus 81% for the adjusted group (P = 0.09; difference of 17%; 95% CI, −2% to 36%). At 1 year, the success rate was 67% for the nonadjusted group versus 77% for the adjusted group (P = 0.3; difference of 11%; 95% CI, −8% to 30%). Conclusions Performing an adjustment to obtain a desired immediate postoperative alignment did not yield long-term outcomes inferior to those obtained by tying off at that initial alignment. If patients who were outside the desired immediate postoperative range had not been adjusted, their long-term outcomes might have been worse; therefore, overall, an adjustable approach may be superior to a nonadjustable approach. PMID:23415035
2006-11-17
Software that simulates and inverts electromagnetic field data for subsurface electrical properties (electrical conductivity) of geological media. The software treats data produced by a time-harmonic source field excitation arising from the following antenna geometries: loops and grounded bipoles, as well as point electric and magnetic dipoles. The inversion process is carried out using a non-linear conjugate gradient optimization scheme, which minimizes the misfit between field data and model data using a least-squares criterion. The software is an upgrade of the code NLCGCS_MP ver 1.0. The upgrade includes the following components: incorporation of new 1-D field sourcing routines to more accurately simulate the 3-D electromagnetic field for arbitrary geological media, and treatment of generalized finite-length transmitting antenna geometry (antennas with vertical and horizontal component directions). In addition, the software has been upgraded to treat transverse anisotropy in electrical conductivity.
Precision Adjustable Liquid Regulator (ALR)
NASA Astrophysics Data System (ADS)
Meinhold, R.; Parker, M.
2004-10-01
A passive mechanical regulator has been developed for the control of fuel or oxidizer flow to a 450 N class bipropellant engine for use on commercial and interplanetary spacecraft. There are several potential benefits to the propulsion system, depending on mission requirements and spacecraft design. This system design enables more precise control of main engine mixture ratio and inlet pressure, and simplifies the pressurization system by transferring the function of main engine flow rate control from the pressurization/propellant tank assemblies to a single component, the ALR. This design can also reduce the thermal control requirements on the propellant tanks, avoid costly qualification testing of bipropellant engines for missions with more stringent requirements, and reduce the overall propulsion system mass and power usage. In order to realize these benefits, the ALR must meet stringent design requirements. The main advantage of this regulator over other units available in the market is that it can regulate about its nominal set point to within +/-0.85% and change its regulation set point in flight by +/-4% about that nominal point. The set point change is handled actively via a stepper-motor-driven actuator, which converts rotary into linear motion to adjust the spring preload acting on the regulator. Once adjusted to a particular set point, the actuator remains in its final position unpowered, and the regulator passively maintains outlet pressure. The very precise outlet regulation pressure is possible due to new technology developed by Moog, Inc., which reduces typical regulator mechanical hysteresis to near zero. The ALR requirements specified an outlet pressure set point range from 225 to 255 psi, and equivalent water flow rates required were in the 0.17 lb/sec range. The regulation output pressure is maintained at +/-2 psi about the set point over a differential pressure (ΔP) of 20 to over 100 psid. Maximum upstream system pressure was specified at 320 psi
Linear superposition solutions to nonlinear wave equations
NASA Astrophysics Data System (ADS)
Liu, Yu
2012-11-01
The solutions to a linear wave equation can satisfy the principle of superposition, i.e., the linear superposition of two or more known solutions is still a solution of the linear wave equation. We show in this article that many nonlinear wave equations possess exact traveling wave solutions involving hyperbolic, triangle, and exponential functions, and the suitable linear combinations of these known solutions can also constitute linear superposition solutions to some nonlinear wave equations with special structural characteristics. The linear superposition solutions to the generalized KdV equation K(2,2,1), the Oliver water wave equation, and the k(n, n) equation are given. The structure characteristic of the nonlinear wave equations having linear superposition solutions is analyzed, and the reason why the solutions with the forms of hyperbolic, triangle, and exponential functions can form the linear superposition solutions is also discussed.
A nanoscale linear-to-linear motion converter of graphene.
Dai, Chunchun; Guo, Zhengrong; Zhang, Hongwei; Chang, Tienchong
2016-08-14
Motion conversion plays an irreplaceable role in a variety of machinery. Although many macroscopic motion converters have been widely used, it remains a challenge to convert motion at the nanoscale. Here we propose a nanoscale linear-to-linear motion converter, made of a flake-substrate system of graphene, which can convert the out-of-plane motion of the substrate into the in-plane motion of the flake. The curvature gradient induced van der Waals potential gradient between the flake and the substrate provides the driving force to achieve motion conversion. The proposed motion converter may have general implications for the design of nanomachinery and nanosensors.
Adjustment disorders: the state of the art
CASEY, PATRICIA; BAILEY, SUSAN
2011-01-01
Adjustment disorders are common, yet under-researched mental disorders. The present classifications fail to provide specific diagnostic criteria and relegate them to sub-syndromal status. They also fail to provide guidance on distinguishing them from normal adaptive reactions to stress or from recognized mental disorders such as depressive episode or post-traumatic stress disorder. These gaps run the risk of pathologizing normal emotional reactions to stressful events on the one hand, and of overdiagnosing depressive disorder, with the consequent unnecessary prescription of antidepressant treatments, on the other. Few of the structured interview schedules used in epidemiological studies incorporate adjustment disorders. They are generally regarded as mild, notwithstanding their prominence as a diagnosis in those dying by suicide and their poor prognosis when diagnosed in adolescents. There are very few intervention studies. PMID:21379346
Generalized Multilevel Structural Equation Modeling
ERIC Educational Resources Information Center
Rabe-Hesketh, Sophia; Skrondal, Anders; Pickles, Andrew
2004-01-01
A unifying framework for generalized multilevel structural equation modeling is introduced. The models in the framework, called generalized linear latent and mixed models (GLLAMM), combine features of generalized linear mixed models (GLMM) and structural equation models (SEM) and consist of a response model and a structural model for the latent…
Li, Li; Brumback, Babette A; Weppelmann, Thomas A; Morris, J Glenn; Ali, Afsar
2016-08-15
Motivated by an investigation of the effect of surface water temperature on the presence of Vibrio cholerae in water samples collected from different fixed surface water monitoring sites in Haiti in different months, we investigated methods to adjust for unmeasured confounding due to either of the two crossed factors site and month. In the process, we extended previous methods that adjust for unmeasured confounding due to one nesting factor (such as site, which nests the water samples from different months) to the case of two crossed factors. First, we developed a conditional pseudolikelihood estimator that eliminates fixed effects for the levels of each of the crossed factors from the estimating equation. Using the theory of U-Statistics for independent but non-identically distributed vectors, we show that our estimator is consistent and asymptotically normal, but that its variance depends on the nuisance parameters and thus cannot be easily estimated. Consequently, we apply our estimator in conjunction with a permutation test, and we investigate use of the pigeonhole bootstrap and the jackknife for constructing confidence intervals. We also incorporate our estimator into a diagnostic test for a logistic mixed model with crossed random effects and no unmeasured confounding. For comparison, we investigate between-within models extended to two crossed factors. These generalized linear mixed models include covariate means for each level of each factor in order to adjust for the unmeasured confounding. We conduct simulation studies, and we apply the methods to the Haitian data. Copyright © 2016 John Wiley & Sons, Ltd. PMID:26892025
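The between-within models mentioned above adjust for unmeasured cluster-level confounding by adding, for each level of each crossed factor, the mean of the covariate over that level. A minimal sketch of that design construction (names and toy data are illustrative, not the authors' code or the Haitian data):

```python
# Sketch: build a "between-within" design for two crossed factors (site and
# month) by augmenting a covariate with its per-site and per-month means,
# so that unmeasured site- and month-level confounding can be absorbed.
import numpy as np

def between_within_design(x, site, month):
    """Return columns [x, mean of x by site, mean of x by month] per sample."""
    x = np.asarray(x, dtype=float)
    site = np.asarray(site)
    month = np.asarray(month)
    site_mean = {s: x[site == s].mean() for s in np.unique(site)}
    month_mean = {m: x[month == m].mean() for m in np.unique(month)}
    return np.column_stack([
        x,
        [site_mean[s] for s in site],
        [month_mean[m] for m in month],
    ])

# Toy data: water temperature measured at two sites over two months.
temp  = [20.0, 24.0, 22.0, 30.0]
site  = ["A", "A", "B", "B"]
month = ["jan", "feb", "jan", "feb"]
X = between_within_design(temp, site, month)
```

The augmented columns would then enter a generalized linear mixed model alongside the covariate itself.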
First-principles scheme for spectral adjustment in nanoscale transport
NASA Astrophysics Data System (ADS)
García-Suárez, Víctor M.; Lambert, Colin J.
2011-05-01
We implement a general method for correcting the low-bias transport properties of nanoscale systems within an ab initio methodology based on linear combinations of atomic orbitals. This method consists of adjusting the molecular spectrum, i.e. shifting the positions of the occupied and unoccupied molecular orbitals to match the experimental gap between the highest occupied and lowest unoccupied molecular orbitals (the HOMO-LUMO, or HL, gap). Thus we show how the typical problem of an underestimated HL gap can be corrected, leading to quantitative and qualitative agreement with experiments. We show that an alternative method, based on calculating the positions of the relevant transport resonances and fitting them to Lorentzians, can significantly underestimate the conductance and does not accurately reproduce the electron transmission coefficient between resonances. We compare this simple method, in an ideal system of a benzene molecule coupled to featureless leads, to more sophisticated approaches such as GW, and find rather good agreement between the two. We also present results for a benzenedithiolate molecule between gold leads, where we study different coupling configurations for straight and tilted molecules, and show that this method yields the observed evolution of two-dimensional conductance histograms. We also explain the presence of low-conductance zones in such histograms by taking into account different coupling configurations.
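The core adjustment is a scissors-type correction: occupied levels are shifted down and unoccupied levels up until the computed HL gap matches experiment. A toy sketch of that idea (a symmetric shift on illustrative energies, not the paper's implementation):

```python
# Scissors-type spectrum adjustment (illustration only): shift occupied
# levels down and unoccupied levels up by half the gap correction each,
# so the HOMO-LUMO gap matches a target experimental value.

def adjust_gap(levels, n_occ, target_gap):
    """levels: sorted orbital energies (eV); n_occ: number of occupied levels."""
    homo, lumo = levels[n_occ - 1], levels[n_occ]
    delta = (target_gap - (lumo - homo)) / 2.0  # half of the gap correction
    return [e - delta if i < n_occ else e + delta
            for i, e in enumerate(levels)]

# Toy spectrum with an underestimated 2 eV gap, corrected to 4 eV.
orig = [-6.0, -5.0, -3.0, -2.0]          # two occupied, two empty (eV)
shifted = adjust_gap(orig, n_occ=2, target_gap=4.0)
```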
Adjustable Induction-Heating Coil
NASA Technical Reports Server (NTRS)
Ellis, Rod; Bartolotta, Paul
1990-01-01
Improved design for induction-heating work coil facilitates optimization of heating in different metal specimens. Three segments adjusted independently to obtain desired distribution of temperature. Reduces time needed to achieve required temperature profiles.
Time-adjusted variable resistor
NASA Technical Reports Server (NTRS)
Heyser, R. C.
1972-01-01
A timing mechanism was developed that, in effect, turns a precision high-resistance fixed resistor into a variable resistor. Switches shunt all or a portion of the resistor; the effective resistance is varied over a time interval by adjusting the switch closure rate.
Federal Register 2010, 2011, 2012, 2013, 2014
2013-10-22
... noticing a recent Postal Service filing seeking postal rate adjustments based on exigent circumstances...," is "premised on the recent recession as an exigent event." Id. at 1, 2. In Order No. 1059,...
41 CFR 105-71.151 - Later disallowances and adjustments.
Code of Federal Regulations, 2013 CFR
2013-07-01
... Management Regulations System (Continued) GENERAL SERVICES ADMINISTRATION Regional Offices-General Services... GOVERNMENTS 71.15-After-the-Grant Requirements § 105-71.151 Later disallowances and adjustments. The closeout....142; (d) Property management requirements in § 105-71.131 and § 105-71.132; and (e) Audit...
41 CFR 105-71.151 - Later disallowances and adjustments.
Code of Federal Regulations, 2011 CFR
2011-01-01
... Management Regulations System (Continued) GENERAL SERVICES ADMINISTRATION Regional Offices-General Services... GOVERNMENTS 71.15-After-the-Grant Requirements § 105-71.151 Later disallowances and adjustments. The closeout....142; (d) Property management requirements in § 105-71.131 and § 105-71.132; and (e) Audit...
18 CFR 381.104 - Annual adjustment of fees.
Code of Federal Regulations, 2014 CFR
2014-04-01
... fees. 381.104 Section 381.104 Conservation of Power and Water Resources FEDERAL ENERGY REGULATORY COMMISSION, DEPARTMENT OF ENERGY REVISED GENERAL RULES FEES General Provisions § 381.104 Annual adjustment of...(a), the fee for the first year will be $1,000. The formula for the fee in future years will be...
18 CFR 381.104 - Annual adjustment of fees.
Code of Federal Regulations, 2011 CFR
2011-04-01
... fees. 381.104 Section 381.104 Conservation of Power and Water Resources FEDERAL ENERGY REGULATORY COMMISSION, DEPARTMENT OF ENERGY REVISED GENERAL RULES FEES General Provisions § 381.104 Annual adjustment of...(a), the fee for the first year will be $1,000. The formula for the fee in future years will be...
ERIC Educational Resources Information Center
Preece, Peter F. W.
1982-01-01
The validity of various reliability-corrected procedures for adjusting for initial differences between groups in uncontrolled studies is established for subjects exhibiting linear fan-spread growth. The results are then extended to a nonlinear model of growth. (Author)
17 CFR 143.8 - Inflation-adjusted civil monetary penalties.
Code of Federal Regulations, 2010 CFR
2010-04-01
... 17 Commodity and Securities Exchanges 1 2010-04-01 2010-04-01 false Inflation-adjusted civil... JURISDICTION General Provisions § 143.8 Inflation-adjusted civil monetary penalties. (a) Unless otherwise amended by an act of Congress, the inflation-adjusted maximum civil monetary penalty for each violation...
17 CFR 143.8 - Inflation-adjusted civil monetary penalties.
Code of Federal Regulations, 2011 CFR
2011-04-01
... 17 Commodity and Securities Exchanges 1 2011-04-01 2011-04-01 false Inflation-adjusted civil... JURISDICTION General Provisions § 143.8 Inflation-adjusted civil monetary penalties. (a) Unless otherwise amended by an act of Congress, the inflation-adjusted maximum civil monetary penalty for each violation...
17 CFR 143.8 - Inflation-adjusted civil monetary penalties.
Code of Federal Regulations, 2013 CFR
2013-04-01
... 17 Commodity and Securities Exchanges 1 2013-04-01 2013-04-01 false Inflation-adjusted civil... JURISDICTION General Provisions § 143.8 Inflation-adjusted civil monetary penalties. (a) Unless otherwise amended by an act of Congress, the inflation-adjusted maximum civil monetary penalty for each violation...
17 CFR 143.8 - Inflation-adjusted civil monetary penalties.
Code of Federal Regulations, 2014 CFR
2014-04-01
... 17 Commodity and Securities Exchanges 2 2014-04-01 2014-04-01 false Inflation-adjusted civil... COMMISSION'S JURISDICTION General Provisions § 143.8 Inflation-adjusted civil monetary penalties. (a) Unless otherwise amended by an act of Congress, the inflation-adjusted maximum civil monetary penalty for...
17 CFR 143.8 - Inflation-adjusted civil monetary penalties.
Code of Federal Regulations, 2012 CFR
2012-04-01
... 17 Commodity and Securities Exchanges 1 2012-04-01 2012-04-01 false Inflation-adjusted civil... JURISDICTION General Provisions § 143.8 Inflation-adjusted civil monetary penalties. (a) Unless otherwise amended by an act of Congress, the inflation-adjusted maximum civil monetary penalty for each violation...
Hsu, Day-Shin; Chou, Yu-Yu; Tung, Yen-Shih; Liao, Chun-Chen
2010-03-01
An efficient and short entry to polyfunctionalized linear triquinanes from 2-methoxyphenols is described, utilizing the following chemistry. The Diels-Alder reactions of masked o-benzoquinones, derived from 2-methoxyphenols, with cyclopentadiene afford tricyclo[5.2.2.0(2,6)]undeca-4,10-dien-8-ones. Photochemical oxa-di-π-methane (ODPM) rearrangements and 1,3-acyl shifts of the Diels-Alder adducts are investigated. The ODPM-rearranged products are further converted to linear triquinanes by using an O-stannyl ketyl fragmentation. Application of this efficient strategy to the total synthesis of (±)-Δ9(12)-capnellene was accomplished from 2-methoxy-4-methylphenol in nine steps with 20% overall yield.
Zhang, Jingyu; Mandl, Heinz; Wang, Erping
2010-10-01
The effects of personality traits and acculturation variables on cross-cultural adjustment were investigated in 139 Chinese students in Germany (52% girls; M age = 25.3 yr., SD = 2.9). Participants were surveyed by house visits to their dormitories. Several scales were administered: (a) Big Five Inventory; (b) Vancouver Index of Acculturation; (c) sociocultural adjustment, general and academic; and (d) psychological adjustment, i.e., depression, self-esteem, and life satisfaction. Results showed that Neuroticism and Openness were two shared predictors of sociocultural adjustment. Agreeableness and mainstream acculturation were related only to general adjustment, while Conscientiousness was related only to academic adjustment. All facets of psychological adjustment were related to Neuroticism and Conscientiousness, while the positive components (self-esteem and life satisfaction) were also related to Extraversion and Openness. No influence of heritage acculturation was found. The findings are discussed in light of measurement issues and the shared and unique individual predictors of the different facets of adjustment. PMID:21117478
Risk adjustment: where are we now?
Newhouse, J P
1998-01-01
Risk adjustment is intended to minimize selection of patients or enrollees in health plans. Current efforts generally are recognized as inadequate, but improvement is difficult. The greatest short-term gain will come from introducing diagnostic information, though outpatient diagnosis data are unreliable. Initial efforts may use inpatient data, but this creates incentives to hospitalize people. Even exploiting diagnosis information leaves substantial imperfections. Partial capitation, common in behavioral health, reduces incentives to select patients and stint on services, but current policy resists it, perhaps because policymakers misinterpret the lesson of the Prospective Payment System. Theoretically, not paying plans more for providing additional services is optimal only if consumers are well informed. PMID:9719781
Permanent multipole magnets with adjustable strength
Halbach, K.
1983-03-01
Preceded by a short discussion of the motives for using permanent magnets in accelerators, a new type of permanent magnet for use in accelerators is presented. The basic design and most important properties of a quadrupole will be described that uses both steel and permanent magnet material. The field gradient produced by this magnet can be adjusted without changing any other aspect of the field produced by this quadrupole. The generalization of this concept to produce other multipole fields, or combination of multipole fields, will also be presented.
Belos Block Linear Solvers Package
2004-03-01
Belos is an extensible and interoperable framework for large-scale, iterative methods for solving systems of linear equations with multiple right-hand sides. The motivation for this framework is to provide a generic interface to a collection of algorithms for solving large-scale linear systems. Belos is interoperable because both the matrix and vectors are considered to be opaque objects--only knowledge of the matrix and vectors via elementary operations is necessary. An implementation of Belos is accomplished via the use of interfaces. One of the goals of Belos is to allow the user flexibility in specifying the data representation for the matrix and vectors and so leverage any existing software investment. The algorithms included in the package are Krylov-based linear solvers, such as Block GMRES (Generalized Minimal RESidual) and Block CG (Conjugate Gradient).
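Belos itself is a C++ package (part of Trilinos); as a minimal illustration of the Krylov iteration that Block CG generalizes to many right-hand sides, here is a plain single-vector Conjugate Gradient sketch in Python:

```python
# Minimal Conjugate Gradient sketch (illustrative only; Belos implements
# block variants of this kind of Krylov iteration in C++).
# A must be symmetric positive definite.
import numpy as np

def cg(A, b, tol=1e-10, max_iter=100):
    x = np.zeros_like(b)
    r = b - A @ x          # initial residual
    p = r.copy()           # initial search direction
    rs = r @ r
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p   # conjugate direction update
        rs = rs_new
    return x

A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
x = cg(A, b)
```

A block method would carry several right-hand sides through this iteration at once, sharing the Krylov subspace across them.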
Injunctive and Descriptive Peer Group Norms and the Academic Adjustment of Rural Early Adolescents
ERIC Educational Resources Information Center
Hamm, Jill V.; Schmid, Lorrie; Farmer, Thomas W.; Locke, Belinda
2011-01-01
This study integrates diverse literatures on peer group influence by conceptualizing and examining the relationship of peer group injunctive norms to the academic adjustment of a large and ethnically diverse sample of rural early adolescents' academic adjustment. Results of three-level hierarchical linear modeling indicated that peer groups were…
12 CFR 1209.80 - Inflation adjustments.
Code of Federal Regulations, 2014 CFR
2014-01-01
... 12 Banks and Banking 10 2014-01-01 2014-01-01 false Inflation adjustments. 1209.80 Section 1209.80... PROCEDURE Civil Money Penalty Inflation Adjustments § 1209.80 Inflation adjustments. The maximum amount of... thereafter adjusted in accordance with the Inflation Adjustment Act, on a recurring four-year cycle, is...
12 CFR 1209.80 - Inflation adjustments.
Code of Federal Regulations, 2012 CFR
2012-01-01
... 12 Banks and Banking 9 2012-01-01 2012-01-01 false Inflation adjustments. 1209.80 Section 1209.80... PROCEDURE Civil Money Penalty Inflation Adjustments § 1209.80 Inflation adjustments. The maximum amount of... thereafter adjusted in accordance with the Inflation Adjustment Act, on a recurring four-year cycle, is...
12 CFR 1209.80 - Inflation adjustments.
Code of Federal Regulations, 2013 CFR
2013-01-01
... 12 Banks and Banking 9 2013-01-01 2013-01-01 false Inflation adjustments. 1209.80 Section 1209.80... PROCEDURE Civil Money Penalty Inflation Adjustments § 1209.80 Inflation adjustments. The maximum amount of... thereafter adjusted in accordance with the Inflation Adjustment Act, on a recurring four-year cycle, is...
Elliptically polarizing adjustable phase insertion device
Carr, R.
1995-01-17
An insertion device for extracting polarized electromagnetic energy from a beam of particles is disclosed. The insertion device includes four linear arrays of magnets which are aligned with the particle beam. The magnetic field strength to which the particles are subjected is adjusted by altering the relative alignment of the arrays in a direction parallel to that of the particle beam. Both the energy and polarization of the extracted energy may be varied by moving the relevant arrays parallel to the beam direction. The present invention requires a substantially simpler and more economical superstructure than insertion devices in which the magnetic field strength is altered by changing the gap between arrays of magnets. 3 figures.
Linear Logistic Test Modeling with R
ERIC Educational Resources Information Center
Baghaei, Purya; Kubinger, Klaus D.
2015-01-01
The present paper gives a general introduction to the linear logistic test model (Fischer, 1973), an extension of the Rasch model with linear constraints on item parameters, along with eRm (an R package to estimate different types of Rasch models; Mair, Hatzinger, & Mair, 2014) functions to estimate the model and interpret its parameters. The…
Linear pose estimation from points or lines
NASA Technical Reports Server (NTRS)
Ansar, A.; Daniilidis, K.
2002-01-01
We present a general framework which allows for a novel set of linear solutions to the pose estimation problem for both n points and n lines. We present a number of simulations which compare our results to two other recent linear algorithms as well as to iterative approaches.
Electrothermal linear actuator
NASA Technical Reports Server (NTRS)
Derr, L. J.; Tobias, R. A.
1969-01-01
Converting electric power into powerful linear thrust without the generation of magnetic fields is accomplished with an electrothermal linear actuator. When heated by an energized filament, a stack of bimetallic washers expands and drives the end of the shaft upward.
NASA Technical Reports Server (NTRS)
Tuey, R. C.
1972-01-01
Computer solutions of linear programming problems are outlined. Information covers vector spaces, convex sets, and matrix algebra elements for solving simultaneous linear equations. Dual problems, reduced cost analysis, ranges, and error analysis are illustrated.
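The convex-set material summarized above underlies the key fact that a linear program attains its optimum at a vertex of the feasible polytope. A toy sketch of that idea (not the report's code), solving a two-variable LP by enumerating constraint intersections:

```python
# Toy LP solver (illustration only): maximize c@x subject to A@x <= b and
# x >= 0 by enumerating vertices of the feasible region -- intersections of
# pairs of constraints -- and keeping the best feasible one.
import itertools
import numpy as np

def solve_lp_by_vertices(c, A, b):
    # Fold x >= 0 into the constraint set as -x <= 0.
    A_all = np.vstack([A, -np.eye(2)])
    b_all = np.concatenate([b, np.zeros(2)])
    best_x, best_val = None, -np.inf
    for i, j in itertools.combinations(range(len(b_all)), 2):
        M = A_all[[i, j]]
        if abs(np.linalg.det(M)) < 1e-12:
            continue  # parallel constraints: no vertex
        x = np.linalg.solve(M, b_all[[i, j]])
        if np.all(A_all @ x <= b_all + 1e-9) and c @ x > best_val:
            best_x, best_val = x, c @ x
    return best_x, best_val

# maximize 3x + 2y  s.t.  x + y <= 4,  x <= 3,  x >= 0, y >= 0
c = np.array([3.0, 2.0])
A = np.array([[1.0, 1.0], [1.0, 0.0]])
b = np.array([4.0, 3.0])
x_opt, val = solve_lp_by_vertices(c, A, b)
```

Practical codes use the simplex method or interior-point methods instead of brute-force enumeration, but the geometry is the same.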
ADJUSTED FIELD PROFILE FOR THE CHROMATICITY CANCELLATION IN FFAG ACCELERATORS.
RUGGIERO, A.G.
2004-10-13
An earlier report reviewed four major rules for designing the lattice of Fixed-Field Alternating-Gradient (FFAG) accelerators. One of these rules deals with the search for the Adjusted Field Profile, that is, the nonlinear field distribution along the length and the width of the accelerator magnets, which compensates for the chromatic behavior and thus considerably reduces the variation of the betatron tunes during acceleration over a large momentum range. The present report defines the method for the search for the Adjusted Field Profile.
Pulse shape adjustment for the SLC damping ring kickers
Mattison, T.; Cassel, R.; Donaldson, A.; Fischer, H.; Gough, D.
1991-05-01
The difficulties with damping ring kickers that prevented operation of the SLAC Linear Collider in full multiple bunch mode have been overcome by shaping the current pulse to compensate for imperfections in the magnets. The risetime was improved by a peaking capacitor, with a tunable inductor to provide a locally flat pulse. The pulse was flattened by an adjustable droop inductor. Fine adjustment was provided by pulse forming line tuners driven by stepping motors. Further risetime improvement will be obtained by a saturating ferrite pulse sharpener. 4 refs., 3 figs.
NASA Technical Reports Server (NTRS)
Lawson, C. L.; Krogh, F. T.; Gold, S. S.; Kincaid, D. R.; Sullivan, J.; Williams, E.; Hanson, R. J.; Haskell, K.; Dongarra, J.; Moler, C. B.
1982-01-01
The Basic Linear Algebra Subprograms (BLAS) library is a collection of 38 FORTRAN-callable routines for performing basic operations of numerical linear algebra. The BLAS library is a portable and efficient source of basic operations for designers of programs involving linear algebraic computations. The BLAS library is supplied in portable FORTRAN and Assembler code versions for IBM 370, UNIVAC 1100, and CDC 6000 series computers.
Simulation of a medical linear accelerator for teaching purposes.
Anderson, Rhys; Lamey, Michael; MacPherson, Miller; Carlone, Marco
2015-01-01
Simulation software for medical linear accelerators that can be used in a teaching environment was developed. The components of linear accelerators were modeled to first order accuracy using analytical expressions taken from the literature. The expressions used constants that were empirically set such that realistic response could be expected. These expressions were programmed in a MATLAB environment with a graphical user interface in order to produce an environment similar to that of linear accelerator service mode. The program was evaluated in a systematic fashion, where parameters affecting the clinical properties of medical linear accelerator beams were adjusted independently, and the effects on beam energy and dose rate recorded. These results confirmed that beam tuning adjustments could be simulated in a simple environment. Further, adjustment of service parameters over a large range was possible, and this allows the demonstration of linear accelerator physics in an environment accessible to both medical physicists and linear accelerator service engineers. In conclusion, a software tool, named SIMAC, was developed to improve the teaching of linear accelerator physics in a simulated environment. SIMAC performed in a similar manner to medical linear accelerators. The authors hope that this tool will be valuable as a teaching tool for medical physicists and linear accelerator service engineers.
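SIMAC models accelerator components to first order with empirically tuned constants. As a flavor of that approach, here is a hypothetical toy model (the constants and functional forms below are illustrative assumptions, not SIMAC's expressions): dose rate scales with gun current and falls off as the beam energy is detuned from its target.

```python
# Hypothetical first-order linac model (illustrative assumptions only, not
# SIMAC's actual expressions): beam energy grows with RF power, and dose
# rate is linear in gun current with a Gaussian falloff in energy detuning.
import math

def beam_energy(klystron_power_mw, k=2.0):
    """Toy relation: beam energy (MeV) proportional to RF power (MW)."""
    return k * klystron_power_mw

def dose_rate(gun_current_ma, energy_mev, target_mev=6.0, width=1.0):
    """Toy dose rate: linear in gun current, Gaussian in energy detuning."""
    detune = (energy_mev - target_mev) / width
    return 100.0 * gun_current_ma * math.exp(-detune ** 2)

on_tune  = dose_rate(1.0, beam_energy(3.0))   # 6 MeV beam, on target
off_tune = dose_rate(1.0, beam_energy(4.0))   # 8 MeV beam, detuned
```

Adjusting a "service parameter" such as the RF power then produces the kind of coupled energy/dose-rate response the simulator demonstrates.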
MCCB warm adjustment testing concept
NASA Astrophysics Data System (ADS)
Erdei, Z.; Horgos, M.; Grib, A.; Preradović, D. M.; Rodic, V.
2016-08-01
This paper presents an experimental investigation into the operating behavior of the thermal protection device of an MCCB (Molded Case Circuit Breaker). One of the main functions of the circuit breaker is to protect the circuits in which it is mounted against possible overloads. The tripping mechanism for the overload protection is based on a bimetal movement during a specific time frame. This movement needs to be controlled, and as a solution to control it we chose the warm adjustment concept. This concept is meant to improve process capability control and the final output. The warm adjustment device creates a unique adjustment of the bimetal position for each individual breaker, determined while the test current flows through a phase that must trip within a certain amount of time. This time is predetermined by calculation for all standard amperage ratings and complies with the requirements of the IEC 60947 standard.
A terabyte linear tape recorder
NASA Technical Reports Server (NTRS)
Webber, John C.
1994-01-01
A plan has been formulated and selected for a NASA Phase 2 SBIR award for using the VLBA tape recorder for recording general data. The VLBA tape recorder is a high-speed, high-density linear tape recorder developed for Very Long Baseline Interferometry (VLBI) which is presently capable of recording at rates up to 2 Gbit/sec and holding up to 1 Terabyte of data on one tape, using a special interface and not employing error correction. A general-purpose interface and error correction will be added so that the recorder can be used in other high-speed, high-capacity applications.
Generalized Fibonacci photon sieves
NASA Astrophysics Data System (ADS)
Ke, Jie; Zhang, Junyong
2015-08-01
We propose a family of zone plates which are produced by the generalized Fibonacci sequences, and their axial focusing properties are analyzed in detail. Compared with traditional Fresnel zone plates, the generalized Fibonacci zone plates present two axial foci with equal intensity. Besides, we propose an approach to adjust the axial locations of the two foci by means of different optical path differences, and further give the deterministic ratio of the two focal distances, which is attributable to the underlying generalized Fibonacci sequences. The generalized Fibonacci zone plates may allow for new applications in micro- and nanophotonics.
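One common generalization of the Fibonacci recurrence is F(n) = p·F(n-1) + q·F(n-2); the ratio of consecutive terms converges to a generalized golden mean, which is the kind of deterministic ratio the abstract refers to. (The exact sequence family used for the zone plates is defined in the paper; this sketch only illustrates the sequences.)

```python
# Generalized Fibonacci sequence F(n) = p*F(n-1) + q*F(n-2).
# For p = q = 1 this is the ordinary Fibonacci sequence, whose ratio of
# consecutive terms converges to the golden ratio (1 + sqrt(5)) / 2.
import math

def generalized_fibonacci(n, p=1, q=1, f0=0, f1=1):
    seq = [f0, f1]
    for _ in range(n - 2):
        seq.append(p * seq[-1] + q * seq[-2])
    return seq

seq = generalized_fibonacci(30)
ratio = seq[-1] / seq[-2]            # converges to the golden ratio here
golden = (1 + math.sqrt(5)) / 2
```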
Communications circuit including a linear quadratic estimator
Ferguson, Dennis D.
2015-07-07
A circuit includes a linear quadratic estimator (LQE) configured to receive a plurality of measurements of a signal. The LQE is configured to weight the measurements based on their respective uncertainties to produce weighted averages. The circuit further includes a controller coupled to the LQE and configured to selectively adjust at least one data link parameter associated with a communication channel in response to receiving the weighted averages.
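The uncertainty weighting described above is, at its simplest, inverse-variance weighting: noisier measurements contribute less to the estimate. A minimal sketch (illustrative names, not the patent's implementation):

```python
# Inverse-variance weighted average: the minimum-variance linear combination
# of independent measurements of the same quantity. This is the weighting
# idea behind linear quadratic (Kalman-type) estimators.

def weighted_estimate(measurements, variances):
    weights = [1.0 / v for v in variances]   # weight = 1 / uncertainty
    total = sum(weights)
    return sum(w * m for w, m in zip(weights, measurements)) / total

# Two measurements of the same signal level; the less noisy one dominates.
est = weighted_estimate([10.0, 14.0], [1.0, 4.0])
```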
Adjustable mount for electro-optic transducers in an evacuated cryogenic system
NASA Technical Reports Server (NTRS)
Crossley, Edward A., Jr. (Inventor); Haynes, David P. (Inventor); Jones, Howard C. (Inventor); Jones, Irby W. (Inventor)
1987-01-01
The invention is an adjustable mount for positioning an electro-optic transducer in an evacuated cryogenic environment. Electro-optic transducers are used in this manner as high-sensitivity detectors of gas emission lines in spectroscopic analysis. The mount is made up of an adjusting mechanism and a transducer mount. The adjusting mechanism provides five degrees of freedom: linear adjustments and angular adjustments. The mount allows the use of an internal lens to focus energy on the transducer element, thereby improving the efficiency of the detection device. Further, the transducer mount, although attached to the adjusting mechanism, is thermally isolated such that a cryogenic environment can be maintained at the transducer while the adjusting mechanism remains at room temperature. Radiation shields are also incorporated to further reduce heat flow to the transducer location.
On Solving Non-Autonomous Linear Difference Equations with Applications
ERIC Educational Resources Information Center
Abu-Saris, Raghib M.
2006-01-01
An explicit formula is established for the general solution of the homogeneous non-autonomous linear difference equation. The formula developed is then used to characterize globally periodic linear difference equations with constant coefficients.
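For the scalar equation x(n+1) = a(n)·x(n) + b(n), the classical explicit solution is x(n) = (∏ a)·x(0) plus each b(k) propagated through the remaining a's. A sketch checking the closed form against direct iteration (the paper's formula is in this spirit; exact notation follows the article):

```python
# Closed form for the non-autonomous linear difference equation
#   x(n+1) = a(n) * x(n) + b(n):
#   x(n) = (prod_{i<n} a(i)) * x0 + sum_k (prod_{k<i<n} a(i)) * b(k)
import math

def iterate(a, b, x0, n):
    x = x0
    for i in range(n):
        x = a[i] * x + b[i]
    return x

def closed_form(a, b, x0, n):
    total = math.prod(a[:n]) * x0
    for k in range(n):
        total += math.prod(a[k + 1:n]) * b[k]   # b(k) propagated forward
    return total

a = [2, 3, 1, 4]
b = [1, 0, 5, 2]
x_iter = iterate(a, b, 7, 4)
x_closed = closed_form(a, b, 7, 4)
```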
Church, R M; Gibbon, J
1982-04-01
Responses of 26 rats were reinforced following a signal of a certain duration, but not following signals of shorter or longer durations. This led to a positive temporal generalization gradient with a maximum at the reinforced duration in six experiments. Spacing of the nonreinforced signals did not influence the gradient, but the location of the maximum and breadth of the gradient increased with the duration of the reinforced signal. Reduction of reinforcement, either by partial reinforcement or reduction in the probability of a positive signal, led to a decrease in the height of the generalization gradient. There were large, reliable individual differences in the height and breadth of the generalization gradient. When the conditions of reinforcement were reversed (responses reinforced following all signals longer or shorter than a single nonreinforced duration), eight additional rats had a negative generalization gradient with a minimum at a signal duration shorter than the single nonreinforced duration. A scalar timing theory is described that provided a quantitative fit of the data. This theory involved a clock that times in linear units with an accurate mean and a negligible variance, a distribution of memory times that is normally distributed with an accurate mean and a scalar standard deviation, and a rule to respond if the clock is "close enough" to a sample of the memory time distribution. This decision is based on a ratio of the discrepancy between the clock time and the remembered time, to the remembered time. When this ratio is below a (variable) threshold, subjects respond. When three timing parameters--coefficient of variation of the memory time, the mean and the standard deviation of the threshold--were set at their median values, a theory with two free parameters accounted for 96% of the variance. The two parameters reflect the probability of attention to time and the probability of a response given inattention. These parameters were not influenced
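The scalar timing decision rule described above can be simulated directly: respond when the relative discrepancy between the clock and a sample from the memory distribution falls below a threshold. A sketch with illustrative parameter values (not the paper's fitted values), reproducing a gradient peaked at the reinforced duration:

```python
# Scalar timing sketch: remembered times are normally distributed around the
# reinforced duration with a scalar (proportional) SD; the subject responds
# when |clock - memory| / memory is below a threshold. Parameter values here
# are illustrative, not the fitted values from the paper.
import random

random.seed(0)

def response_probability(signal_s, reinforced_s, cv=0.2, threshold=0.25,
                         n_trials=20000):
    hits = 0
    for _ in range(n_trials):
        memory = random.gauss(reinforced_s, cv * reinforced_s)  # scalar SD
        if memory > 0 and abs(signal_s - memory) / memory < threshold:
            hits += 1
    return hits / n_trials

# Generalization gradient over signal durations, reinforced duration = 4 s.
gradient = {t: response_probability(t, reinforced_s=4.0)
            for t in [1.0, 2.0, 4.0, 8.0, 16.0]}
```

The simulated gradient is highest at the reinforced duration and falls off on both sides, as in the positive gradients reported above.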
Economic Pressures and Family Adjustment.
ERIC Educational Resources Information Center
Haccoun, Dorothy Markiewicz; Ledingham, Jane E.
The relationships between economic stress on the family and child and parental adjustment were examined for a sample of 199 girls and boys in grades one, four, and seven. These associations were examined separately for families in which both parents were present and in which mothers only were at home. Economic stress was associated with boys'…
21 CFR 880.5100 - AC-powered adjustable hospital bed.
Code of Federal Regulations, 2010 CFR
2010-04-01
... 21 Food and Drugs 8 2010-04-01 2010-04-01 false AC-powered adjustable hospital bed. 880.5100... (CONTINUED) MEDICAL DEVICES GENERAL HOSPITAL AND PERSONAL USE DEVICES General Hospital and Personal Use Therapeutic Devices § 880.5100 AC-powered adjustable hospital bed. (a) Identification. An...
18 CFR 381.304 - Review of Department of Energy denial of adjustment.
Code of Federal Regulations, 2014 CFR
2014-04-01
... Energy denial of adjustment. 381.304 Section 381.304 Conservation of Power and Water Resources FEDERAL ENERGY REGULATORY COMMISSION, DEPARTMENT OF ENERGY REVISED GENERAL RULES FEES Fees Applicable to General Activities § 381.304 Review of Department of Energy denial of adjustment. Link to an amendment published...
Adjustment Issues Affecting Employment for Immigrants from the Former Soviet Union.
ERIC Educational Resources Information Center
Yost, Anastasia Dimun; Lucas, Margaretha S.
2002-01-01
Describes major issues, including culture shock and loss of status, that affect general adjustment of immigrants and refugees from the former Soviet Union who are resettling in the United States. Issues that affect career and employment adjustment are described and the interrelatedness of general and career issues is explored. (Contains 39…
Performance of An Adjustable Strength Permanent Magnet Quadrupole
Gottschalk, S.C.; DeHart, T.E.; Kangas, K.W.; Spencer, C.M.; Volk, J.T.; /Fermilab
2006-03-01
An adjustable strength permanent magnet quadrupole suitable for use in the Next Linear Collider has been built and tested. The pole length is 42 cm, the aperture diameter 13 mm, the peak pole tip strength 1.03 Tesla, and the peak integrated gradient x length (GL) is 68.7 Tesla. This paper describes measurements of strength, magnetic centerline (CL), and field quality made using an air-bearing rotating coil system. The magnetic CL stability during the -20% strength adjustment proposed for beam-based alignment was < 0.2 microns. Strength hysteresis was negligible. Thermal expansion of quadrupole and measurement parts caused a repeatable and easily compensated change in the vertical magnetic CL. Calibration procedures, as well as CL measurements made over a wider tuning range of 100% to 20% in strength useful for a wide range of applications, will be described. The impact of eddy currents in the steel poles on the magnetic field during strength adjustments will be reported.
Wiedemann, H.
1981-11-01
Since no linear colliders have been built yet it is difficult to know at what energy the linear cost scaling of linear colliders drops below the quadratic scaling of storage rings. There is, however, no doubt that a linear collider facility for a center of mass energy above say 500 GeV is significantly cheaper than an equivalent storage ring. In order to make the linear collider principle feasible at very high energies a number of problems have to be solved. There are two kinds of problems: one which is related to the feasibility of the principle and the other kind of problems is associated with minimizing the cost of constructing and operating such a facility. This lecture series describes the problems and possible solutions. Since the real test of a principle requires the construction of a prototype I will in the last chapter describe the SLC project at the Stanford Linear Accelerator Center.
NASA Technical Reports Server (NTRS)
Holloway, Sidney E., III (Inventor); Crossley, Edward A., Jr. (Inventor); Jones, Irby W. (Inventor); Miller, James B. (Inventor); Davis, C. Calvin (Inventor); Behun, Vaughn D. (Inventor); Goodrich, Lewis R., Sr. (Inventor)
1992-01-01
A linear mass actuator includes an upper housing and a lower housing connectable to each other and having a central passageway passing axially through a mass that is linearly movable in the central passageway. Rollers mounted in the upper and lower housings in frictional engagement with the mass translate the mass linearly in the central passageway, and drive motors operatively coupled to the rollers rotate them to drive the mass axially in the central passageway.
Linear phase compressive filter
McEwan, Thomas E.
1995-01-01
A phase linear filter for soliton suppression is in the form of a laddered series of stages of non-commensurate low pass filters, with each low pass filter having a series-coupled inductance (L) and a reverse-biased, voltage-dependent varactor diode to ground, which acts as a variable capacitance (C). L and C values are set to levels which correspond to a linear or conventional phase linear filter. Inductance is mapped directly from that of an equivalent nonlinear transmission line, and capacitance is mapped from the linear case using a large signal equivalent of a nonlinear transmission line.
Linear phase compressive filter
McEwan, T.E.
1995-06-06
A phase linear filter for soliton suppression is in the form of a laddered series of stages of non-commensurate low pass filters, with each low pass filter having a series-coupled inductance (L) and a reverse-biased, voltage-dependent varactor diode to ground, which acts as a variable capacitance (C). L and C values are set to levels which correspond to a linear or conventional phase linear filter. Inductance is mapped directly from that of an equivalent nonlinear transmission line, and capacitance is mapped from the linear case using a large signal equivalent of a nonlinear transmission line. 2 figs.
Fault tolerant linear actuator
Tesar, Delbert
2004-09-14
In varying embodiments, the fault tolerant linear actuator of the present invention is a new and improved linear actuator with fault tolerance and positional control that may incorporate velocity summing, force summing, or a combination of the two. In one embodiment, the invention offers a velocity summing arrangement with a differential gear between two prime movers driving a cage, which then drives a linear spindle screw transmission. Other embodiments feature two prime movers driving separate linear spindle screw transmissions, one internal and one external, in a totally concentric and compact integrated module.
Code of Federal Regulations, 2014 CFR
2014-04-01
... calendar year because of an error that does not constitute a compensation adjustment as defined in... compensation adjustment as defined in paragraph (b) of this section, the employer shall adjust the error by... compensation, proper adjustments with respect to the contributions shall be made, without interest,...
Code of Federal Regulations, 2013 CFR
2013-04-01
... calendar year because of an error that does not constitute a compensation adjustment as defined in... compensation adjustment as defined in paragraph (b) of this section, the employer shall adjust the error by... compensation, proper adjustments with respect to the contributions shall be made, without interest,...
Adjusting to University: The Hong Kong Experience
ERIC Educational Resources Information Center
Yau, Hon Keung; Sun, Hongyi; Cheng, Alison Lai Fong
2012-01-01
Students' adjustment to the university environment is an important factor in predicting university outcomes and is crucial to their future achievements. University support to students' transition to university life can be divided into three dimensions: academic adjustment, social adjustment and psychological adjustment. However, these…
12 CFR 19.240 - Inflation adjustments.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 12 Banks and Banking 1 2010-01-01 2010-01-01 false Inflation adjustments. 19.240 Section 19.240... PROCEDURE Civil Money Penalty Inflation Adjustments § 19.240 Inflation adjustments. (a) The maximum amount... Civil Penalties Inflation Adjustment Act of 1990 (28 U.S.C. 2461 note) as follows: ER10NO08.001 (b)...
12 CFR 19.240 - Inflation adjustments.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 12 Banks and Banking 1 2011-01-01 2011-01-01 false Inflation adjustments. 19.240 Section 19.240... PROCEDURE Civil Money Penalty Inflation Adjustments § 19.240 Inflation adjustments. (a) The maximum amount... Civil Penalties Inflation Adjustment Act of 1990 (28 U.S.C. 2461 note) as follows: ER10NO08.001 (b)...
12 CFR 19.240 - Inflation adjustments.
Code of Federal Regulations, 2012 CFR
2012-01-01
... 12 Banks and Banking 1 2012-01-01 2012-01-01 false Inflation adjustments. 19.240 Section 19.240... PROCEDURE Civil Money Penalty Inflation Adjustments § 19.240 Inflation adjustments. (a) The maximum amount... Civil Penalties Inflation Adjustment Act of 1990 (28 U.S.C. 2461 note) as follows: ER10NO08.001 (b)...
Code of Federal Regulations, 2010 CFR
2010-10-01
... DEPARTMENT OF HEALTH AND HUMAN SERVICES GENERAL ADMINISTRATION SALARY OFFSET § 33.3 General rule. (a..., the individual is provided written notice of the nature and the amount of the adjustment and point of... written notice of the nature and the amount of the adjustment and a point of contact for contesting...
Digital optimeter based on linear CCD
NASA Astrophysics Data System (ADS)
Hu, Qing; Xu, Yuanling
2013-10-01
In this paper, the development of a new type of digital optimeter based on a linear CCD is introduced and discussed. It is based on a traditional autocollimation optical system and optical lever motion, with a linear CCD as the measuring element. A light band generated by a slit is captured by the linear CCD after passing through the autocollimation optical system. A mirror placed in the optical path of this system is controlled by the displacement of a measuring slide in order to adjust the light band imaging position. The displacement of the light band is detected by the CCD and is then displayed in digital format. This design eliminates the signal quality and signal overspeed issues of digital optimeters that use a grating as the measuring element. The final product based on this technique has been released, offering resolutions of 0.1 μm and 0.02 μm.
Linearly polarized fiber amplifier
Kliner, Dahv A.; Koplow, Jeffery P.
2004-11-30
Optically pumped rare-earth-doped polarizing fibers exhibit significantly higher gain for one linear polarization state than for the orthogonal state. Such a fiber can be used to construct a single-polarization fiber laser, amplifier, or amplified-spontaneous-emission (ASE) source without the need for additional optical components to obtain stable, linearly polarized operation.
Richter, B.
1985-12-01
A report is given on the goals and progress of the SLAC Linear Collider. The status of the machine and the detectors are discussed and an overview is given of the physics which can be done at this new facility. Some ideas on how (and why) large linear colliders of the future should be built are given.
Linear regression in astronomy. II
NASA Technical Reports Server (NTRS)
Feigelson, Eric D.; Babu, Gutti J.
1992-01-01
A wide variety of least-squares linear regression procedures used in observational astronomy, particularly investigations of the cosmic distance scale, are presented and discussed. The classes of linear models considered are (1) unweighted regression lines, with bootstrap and jackknife resampling; (2) regression solutions when measurement error, in one or both variables, dominates the scatter; (3) methods to apply a calibration line to new data; (4) truncated regression models, which apply to flux-limited data sets; and (5) censored regression models, which apply when nondetections are present. For the calibration problem we develop two new procedures: a formula for the intercept offset between two parallel data sets, which propagates slope errors from one regression to the other; and a generalization of the Working-Hotelling confidence bands to nonstandard least-squares lines. They can provide improved error analysis for Faber-Jackson, Tully-Fisher, and similar cosmic distance scale relations.
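The unweighted regression with bootstrap resampling described in class (1) above can be sketched as follows. This is a minimal illustration with synthetic data; the slope, noise level, and sample size are assumptions for the example, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic linear relation with scatter (illustrative values only).
x = rng.uniform(0.0, 10.0, size=50)
y = 2.5 * x + 1.0 + rng.normal(0.0, 1.5, size=50)

def fit_line(x, y):
    """Ordinary least-squares slope and intercept."""
    slope, intercept = np.polyfit(x, y, 1)
    return slope, intercept

# Bootstrap: refit the line on resampled (x, y) pairs to estimate
# the uncertainty of the slope without parametric assumptions.
n_boot = 2000
boot_slopes = np.empty(n_boot)
for i in range(n_boot):
    idx = rng.integers(0, len(x), size=len(x))
    boot_slopes[i], _ = fit_line(x[idx], y[idx])

slope, intercept = fit_line(x, y)
slope_err = boot_slopes.std(ddof=1)
print(f"slope = {slope:.3f} +/- {slope_err:.3f}")
```

Resampling whole (x, y) pairs, rather than residuals, keeps the procedure valid when the scatter is not homoscedastic, which is the typical situation in distance scale work.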
NASA Technical Reports Server (NTRS)
Clancy, John P.
1988-01-01
The object of the invention is to provide a mechanical force actuator which is lightweight and manipulatable and utilizes linear motion for push or pull forces while maintaining a constant overall length. The mechanical force producing mechanism comprises a linear actuator mechanism and a linear motion shaft mounted parallel to one another. The linear motion shaft is connected to a stationary or fixed housing and to a movable housing, where the movable housing is mechanically actuated through the actuator mechanism by either manual means or motor means. The housings are adapted to releasably receive a variety of jaw or pulling elements adapted for clamping or prying action. The stationary housing is adapted to be pivotally mounted to permit angular positioning of the housing, allowing the tool to adapt to skewed interfaces. The actuator mechanism is operated by a gear train to obtain linear motion.
Linear models: permutation methods
Cade, B.S.; Everitt, B.S.; Howell, D.C.
2005-01-01
Permutation tests (see Permutation Based Inference) for the linear model have applications in behavioral studies when traditional parametric assumptions about the error term in a linear model are not tenable. Improved validity of Type I error rates can be achieved with properly constructed permutation tests. Perhaps more importantly, increased statistical power, improved robustness to effects of outliers, and detection of alternative distributional differences can be achieved by coupling permutation inference with alternative linear model estimators. For example, it is well known that estimates of the mean in a linear model are extremely sensitive to even a single outlying value of the dependent variable, compared to estimates of the median [7, 19]. Traditionally, linear modeling focused on estimating changes in the center of distributions (means or medians). However, quantile regression allows distributional changes to be estimated in all or any selected part of a distribution of responses, providing a more complete statistical picture that has relevance to many biological questions [6]...
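A permutation test for a linear-model slope of the kind described above can be sketched as follows; the data, effect size, and permutation count are illustrative assumptions, not taken from the entry.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative data with a clear linear effect.
x = rng.normal(size=40)
y = 1.0 * x + rng.normal(size=40)

def slope(x, y):
    """OLS slope of y on x."""
    return np.polyfit(x, y, 1)[0]

observed = slope(x, y)

# Permutation test: shuffling y breaks any x-y association, so the
# slopes of the permuted data form the null distribution of the statistic.
n_perm = 5000
null = np.array([slope(x, rng.permutation(y)) for _ in range(n_perm)])

# Two-sided p-value with the standard +1 correction.
p_value = (np.sum(np.abs(null) >= abs(observed)) + 1) / (n_perm + 1)
print(f"observed slope = {observed:.3f}, p = {p_value:.4f}")
```

Because the null distribution is built from the data themselves, no normality assumption on the error term is needed; swapping the OLS slope for a robust or quantile estimator in `slope()` gives the coupled permutation-plus-alternative-estimator inference the entry describes.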
Sauer's non-linear voltage division.
Schwan, H P; McAdams, E T; Jossinet, J
2002-09-01
The non-linearity of the electrode-tissue interface impedance gives rise to harmonics and thus degrades the accuracy of impedance measurements. Also, electrodes are often driven into the non-linear range of their polarisation impedance. This is particularly true in clinical applications. Techniques to correct for electrode effects are usually based on linear electrode impedance data. However, these data can be very different from the non-linear values needed. Non-linear electrode data suggested a model based on simple assumptions. It is useful in predicting the frequency dependence of non-linear effects from linear properties. Sauer's treatment is a first attempt to provide a more general and rigorous basis for modelling the non-linear state. The paper reports Sauer's treatment of the non-linear case and points out its limitations. The paper considers Sauer's treatment of a series arrangement of two impedances. The tissue impedance is represented by a linear voltage-current characteristic. The interface impedance is represented by a Volterra expansion. The response of this network to periodic signals is calculated up to the second-order term of the series expansion. The resultant, time-dependent current is found to contain a DC term (rectification), as well as frequency-dependent terms. Sauer's treatment assumes a voltage clamp across the impedances and neglects higher-order terms in the series expansion. As a consequence, it fails adequately to represent some experimentally observed phenomena. It is therefore suggested that Sauer's expressions for the voltage divider should be combined with the non-linear treatments previously published by the co-authors. Although Sauer's work on the non-linear voltage divider was originally applied to the study of the non-linear behaviour of the electrode-electrolyte interface and biological tissues, it is stressed, however, that the work is applicable to a wide range of research areas.
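The rectification (DC) term that arises from truncating the Volterra expansion at second order can be checked numerically. In this sketch the coefficients g1, g2 and drive amplitude V are assumed illustrative numbers, not electrode data: for i(v) = g1*v + g2*v^2 driven by v = V*cos(wt), the time average of the current is g2*V^2/2.

```python
import numpy as np

# Weakly non-linear interface truncated at the second-order term:
# i(v) = g1*v + g2*v**2. Coefficients and amplitude are assumed.
g1, g2, V = 1.0, 0.1, 2.0

# One full period of the sinusoidal drive.
t = np.linspace(0.0, 1.0, 10000, endpoint=False)
v = V * np.cos(2.0 * np.pi * t)
i = g1 * v + g2 * v**2

# The time average of the current is the DC (rectified) component;
# analytically it equals g2*V**2/2, since <cos> = 0 and <cos^2> = 1/2.
dc = i.mean()
predicted = g2 * V**2 / 2.0
print(f"numerical DC = {dc:.6f}, predicted = {predicted:.6f}")
```

The same average over the second harmonic term (frequency 2w, amplitude g2*V^2/2) vanishes, which is why only the DC offset survives in the time average.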
45 CFR 92.51 - Later disallowances and adjustments.
Code of Federal Regulations, 2014 CFR
2014-10-01
... 45 Public Welfare 1 2014-10-01 2014-10-01 false Later disallowances and adjustments. 92.51 Section 92.51 Public Welfare Department of Health and Human Services GENERAL ADMINISTRATION UNIFORM... requirements in §§ 92.31 and 92.32; and (e) Audit requirements in § 92.26....
45 CFR 92.51 - Later disallowances and adjustments.
Code of Federal Regulations, 2013 CFR
2013-10-01
... 45 Public Welfare 1 2013-10-01 2013-10-01 false Later disallowances and adjustments. 92.51 Section 92.51 Public Welfare DEPARTMENT OF HEALTH AND HUMAN SERVICES GENERAL ADMINISTRATION UNIFORM... requirements in §§ 92.31 and 92.32; and (e) Audit requirements in § 92.26....
45 CFR 92.51 - Later disallowances and adjustments.
Code of Federal Regulations, 2011 CFR
2011-10-01
... 45 Public Welfare 1 2011-10-01 2011-10-01 false Later disallowances and adjustments. 92.51 Section 92.51 Public Welfare DEPARTMENT OF HEALTH AND HUMAN SERVICES GENERAL ADMINISTRATION UNIFORM... requirements in §§ 92.31 and 92.32; and (e) Audit requirements in § 92.26....
24 CFR 200.97 - Adjustments resulting from cost certification.
Code of Federal Regulations, 2014 CFR
2014-04-01
..., Commitment, and Endorsement Generally Applicable to Multifamily and Health Care Facility Mortgage Insurance Programs; and Continuing Eligibility Requirements for Existing Projects Cost Certification § 200.97... 24 Housing and Urban Development 2 2014-04-01 2014-04-01 false Adjustments resulting from...
24 CFR 200.97 - Adjustments resulting from cost certification.
Code of Federal Regulations, 2012 CFR
2012-04-01
..., Commitment, and Endorsement Generally Applicable to Multifamily and Health Care Facility Mortgage Insurance Programs; and Continuing Eligibility Requirements for Existing Projects Cost Certification § 200.97... 24 Housing and Urban Development 2 2012-04-01 2012-04-01 false Adjustments resulting from...
24 CFR 200.97 - Adjustments resulting from cost certification.
Code of Federal Regulations, 2011 CFR
2011-04-01
..., Commitment, and Endorsement Generally Applicable to Multifamily and Health Care Facility Mortgage Insurance Programs; and Continuing Eligibility Requirements for Existing Projects Cost Certification § 200.97... 24 Housing and Urban Development 2 2011-04-01 2011-04-01 false Adjustments resulting from...
24 CFR 200.97 - Adjustments resulting from cost certification.
Code of Federal Regulations, 2010 CFR
2010-04-01
..., Commitment, and Endorsement Generally Applicable to Multifamily and Health Care Facility Mortgage Insurance Programs; and Continuing Eligibility Requirements for Existing Projects Cost Certification § 200.97... 24 Housing and Urban Development 2 2010-04-01 2010-04-01 false Adjustments resulting from...
24 CFR 200.97 - Adjustments resulting from cost certification.
Code of Federal Regulations, 2013 CFR
2013-04-01
..., Commitment, and Endorsement Generally Applicable to Multifamily and Health Care Facility Mortgage Insurance Programs; and Continuing Eligibility Requirements for Existing Projects Cost Certification § 200.97... 24 Housing and Urban Development 2 2013-04-01 2013-04-01 false Adjustments resulting from...
36 CFR 1207.51 - Later disallowances and adjustments.
Code of Federal Regulations, 2010 CFR
2010-07-01
....42; (d) Property management requirements in §§ 1207.31 and 1207.32; and (e) Audit requirements in... ADMINISTRATION GENERAL RULES UNIFORM ADMINISTRATIVE REQUIREMENTS FOR GRANTS AND COOPERATIVE AGREEMENTS TO STATE AND LOCAL GOVERNMENTS After-The-Grant Requirements § 1207.51 Later disallowances and adjustments....
Exploring the Adjustment Problems among International Graduate Students in Hawaii
ERIC Educational Resources Information Center
Yang, Stephanie; Salzman, Michael; Yang, Cheng-Hong
2015-01-01
Due to advances in technology, American society has become more diverse. The large population of international students in the U.S. faces unique issues. According to the existing literature, the top-rated anxieties an international student faces are generally caused by language anxiety, cultural adjustment, and learning differences and barriers.…
Effective Report Writing in Vocational Evaluation and Work Adjustment Programs.
ERIC Educational Resources Information Center
Esser, Thomas J.
The document serves as a guideline to report writing for vocational evaluation and work adjustment programs, providing general principles for content with the aim of developing some uniformity in report organization. Common problems in report writing are described from the reader's and writer's perspective. Basic principles are listed which should…
20 CFR 229.51 - Adjustment of age reduction.
Code of Federal Regulations, 2010 CFR
2010-04-01
... 20 Employees' Benefits 1 2010-04-01 2010-04-01 false Adjustment of age reduction. 229.51 Section... age reduction. (a) General. If an age reduced employee or spouse overall minimum benefit is not paid for certain months before the employee or spouse attains retirement age, or the employee...
20 CFR 229.51 - Adjustment of age reduction.
Code of Federal Regulations, 2014 CFR
2014-04-01
... 20 Employees' Benefits 1 2014-04-01 2012-04-01 true Adjustment of age reduction. 229.51 Section... age reduction. (a) General. If an age reduced employee or spouse overall minimum benefit is not paid for certain months before the employee or spouse attains retirement age, or the employee...
20 CFR 229.51 - Adjustment of age reduction.
Code of Federal Regulations, 2013 CFR
2013-04-01
... 20 Employees' Benefits 1 2013-04-01 2012-04-01 true Adjustment of age reduction. 229.51 Section... age reduction. (a) General. If an age reduced employee or spouse overall minimum benefit is not paid for certain months before the employee or spouse attains retirement age, or the employee...
20 CFR 229.51 - Adjustment of age reduction.
Code of Federal Regulations, 2012 CFR
2012-04-01
... 20 Employees' Benefits 1 2012-04-01 2012-04-01 false Adjustment of age reduction. 229.51 Section... age reduction. (a) General. If an age reduced employee or spouse overall minimum benefit is not paid for certain months before the employee or spouse attains retirement age, or the employee...
Religion and Spirituality in Adjustment Following Bereavement: An Integrative Review
ERIC Educational Resources Information Center
Wortmann, Jennifer H.; Park, Crystal L.
2008-01-01
Surprisingly little research has examined the widely held assumption that religion and spirituality are generally helpful in adjusting to bereavement. A systematic literature search located 73 empirical articles that examined religion/spirituality in the context of bereavement. The authors describe the multidimensional nature of…
New Student Supports, Problems and Perceptions in Initial Adjustment.
ERIC Educational Resources Information Center
Stoughton, Darla; Wanchick, Jean
This study sought to evaluate the impact of orientation, general levels of adjustment, differences between orientation attending and non-attending students, and differences between faculty and student academic performance evaluations for freshmen at Slippery Rock University in Pennsylvania during the crucial first six weeks on campus. From a pool…
24 CFR 200.16 - Project mortgage adjustments and reductions.
Code of Federal Regulations, 2010 CFR
2010-04-01
... 24 Housing and Urban Development 2 2010-04-01 2010-04-01 false Project mortgage adjustments and reductions. 200.16 Section 200.16 Housing and Urban Development Regulations Relating to Housing and Urban... HOUSING AND URBAN DEVELOPMENT GENERAL INTRODUCTION TO FHA PROGRAMS Requirements for...
22 CFR 1423.11 - Settlement or adjustment of issues.
Code of Federal Regulations, 2010 CFR
2010-04-01
... 22 Foreign Relations 2 2010-04-01 2010-04-01 true Settlement or adjustment of issues. 1423.11 Section 1423.11 Foreign Relations FOREIGN SERVICE LABOR RELATIONS BOARD; FEDERAL LABOR RELATIONS AUTHORITY; GENERAL COUNSEL OF THE FEDERAL LABOR RELATIONS AUTHORITY; AND THE FOREIGN SERVICE IMPASSE DISPUTES...
A Navigational Analysis of Linear and Non-Linear Hypermedia Interfaces.
ERIC Educational Resources Information Center
Hall, Richard H.; Balestra, Joel; Davis, Miles
The purpose of this experiment was to assess the effectiveness of a comprehensive model for the analysis of hypermap navigation patterns through a comparison of navigation patterns associated with a traditional linear interface versus a non-linear "hypermap" interface. Twenty-six general psychology university students studied material on bipolar…
Adjustable link for kinematic mounting systems
Hale, Layton C.
1997-01-01
An adjustable link for kinematic mounting systems. The adjustable link is a low-cost, passive device that provides backlash-free adjustment along its single constraint direction and flexural freedom in all other directions. The adjustable link comprises two spheres, two sockets in which the spheres are adjustably retained, and a connection link threadedly connected at each end to the spheres, to provide a single direction of restraint and to adjust the length or distance between the sockets. Six such adjustable links provide six degrees of freedom for mounting an instrument on a support. The adjustable link has applications in any machine or instrument requiring precision adjustment in six degrees of freedom, isolation from deformations of the supporting platform, and/or additional structural damping. The damping is accomplished by using a hollow connection link that contains an inner rod and a viscoelastic separation layer between the two.
Adjustable link for kinematic mounting systems
Hale, L.C.
1997-07-01
An adjustable link for kinematic mounting systems is disclosed. The adjustable link is a low-cost, passive device that provides backlash-free adjustment along its single constraint direction and flexural freedom in all other directions. The adjustable link comprises two spheres, two sockets in which the spheres are adjustably retained, and a connection link threadedly connected at each end to the spheres, to provide a single direction of restraint and to adjust the length or distance between the sockets. Six such adjustable links provide six degrees of freedom for mounting an instrument on a support. The adjustable link has applications in any machine or instrument requiring precision adjustment in six degrees of freedom, isolation from deformations of the supporting platform, and/or additional structural damping. The damping is accomplished by using a hollow connection link that contains an inner rod and a viscoelastic separation layer between the two. 3 figs.
NASA Technical Reports Server (NTRS)
Studer, P. A. (Inventor)
1983-01-01
A linear magnetic bearing system having electromagnetic vernier flux paths in shunt relation with permanent magnets, so that the vernier flux does not traverse the permanent magnet, is described. Novelty is believed to reside in providing a linear magnetic bearing having electromagnetic flux paths that bypass high reluctance permanent magnets. Particular novelty is believed to reside in providing a linear magnetic bearing with a pair of axially spaced elements having electromagnets for establishing vernier x and y axis control. The magnetic bearing system has possible use in connection with a long life reciprocating cryogenic refrigerator that may be used on the space shuttle.
High Technology and General Education.
ERIC Educational Resources Information Center
Owen, H. James
1984-01-01
Considers the general education component of technical programs as a means of helping adults adjust to today's industrial changes in processes, machines, and management and thereby to hold jobs, move into new ones, and change careers. Reviews the literature on the general education needs of students in high technology programs and models for…
Adjustable extender for instrument module
Sevec, J.B.; Stein, A.D.
1975-11-01
A blank extender module used to mount an instrument module in front of its console for repair or test purposes has been equipped with a rotatable mount and means for locking the mount at various angles of rotation for easy accessibility. The rotatable mount includes a horizontal conduit supported by bearings within the blank module. The conduit is spring-biased in a retracted position within the blank module and in this position a small gear mounted on the conduit periphery is locked by a fixed pawl. The conduit and instrument mount can be pulled into an extended position with the gear clearing the pawl to permit rotation and adjustment of the instrument.
... is the device most commonly used for external beam radiation treatments for patients with cancer. The linear ... shape of the patient's tumor and the customized beam is directed to the patient's tumor. The beam ...
Isolated linear blaschkoid psoriasis.
Nasimi, M; Abedini, R; Azizpour, A; Nikoo, A
2016-10-01
Linear psoriasis (LPs) is considered a rare clinical presentation of psoriasis, characterized by linear erythematous and scaly lesions along the lines of Blaschko. We report the case of a 20-year-old man who presented with asymptomatic linear and S-shaped erythematous, scaly plaques on the right side of his trunk. The plaques were arranged along the lines of Blaschko with a sharp demarcation at the midline. Histological examination of a skin biopsy confirmed the diagnosis of psoriasis. Topical calcipotriol and betamethasone dipropionate ointments were prescribed for 2 months. Good clinical improvement was achieved, with reduction in lesion thickness and scaling. In patients with linear erythematous and scaly plaques along the lines of Blaschko, the diagnosis of LPs should be kept in mind, especially in patients with asymptomatic lesions of late onset. PMID:27663156
NASA Technical Reports Server (NTRS)
Laughlin, Darren
1995-01-01
Inertial linear actuators developed to suppress residual accelerations of nominally stationary or steadily moving platforms. Function like long-stroke version of voice coil in conventional loudspeaker, with superimposed linear variable-differential transformer. Basic concept also applicable to suppression of vibrations of terrestrial platforms. For example, laboratory table equipped with such actuators plus suitable vibration sensors and control circuits made to vibrate much less in presence of seismic, vehicular, and other environmental vibrational disturbances.
Shetty, Shricharith; Rao, Raghavendra; Kudva, R Ranjini; Subramanian, Kumudhini
2016-01-01
Alopecia areata (AA) of the scalp is known to present with various shapes and extents of hair loss. Typically it presents as circumscribed patches of alopecia with the underlying skin remaining normal. We describe a rare variant of AA presenting in a linear, band-like form. Only four cases of linear alopecia have been reported in the medical literature to date, all four diagnosed as lupus erythematosus profundus. PMID:27625568
Parenting Perfectionism and Parental Adjustment.
Lee, Meghan A; Schoppe-Sullivan, Sarah J; Kamp Dush, Claire M
2012-02-01
The parental role is expected to be one of the most gratifying and rewarding roles in life. As expectations of parenting become ever higher, the implications of parenting perfectionism for parental adjustment warrant investigation. Using longitudinal data from 182 couples, this study examined the associations between societal- and self-oriented parenting perfectionism and new mothers' and fathers' parenting self-efficacy, stress, and satisfaction. For mothers, societal-oriented parenting perfectionism was associated with lower parenting self-efficacy, but self-oriented parenting perfectionism was associated with higher parenting satisfaction. For fathers, societal-oriented parenting perfectionism was associated with higher parenting stress, whereas higher levels of self-oriented parenting perfectionism were associated with higher parenting self-efficacy, lower parenting stress, and greater parenting satisfaction. These findings support the distinction between societal- and self-oriented perfectionism, extend research on perfectionism to interpersonal adjustment in the parenting domain, and provide the first evidence for the potential consequences of holding excessively high standards for parenting. PMID:22328797
Proportional assist ventilation and neurally adjusted ventilatory assist.
Kacmarek, Robert M
2011-02-01
Patient-ventilator asynchrony is a common problem in all patients actively triggering the mechanical ventilator. In many cases synchrony can be improved by vigilant adjustment by the managing clinician. However, in most institutions clinicians are not able to spend the time necessary to ensure synchrony in all patients. Proportional assist ventilation (PAV) and neurally adjusted ventilatory assist (NAVA) were both developed to improve patient-ventilator synchrony by proportionally unloading ventilatory effort and turning control of the ventilatory pattern over to the patient. This paper discusses PAV's and NAVA's theory of operation, general process of application, and the supporting literature.
Be aware of the Adjusted Treatment Index.
Langford, Melvyn
2015-10-01
The authors of the interim report relating to the Review of Operational Productivity in NHS providers, published in June of this year, are, as many will know, developing a set of Adjusted Treatment Index (ATI) metrics, and are also to publish a model of their interpretation of what an estates department should look like in terms of its operational productivity and cost. This article argues that the underlying reason for past failures was the creation of static 'point-value' metrics similar to the proposed ATIs. It argues that this can only be overcome by designing and populating a series of non-linear dynamic simulation models, with feedback control, of an organisation's estate in relation to its asset base and condition over time, together with the resultant capital and revenue consequences. It concludes by calling on IHEEM's Council to urgently make representations to the authors of the June 2015 report, and suggests that the Institute's members be fully involved in the design, testing, and interpretation of the estates model and ATIs. IHEEM's Technology Platforms are ideally placed to play a central role in this. PMID:26750025
Adjustable Josephson Coupler for Transmon Qubit Measurement
NASA Astrophysics Data System (ADS)
Jeffrey, Evan
2015-03-01
Transmon qubits are measured via a dispersive interaction with a linear resonator. To be scalable, this measurement must be fast and accurate, and must not disrupt the state of the qubit. Speed is of particular importance in a scalable architecture with error correction, as the measurement accounts for a substantial portion of the cycle time, and the waiting time associated with measurement is a major source of decoherence. We have found that measurement speed and accuracy can be improved by driving the qubit beyond the critical photon number n_crit = Δ²/4g² by a factor of 2-3 without compromising the QND nature of the measurement. While such a strong drive is expected to cause qubit state transitions, we find that as long as the readout is sufficiently fast, those transitions are negligible; they grow rapidly with time, however, and are not described by a simple rate. Measuring in this regime requires parametric amplifiers with very high saturation power, on the order of -105 dBm, to avoid losing SNR as the power is increased. It also requires a Purcell filter to allow fast ring-up and ring-down. Adjustable couplers can be used to further increase the measurement performance, by switching the dispersive interaction on and off much faster than the cavity ring-down time. This technique can also be used to investigate the dynamics of the qubit-cavity interaction beyond the weak dispersive limit n_cavity ≥ n_crit, which is not easily accessible to standard dispersive measurement due to the cavity time constant.
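The critical photon number referenced above, n_crit = Δ²/4g², sets the photon scale at which the dispersive approximation breaks down. A minimal sketch of the arithmetic, with illustrative detuning and coupling values (the numbers are assumptions, not from the paper):

```python
def n_crit(delta_hz, g_hz):
    """Dispersive-regime critical photon number n_crit = Delta^2 / (4 g^2).

    delta_hz : qubit-resonator detuning Delta (Hz), assumed value in example
    g_hz     : qubit-resonator coupling strength g (Hz), assumed value in example
    """
    return (delta_hz / (2.0 * g_hz)) ** 2

# Illustrative parameters: Delta = 1 GHz, g = 100 MHz -> n_crit = 25 photons.
print(n_crit(1e9, 1e8))
```

Driving at 2-3 times this photon number, as the abstract describes, would then correspond to roughly 50-75 intracavity photons for these assumed parameters.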
Bounded Linear Stability Margin Analysis of Nonlinear Hybrid Adaptive Control
NASA Technical Reports Server (NTRS)
Nguyen, Nhan T.; Boskovic, Jovan D.
2008-01-01
This paper presents a bounded linear stability analysis for a hybrid adaptive control that blends both direct and indirect adaptive control. Stability and convergence of nonlinear adaptive control are analyzed using an approximate linear equivalent system. A stability margin analysis shows that a large adaptive gain can lead to a reduced phase margin. This method can enable metrics-driven adaptive control whereby the adaptive gain is adjusted to meet stability margin requirements.
Kim, Choong; Lee, Kangsun; Kim, Jong Hyun; Shin, Kyeong Sik; Lee, Kyu-Jung; Kim, Tae Song; Kang, Ji Yoon
2008-03-01
In this paper, we propose a serial dilution microfluidic chip that can generate logarithmic or linear step-wise concentrations. These concentrations are generated by adjusting the flow rates of two converging fluids at the channel junctions of a ladder network. The desired dilution ratios are almost independent of the overall flow rate or the diffusion length of the molecules, as the dilution device depends only on the ratio of volumetric flow rates. Given a set of required dilution ratios, whether linear or logarithmic, a serial dilution chip can be constructed by modifying the microfluidic resistance network. The design principle is presented, and both logarithmic and linear dilution chips were fabricated to verify their performance against fluorescence intensity. The diluted concentrations of a fluorescein solution in the microfluidic device showed relatively high linearity, and a cytotoxicity test of MCF-7 breast cancer cells using the logarithmic dilution chip was generally consistent with the results generated by manual dilution.
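The flow-rate-ratio principle above can be sketched directly: each junction's output concentration is the volumetric-flow-weighted average of its two inputs, so repeating a fixed sample-to-buffer flow ratio across the ladder stages yields a logarithmic dilution series. A minimal sketch (function names and flow values are illustrative assumptions, not from the paper):

```python
def mix(c1, q1, c2, q2):
    """Concentration after two streams merge at a channel junction.

    Depends only on the ratio of volumetric flow rates q1:q2,
    not on their absolute values.
    """
    return (c1 * q1 + c2 * q2) / (q1 + q2)

def serial_dilution(c0, q_sample, q_buffer, stages):
    """Dilution series from repeated junctions with a fixed flow-rate ratio.

    A constant ratio per stage gives a logarithmic (geometric) series;
    varying the ratios stage by stage would give a linear series instead.
    """
    concs = [c0]
    for _ in range(stages):
        concs.append(mix(concs[-1], q_sample, 0.0, q_buffer))
    return concs

# Sample:buffer flow ratio 1:9 -> tenfold dilution per stage: 1, 0.1, 0.01, 0.001
print(serial_dilution(1.0, 1.0, 9.0, 3))
```

Note that doubling both flow rates leaves every concentration unchanged, which mirrors the abstract's claim that the dilution ratios are set by the flow-rate ratio alone.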