Linearly Adjustable International Portfolios
Fonseca, R. J.; Kuhn, D.; Rustem, B.
2010-09-30
We present an approach to multi-stage international portfolio optimization based on the imposition of a linear structure on the recourse decisions. Multiperiod decision problems are traditionally formulated as stochastic programs. Scenario-tree-based solutions, however, can become intractable as the number of stages increases. By restricting the space of decision policies to linear rules, we obtain a conservative, tractable approximation to the original problem. Local asset prices and foreign exchange rates are modelled separately, which allows for a direct measure of their impact on the final portfolio value.
Eberly, Lynn E
2007-01-01
This chapter describes multiple linear regression, a statistical approach used to describe the simultaneous associations of several variables with one continuous outcome. Important steps in using this approach include estimation and inference, variable selection in model building, and assessing model fit. The special cases of regression with interactions among the variables, polynomial regression, regressions with categorical (grouping) variables, and separate slopes models are also covered. Examples in microbiology are used throughout. PMID:18450050
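The estimation and fit-assessment steps the chapter covers can be sketched in a few lines. The data below are synthetic and purely illustrative, not taken from the chapter's microbiology examples:

```python
import numpy as np

# Hypothetical data: one continuous outcome, two predictors.
rng = np.random.default_rng(0)
n = 200
x1 = rng.normal(size=n)
x2 = rng.normal(size=n)
y = 1.0 + 2.0 * x1 - 0.5 * x2 + rng.normal(scale=0.1, size=n)

# Design matrix with an intercept column; least-squares estimation.
X = np.column_stack([np.ones(n), x1, x2])
beta, _, _, _ = np.linalg.lstsq(X, y, rcond=None)

# Assessing model fit: coefficient of determination R^2.
resid = y - X @ beta
r2 = 1 - resid @ resid / ((y - y.mean()) @ (y - y.mean()))
print(beta.round(2), round(r2, 3))
```

The fitted coefficients recover the generating values (1.0, 2.0, -0.5) up to noise, and R^2 is close to 1 because the error scale is small relative to the signal.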
Multiple comparisons for survival data with propensity score adjustment
Zhu, Hong; Lu, Bo
2015-01-01
This article considers the practical problem in clinical and observational studies where multiple treatment or prognostic groups are compared and the observed survival data are subject to right censoring. Two possible formulations of multiple comparisons are suggested. Multiple Comparisons with a Control (MCC) compares every other group to a control group with respect to survival outcomes, to determine which groups are associated with lower risk than the control. Multiple Comparisons with the Best (MCB) compares each group to the truly minimum-risk group and identifies the groups that have either the minimum risk or practically the minimum risk. To make a causal statement, potential confounding effects need to be adjusted for in the comparisons. Propensity-score-based adjustment is popular in causal inference and can effectively reduce confounding bias. Based on a propensity-score-stratified Cox proportional hazards model, the MCC tests and MCB simultaneous confidence intervals developed for general linear models with normal errors are extended to survival outcomes. This paper specifies the assumptions for causal inference on survival outcomes within a potential outcomes framework, develops testing procedures for multiple comparisons and provides simultaneous confidence intervals. The proposed methods are applied to two real data sets from cancer studies for illustration, and a simulation study is also presented. PMID:25663729
Practical Session: Multiple Linear Regression
NASA Astrophysics Data System (ADS)
Clausel, M.; Grégoire, G.
2014-12-01
Three exercises are proposed to illustrate simple linear regression. The first investigates the influence of several factors on atmospheric pollution; it was proposed by D. Chessel and A.B. Dufour in Lyon 1 (see Sect. 6 of http://pbil.univ-lyon1.fr/R/pdf/tdr33.pdf) and is based on data from 20 U.S. cities. Exercise 2 is an introduction to model selection, whereas Exercise 3 provides a first example of analysis of variance. Exercises 2 and 3 were proposed by A. Dalalyan at ENPC (see Exercises 2 and 3 of http://certis.enpc.fr/~dalalyan/Download/TP_ENPC_5.pdf).
Multiple Linear Regression: A Realistic Reflector.
ERIC Educational Resources Information Center
Nutt, A. T.; Batsell, R. R.
Examples of the use of Multiple Linear Regression (MLR) techniques are presented. This is done to show how MLR aids data processing and decision-making by providing the decision-maker with freedom in phrasing questions and by accurately reflecting the data on hand. A brief overview of the rationale underlying MLR is given, some basic definitions…
NASA Astrophysics Data System (ADS)
Kheloufi, N.; Kahlouche, S.; Lamara, R. Ait Ahmed
2009-04-01
The resolution of MREs (Multiple Regression Equations) is an important tool for fitting different geodetic networks. Nevertheless, in various fields of engineering and earth science, certain cases demand more accuracy, and ordinary least squares (linear least squares) proves to be limited. We therefore have to use new numerical methods of resolution that can provide greater efficiency in polynomial modelling. In geodesy, the accuracy of coordinate determination and network adjustment is very important, which is why, instead of being limited to linear models, we have to apply non-linear least squares to the transformation problem between geodetic systems. This need appears especially in the case of the Nord-Sahara datum (Algeria), for which linear models are not very appropriate because of the lack of information about the geoid's undulation. In this paper, our main aim is to demonstrate the importance of using non-linear least squares to improve the quality of geodetic adjustment and coordinate transformation, and the extent of its applicability. The algorithms implemented apply two models: a three-dimensional one (global transformation) and a two-dimensional one (local transformation) over a huge area (Algeria). We compute coordinate transformation parameters and their RMS values both by ordinary least squares and by the new algorithms, then perform a statistical analysis to compare the linear adjustment, in its two variants (local and global), with the non-linear one. In this context, a set of 16 benchmarks has been integrated to compute the transformation parameters (3D and 2D). Different non-linear optimization algorithms (the Newton algorithm, Steepest Descent, and Levenberg-Marquardt) have been implemented to solve the transformation problem. Conclusions and recommendations are given with respect to the suitability, accuracy and efficiency of each method. Key words: MREs, Nord Sahara, global
Multiple linear regression for isotopic measurements
NASA Astrophysics Data System (ADS)
Garcia Alonso, J. I.
2012-04-01
There are two typical applications of isotopic measurements: the detection of natural variations in isotopic systems and the detection of man-made variations using enriched isotopes as indicators. Both types of measurement require accurate and precise isotope ratio measurements. For the so-called non-traditional stable isotopes, multicollector ICP-MS instruments are usually applied. In many cases, chemical separation procedures are required before accurate isotope measurements can be performed. The off-line separation of Rb and Sr, or of Nd and Sm, is the classical procedure employed to eliminate isobaric interferences before multicollector ICP-MS measurement of Sr and Nd isotope ratios. This procedure also provides matrix separation, so that precise and accurate Sr and Nd isotope ratios can be obtained. In our laboratory we have evaluated the separation of Rb-Sr and Nd-Sm isobars by liquid chromatography with on-line multicollector ICP-MS detection. The combination of this chromatographic procedure with multiple linear regression of the raw chromatographic data resulted in Sr and Nd isotope ratios with precisions and accuracies typical of off-line sample preparation procedures. On the other hand, methods for labelling individual organisms (such as a given plant, fish or animal) are required for population studies. We have developed a dual isotope labelling procedure which can be unique to a given individual, can be inherited by living organisms, and is stable. The detection of the isotopic signature is also based on multiple linear regression. The labelling of fish and its detection in otoliths by laser ablation ICP-MS will be discussed using trout and salmon as examples. In conclusion, isotope measurement procedures based on multiple linear regression can be a viable alternative in multicollector ICP-MS measurements.
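The regression step described above can be illustrated on synthetic data: a mixed chromatographic signal is regressed on two assumed pure-component elution profiles to recover each component's contribution despite the overlap. The profiles and amplitudes below are hypothetical, not the authors' actual Rb-Sr data:

```python
import numpy as np

# Hypothetical chromatogram: two partially overlapping Gaussian elution
# profiles (e.g., an Rb-bearing and an Sr-bearing component at one mass).
t = np.linspace(0, 10, 500)
profile_a = np.exp(-0.5 * ((t - 4.0) / 0.6) ** 2)  # component A
profile_b = np.exp(-0.5 * ((t - 5.2) / 0.6) ** 2)  # component B

true_a, true_b = 3.0, 1.5
rng = np.random.default_rng(1)
signal = true_a * profile_a + true_b * profile_b \
    + rng.normal(scale=0.01, size=t.size)

# Multiple linear regression of the raw signal on the component profiles
# recovers each component's contribution even though the peaks overlap.
X = np.column_stack([profile_a, profile_b])
coef, _, _, _ = np.linalg.lstsq(X, signal, rcond=None)
print(coef.round(2))
```

Because the two profiles are only moderately correlated, the least-squares fit separates the contributions cleanly even with measurement noise.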
Social-psychological adjustment to multiple sclerosis. A longitudinal study.
Brooks, N A; Matson, R R
1982-01-01
This study employs a longitudinal design to analyze the adjustment process of 103 people diagnosed with multiple sclerosis and in the middle and later stages of their illness careers. The mean age of the sample at Time 2 is 52 years, and mean duration since diagnosis is 17 years. A highly reliable self concept measure is the indicator of adjustment and changes in adjustment from T1 (1974) to T2 (1981). Four sets of variables are analyzed in their relationship to adjustment: (1) socio-demographic; (2) disease-related; (3) medical; and (4) social-psychological. Females are more likely than males to show positive adjustment (improving self concepts). Hours of employment and living arrangement are also related to the adjustment process. The vast majority of respondents show only slight decline in mobility, but among the disease related variables, number of episodes (exacerbations) in past seven years is the strongest predictor of changes in adjustment. Nearly half the respondents seek medical attention for their M.S. once a year or less, and the choice of health care professional is related to changes in the course of the disease. Subjects with an internal locus of control have more positive adjustment scores. Those who say they cope through acceptance of the disease show improvements in self concept while those reporting religion or family as major coping strategies have decreasing self concepts. Results indicate that the majority make satisfactory adjustment as indicated by maintenance of positive self concepts over the 7 year period, although the disease is chronic and progressive. For patients in the middle and later stages of illness careers, the data suggest comprehensive rehabilitation efforts that enhance autonomy and develop the social-psychological resources of the lifestyle. PMID:7157043
A General Linear Model Approach to Adjusting the Cumulative GPA.
ERIC Educational Resources Information Center
Young, John W.
A general linear model (GLM), using least-squares techniques, was used to develop a criterion measure to replace freshman year grade point average (GPA) in college admission predictive validity studies. Problems with the use of GPA include those associated with the combination of grades from different courses and disciplines into a single measure,…
A Constrained Linear Estimator for Multiple Regression
ERIC Educational Resources Information Center
Davis-Stober, Clintin P.; Dana, Jason; Budescu, David V.
2010-01-01
"Improper linear models" (see Dawes, Am. Psychol. 34:571-582, "1979"), such as equal weighting, have garnered interest as alternatives to standard regression models. We analyze the general circumstances under which these models perform well by recasting a class of "improper" linear models as "proper" statistical models with a single predictor. We…
ERIC Educational Resources Information Center
Rodgers, Jennifer; Calder, Peter
1990-01-01
Examined relationship of marital adjustment and level of disability of persons with multiple sclerosis (n=104) to emotional adjustment. Found emotional adjustment significantly related to perceived level of marital adjustment, but no relationship found for level of disability. Results suggest, although marital adjustment is important for emotional…
Adjusting for matching and covariates in linear discriminant analysis
Asafu-Adjei, Josephine K.; Sampson, Allan R.; Sweet, Robert A.; Lewis, David A.
2013-01-01
In studies that compare several diagnostic or treatment groups, subjects may not only be measured on a certain set of feature variables, but also be matched on a number of demographic characteristics and measured on additional covariates. Linear discriminant analysis (LDA) is sometimes used to identify which feature variables best discriminate among groups, while accounting for the dependencies among the feature variables. We present a new approach to LDA for multivariate normal data that accounts for the subject matching used in a particular study design, as well as covariates not used in the matching. Applications are given for post-mortem tissue data with the aim of comparing neurobiological characteristics of subjects with schizophrenia with those of normal controls, and for a post-mortem tissue primate study comparing brain biomarker measurements across three treatment groups. We also investigate the performance of our approach using a simulation study. PMID:23640791
Conflict adjustment through domain-specific multiple cognitive control mechanisms.
Kim, Chobok; Chung, Chongwook; Kim, Jeounghoon
2012-03-20
Cognitive control is required to regulate conflict between relevant and irrelevant information. Although previous neuroimaging studies have focused on response conflict, recent studies suggested that distinct neural networks are recruited in regulating perceptual conflict. The aim of the current study was to distinguish between brain areas involved in detecting and regulating perceptual conflict using a conflict adjustment paradigm. The Stroop color-matching task was combined with an arrow version of the Stroop task in order to independently manipulate perceptual and response conflicts. Behavioral results showed that post-conflict adjustment for perceptual and response conflicts were independent from each other. Imaging results demonstrated that the caudal portion of the dorsal cingulate cortex (cdACC) was selectively associated with the occurrence of perceptual conflict, whereas the left dorsal portion of the premotor cortex (pre-PMd) was selectively associated with both preceding and current perceptual conflict trials. Furthermore, the rostral portion of the dorsal cingulate cortex (rdACC) was selectively linked with response conflict, whereas the left dorsolateral prefrontal cortex (DLPFC) was selectively involved in both preceding and current response conflict trials. We suggest that cdACC is involved in detecting perceptual conflict and left pre-PMd is involved in regulating perceptual conflict, which is analogous to the recruitment of rdACC and left DLPFC in control processes for response conflict. Our findings provide support for the hypothesis that multiple independent monitor-controller loops are implemented in the frontal cognitive control system. PMID:22305142
MULTIVARIATE LINEAR MIXED MODELS FOR MULTIPLE OUTCOMES. (R824757)
We propose a multivariate linear mixed model (MLMM) for the analysis of multiple outcomes, which generalizes the latent variable model of Sammel and Ryan. The proposed model assumes a flexible correlation structure among the multiple outcomes and allows a global test of the impact of ...
Estimating Statistical Power When Making Adjustments for Multiple Tests
ERIC Educational Resources Information Center
Porter, Kristin E.
2016-01-01
In recent years, there has been increasing focus on the issue of multiple hypotheses testing in education evaluation studies. In these studies, researchers are typically interested in testing the effectiveness of an intervention on multiple outcomes, for multiple subgroups, at multiple points in time or across multiple treatment groups. When…
Planned Hypothesis Tests Are Not Necessarily Exempt from Multiplicity Adjustment
ERIC Educational Resources Information Center
Frane, Andrew V.
2015-01-01
Scientific research often involves testing more than one hypothesis at a time, which can inflate the probability that a Type I error (false discovery) will occur. To prevent this Type I error inflation, adjustments can be made to the testing procedure that compensate for the number of tests. Yet many researchers believe that such adjustments are…
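The Type I error inflation described above, and the effect of a simple per-test adjustment, can be computed directly. This assumes independent tests, and uses the standard Bonferroni correction as an illustration rather than any specific adjustment from the article:

```python
# Familywise Type I error for m independent tests at level alpha,
# without adjustment and with the Bonferroni correction alpha/m.
alpha, m = 0.05, 10
fwer_unadjusted = 1 - (1 - alpha) ** m
fwer_bonferroni = 1 - (1 - alpha / m) ** m
print(round(fwer_unadjusted, 3), round(fwer_bonferroni, 3))  # → 0.401 0.049
```

With ten tests, the chance of at least one false discovery quadruples from the nominal 5% to about 40%, while the adjusted procedure keeps it just under 5%.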
Covariate-Adjusted Linear Mixed Effects Model with an Application to Longitudinal Data
Nguyen, Danh V.; Şentürk, Damla; Carroll, Raymond J.
2009-01-01
Linear mixed effects (LME) models are useful for longitudinal data/repeated measurements. We propose a new class of covariate-adjusted LME models for longitudinal data that nonparametrically adjusts for a normalizing covariate. The proposed approach involves fitting a parametric LME model to the data after adjusting for the nonparametric effects of a baseline confounding covariate. In particular, the effect of the observable covariate on the response and predictors of the LME model is modeled nonparametrically via smooth unknown functions. In addition to covariate-adjusted estimation of fixed/population parameters and random effects, an estimation procedure for the variance components is also developed. Numerical properties of the proposed estimators are investigated with simulation studies. The consistency and convergence rates of the proposed estimators are also established. An application to a longitudinal data set on calcium absorption, accounting for baseline distortion from body mass index, illustrates the proposed methodology. PMID:19266053
Multiple Linear Regression as a Technique for Predicting College Enrollment.
ERIC Educational Resources Information Center
Clegg, Ambrose A.; And Others
The application of multiple linear regression to the problem of identifying appropriate criterion variables and predicting enrollment in college courses during a period of major rapid decline was studied. Data were gathered on course enrollments for 1972-78 at Kent State University, and five independent variables were selected to determine the…
Effect of Multiple Testing Adjustment in Differential Item Functioning Detection
ERIC Educational Resources Information Center
Kim, Jihye; Oshima, T. C.
2013-01-01
In a typical differential item functioning (DIF) analysis, a significance test is conducted for each item. As a test consists of multiple items, such multiple testing may increase the possibility of making a Type I error at least once. The goal of this study was to investigate how to control a Type I error rate and power using adjustment…
Adjustable Permanent Quadrupoles Using Rotating Magnet Material Rods for the Next Linear Collider
James T Volk et al.
2001-09-24
The proposed Next Linear Collider (NLC) will require over 1400 adjustable quadrupoles between the main linacs' accelerator structures. These 12.7 mm bore quadrupoles will have a range of integrated strength from 0.6 to 132 Tesla, with a maximum gradient of 135 Tesla per meter, an adjustment range of +0-20% and effective lengths from 324 mm to 972 mm. The magnetic center must remain stable to within 1 micrometer during the 20% adjustment. In an effort to reduce estimated costs and increase reliability, several designs using hybrid permanent magnets have been developed. All magnets have iron poles and use either Samarium Cobalt or Neodymium Iron to provide the magnetic fields. Two prototypes use rotating rods containing permanent magnetic material to vary the gradient. Gradient changes of 20% and center shifts of less than 20 microns have been measured. These data are compared to an equivalent electromagnet prototype.
Zhao, Yingfeng; Liu, Sanyang
2016-01-01
We present a practical branch and bound algorithm for globally solving the generalized linear multiplicative programming problem with multiplicative constraints. To solve the problem, a relaxation programming problem, equivalent to a linear program, is proposed by utilizing a new two-phase relaxation technique. In the algorithm, lower and upper bounds are obtained simultaneously by solving linear relaxation programming problems. Global convergence has been proved, and results on some sample examples and a small random experiment show that the proposed algorithm is feasible and efficient. PMID:27547676
The information presented in this user's guide is directed to air pollution scientists interested in applying air quality simulation models. MPTER is the designation for Multiple Point source algorithm with TERrain adjustments. This algorithm is useful for estimating air quality ...
A consistent local linear estimator of the covariate adjusted correlation coefficient
Nguyen, Danh V.; Şentürk, Damla
2009-01-01
Consider the correlation between two random variables (X, Y), both not directly observed. One only observes X̃ = φ1(U)X + φ2(U) and Ỹ = ψ1(U)Y + ψ2(U), where all four functions {φl(·),ψl(·), l = 1, 2} are unknown/unspecified smooth functions of an observable covariate U. We consider consistent estimation of the correlation between the unobserved variables X and Y, adjusted for the above general dual additive and multiplicative effects of U, based on the observed data (X̃, Ỹ, U). PMID:21720454
Galerkin projection methods for solving multiple related linear systems
Chan, T.F.; Ng, M.; Wan, W.L.
1996-12-31
We consider using Galerkin projection methods for solving multiple related linear systems A{sup (i)}x{sup (i)} = b{sup (i)} for 1 {le} i {le} s, where A{sup (i)} and b{sup (i)} are different in general. We start with the special case where A{sup (i)} = A and A is symmetric positive definite. The method generates a Krylov subspace from a set of direction vectors obtained by solving one of the systems, called the seed system, by the CG method and then projects the residuals of other systems orthogonally onto the generated Krylov subspace to get the approximate solutions. The whole process is repeated with another unsolved system as a seed until all the systems are solved. We observe in practice a super-convergence behaviour of the CG process of the seed system when compared with the usual CG process. We also observe that only a small number of restarts is required to solve all the systems if the right-hand sides are close to each other. These two features together make the method particularly effective. In this talk, we give theoretical proof to justify these observations. Furthermore, we combine the advantages of this method and the block CG method and propose a block extension of this single seed method. The above procedure can actually be modified for solving multiple linear systems A{sup (i)}x{sup (i)} = b{sup (i)}, where A{sup (i)} are now different. We can also extend the previous analytical results to this more general case. Applications of this method to multiple related linear systems arising from image restoration and recursive least squares computations are considered as examples.
A scalable parallel algorithm for multiple objective linear programs
NASA Technical Reports Server (NTRS)
Wiecek, Malgorzata M.; Zhang, Hong
1994-01-01
This paper presents an ADBASE-based parallel algorithm for solving multiple objective linear programs (MOLP's). Job balance, speedup and scalability are of primary interest in evaluating the efficiency of the new algorithm. Implementation results on Intel iPSC/2 and Paragon multiprocessors show that the algorithm significantly speeds up the process of solving MOLP's, which is understood as generating all or some efficient extreme points and unbounded efficient edges. The algorithm gives especially good results for large and very large problems. Motivation and justification for solving such large MOLP's are also included.
Adjusted p-values for SGoF multiple test procedure.
Castro-Conde, Irene; de Uña-Álvarez, Jacobo
2015-01-01
In the field of multiple comparison procedures, adjusted p-values are an important tool to evaluate the significance of a test statistic while taking the multiplicity into account. In this paper, we introduce adjusted p-values for the recently proposed Sequential Goodness-of-Fit (SGoF) multiple test procedure by letting the level of the test vary on the unit interval. This extends previous research on the SGoF method, which is a method of high interest when one aims to increase the statistical power in a multiple testing scenario. The adjusted p-value is the smallest level at which the SGoF procedure would still reject the given null hypothesis, while controlling for the multiplicity of tests. The main properties of the adjusted p-values are investigated. In particular, we show that they are a subset of the original p-values, being equal to 1 for p-values above a certain threshold. These are very useful properties from a numerical viewpoint, since they allow for a simplified method to compute the adjusted p-values. We introduce a modification of the SGoF method, termed majorant version, which rejects the null hypotheses with adjusted p-values below the level. This modification rejects more null hypotheses as the level increases, something which is not in general the case for the original SGoF. Adjusted p-values for the conservative version of the SGoF procedure, which estimates the variance without assuming that all the null hypotheses are true, are also included. The situation with ties among the p-values is discussed too. Several real data applications are investigated to illustrate the practical usage of adjusted p-values, ranging from a small to a large number of tests. PMID:25323102
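The defining property of an adjusted p-value stated above (the smallest level at which the procedure would still reject the given hypothesis) can be illustrated with the standard Holm step-down adjustment; this is a generic example of the concept, not the SGoF computation itself:

```python
# Holm step-down adjusted p-values: a generic illustration of the
# adjusted p-value concept (not the SGoF procedure).
def holm_adjust(pvals):
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    adjusted = [0.0] * m
    running_max = 0.0
    for rank, i in enumerate(order):
        # Multiply the rank-th smallest p-value by (m - rank) and enforce
        # monotonicity so smaller p-values never get larger adjustments.
        running_max = max(running_max, (m - rank) * pvals[i])
        adjusted[i] = min(1.0, running_max)
    return adjusted

print([round(p, 3) for p in holm_adjust([0.001, 0.02, 0.04, 0.30])])
# → [0.004, 0.06, 0.08, 0.3]
```

Comparing each adjusted p-value to the level (e.g. 0.05) reproduces exactly the rejections of the step-down procedure, which is the property the SGoF-adjusted p-values in the article are constructed to share.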
High speed web printing inspection with multiple linear cameras
NASA Astrophysics Data System (ADS)
Shi, Hui; Yu, Wenyong
2011-12-01
Purpose: To detect defects arising during high-speed web printing, such as smudges, doctor streaks, pin holes, character misprints, foreign matter, hazing and wrinkles, which are the main factors affecting the quality of printed matter. Methods: A novel machine vision system is used to detect the defects. This system combines distributed data processing with multiple linear cameras, an effective anti-blooming illumination design and a fast image processing algorithm based on blob searching. Pattern matching adapted to paper tension and snaking motion is also emphasized. Results: Experimental results verify the speed, reliability and accuracy of the proposed system, with which most of the main defects are inspected in real time at speeds of 300 m/min. Conclusions: High-speed quality inspection of a large-size web requires multiple linear cameras forming a distributed data processing system. The material characteristics of the printed matter should also be considered when designing the optical structure, so that tiny web defects can be inspected with varying angles of illumination.
Direction of Effects in Multiple Linear Regression Models.
Wiedermann, Wolfgang; von Eye, Alexander
2015-01-01
Previous studies analyzed asymmetric properties of the Pearson correlation coefficient using higher than second order moments. These asymmetric properties can be used to determine the direction of dependence in a linear regression setting (i.e., establish which of two variables is more likely to be on the outcome side) within the framework of cross-sectional observational data. Extant approaches are restricted to the bivariate regression case. The present contribution extends the direction of dependence methodology to a multiple linear regression setting by analyzing distributional properties of residuals of competing multiple regression models. It is shown that, under certain conditions, the third central moments of estimated regression residuals can be used to decide upon direction of effects. In addition, three different approaches for statistical inference are discussed: a combined D'Agostino normality test, a skewness difference test, and a bootstrap difference test. Type I error and power of the procedures are assessed using Monte Carlo simulations, and an empirical example is provided for illustrative purposes. In the discussion, issues concerning the quality of psychological data, possible extensions of the proposed methods to the fourth central moment of regression residuals, and potential applications are addressed. PMID:26609741
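The residual-skewness idea above can be sketched on simulated data: when the true predictor is non-normal and the error is normal, residuals from the correctly specified direction are roughly symmetric, while residuals from the reversed regression inherit skewness. The generating model below is hypothetical:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 20000
x = rng.exponential(size=n)                   # skewed "cause"
y = 0.8 * x + rng.normal(scale=0.5, size=n)   # normal error

def resid_third_moment(pred, out):
    # Standardized third central moment of simple-regression residuals.
    slope, intercept = np.polyfit(pred, out, 1)
    r = out - (slope * pred + intercept)
    return np.mean(((r - r.mean()) / r.std()) ** 3)

skew_xy = resid_third_moment(x, y)  # correct direction: x -> y
skew_yx = resid_third_moment(y, x)  # reversed direction: y -> x
print(abs(skew_xy) < abs(skew_yx))
```

The correct direction leaves residuals that are essentially the normal error term (skewness near zero), while the reversed regression's residuals mix in the skewed predictor, which is the asymmetry the tests in the article exploit.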
Precipitation interpolation in mountainous regions using multiple linear regression
Hay, L.; Viger, R.; McCabe, G.
1998-01-01
Multiple linear regression (MLR) was used to spatially interpolate precipitation for simulating runoff in the Animas River basin of southwestern Colorado. MLR equations were defined for each time step using measured precipitation as dependent variables. Explanatory variables used in each MLR were derived for the dependent variable locations from a digital elevation model (DEM) using a geographic information system. The same explanatory variables were defined for a 5 × 5 km grid of the DEM. For each time step, the best MLR equation was chosen and used to interpolate precipitation onto the 5 × 5 km grid. The gridded values of precipitation provide a physically-based estimate of the spatial distribution of precipitation and result in reliable simulations of daily runoff in the Animas River basin.
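The per-time-step scheme above can be sketched with synthetic gauge data; the predictors, coefficients and grid points below are hypothetical stand-ins for the study's DEM-derived variables:

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical gauge network: elevation (km) and slope as DEM-derived
# predictors at the dependent-variable (gauge) locations.
n_gauges = 20
elev = rng.uniform(1.5, 3.5, size=n_gauges)
slope = rng.uniform(0.0, 0.3, size=n_gauges)

# One time step of measured precipitation (mm), increasing with elevation.
precip = 2.0 + 4.0 * elev + 3.0 * slope + rng.normal(scale=0.3, size=n_gauges)

# Fit the MLR for this time step...
X = np.column_stack([np.ones(n_gauges), elev, slope])
beta, _, _, _ = np.linalg.lstsq(X, precip, rcond=None)

# ...then interpolate onto grid cells using the same DEM-derived predictors.
grid_elev = np.array([2.0, 2.5, 3.0])
grid_slope = np.array([0.1, 0.1, 0.2])
grid_precip = beta[0] + beta[1] * grid_elev + beta[2] * grid_slope
print(grid_precip.round(1))
```

In the study this fit would be repeated at every time step, with the best equation selected each time, so the gridded field tracks the changing spatial pattern of each storm.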
Modeling Pan Evaporation for Kuwait by Multiple Linear Regression
Almedeij, Jaber
2012-01-01
Evaporation is an important parameter for many projects related to hydrology and water resources systems. This paper constitutes the first study conducted in Kuwait to obtain empirical relations for the estimation of daily and monthly pan evaporation as functions of available meteorological data: temperature, relative humidity, and wind speed. The data used here for the modeling are daily measurements with substantially continuous coverage over a period of 17 years, between January 1993 and December 2009, which can be considered representative of the desert climate of the urban zone of the country. The multiple linear regression technique is used with a variable selection procedure for fitting the best model forms. The correlations of evaporation with temperature and relative humidity are also transformed, using power and exponential functions respectively, in order to linearize the existing curvilinear patterns in the data. The evaporation models suggested with the best variable combinations were shown to produce results in reasonable agreement with observed values. PMID:23226984
The Impact of Parental Multiple Sclerosis on the Adjustment of Children and Adolescents.
ERIC Educational Resources Information Center
De Judicibus, Margaret A.; McCabe, Marita P.
2004-01-01
Thirty-one parents with multiple sclerosis (MS) participated in a study to investigate the adjustment of their children, 24 boys and 24 girls aged 4 to 16 years. The majority of parents believed that their illness had an effect on their children. The perception of parents regarding their children's problems in the areas of emotions, concentration,…
48 CFR 552.216-70 - Economic Price Adjustment-FSS Multiple Award Schedule Contracts.
Code of Federal Regulations, 2014 CFR
2014-10-01
... 48 Federal Acquisition Regulations System 4 2014-10-01 2014-10-01 false Economic Price Adjustment... Text of Provisions and Clauses 552.216-70 Economic Price Adjustment—FSS Multiple Award Schedule Contracts. As prescribed in 516.203-4(a), insert the following clause: Economic Price...
48 CFR 552.216-70 - Economic Price Adjustment-FSS Multiple Award Schedule Contracts.
Code of Federal Regulations, 2013 CFR
2013-10-01
... 48 Federal Acquisition Regulations System 4 2013-10-01 2013-10-01 false Economic Price Adjustment... Text of Provisions and Clauses 552.216-70 Economic Price Adjustment—FSS Multiple Award Schedule Contracts. As prescribed in 516.203-4(a), insert the following clause: Economic Price...
Couple Coping and Adjustment to Multiple Sclerosis in Care Receiver-Carer Dyads.
ERIC Educational Resources Information Center
Pakenham, Kenneth I.
1998-01-01
The utility of "coping congruency" and "average level of couple coping" in explaining adjustment to multiple sclerosis was examined. Interview and questionnaire data was collected for 45 dyads with a 12-month follow-up. Predictors include Time 1 illness, caregiving, and coping variables. Findings support both concepts for explaining collective and…
48 CFR 552.216-70 - Economic Price Adjustment-FSS Multiple Award Schedule Contracts.
Code of Federal Regulations, 2012 CFR
2012-10-01
... 48 Federal Acquisition Regulations System 4 2012-10-01 2012-10-01 false Economic Price Adjustment... Text of Provisions and Clauses 552.216-70 Economic Price Adjustment—FSS Multiple Award Schedule Contracts. As prescribed in 516.203-4(a), insert the following clause: Economic Price...
48 CFR 552.216-70 - Economic Price Adjustment-FSS Multiple Award Schedule Contracts.
Code of Federal Regulations, 2010 CFR
2010-10-01
... 48 Federal Acquisition Regulations System 4 2010-10-01 2010-10-01 false Economic Price Adjustment... Text of Provisions and Clauses 552.216-70 Economic Price Adjustment—FSS Multiple Award Schedule Contracts. As prescribed in 516.203-4(a), insert the following clause: Economic Price...
2009-01-01
Background Multiple Sclerosis (MS) is an incurable, chronic, potentially progressive and unpredictable disease of the central nervous system. The disease produces a range of unpleasant and debilitating symptoms, which can have a profound impact including disrupting activities of daily living, employment, income, relationships, social and leisure activities, and life goals. Adjusting to the illness is therefore particularly challenging. This trial tests the effectiveness of a Cognitive Behavioural intervention compared to Supportive Listening to assist adjustment in the early stages of MS. Methods/Design This is a two-arm randomized multi-centre parallel-group controlled trial. 122 consenting participants who meet eligibility criteria will be randomly allocated to receive either Cognitive Behavioural Therapy or Supportive Listening. Eight one-hour sessions of therapy (delivered over a period of 10 weeks) will be delivered by general nurses trained in both treatments. Self-report questionnaire data will be collected at baseline (0 weeks), mid-therapy (week 5 of therapy), post-therapy (15 weeks) and at six-month (26 weeks) and twelve-month (52 weeks) follow-up. Primary outcomes are distress and MS-related social and role impairment at twelve-month follow-up. Analysis will also consider predictors and mechanisms of change during therapy. In-depth interviews to examine participants' experiences of the interventions will be conducted with a purposively sampled sub-set of the trial participants. An economic analysis will also take place. Discussion This trial is distinctive in that it aims to aid adjustment to MS in a broad sense. It is not a treatment specifically for depression. Using nurses as therapists makes the interventions potentially viable for roll-out in the NHS. The trial benefits from incorporating patient input in the development and evaluation stages. The trial will provide important information about the efficacy, cost
Extreme inputs/outputs for multiple input multiple output linear systems.
Smallwood, David Ora
2005-09-01
A linear structure is excited at multiple points with a stationary normal random process. The response of the structure is measured at multiple outputs. If the autospectral densities of the inputs are specified, the phase relationships between the inputs are derived that will minimize or maximize the trace of the autospectral density matrix of the outputs. If the autospectral densities of the outputs are specified, the phase relationships between the outputs that will minimize or maximize the trace of the input autospectral density matrix are derived. It is shown that other phase relationships and ordinary coherence less than one will result in a trace intermediate between these extremes. Least favorable response and some classes of critical response are special cases of the development. It is shown that the derivation for stationary random waveforms can also be applied to nonstationary random, transients, and deterministic waveforms.
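The phase-dependence of the output trace described here can be checked numerically at a single frequency. Below is a minimal sketch (the 2x2 frequency response matrix H is invented, not taken from the report): with the input autospectra fixed at unity and the two inputs fully coherent, sweeping their relative phase moves the trace of the output spectral density matrix S_yy = H S_xx H^H between a minimum and a maximum, while the input autospectra themselves never change.

```python
import numpy as np

# Hypothetical 2-input, 2-output frequency response matrix at one frequency
# (invented values for illustration only).
H = np.array([[1.0 + 0.5j, 0.8],
              [0.3, 1.2 - 0.2j]])

def output_trace(phase):
    # Input spectral density matrix: unit autospectra on the diagonal,
    # fully coherent inputs with relative phase `phase` off-diagonal.
    Sxx = np.array([[1.0, np.exp(-1j * phase)],
                    [np.exp(1j * phase), 1.0]])
    Syy = H @ Sxx @ H.conj().T          # output spectral density matrix
    return np.real(np.trace(Syy))

# Sweeping the relative phase moves the output trace between two extremes;
# the specified input autospectra (the diagonal of Sxx) are held fixed.
phases = np.linspace(0, 2 * np.pi, 361)
traces = [output_trace(p) for p in phases]
print(min(traces), max(traces))
```

Intermediate phases (or coherence below one) give traces strictly between these two extremes, which is the result the abstract states.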
Multiple-Input Multiple-Output (MIMO) Linear Systems Extreme Inputs/Outputs
Smallwood, David O.
2007-01-01
A linear structure is excited at multiple points with a stationary normal random process. The response of the structure is measured at multiple outputs. If the autospectral densities of the inputs are specified, the phase relationships between the inputs are derived that will minimize or maximize the trace of the autospectral density matrix of the outputs. If the autospectral densities of the outputs are specified, the phase relationships between the outputs that will minimize or maximize the trace of the input autospectral density matrix are derived. It is shown that other phase relationships and ordinary coherence less than one will result in a trace intermediate between these extremes. Least favorable response and some classes of critical response are special cases of the development. It is shown that the derivation for stationary random waveforms can also be applied to nonstationary random, transients, and deterministic waveforms.
NASA Astrophysics Data System (ADS)
Chang, F.; Chiang, Y.
2011-12-01
Prediction of extreme rainfall-runoff events is a key issue for flood mitigation. In this study, the effectiveness of merging data obtained from gauges, radars and satellite-derived precipitation products through a bias adjustment procedure is investigated. First, the contribution of rainfall information to hydrological responses is individually evaluated in terms of two scenarios: with and without bias adjustment. In addition, the applicability of the constructed bias adjustment procedure to the removal of observational errors is verified by comparing the forecasts obtained from the above two scenarios. Finally, artificial neural network (ANN) and Bayesian approaches are used to merge the multiple rainfall products. Figure 1 shows the error changes under different bias conditions when adjusting the observational biases of gauge measurements. It is clear that the minimum training errors are found when the values of parameters a and b are 1.1 and 2 in Figure 1(i), respectively. For the same values in the validation phase, the error is also close to an optimal solution. The optimization indicates that the gauge measurements have a bias error of 10% underestimation and a random error of about 2 mm/hr. Results given in Table 1 indicate that all precipitation products are biased and can be appropriately adjusted, with an improvement rate, in terms of their contribution to flood forecasting in the testing phase, of about 9% for gauges, 17% for radars and 17% for satellites. Moreover, this study also demonstrates that the merged rainfall product is more stable and reliable than unmerged rainfall information in terms of its contribution to flood forecasting. This study provides a potential approach for merging multiple rainfall products over mountainous watersheds where observational biases may occur in gauge measurements. Keywords: Remote Sensing, Bias Adjustment, Artificial Neural Network, Data Merge.
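The adjust-then-merge idea can be illustrated with a simplified stand-in for the paper's ANN and Bayesian merging (all rainfall values, bias factors and noise levels below are invented): a least-squares scaling removes each product's multiplicative bias, and an inverse-variance weighting merges the adjusted products.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "true" hourly rainfall and three biased, noisy observations
# (gauge, radar, satellite); all parameters are invented for illustration.
truth = rng.gamma(2.0, 3.0, size=500)              # mm/hr
gauge = 0.9 * truth + rng.normal(0, 2.0, 500)      # ~10% underestimation
radar = 1.2 * truth + rng.normal(0, 4.0, 500)
sat   = 0.8 * truth + rng.normal(0, 5.0, 500)

def debias(obs, ref):
    """Remove multiplicative bias via least-squares scaling against a reference."""
    a = np.dot(obs, ref) / np.dot(obs, obs)
    return a * obs, np.var(a * obs - ref)

adjusted, variances = [], []
for obs in (gauge, radar, sat):
    adj, var = debias(obs, truth)
    adjusted.append(adj)
    variances.append(var)

# Inverse-variance (Bayesian-style) merge of the bias-adjusted products.
weights = 1.0 / np.array(variances)
merged = np.average(adjusted, axis=0, weights=weights)

def rmse(x):
    return np.sqrt(np.mean((x - truth) ** 2))

print(rmse(gauge), rmse(merged))
```

As in the study's finding, the merged product is more accurate and stable than any single unmerged product.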
Technology Transfer Automated Retrieval System (TEKTRAN)
A stochastic/linear program Excel workbook was developed consisting of two worksheets illustrating linear and stochastic program approaches. Both approaches used the Excel Solver add-in. A published linear program problem served as an example for the ingredients, nutrients and costs and as a benchma...
Sample Sizes when Using Multiple Linear Regression for Prediction
ERIC Educational Resources Information Center
Knofczynski, Gregory T.; Mundfrom, Daniel
2008-01-01
When using multiple regression for prediction purposes, the issue of minimum required sample size often needs to be addressed. Using a Monte Carlo simulation, models with varying numbers of independent variables were examined and minimum sample sizes were determined for multiple scenarios at each number of independent variables. The scenarios…
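The Monte Carlo logic behind such sample-size studies can be sketched as follows (the population R² of 0.25, five predictors, and holdout design are assumptions for illustration, not the authors' scenarios): the out-of-sample R² of an OLS fit shrinks severely at small n and approaches the population value as n grows, which is what drives minimum-sample-size recommendations for prediction.

```python
import numpy as np

rng = np.random.default_rng(1)
k = 5          # number of predictors (assumed)
rho2 = 0.25    # population squared multiple correlation (assumed)

def holdout_r2(n, reps=100):
    """Average out-of-sample R^2 when fitting OLS on n training cases."""
    scores = []
    beta = np.full(k, np.sqrt(rho2 / k))   # equal true weights, R^2 = rho2
    for _ in range(reps):
        Xtr = rng.normal(size=(n, k))
        ytr = Xtr @ beta + rng.normal(0, np.sqrt(1 - rho2), n)
        Xte = rng.normal(size=(5000, k))
        yte = Xte @ beta + rng.normal(0, np.sqrt(1 - rho2), 5000)
        coef, *_ = np.linalg.lstsq(np.c_[np.ones(n), Xtr], ytr, rcond=None)
        pred = np.c_[np.ones(5000), Xte] @ coef
        scores.append(1 - np.mean((yte - pred) ** 2) / np.var(yte))
    return np.mean(scores)

for n in (20, 50, 200):
    print(n, round(holdout_r2(n), 3))
```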
Multiple origins of linear dunes on Earth and Titan
Rubin, David M.; Hesp, Patrick A.
2009-01-01
Dunes with relatively long and parallel crests are classified as linear dunes. On Earth, they form in at least two environmental settings: where winds of bimodal direction blow across loose sand, and also where single-direction winds blow over sediment that is locally stabilized, be it through vegetation, sediment cohesion or topographic shelter from the winds. Linear dunes have also been identified on Titan, where they are thought to form in loose sand. Here we present evidence that in the Qaidam Basin, China, linear dunes are found downwind of transverse dunes owing to higher cohesiveness in the downwind sediments, which contain larger amounts of salt and mud. We also present a compilation of other settings where sediment stabilization has been reported to produce linear dunes. We suggest that in this dune-forming process, loose sediment accumulates on the dunes and is stabilized; the stable dune then functions as a topographic shelter, which induces the deposition of sediments downwind. We conclude that a model in which Titan's dunes formed similarly in cohesive sediments cannot be ruled out by the existing data.
Interpreting Multiple Linear Regression: A Guidebook of Variable Importance
ERIC Educational Resources Information Center
Nathans, Laura L.; Oswald, Frederick L.; Nimon, Kim
2012-01-01
Multiple regression (MR) analyses are commonly employed in social science fields. Interpretation of results, however, often reflects an overreliance on beta weights, resulting in very limited interpretations of variable importance. It appears that few researchers employ other methods to obtain a fuller understanding of what…
Rossi, D J; Kress, D D; Tess, M W; Burfening, P J
1992-05-01
Standard linear adjustment of weaning weight to a constant age has been shown to introduce bias in the adjusted weight due to nonlinear growth from birth to weaning of beef calves. Ten years of field records from the five strains of Beefbooster Cattle Alberta Ltd. seed stock herds were used to investigate the use of correction factors to adjust standard 180-d weight (WT180) for this bias. Statistical analyses were performed within strain and followed three steps: 1) the full data set was split into an estimation set (ES) and a validation set (VS), 2) WT180 from the ES was used to develop estimates of correction factors using a model including herd (H), year (YR), age of dam (DA), sex of calf (S), all two and three-way interactions, and any significant linear and quadratic covariates of calf age at weaning deviated from 180 d (DEVCA) and interactions between DEVCA and DA, S or DA x S, and 3) significant DEVCA coefficients were used to correct WT180 from the VS, then WT180 and the corrected weight (WTCOR) from the VS were analyzed with the same model as in Step 2 and significance of DEVCA terms were compared. Two types of data splitting were used. Adjusted R2 was calculated to describe the proportion of total variation of DEVCA terms explained for WT180 from the ES. The DEVCA terms explained .08 to 1.54% of the total variation for the five strains. Linear and quadratic correction factors were both positive and negative. Bias in WT180 from the ES within 180 +/- 35 d of age ranged from 2.8 to 21.7 kg.(ABSTRACT TRUNCATED AT 250 WORDS) PMID:1526901
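The bias the authors correct can be reproduced from the standard linear adjustment itself. A hedged sketch (the Brody-type growth curve and all parameters are invented, not estimated from the Beefbooster data): weights adjusted to 180 d from ages away from 180 d are biased whenever true growth from birth to weaning is nonlinear, with the sign of the bias depending on whether the calf was weighed before or after 180 d.

```python
import math

def wt180_linear(birth_wt, wean_wt, age_days):
    """Standard linear adjustment of weaning weight to a constant 180-d age."""
    adg = (wean_wt - birth_wt) / age_days       # average daily gain
    return birth_wt + adg * 180.0

def true_weight(age_days):
    """A hypothetical Brody growth curve (kg); parameters are illustrative."""
    return 600.0 * (1.0 - 0.94 * math.exp(-0.0035 * age_days))

# Bias = adjusted weight minus the weight the calf truly has at 180 d.
for age in (145, 180, 215):
    adj = wt180_linear(true_weight(0), true_weight(age), age)
    print(age, round(adj - true_weight(180), 1))   # bias in kg
```

With concave growth, calves weighed young are over-adjusted and calves weighed late are under-adjusted, which is the age-dependent bias the DEVCA correction factors target.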
Code of Federal Regulations, 2010 CFR
2010-10-01
... and Service Contract Act-Price Adjustment (Multiple Year and Option Contracts). 52.222-43 Section 52... Standards Act and Service Contract Act—Price Adjustment (Multiple Year and Option Contracts). As prescribed...—Price Adjustment (Multiple Year and Option Contracts) (SEP 2009) (a) This clause applies to...
Linearized theory of inhomogeneous multiple 'water-bag' plasmas
NASA Technical Reports Server (NTRS)
Bloomberg, H. W.; Berk, H. L.
1973-01-01
Equations are derived for describing the inhomogeneous equilibrium and small deviations from the equilibrium, giving particular attention to systems with trapped particles. An investigation is conducted of periodic systems with a single trapped-particle water bag, taking into account the behavior of the perturbation equations at the turning points. An outline is provided concerning a procedure for obtaining the eigenvalues. The results of stability calculations connected with the sideband effects are considered along with questions regarding the general applicability of the multiple water-bag approach in stability calculations.
Linear Fitted-Q Iteration with Multiple Reward Functions
Lizotte, Daniel J.; Bowling, Michael; Murphy, Susan A.
2013-01-01
We present a general and detailed development of an algorithm for finite-horizon fitted-Q iteration with an arbitrary number of reward signals and linear value function approximation using an arbitrary number of state features. This includes a detailed treatment of the 3-reward function case using triangulation primitives from computational geometry and a method for identifying globally dominated actions. We also present an example of how our methods can be used to construct a real-world decision aid by considering symptom reduction, weight gain, and quality of life in sequential treatments for schizophrenia. Finally, we discuss future directions in which to take this work that will further enable our methods to make a positive impact on the field of evidence-based clinical decision support. PMID:23741197
Uncovering Local Trends in Genetic Effects of Multiple Phenotypes via Functional Linear Models.
Vsevolozhskaya, Olga A; Zaykin, Dmitri V; Barondess, David A; Tong, Xiaoren; Jadhav, Sneha; Lu, Qing
2016-04-01
Recent technological advances equipped researchers with capabilities that go beyond traditional genotyping of loci known to be polymorphic in a general population. Genetic sequences of study participants can now be assessed directly. This capability removed technology-driven bias toward scoring predominantly common polymorphisms and let researchers reveal a wealth of rare and sample-specific variants. Although the relative contributions of rare and common polymorphisms to trait variation are being debated, researchers are faced with the need for new statistical tools for simultaneous evaluation of all variants within a region. Several research groups demonstrated flexibility and good statistical power of the functional linear model approach. In this work we extend previous developments to allow inclusion of multiple traits and adjustment for additional covariates. Our functional approach is unique in that it provides a nuanced depiction of effects and interactions for the variables in the model by representing them as curves varying over a genetic region. We demonstrate flexibility and competitive power of our approach by contrasting its performance with commonly used statistical tools and illustrate its potential for discovery and characterization of genetic architecture of complex traits using sequencing data from the Dallas Heart Study. PMID:27027515
Latino risk-adjusted mortality in the men screened for the Multiple Risk Factor Intervention Trial.
Thomas, Avis J; Eberly, Lynn E; Neaton, James D; Smith, George Davey
2005-09-15
Latinos are now the largest minority in the United States, but their distinctive health needs and mortality patterns remain poorly understood. Proportional hazards regressions were used to compare Latino versus White risk- and income-adjusted mortality over 25 years' follow-up from 5,846 Latino and 300,647 White men screened for the Multiple Risk Factor Intervention Trial. Men were aged 35-57 years and residing in 14 states when screened in 1973-1975. Data on coronary heart disease risk factors, self-reported race/ethnicity, and home addresses were obtained at baseline; income was estimated by linking addresses to census data. Mortality follow-up through 1999 was obtained using the National Death Index. The fully adjusted Latino/White hazard ratio for all-cause mortality was 0.82 (95% confidence interval (CI): 0.77, 0.87), based on 1,085 Latino and 73,807 White deaths; this pattern prevailed over time and across states (thus, likely across Latino subgroups). Hazard ratios were significantly greater than one for stroke (hazard ratio = 1.30, 95% CI: 1.01, 1.68), liver cancer (hazard ratio = 2.02, 95% CI: 1.21, 3.37), and infection (hazard ratio = 1.69, 95% CI: 1.24, 2.32). A substudy found only minor racial/ethnic differences in the quality of Social Security numbers, birth dates, soundex-adjusted names, and National Death Index searches. Results were not likely an artifact of return migration or incomplete mortality data. PMID:16076831
A Qualitative Analysis of Life Course Adjustment to Multiple Morbidity and Disability
Harrison, Tracie; Taylor, Jessica; Fredland, Nina; Stuifbergen, Alexa; Walker, Janiece; Choban, Robin
2012-01-01
The accumulation of limitations over the life course requires that women re-adapt to environmental barriers that they encounter over time. The purpose of this qualitative case study is to detail the life experiences associated with living with mobility, cognitive, and sensory loss experienced by a woman and her sister who participated in an on-going ethnographic study of mobility impairment in women. In-depth interviews were subjected to thematic, life course analysis. A family case study was interpreted as an exemplar for aging with early onset disability into multiple morbidity, which was described as a series of loss, recovery and re-engagement. Within the case study, the participant suggested that because her functional limitations were not accommodated earlier in life due to societal and family level disadvantage, functional limitations were more difficult to adjust to in later years. PMID:23437442
Ambrosiadou, B V; Goulis, D G; Pappas, C
1996-01-01
A performance evaluation of the DIABETES rule-based expert system prototype for clinical decision making is presented. The system facilitates multiple insulin regimen selection and dose adjustment for insulin-dependent Type I or Type II diabetic patients. The study was performed on 600 subjects from two diabetological centres and three diabetological offices of Greek hospitals. The responses of the attendant medical doctors were compared with those of the DIABETES system, with the aid of a specifically devised valuation range (0-5 degrees, 0 indicating full agreement and 5 full disagreement). The capabilities and weaknesses of the system in terms of its practicality for decision support in assisting therapy of diabetes mellitus by blood glucose monitoring and subsequent insulin dose adjustment are discussed. The potential benefits of decision support systems for diabetic patient management are seen to be the cost savings they provide in terms of man-hours of verbal instruction by medical experts, the support in terms of objective and consistent decision making, as well as the recording of medical knowledge in the ill-defined field of insulin administration, thus aiding the education and training of medical personnel. PMID:8646833
As a fast and effective technique, the multiple linear regression (MLR) method has been widely used in modeling and prediction of beach bacteria concentrations. Among previous works on this subject, however, several issues were insufficiently or inconsistently addressed. Those is...
Jian, Shih-Jie; Kou, Chwung-Shan; Hwang, Jennchang; Lee, Chein-Dhau; Lin, Wei-Cheng
2013-06-15
A method for controlling the pretilt angles of liquid crystals (LC) was developed. Hexamethyldisiloxane polymer films were first deposited on indium tin oxide coated glass plates using a linear atmospheric pressure plasma source. The films were subsequently treated with the rubbing method for LC alignment. Fourier transform infrared spectroscopy and X-ray photoelectron spectroscopy measurements were used to characterize the film composition, which could be varied to control the surface energy by adjusting the monomer feed rate and input power. The results of LC alignment experiments showed that the pretilt angle continuously increased from 0° to 90° with decreasing film surface energy.
ERIC Educational Resources Information Center
Preacher, Kristopher J.; Curran, Patrick J.; Bauer, Daniel J.
2006-01-01
Simple slopes, regions of significance, and confidence bands are commonly used to evaluate interactions in multiple linear regression (MLR) models, and the use of these techniques has recently been extended to multilevel or hierarchical linear modeling (HLM) and latent curve analysis (LCA). However, conducting these tests and plotting the…
Linear stability of multiple internal solitary waves in fluids of great depth
NASA Astrophysics Data System (ADS)
Matsuno, Y.; Kaup, D. J.
1997-02-01
The linear stability of the multiple solitary wave solution of the Benjamin-Ono (BO) equation is studied analytically. By establishing the completeness relation for the eigenfunctions of the BO equation linearized about multisoliton solutions, we solve the initial value problem for this system. We find that the wave under consideration is stable against infinitesimal perturbations.
NASA Astrophysics Data System (ADS)
Liu, Yuan-Hao; Nievaart, Sander; Tsai, Pi-En; Liu, Hong-Ming; Moss, Ray; Jiang, Shiang-Huei
2009-01-01
In order to provide an improved and reliable neutron source description for treatment planning in boron neutron capture therapy (BNCT), a spectrum adjustment procedure named coarse-scaling adjustment has been developed and applied to neutron spectrum measurements of both the Tsing Hua Open-pool Reactor (THOR) epithermal neutron beam in Taiwan and the High Flux Reactor (HFR) in The Netherlands, using multiple activation detectors. The coarse-scaling adjustment utilizes an idea similar to the well-known two-foil method, which adjusts the thermal and epithermal neutron fluxes according to the Maxwellian distribution for thermal neutrons and the 1/E distribution over the epithermal neutron energy region. The coarse-scaling adjustment can effectively suppress the number of oscillations appearing in the adjusted spectrum and provide better smoothness. This paper also presents a sophisticated 9-step process, utilizing the coarse-scaling adjustment twice, which can adjust a given coarse-group spectrum into a fine-group structure, i.e. 640 groups, with satisfactory continuity and excellently matched reaction rates between measurements and calculation. The spectrum adjustment algorithm applied in this study is the same as the well-known SAND-II.
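The two-foil idea the abstract references can be sketched numerically (the cross sections, energy grid and scale factors below are invented, not THOR or HFR data): with the thermal flux assumed Maxwellian in shape and the epithermal flux assumed 1/E, two measured activation reaction rates suffice to solve for the two scale factors.

```python
import numpy as np

# Energy grid in eV, log-spaced from thermal to fast.
E = np.logspace(-3, 4, 2000)
dE = np.diff(E, append=E[-1])

def maxwellian(E, kT=0.0253):
    # Maxwellian flux shape for thermal neutrons (arbitrary normalization).
    return E / kT**2 * np.exp(-E / kT)

thermal = maxwellian(E) * (E < 0.5)     # thermal component shape
epith = (1.0 / E) * (E >= 0.5)          # 1/E epithermal component shape

# Two hypothetical activation cross sections: a 1/v absorber and a
# resonance-dominated detector (barns, invented).
sig1 = 1.0 / np.sqrt(E)
sig2 = np.where((E > 1.0) & (E < 10.0), 50.0, 0.1)

# "Measured" reaction rates synthesized from a known flux for the demo.
true_scales = np.array([1.1, 0.9])      # thermal, epithermal scale factors
phi_true = true_scales[0] * thermal + true_scales[1] * epith
R = np.array([np.sum(s * phi_true * dE) for s in (sig1, sig2)])

# Coarse-scaling step: solve the 2x2 system for the two scale factors.
M = np.array([[np.sum(s * comp * dE) for comp in (thermal, epith)]
              for s in (sig1, sig2)])
scales = np.linalg.solve(M, R)
print(scales)
```

With more detectors the same system becomes overdetermined and is solved in a least-squares sense; iterating such adjustments group-by-group is the spirit of SAND-II.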
Input-output description of linear systems with multiple time-scales
NASA Technical Reports Server (NTRS)
Madriz, R. S.; Sastry, S. S.
1984-01-01
It is pointed out that the study of systems evolving at multiple time-scales is simplified by studying reduced-order models of these systems valid at specific time-scales. The present investigation is concerned with an extension of results on the time-scale decomposition of autonomous systems to that of input-output systems. The results are employed to study conditions under which positive realness of a transfer function is preserved under singular perturbation. Attention is given to the perturbation theory for linear operators, the multiple time-scale structure of autonomous linear systems, the input-output description of two time-scale linear systems, the positive realness of two time-scale systems, and multiple time-scale linear systems.
Lourenco, D A L; Tsuruta, S; Fragomeni, B O; Chen, C Y; Herring, W O; Misztal, I
2016-03-01
, except for , which was 1 percentage point less accurate. Accuracy of GEBV for number of stillborns in F1 was 0.5 for all tested genomic relationship matrices with no changes after tuning. We observed that genotyping F increased accuracies of GEBV for the same animals by up to 39% compared with having genotypes for only AA and BB. In crossbreed evaluations, accounting for breed-specific allele frequencies promoted changes in G that were not influential enough to improve accuracy of GEBV. Therefore, the best performance of ssGBLUP for crossbreed evaluations requires genotypes for pure- and crossbreeds and no breed-specific adjustments in the realized relationship matrix. PMID:27065253
NASA Astrophysics Data System (ADS)
Liu, Pudong; Shi, Runhe; Wang, Hong; Bai, Kaixu; Gao, Wei
2014-10-01
Leaf pigments are key elements for plant photosynthesis and growth. Traditional manual sampling of these pigments is labor-intensive and costly, and has difficulty capturing their temporal and spatial characteristics. The aim of this work is to estimate photosynthetic pigments at large scale by remote sensing. For this purpose, inverse models were proposed with the aid of stepwise multiple linear regression (SMLR) analysis. Furthermore, a leaf radiative transfer model (the PROSPECT model) was employed to simulate leaf reflectance at wavelengths from 400 to 780 nm at 1 nm intervals, and these values were treated as data from remote sensing observations. Simulated chlorophyll concentration (Cab), carotenoid concentration (Car) and their ratio (Cab/Car) were each taken as targets to build regression models. In this study, a total of 4000 samples were simulated via PROSPECT with different Cab, Car and leaf mesophyll structures; 70% of these samples were used for training and the remaining 30% for model validation. Reflectance (r) and its mathematical transformations (1/r and log(1/r)) were each employed to build regression models. Results showed fair agreement between pigments and simulated reflectance, with all adjusted coefficients of determination (R2) larger than 0.8 when 6 wavebands were selected to build the SMLR model. The largest values of R2 for Cab, Car and Cab/Car are 0.8845, 0.876 and 0.8765, respectively. Meanwhile, mathematical transformations of reflectance showed little influence on regression accuracy. We conclude that it is feasible to estimate chlorophyll, carotenoids and their ratio with a statistical model based on leaf reflectance data.
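The SMLR step can be sketched in a few lines (synthetic reflectance stands in for PROSPECT output; the band indices, sensitivities and sample counts are invented): forward stepwise selection adds the waveband that most improves adjusted R², stopping at 6 bands as in the study.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic stand-in for PROSPECT output: 300 "leaves" x 381 wavebands
# (400-780 nm at 1 nm), with pigment content driving a few bands.
n, bands = 300, 381
cab = rng.uniform(10, 80, n)                       # chlorophyll (invented units)
X = rng.normal(size=(n, bands)) * 0.02             # background reflectance noise
for b, w in ((270, -0.004), (150, -0.002), (40, 0.001)):
    X[:, b] += w * cab                             # pigment-sensitive bands
y = cab

def fit_r2(cols):
    """Adjusted R^2 of an OLS fit of y on the selected wavebands."""
    A = np.c_[np.ones(n), X[:, cols]]
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    r2 = 1 - (y - A @ coef).var() / y.var()
    p = len(cols)
    return 1 - (1 - r2) * (n - 1) / (n - p - 1)

# Forward stepwise selection of up to 6 wavebands.
selected = []
for _ in range(6):
    best = max((c for c in range(bands) if c not in selected),
               key=lambda c: fit_r2(selected + [c]))
    selected.append(best)

print(sorted(selected), round(fit_r2(selected), 3))
```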
Kammeyer-Mueller, John D; Wanberg, Connie R
2003-10-01
This 4-wave longitudinal study of newcomers in 7 organizations examined preentry knowledge, proactive personality, and socialization influences as antecedents of both proximal (task mastery, role clarity, work group integration, and political knowledge) and distal (organizational commitment, work withdrawal, and turnover) indicators of newcomer adjustment. Results suggest that preentry knowledge, proactive personality, and socialization influences from the organization, supervisors, and coworkers are independently related to proximal adjustment outcomes, consistent with a theoretical framework highlighting distinct dimensions of organizational and work task adjustment. The proximal adjustment outcomes partially mediated most of the relationships between the antecedents of adjustment and organizational commitment, work withdrawal, and turnover. PMID:14516244
NASA Astrophysics Data System (ADS)
Winkler, Peter; Bergmann, Helmar; Stuecklschweiger, Georg; Guss, Helmuth
2003-05-01
Mechanical stability and precise adjustment of rotation axes, collimator and room lasers are essential for the success of radiotherapy, and particularly of stereotactic radiosurgery with a linear accelerator. Quality assurance procedures, at present mainly based on visual tests and radiographic film evaluations, should ideally be minimally time-consuming and highly accurate. We present a method based on segmentation and analysis of digital images acquired with an electronic portal imaging device (EPID) that meets these objectives. The method can be employed for routine quality assurance with a square field formed by the built-in collimator jaws as well as with a circular field using an external drill-hole collimator. A number of tests, performed to evaluate the accuracy and reproducibility of the algorithm, yielded very satisfying results. Studies performed over a period of 18 months prove the applicability of the inspected accelerator for stereotactic radiosurgery.
Shu, D.
1998-07-16
A novel laser Doppler linear encoder system (LDLE) has been developed at the Advanced Photon Source, Argonne National Laboratory. A self-aligning 3-D multiple-reflection optical design was used for the laser Doppler displacement meter (LDDM) to extend the encoder system resolution. The encoder is compact [about 70 mm(H) x 100 mm(W) x 250 mm(L)] and has sub-Angstrom resolution, a 100 mm/sec measuring speed, and a 300 mm measuring range. Because the new device affords higher resolution than commercial laser interferometer systems, and yet costs less, it has good potential for use in scientific and industrial applications.
NASA Astrophysics Data System (ADS)
Saltogianni, Vasso; Stiros, Stathis
2012-11-01
The adjustment of systems of highly non-linear, redundant equations, deriving from observations of certain geophysical processes and geodetic data, cannot be based on conventional least-squares techniques and instead relies on various numerical inversion techniques. Still, these techniques can lead to solutions trapped in local minima, to correlated estimates and to solutions with poor error control. To overcome these problems, we propose an alternative numerical-topological approach inspired by lighthouse beacon navigation, usually used in 2-D, low-accuracy applications. In our approach, an m-dimensional grid G of points around the real solution (an m-dimensional vector) is first specified. Then, for each equation, an uncertainty is assigned to the corresponding measurement, and the set of grid points which satisfy the equation within that uncertainty is detected. This process is repeated for all equations, and the common section A of the sets of grid points is defined. This set of grid points defines a space including the real solution; from it we compute the center of weight, which corresponds to an estimate of the solution, and its variance-covariance matrix. An optimal solution can be obtained through optimization of the uncertainty in each observation. The efficiency of the overall process was assessed in comparison with conventional least-squares adjustment.
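A small 2-D sketch of the grid ("beacon") idea (the beacon positions, unknown point and noise level are invented): keep the grid points consistent with every observation within its uncertainty, then take the center of weight of that common set as the estimate and its scatter as the variance-covariance matrix.

```python
import numpy as np

# Four hypothetical beacons and an unknown point; observations are the
# distances to the beacons, perturbed by noise.
beacons = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
x_true = np.array([3.2, 6.7])

rng = np.random.default_rng(3)
sigma = 0.3                                  # assigned measurement uncertainty
obs = np.linalg.norm(beacons - x_true, axis=1) + rng.normal(0, 0.05, 4)

# Grid G of points around the (unknown) solution.
g = np.linspace(0, 10, 201)
gx, gy = np.meshgrid(g, g)
pts = np.column_stack([gx.ravel(), gy.ravel()])

# Common section A: points satisfying every equation within its uncertainty.
ok = np.ones(len(pts), dtype=bool)
for b, d in zip(beacons, obs):
    ok &= np.abs(np.linalg.norm(pts - b, axis=1) - d) <= sigma

estimate = pts[ok].mean(axis=0)              # center of weight of A
cov = np.cov(pts[ok].T)                      # empirical variance-covariance
print(estimate, np.diag(cov))
```

Shrinking `sigma` until A is as small as possible while remaining non-empty corresponds to the optimization of the observation uncertainties mentioned in the abstract.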
Children and adolescents adjustment to parental multiple sclerosis: a systematic review
2014-01-01
Background Families are the primary source of support and care for most children. In Western societies, 4 to 12% of children live in households where a parent has a chronic illness. Exposure to early-life stressors, including parenting stress, parental depression and parental chronic disease could lead to harmful changes in children’s social, emotional or behavioural functioning. Little is known about the child living with a parent who has Multiple Sclerosis (MS). We systematically reviewed the literature regarding possible effects of having a parent with MS on the child’s or adolescent's psychosocial adjustment. Methods The following databases: MEDLINE, PsychInfo, CINAHL, EMBASE, Web of Knowledge, ERIC, and ProQuest Digital Dissertations were searched (from 1806 to December 2012). References from relevant articles were also manually searched. Selected studies were evaluated using the Graphic Appraisal Tool for Epidemiology (GATE). Results The search yielded 3133 titles; 70 articles were selected for full text review. Eighteen studies met inclusion criteria. Fourteen studies employed quantitative techniques, of which 13 were cross-sectional and one was longitudinal. Four studies were both qualitative and cross-sectional in design. Only 2 of 18 studies were rated as having high methodological quality. Overall, eight studies reported that children of MS patients exhibited negative psychosocial traits compared with children of “healthy” parents. Specifically for adolescents, greater family responsibilities were linked to lower social relationships and higher distress. Three studies indicated that parental MS was associated with positive adjustment in children and adolescents, such as higher personal competence, while four found no statistically significant differences. Conclusion Although having a parent with MS was often reported to have negative psychosocial effects on children and adolescents, there was a lack of consensus and some positive aspects were
Illusion of Linearity in Geometry: Effect in Multiple-Choice Problems
ERIC Educational Resources Information Center
Vlahovic-Stetic, Vesna; Pavlin-Bernardic, Nina; Rajter, Miroslav
2010-01-01
The aim of this study was to examine if there is a difference in the performance on non-linear problems regarding age, gender, and solving situation, and whether the multiple-choice answer format influences students' thinking. A total of 112 students, aged 15-16 and 18-19, were asked to solve problems for which solutions based on proportionality…
A Simple and Convenient Method of Multiple Linear Regression to Calculate Iodine Molecular Constants
ERIC Educational Resources Information Center
Cooper, Paul D.
2010-01-01
A new procedure using a student-friendly least-squares multiple linear-regression technique utilizing a function within Microsoft Excel is described that enables students to calculate molecular constants from the vibronic spectrum of iodine. This method is advantageous pedagogically as it calculates molecular constants for ground and excited…
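The least-squares step the article performs inside Excel can be sketched in a few lines of numpy. This is a minimal illustration with made-up band positions standing in for a real iodine vibronic progression: the model ν(v′) = a0 + a1(v′+½) + a2(v′+½)² is linear in the coefficients, so ordinary least squares applies; all numbers below are invented, not measured constants.

```python
import numpy as np

# Hypothetical vibronic-band data: quantum numbers v' and observed wavenumbers.
vp = np.arange(10, 20)                      # vibrational quantum numbers (made up)
x = vp + 0.5
rng = np.random.default_rng(0)
true = np.array([15000.0, 120.0, -0.75])    # assumed "true" constants
nu = true[0] + true[1] * x + true[2] * x**2 + rng.normal(0, 0.5, x.size)

# Design matrix with columns 1, x, x^2 (what a spreadsheet LINEST-style
# multiple regression builds internally), then least squares.
X = np.column_stack([np.ones_like(x), x, x**2])
coef, *_ = np.linalg.lstsq(X, nu, rcond=None)
print(coef)   # estimates of a0, a1, a2
```

The fitted coefficients recover the assumed constants to within the noise level, which is the essence of extracting molecular constants from band positions.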
Due to the complexity of the processes contributing to beach bacteria concentrations, many researchers rely on statistical modeling, among which multiple linear regression (MLR) modeling is most widely used. Despite its ease of use and interpretation, there may be time dependence...
Application of wavelet-based multiple linear regression model to rainfall forecasting in Australia
NASA Astrophysics Data System (ADS)
He, X.; Guan, H.; Zhang, X.; Simmons, C.
2013-12-01
In this study, a wavelet-based multiple linear regression model is applied to forecast monthly rainfall in Australia by using monthly historical rainfall data and climate indices as inputs. The wavelet-based model is constructed by incorporating the multi-resolution analysis (MRA) with the discrete wavelet transform and multiple linear regression (MLR) model. The standardized monthly rainfall anomaly and large-scale climate index time series are decomposed using MRA into a certain number of component subseries at different temporal scales. The hierarchical lag relationship between the rainfall anomaly and each potential predictor is identified by cross correlation analysis with a lag time of at least one month at different temporal scales. The components of predictor variables with known lag times are then screened with a stepwise linear regression algorithm to be selectively included in the final forecast model. The MRA-based rainfall forecasting method is examined with 255 stations over Australia, and compared to the traditional multiple linear regression model based on the original time series. The models are trained with data from the 1959-1995 period and then tested on the 1996-2008 period for each station. Forecasts are compared with observed rainfall values and evaluated using two common statistics, the relative absolute error and the correlation coefficient. The results show that the wavelet-based regression model provides considerably more accurate monthly rainfall forecasts for all of the selected stations over Australia than the traditional regression model.
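Two of the screening steps described above, lag identification by cross-correlation followed by stepwise inclusion, can be sketched on synthetic series. This toy version uses invented index and rainfall data and omits the wavelet decomposition entirely; it is not the paper's procedure, only an illustration of the lag/selection logic.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 240                                   # 20 years of monthly data (synthetic)
idx1 = rng.normal(size=n)                 # stand-ins for climate indices
idx2 = rng.normal(size=n)
# Synthetic rainfall anomaly: responds to idx1 at lag 3; idx2 is irrelevant.
rain = np.zeros(n)
rain[3:] = 0.8 * idx1[:-3]
rain += rng.normal(0, 0.3, n)

def best_lag(y, x, max_lag=6):
    """Pick the lag (>= 1 month) maximising |cross-correlation|."""
    corrs = [abs(np.corrcoef(y[L:], x[:-L])[0, 1]) for L in range(1, max_lag + 1)]
    return 1 + int(np.argmax(corrs))

lags = {name: best_lag(rain, x) for name, x in [("idx1", idx1), ("idx2", idx2)]}
print(lags)           # idx1 should come out at lag 3

# Build lagged predictors on a common sample and do a crude forward-stepwise
# screen: keep a predictor only if it cuts the training RSS by more than 5%.
L = max(lags.values())
y = rain[L:]
cands = {"idx1": idx1[L - lags["idx1"]: n - lags["idx1"]],
         "idx2": idx2[L - lags["idx2"]: n - lags["idx2"]]}
selected, X = [], np.ones((y.size, 1))
rss = np.sum((y - y.mean()) ** 2)
for name, x in cands.items():
    Xtry = np.column_stack([X, x])
    beta, *_ = np.linalg.lstsq(Xtry, y, rcond=None)
    rss_try = np.sum((y - Xtry @ beta) ** 2)
    if rss_try < 0.95 * rss:
        X, rss, selected = Xtry, rss_try, selected + [name]
print(selected)       # expected: only the informative index survives
```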
Acceleration of multiple solution of a boundary value problem involving a linear algebraic system
NASA Astrophysics Data System (ADS)
Gazizov, Talgat R.; Kuksenko, Sergey P.; Surovtsev, Roman S.
2016-06-01
Multiple solution of a boundary value problem that involves a linear algebraic system is considered. A new approach to accelerating the solution is proposed. The approach exploits the structure of the linear system matrix. In particular, the entries in the rightmost columns and bottom rows of the matrix, which vary during computation over the range of parameters, are isolated so that block LU decomposition can be applied. Application of the approach is illustrated on the example of multiple computation of the capacitance matrix by the method of moments, as used in numerical electromagnetics. Expressions for analytic estimation of the acceleration are presented. Results of numerical experiments for the solution of 100 linear systems with matrix orders of 1000, 2000 and 3000, and with different ratios of varied to constant entries, show that block LU decomposition can be effective for the multiple solution of linear systems. The speedup over pointwise LU factorization increases (up to 15x) as the number and order of the considered systems grow and the number of varied entries decreases.
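The idea of reusing the factorization of the constant block can be sketched with block elimination (Schur complement). This is a minimal sketch, assuming the constant block is the leading principal submatrix and using an explicit inverse in place of a stored LU factorization; matrix sizes and entries are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(2)
n, m = 200, 5                      # constant block order and varied block order
A11 = rng.normal(size=(n, n)) + n * np.eye(n)   # fixed, well-conditioned block
A11_inv = np.linalg.inv(A11)       # "factor" the constant block ONCE

def solve_varied(A12, A21, A22, b1, b2):
    """Solve [[A11, A12], [A21, A22]] @ [x1; x2] = [b1; b2], reusing A11_inv
    (block elimination via the small m-by-m Schur complement)."""
    S = A22 - A21 @ A11_inv @ A12
    x2 = np.linalg.solve(S, b2 - A21 @ (A11_inv @ b1))
    x1 = A11_inv @ (b1 - A12 @ x2)
    return x1, x2

# One of many "varied" systems: only the right columns / bottom rows change.
A12 = rng.normal(size=(n, m)); A21 = rng.normal(size=(m, n))
A22 = rng.normal(size=(m, m)) + m * np.eye(m)
b1, b2 = rng.normal(size=n), rng.normal(size=m)
x1, x2 = solve_varied(A12, A21, A22, b1, b2)

# Check against a full solve of the assembled system.
A = np.block([[A11, A12], [A21, A22]])
ref = np.linalg.solve(A, np.concatenate([b1, b2]))
print(np.allclose(np.concatenate([x1, x2]), ref))   # True
```

Each additional system costs only the small Schur solve plus matrix-vector products with the precomputed factor, which is where the reported speedup over refactoring the full matrix comes from.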
Wang, Ching-Yun; Dieu Tapsoba, Jean De; Duggan, Catherine; Campbell, Kristin L; McTiernan, Anne
2016-05-10
In many biomedical studies, covariates of interest may be measured with error. Frequently, quantiles of a continuous exposure variable are used as covariates in the regression analysis. Because of measurement error in the exposure variable, subjects may be misclassified into the wrong quantile, and such misclassification can bias the estimated association between the exposure variable and the outcome variable. Adjusting for misclassification is challenging when gold standard measurements are not available. In this paper, we develop two regression calibration estimators to reduce bias in effect estimation. The first estimator is normal likelihood-based. The second is linearization-based and provides a simple and practical correction. Finite sample performance is examined via a simulation study. We apply the methods to a four-arm randomized clinical trial that tested exercise and weight loss interventions in women aged 50-75 years. Copyright © 2015 John Wiley & Sons, Ltd. PMID:26593772
Conneely, Karen N.; Boehnke, Michael
2011-01-01
Meta-analysis has become a key component of well-designed genetic association studies due to the boost in statistical power achieved by combining results across multiple samples of individuals and the need to validate observed associations in independent studies. Meta-analyses of genetic association studies based on multiple SNPs and traits are subject to the same multiple testing issues as single-sample studies, but it is often difficult to adjust accurately for the multiple tests. Procedures such as Bonferroni may control the type I error rate but will generally provide an overly harsh correction if SNPs or traits are correlated. Depending on study design, availability of individual-level data, and computational requirements, permutation testing may not be feasible in a meta-analysis framework. In this paper we present methods for adjusting for multiple correlated tests under several study designs commonly employed in meta-analyses of genetic association tests. Our methods are applicable to both prospective meta-analyses in which several samples of individuals are analyzed with the intent to combine results, and retrospective meta-analyses, in which results from published studies are combined, including situations in which 1) individual-level data are unavailable, and 2) different sets of SNPs are genotyped in different studies due to random missingness or two-stage design. We show through simulation that our methods accurately control the rate of type I error and achieve improved power over multiple testing adjustments that do not account for correlation between SNPs or traits. PMID:20878715
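One standard way to adjust a minimum p-value for correlated tests, usable even when individual-level data are unavailable but a correlation matrix of the test statistics is, is direct Monte Carlo under the null. The sketch below is a generic single-step min-p adjustment, not the authors' exact procedure; the correlation matrices are illustrative.

```python
import math
import numpy as np

rng = np.random.default_rng(3)
_phi = np.vectorize(lambda x: 0.5 * (1 + math.erf(x / math.sqrt(2))))

def minp_adjust(p_obs, R, n_sim=50_000):
    """Monte Carlo single-step min-p adjustment for correlated Z-tests.
    R is the correlation matrix of the test statistics (e.g. from SNP LD)."""
    L = np.linalg.cholesky(R)
    Z = rng.normal(size=(n_sim, R.shape[0])) @ L.T   # draws from N(0, R)
    P = 2 * (1 - _phi(np.abs(Z)))                    # two-sided p-values
    return float(np.mean(P.min(axis=1) <= p_obs))

# Five (near-)perfectly correlated tests behave like ONE test: no penalty.
# (The diagonal is nudged so the singular matrix admits a Cholesky factor.)
R_corr = np.full((5, 5), 1.0) + 1e-6 * np.eye(5)
# Five independent tests: Sidak-type penalty 1 - (1 - p)^5.
R_ind = np.eye(5)
print(minp_adjust(0.01, R_corr))   # approx 0.01
print(minp_adjust(0.01, R_ind))    # approx 1 - 0.99**5 = 0.049
```

The contrast between the two cases is exactly the point made in the abstract: a Bonferroni-style correction applies the independent-case penalty even when the tests are strongly correlated.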
NASA Astrophysics Data System (ADS)
Eason, R. P.; Sun, C.; Dick, A. J.; Nagarajaiah, S.
2015-05-01
Response attenuation of a linear primary structure (PS)-nonlinear tuned mass damper (NTMD) dynamic system with and without an adaptive-length pendulum tuned mass damper (ALPTMD) in a series configuration is studied by using numerical and experimental methods. In the PS-NTMD system, coexisting high and low amplitude solutions are observed in the experiment, validating previous numerical efforts. In order to eliminate the potentially dangerous high amplitude solutions, a series ALPTMD with a mass multiple orders of magnitude smaller than the PS is added to the NTMD. The ALPTMD is used in order to represent the steady-state behavior of a smart tuned mass damper (STMD). In the experiment, the length of the pendulum is adjusted such that its natural frequency matches the dominant frequency of the harmonic ground motions. In the present study, the ALPTMD can also be locked, preventing it from oscillating and influencing the dynamics of the system, when only the benefits provided by the NTMD are desired. The experimental data show good qualitative agreement with numerical predictions computed with parameter continuation and time integration methods. Activation of the ALPTMD can successfully prevent the transition of the response from the low amplitude solution to the high amplitude solution or return the response from the high amplitude solution to the low amplitude solution, thereby protecting the PS.
Multiple Shooting-Local Linearization method for the identification of dynamical systems
NASA Astrophysics Data System (ADS)
Carbonell, F.; Iturria-Medina, Y.; Jimenez, J. C.
2016-08-01
The combination of the multiple shooting strategy with the generalized Gauss-Newton algorithm yields a well-established method for estimating parameters in ordinary differential equations (ODEs) from noisy discrete observations. A key issue for an efficient implementation of this method is the accurate integration of the ODE and the evaluation of the derivatives involved in the optimization algorithm. In this paper, we study the feasibility of the Local Linearization (LL) approach for the simultaneous numerical integration of the ODE and the evaluation of such derivatives. This integration approach results in a stable method for the accurate approximation of the derivatives with no more computational cost than that involved in the integration of the ODE. The numerical simulations show that the proposed Multiple Shooting-Local Linearization method recovers the true parameter values under different scenarios of noisy data.
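A toy illustration of multiple shooting with Gauss-Newton on a scalar linear ODE: for x' = -θx the Local Linearization propagator exp(-θh) is exact, so the integration and its derivatives are available in closed form. Segment counts, noise level and the initial guess below are arbitrary choices, and the setup is far simpler than the general method described in the paper.

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy problem: x' = -theta * x observed at equally spaced times with noise.
theta_true, x0 = 1.3, 2.0
t = np.linspace(0, 4, 41)
y = x0 * np.exp(-theta_true * t) + rng.normal(0, 0.02, t.size)

# Split [0, 4] into 4 shooting segments; unknowns are theta plus the four
# segment start states s_j, tied together by continuity constraints.
n_seg, seg_len = 4, 10
starts = np.arange(n_seg) * seg_len

def residuals(params):
    theta, s = params[0], params[1:]
    res, J = [], []
    for j in range(n_seg):
        i0 = starts[j]
        tau = t[i0:i0 + seg_len + 1] - t[i0]       # local times incl. segment end
        prop = np.exp(-theta * tau)                # exact (LL) propagator
        for k in range(seg_len):                   # data residuals
            res.append(s[j] * prop[k] - y[i0 + k])
            row = np.zeros(params.size)
            row[0] = -tau[k] * s[j] * prop[k]      # d(residual)/d(theta)
            row[1 + j] = prop[k]                   # d(residual)/d(s_j)
            J.append(row)
        if j < n_seg - 1:                          # continuity constraints
            res.append(s[j] * prop[-1] - s[j + 1])
            row = np.zeros(params.size)
            row[0] = -tau[-1] * s[j] * prop[-1]
            row[1 + j] = prop[-1]
            row[2 + j] = -1.0
            J.append(row)
    return np.array(res), np.array(J)

params = np.concatenate([[0.5], y[starts]])        # crude initial guess
for _ in range(30):                                # Gauss-Newton iterations
    r, J = residuals(params)
    params = params - np.linalg.lstsq(J, r, rcond=None)[0]
print(params[0])    # close to theta_true = 1.3
```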
User's Guide to the Weighted-Multiple-Linear Regression Program (WREG version 1.0)
Eng, Ken; Chen, Yin-Yu; Kiang, Julie E.
2009-01-01
Streamflow is not measured at every location in a stream network. Yet hydrologists, State and local agencies, and the general public still seek to know streamflow characteristics, such as mean annual flow or flood flows with different exceedance probabilities, at ungaged basins. The goals of this guide are to introduce and familiarize the user with the weighted multiple-linear regression (WREG) program, and to also provide the theoretical background for program features. The program is intended to be used to develop a regional estimation equation for streamflow characteristics that can be applied at an ungaged basin, or to improve the corresponding estimate at continuous-record streamflow gages with short records. The regional estimation equation results from a multiple-linear regression that relates the observable basin characteristics, such as drainage area, to streamflow characteristics.
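The core computation of such a regionalization, weighted least squares on log-transformed basin characteristics, can be sketched with synthetic data. The variables, weights and coefficients below are invented, and the simple record-length weighting is a stand-in, not WREG's actual weighting scheme.

```python
import numpy as np

rng = np.random.default_rng(5)

# Synthetic "gaged basin" data: log flow characteristic vs log drainage area
# and log channel slope for 30 hypothetical sites; record lengths differ, so
# longer-record sites receive larger weights.
n = 30
log_area = rng.uniform(1, 4, n)
log_slope = rng.uniform(-1, 1, n)
years = rng.integers(5, 80, n).astype(float)    # record length per gage
noise = rng.normal(0, 0.2, n) / np.sqrt(years / 40)
log_q = 0.5 + 0.8 * log_area + 0.3 * log_slope + noise

# Weighted least squares: solve (X' W X) b = X' W y with W = diag(years).
X = np.column_stack([np.ones(n), log_area, log_slope])
W = np.diag(years)
beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ log_q)
print(beta)                       # near the assumed [0.5, 0.8, 0.3]

# Apply the fitted regional equation at an "ungaged" basin.
x_new = np.array([1.0, 2.5, 0.1])
print(10 ** (x_new @ beta))       # estimated flow characteristic
```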
A new adaptive multiple modelling approach for non-linear and non-stationary systems
NASA Astrophysics Data System (ADS)
Chen, Hao; Gong, Yu; Hong, Xia
2016-07-01
This paper proposes a novel adaptive multiple modelling algorithm for non-linear and non-stationary systems. This simple modelling paradigm comprises K candidate sub-models, all of which are linear. With data available in an online fashion, the performance of all candidate sub-models is monitored based on the most recent data window, and the M best sub-models are selected from the K candidates. The weight coefficients of the selected sub-models are adapted via the recursive least square (RLS) algorithm, while the coefficients of the remaining sub-models are left unchanged. These M model predictions are then optimally combined to produce the multi-model output. We propose to minimise the mean square error over a recent data window, subject to a sum-to-one constraint on the combination parameters, leading to a closed-form solution and hence maximal computational efficiency. In addition, at each time step, the model prediction is chosen from either the resultant multiple model or the best sub-model, whichever performs better. Simulation results are given in comparison with some typical alternatives, including the linear RLS algorithm and a number of online non-linear approaches, in terms of modelling performance and time consumption.
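Two of the ingredients, the RLS update and the sum-to-one combination solved in closed form via its KKT system, can be sketched on a toy system whose coefficients switch halfway. Using two sub-models distinguished only by their forgetting factors is a simplification of the K-candidate setup described above; all data are synthetic.

```python
import numpy as np

rng = np.random.default_rng(6)

def rls_update(w, P, x, y, lam):
    """One recursive least square (RLS) step for sub-model coefficients w."""
    Px = P @ x
    k = Px / (lam + x @ Px)                 # gain vector
    w = w + k * (y - w @ x)
    P = (P - np.outer(k, Px)) / lam
    return w, P

def combine(preds, y_win):
    """Sum-to-one weights minimising squared error over a recent window,
    obtained in closed form from the KKT system of the constrained problem."""
    M = preds.shape[1]
    A = np.zeros((M + 1, M + 1))
    A[:M, :M] = preds.T @ preds
    A[:M, M] = 1.0
    A[M, :M] = 1.0
    rhs = np.concatenate([preds.T @ y_win, [1.0]])
    return np.linalg.solve(A, rhs)[:M]

# Toy non-stationary system: the true coefficients flip at t = 200.
T = 400
x = rng.normal(size=(T, 2))
coef = np.where(np.arange(T)[:, None] < 200, [1.0, -0.5], [-1.0, 0.8])
y = np.sum(coef * x, axis=1) + 0.05 * rng.normal(size=T)

# Two linear sub-models with different forgetting factors; the fast one
# (lam = 0.90) should dominate the combination right after the switch.
w1, P1 = np.zeros(2), 1e3 * np.eye(2)
w2, P2 = np.zeros(2), 1e3 * np.eye(2)
hist = np.zeros((T, 2))
for tt in range(T):
    hist[tt] = [w1 @ x[tt], w2 @ x[tt]]     # predict before updating
    w1, P1 = rls_update(w1, P1, x[tt], y[tt], lam=0.999)
    w2, P2 = rls_update(w2, P2, x[tt], y[tt], lam=0.90)
a = combine(hist[210:230], y[210:230])      # window just after the switch
print(a)    # most weight on the fast-forgetting sub-model
```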
Non-linear primary-multiple separation with directional curvelet frames
NASA Astrophysics Data System (ADS)
Herrmann, Felix J.; Böniger, Urs; Verschuur, Dirk Jacob (Eric)
2007-08-01
Predictive multiple suppression methods consist of two main steps: a prediction step, during which multiples are predicted from seismic data, and a primary-multiple separation step, during which the predicted multiples are `matched' with the true multiples in the data and subsequently removed. This second separation step, which we will call the estimation step, is crucial in practice: an incorrect separation will cause residual multiple energy in the result or may lead to a distortion of the primaries, or both. To reduce these adverse effects, a new transformed-domain method is proposed where primaries and multiples are separated rather than matched. This separation is carried out on the basis of differences in the multiscale and multidirectional characteristics of these two signal components. Our method uses the curvelet transform, which maps multidimensional data volumes into almost orthogonal localized multidimensional prototype waveforms that vary in directional and spatio-temporal content. Primaries-only and multiples-only signal components are recovered from the total data volume by a non-linear optimization scheme that is stable under noisy input data. During the optimization, the two signal components are separated by enhancing sparseness (through weighted l1-norms) in the transformed domain subject to fitting the observed data as the sum of the separated components to within a user-defined tolerance level. Whenever, during the optimization, the estimates for the primaries in the transformed domain correlate with the predictions for the multiples, the recovery of the coefficients for the estimated primaries is suppressed; in regions where the correlation is small, the method instead seeks the sparsest set of coefficients that represents the estimate for the primaries. Our algorithm does not seek a matched filter and as such differs fundamentally from traditional adaptive subtraction methods. The method derives its stability from the sparseness
Optimal linear combinations of multiple diagnostic biomarkers based on Youden index.
Yin, Jingjing; Tian, Lili
2014-04-15
In practice, multiple biomarkers are usually measured on the same subject for disease diagnosis. Combining these biomarkers into a single score can improve diagnostic accuracy. Many researchers have addressed the problem of finding the optimal linear combination that maximizes the area under the ROC curve (AUC). However, such a combined score may be suboptimal at the diagnostic threshold. In this paper, we propose using the Youden index as the objective function when searching for the optimal linear combination. The combined score directly achieves the maximum overall correct classification rate at the diagnostic threshold corresponding to the Youden index; in other words, it is the optimal linear combination score for making the disease diagnosis. We present both empirical and numerical searching methods for the optimal linear combination. We carry out an extensive simulation study to investigate the performance of the proposed methods. Additionally, we empirically compare the optimal overall classification rates between the proposed combination based on the Youden index and the traditional one based on AUC, and demonstrate a significant gain in diagnostic accuracy for the proposed combination. In the end, we apply the proposed methods to a real data set. PMID:24311111
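An empirical-search sketch of the idea: for two synthetic biomarkers, scan combination directions and score each by the empirical Youden index. The data, grid resolution and group means below are illustrative, and this brute-force scan is only one simple instance of the empirical search the paper discusses.

```python
import numpy as np

rng = np.random.default_rng(7)

# Two synthetic biomarkers for diseased / healthy groups (all values invented).
dis = rng.multivariate_normal([1.0, 0.8], [[1, 0.3], [0.3, 1]], 300)
hea = rng.multivariate_normal([0.0, 0.0], [[1, 0.3], [0.3, 1]], 300)

def youden(sd, sh):
    """Empirical Youden index J = max_c [sensitivity(c) + specificity(c) - 1]."""
    cuts = np.unique(np.concatenate([sd, sh]))
    sens = np.array([(sd >= c).mean() for c in cuts])
    spec = np.array([(sh < c).mean() for c in cuts])
    j = sens + spec - 1
    k = int(np.argmax(j))
    return j[k], cuts[k]

# Grid search over combination directions w = (cos a, sin a).
best_J, best_w, best_cut = -1.0, None, None
for a in np.linspace(0, np.pi, 91):
    w = np.array([np.cos(a), np.sin(a)])
    J, cut = youden(dis @ w, hea @ w)
    if J > best_J:
        best_J, best_w, best_cut = J, w, cut
print(best_J, best_w, best_cut)         # combined score and its threshold
print(youden(dis[:, 0], hea[:, 0])[0],  # single-marker baselines
      youden(dis[:, 1], hea[:, 1])[0])
```

Because the grid includes the single-marker directions, the combined score's Youden index can never fall below the better individual marker's, which mirrors the gain reported in the abstract.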
ERIC Educational Resources Information Center
Godbout, Natacha; Sabourin, Stephane; Lussier, Yvan
2009-01-01
This study compared the usefulness of single- and multiple-indicator strategies in a model examining the role of child sexual abuse (CSA) to predict later marital satisfaction through attachment and psychological distress. The sample included 1,092 women and men from a nonclinical population in cohabiting or marital relationships. The single-item…
Multiple Social Identities and Adjustment in Young Adults from Ethnically Diverse Backgrounds
ERIC Educational Resources Information Center
Kiang, Lisa; Yip, Tiffany; Fuligni, Andrew J.
2008-01-01
A person-centered approach was used to determine how identification across multiple social domains (ethnic, American, family, religious) was associated with distinct identity clusters. Utilizing data from 222 young adults from European, Filipino, Latin, and Asian American backgrounds, four clusters were found (Many Social Identities, Blended/Low…
Exponential stability analysis of linear systems with multiple successive delay components
NASA Astrophysics Data System (ADS)
Lin, Chun-Pi; Fong, I.-Kong
2013-06-01
A general class of linear systems with multiple successive delay components is considered in this article. The delays are assumed to vary in intervals, and delay-dependent exponential stability conditions are derived in terms of linear matrix inequalities. To reduce conservativeness, a new Lyapunov-Krasovskii functional is designed to contain more complete state information, so that a derivation procedure with time-varying delays treated as uncertain parameters can be adopted. The use of slack variables and bounding inequalities is kept to a minimum when bounds on the Lyapunov derivative are sought. The stability criteria are tested on two popular numerical examples, with less conservative results obtained in all the checked cases. In addition, a practical application of the derived conditions is illustrated.
An improved multiple linear regression and data analysis computer program package
NASA Technical Reports Server (NTRS)
Sidik, S. M.
1972-01-01
The package comprises NEWRAP, an improved version of the earlier multiple linear regression program RAPIER, together with CREDUC and CRSPLT. It allows for a complete regression analysis, including cross plots of the independent and dependent variables, correlation coefficients, regression coefficients, analysis of variance tables, t-statistics and their probability levels, rejection of independent variables, plots of residuals against the independent and dependent variables, and a canonical reduction of quadratic response functions useful in optimum-seeking experimentation. A major improvement over RAPIER is that all regression calculations are done in double precision arithmetic.
Modeling and assessing the influence of linear energy transfer on multiple bit upset susceptibility
NASA Astrophysics Data System (ADS)
Geng, Chao; Liu, Jie; Xi, Kai; Zhang, Zhan-Gang; Gu, Song; Liu, Tian-Qi
2013-10-01
The influence of the metric of linear energy transfer (LET) on single event upset (SEU), and particularly on multiple bit upset (MBU), in a hypothetical 90-nm static random access memory (SRAM) is explored. To explain the anomalous point in the SEU cross-section curve, where an incident ion with higher LET induces a lower cross section, MBUs induced by the incident ions 132Xe and 209Bi, which have the same LET but different energies, at oblique incidence are investigated using the multi-functional package for single event effect analysis (MUFPSA). In addition, a comprehensive analytical model of the radial track structure is incorporated into MUFPSA, complementing the assessment and interpretation of the MBU susceptibility of SRAM. The results show that (i) with increasing incident angle, MBU multiplicity and probability both present an increasing trend; and (ii) owing to the higher relative ion velocity and longer range of δ electrons, higher-energy ions trigger MBUs with lower probability than lower-energy ions.
Linard, Joshua I.
2013-01-01
Mitigating the effects of salt and selenium on water quality in the Grand Valley and lower Gunnison River Basin in western Colorado is a major concern for land managers. Previous modeling indicated that the models could be improved by including more detailed geospatial data and by using a more rigorous method for developing the models. After evaluating all possible combinations of geospatial variables, four multiple linear regression models were obtained, estimating irrigation-season salt yield, nonirrigation-season salt yield, irrigation-season selenium yield, and nonirrigation-season selenium yield. The adjusted r-squared and the residual standard error (in units of log-transformed yield) of the models were, respectively, 0.87 and 2.03 for the irrigation-season salt model, 0.90 and 1.25 for the nonirrigation-season salt model, 0.85 and 2.94 for the irrigation-season selenium model, and 0.93 and 1.75 for the nonirrigation-season selenium model. The four models were used to estimate yields and loads from contributing areas corresponding to 12-digit hydrologic unit codes in the lower Gunnison River Basin study area. Each of the 175 contributing areas was ranked according to its estimated mean seasonal yield of salt and selenium.
Pierce, Wendy K; Grace, Christy R; Lee, Jihun; Nourse, Amanda; Marzahn, Melissa R; Watson, Edmond R; High, Anthony A; Peng, Junmin; Schulman, Brenda A; Mittag, Tanja
2016-03-27
Primary sequence motifs, with millimolar affinities for binding partners, are abundant in disordered protein regions. In multivalent interactions, such weak linear motifs can cooperate to recruit binding partners via avidity effects. If linear motifs recruit modifying enzymes, optimal placement of weak motifs may regulate access to modification sites. Weak motifs may thus have greater physiological relevance than their affinities suggest, but the molecular mechanisms of their function are still poorly understood. Herein, we use the N-terminal disordered region of the Hedgehog transcriptional regulator Gli3 (Gli3(1-90)) to determine the role of weak motifs encoded in its primary sequence for the recruitment of its ubiquitin ligase CRL3(SPOP) and the subsequent effect on ubiquitination efficiency. The substrate adaptor SPOP binds linear motifs through its MATH (meprin and TRAF homology) domain and forms higher-order oligomers through its oligomerization domains, rendering SPOP multivalent for its substrates. Gli3 has multiple weak SPOP binding motifs. We map three such motifs in Gli3(1-90), the weakest of which has a millimolar dissociation constant. Multivalency of ligase and substrate for each other facilitates enhanced ligase recruitment and stimulates Gli3(1-90) ubiquitination in in vitro ubiquitination assays. We speculate that the weak motifs enable processivity through avidity effects and by providing steric access to lysine residues that are otherwise not prioritized for polyubiquitination. Weak motifs may generally be employed in multivalent systems to act as gatekeepers regulating post-translational modification. PMID:26475525
NASA Astrophysics Data System (ADS)
Alfi, V.; Cristelli, M.; Pietronero, L.; Zaccaria, A.
2009-02-01
We present a detailed study of the statistical properties of the Agent Based Model introduced in paper I [Eur. Phys. J. B, DOI: 10.1140/epjb/e2009-00028-4] and of its generalization to multiplicative dynamics. The aim of the model is to identify the minimal elements needed to understand the origin of the stylized facts and their self-organization. The key elements are fundamentalist agents, chartist agents, herding dynamics and price behavior. The first two elements correspond to the competition between stability and instability tendencies in the market. The herding behavior governs the possibility of the agents to change strategy and is a crucial element of this class of models. We consider a linear approximation for the price dynamics, which permits a simple interpretation of the model dynamics and, for many properties, makes it possible to derive analytical results. The generalized nonlinear dynamics proves to be far more sensitive to the parameter space and much more difficult to analyze and control. The main results for the nature and self-organization of the stylized facts are, however, very similar in the two cases. The main peculiarity of the nonlinear dynamics is an enhancement of the fluctuations and more pronounced stylized facts. We also discuss some modifications of the model that introduce more realistic elements with respect to real markets.
McElvain, Lauren E.; Faulstich, Michael; Jeanne, James M.; Moore, Jeffrey D.; du Lac, Sascha
2015-01-01
Summary Signal transfer in neural circuits is dynamically modified by the recent history of neuronal activity. Short-term plasticity endows synapses with nonlinear transmission properties, yet synapses in sensory and motor circuits are capable of signaling linearly over a wide range of presynaptic firing rates. How do such synapses achieve rate-invariant transmission despite history-dependent nonlinearities? Here, ultrastructural, biophysical, and computational analyses demonstrate that concerted molecular, anatomical, and physiological refinements are required for central vestibular nerve synapses to linearly transmit rate-coded sensory signals. Vestibular synapses operate in a physiological regime of steady-state depression imposed by tonic firing. Rate-invariant transmission relies on brief presynaptic action potentials that delimit calcium influx, large pools of rapidly mobilized vesicles, multiple low-probability release sites, robust postsynaptic receptor sensitivity, and efficient transmitter clearance. Broadband linear synaptic filtering of head motion signals is thus achieved by coordinately tuned synaptic machinery that maintains physiological operation within inherent cell biological limitations. PMID:25704949
ERIC Educational Resources Information Center
Quinino, Roberto C.; Reis, Edna A.; Bessegato, Lupercio F.
2013-01-01
This article proposes the use of the coefficient of determination as a statistic for hypothesis testing in multiple linear regression based on distributions acquired by beta sampling. (Contains 3 figures.)
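The sampling result being exploited here is that, under the null hypothesis of no linear relationship, R² for k predictors and n observations follows a Beta(k/2, (n-k-1)/2) distribution. A quick simulation check (sample sizes and seed are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(8)
n, k, sims = 40, 3, 5000
r2 = np.empty(sims)
for s in range(sims):
    X = np.column_stack([np.ones(n), rng.normal(size=(n, k))])
    y = rng.normal(size=n)                      # H0: y unrelated to X
    b, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ b
    r2[s] = 1 - (resid @ resid) / np.sum((y - y.mean()) ** 2)

a_beta, b_beta = k / 2, (n - k - 1) / 2         # Beta(1.5, 18) here
print(r2.mean(), a_beta / (a_beta + b_beta))    # both approx k/(n-1) = 0.0769
print(np.quantile(r2, 0.95))                    # empirical 5%-level critical value
```

A test of overall regression significance then rejects when the observed R² exceeds the appropriate Beta quantile, which is the logic behind using R² directly as the test statistic.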
Multiple-source models for electron beams of a medical linear accelerator using BEAMDP computer code
Jabbari, Nasrollah; Barati, Amir Hoshang; Rahmatnezhad, Leili
2012-01-01
Aim The aim of this work was to develop multiple-source models for electron beams of the NEPTUN 10PC medical linear accelerator using the BEAMDP computer code. Background One of the most accurate techniques of radiotherapy dose calculation is the Monte Carlo (MC) simulation of radiation transport, which requires detailed information of the beam in the form of a phase-space file. The computing time required to simulate the beam data and obtain phase-space files from a clinical accelerator is significant. Calculation of dose distributions using multiple-source models is an alternative method to phase-space data as direct input to the dose calculation system. Materials and methods Monte Carlo simulation of the accelerator head was done, in which a record was kept of the particle phase-space regarding the details of the particle history. Multiple-source models were built from the phase-space files of Monte Carlo simulations. These simplified beam models were used to generate Monte Carlo dose calculations and to compare those calculations with phase-space data for electron beams. Results Comparison of the measured and calculated dose distributions using the phase-space files and multiple-source models for three electron beam energies showed that the measured and calculated values match each other well throughout the curves. Conclusion It was found that dose distributions calculated using both the multiple-source models and the phase-space data agree within 1.3%, demonstrating that the models can be used for dosimetry research purposes and dose calculations in radiotherapy. PMID:24377026
Enhancing multiple-point geostatistical modeling: 1. Graph theory and pattern adjustment
NASA Astrophysics Data System (ADS)
Tahmasebi, Pejman; Sahimi, Muhammad
2016-03-01
In recent years, higher-order geostatistical methods have been used for modeling of a wide variety of large-scale porous media, such as groundwater aquifers and oil reservoirs. Their popularity stems from their ability to account for qualitative data and the great flexibility that they offer for conditioning the models to hard (quantitative) data, which endow them with the capability for generating realistic realizations of porous formations with very complex channels, as well as features that are mainly a barrier to fluid flow. One group of such models consists of pattern-based methods that use a set of data points for generating stochastic realizations by which the large-scale structure and highly-connected features are reproduced accurately. The cross correlation-based simulation (CCSIM) algorithm, proposed previously by the authors, is a member of this group that has been shown to be capable of simulating multimillion cell models in a matter of a few CPU seconds. The method is, however, sensitive to the patterns' specifications, such as their boundaries and the number of replicates. In this paper the original CCSIM algorithm is reconsidered and two significant improvements are proposed for accurately reproducing large-scale patterns of heterogeneities in porous media. First, an effective boundary-correction method based on graph theory is presented, by which one identifies the optimal cutting path/surface for removing the patchiness and discontinuities in the realization of a porous medium. Next, a new pattern adjustment method is proposed that automatically transfers the features in a pattern to one that seamlessly matches the surrounding patterns. The original CCSIM algorithm is then combined with the two methods and is tested using various complex two- and three-dimensional examples. It should, however, be emphasized that the methods that we propose in this paper are applicable to other pattern-based geostatistical simulation methods.
Eloyan, Ani; Shou, Haochang; Shinohara, Russell T.; Sweeney, Elizabeth M.; Nebel, Mary Beth; Cuzzocreo, Jennifer L.; Calabresi, Peter A.; Reich, Daniel S.; Lindquist, Martin A.; Crainiceanu, Ciprian M.
2014-01-01
Brain lesion localization in multiple sclerosis (MS) is thought to be associated with the type and severity of adverse health effects. However, several factors hinder statistical analyses of such associations using large MRI datasets: 1) spatial registration algorithms developed for healthy individuals may be less effective on diseased brains and lead to different spatial distributions of lesions; 2) interpretation of results requires the careful selection of confounders; and 3) most approaches have focused on voxel-wise regression approaches. In this paper, we evaluated the performance of five registration algorithms and observed that conclusions regarding lesion localization can vary substantially with the choice of registration algorithm. Methods for dealing with confounding factors due to differences in disease duration and local lesion volume are introduced. Voxel-wise regression is then extended by the introduction of a metric that measures the distance between a patient-specific lesion mask and the population prevalence map. PMID:25233361
Sun Wei; Huang, Guo H.; Lv Ying; Li Gongchen
2012-06-15
Highlights: ► Inexact piecewise-linearization-based fuzzy flexible programming is proposed. ► It is the first application to waste management under multiple complexities. ► It tackles nonlinear economies-of-scale effects in interval-parameter constraints. ► It estimates costs more accurately than the linear-regression-based model. ► Uncertainties are decreased and more satisfactory interval solutions are obtained. - Abstract: To tackle nonlinear economies-of-scale (EOS) effects in interval-parameter constraints for a representative waste management problem, an inexact piecewise-linearization-based fuzzy flexible programming (IPFP) model is developed. In IPFP, interval parameters for waste amounts and transportation/operation costs can be quantified; aspiration levels for net system costs, as well as tolerance intervals for both capacities of waste treatment facilities and waste generation rates, can be reflected; and the nonlinear EOS effects transformed from the objective function to the constraints can be approximated. An interactive algorithm is proposed for solving the IPFP model, which in nature is an interval-parameter mixed-integer quadratically constrained programming model. To demonstrate the advantages of IPFP, two alternative models are developed for comparison. One is a conventional linear-regression-based inexact fuzzy programming model (IPFP2) and the other is an IPFP model with all right-hand sides of the fuzzy constraints being the corresponding interval numbers (IPFP3). The comparison results between IPFP and IPFP2 indicate that the optimized waste amounts would have similar patterns in both models. However, when dealing with EOS effects in constraints, IPFP2 may underestimate the net system costs while IPFP can estimate the costs more accurately. The comparison results between IPFP and IPFP3 indicate
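The piecewise-linearization idea itself is easy to sketch: replace a concave economies-of-scale cost curve by chords between breakpoints so that linear (or mixed-integer linear) solvers can handle it. The cost function, exponent and breakpoints below are invented for illustration and are not the paper's model.

```python
import numpy as np

# Assumed economies-of-scale operating cost: f(x) = 100 * x**0.8, nonlinear
# in the waste flow x. Piecewise linearization replaces f on [0, xmax] with
# straight segments between breakpoints.
def f(x):
    return 100.0 * x ** 0.8

def piecewise_approx(x, n_seg, xmax=50.0):
    bp = np.linspace(0.0, xmax, n_seg + 1)       # breakpoints
    return np.interp(x, bp, f(bp))               # linear between breakpoints

x = np.linspace(0.0, 50.0, 1001)
for n_seg in (2, 4, 16):
    err = np.max(np.abs(f(x) - piecewise_approx(x, n_seg)))
    print(n_seg, round(err, 2))   # approximation error drops as segments grow
```

In an optimization model each segment becomes a linear term (with SOS2 or binary variables enforcing adjacency), trading a small, controllable approximation error for solvability.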
Bjørnevik, Kjetil; Riise, Trond; Cortese, Marianna; Holmøy, Trygve; Kampman, Margitta T; Magalhaes, Sandra; Myhr, Kjell-Morten; Wolfson, Christina; Pugliatti, Maura
2016-01-01
Background: Several recent studies have found a higher risk of multiple sclerosis (MS) among people with a low level of education. This has been suggested to reflect an effect of smoking and lower vitamin D status in the social class associated with lower levels of education. Objective: The objective of this paper is to investigate the association between level of education and MS risk, adjusting for the known risk factors smoking, infectious mononucleosis, indicators of vitamin D levels and body size. Methods: Within the case-control study on Environmental Factors In MS (EnvIMS), 953 MS patients and 1717 healthy controls from Norway reported educational level and history of exposure to putative environmental risk factors. Results: A higher level of education was associated with decreased MS risk (p trend = 0.001), with an OR of 0.53 (95% CI 0.41–0.68) when comparing those with the highest and lowest level of education. This association was only moderately reduced after adjusting for known risk factors (OR 0.61, 95% CI 0.44–0.83). The estimates remained similar when cases with disease onset before age 28 were excluded. Conclusion: These findings suggest that factors related to lower socioeconomic status other than established risk factors are associated with MS risk. PMID:26014605
Sugama, H.; Watanabe, T.-H.; Nunami, M.
2009-11-15
Linearized model collision operators for multiple ion species plasmas are presented that conserve particles, momentum, and energy and satisfy adjointness relations and Boltzmann's H-theorem even for collisions between different particle species with unequal temperatures. The model collision operators are also written in the gyrophase-averaged form that can be applied to the gyrokinetic equation. Balance equations for the turbulent entropy density, the energy of electromagnetic fluctuations, the turbulent transport fluxes of particle and heat, and the collisional dissipation are derived from the gyrokinetic equation including the collision term and Maxwell equations. It is shown that, in the steady turbulence, the entropy produced by the turbulent transport fluxes is dissipated in part by collisions in the nonzonal-mode region and in part by those in the zonal-mode region after the nonlinear entropy transfer from nonzonal to zonal modes.
Linear differential equations and multiple zeta-values. III. Zeta(3)
NASA Astrophysics Data System (ADS)
Zakrzewski, Michał; Żoładek, Henryk
2012-01-01
We consider the hypergeometric equation (1 - t)∂t∂t∂g + x³g = 0, whose unique analytic solution φ₁(t; x) = 1 + O(t) near t = 0 becomes, at t = 1, a generating function for multiple zeta values: φ₁(1; x) = f₃(x) = 1 - ζ(3)x³ + ζ(3, 3)x⁶ - …. We apply the so-called WKB method to study solutions of the hypergeometric equation for large x and we calculate the corresponding Stokes matrices. We prove that the function f₃(x) near x = ∞ is also expressed via WKB type functions which are subject to a Stokes phenomenon. This implies that f₃(x) satisfies a sixth order linear differential equation with an irregular singularity at infinity.
Fuzzy linear model for production optimization of mining systems with multiple entities
NASA Astrophysics Data System (ADS)
Vujic, Slobodan; Benovic, Tomo; Miljanovic, Igor; Hudej, Marjan; Milutinovic, Aleksandar; Pavlovic, Petar
2011-12-01
Planning and production optimization within mining systems comprising multiple mines or several work sites (entities) by using fuzzy linear programming (LP) was studied. LP is one of the most commonly used operations research methods in mining engineering. After an introductory review of the properties and limitations of applying LP, short reviews of the general settings of deterministic and fuzzy LP models are presented. For the purpose of comparative analysis, the application of both LP models is presented using the example of the Bauxite Basin Niksic with five mines. The assessment shows that LP is an efficient mathematical modeling tool for production planning and for solving many other single-criteria optimization problems in mining engineering. After comparing the advantages and deficiencies of the deterministic and fuzzy LP models, the conclusion presents the benefits of the fuzzy LP model, while also stating that seeking the optimal production plan requires an overall analysis encompassing both LP modelling approaches.
First, Eric L; Gounaris, Chrysanthos E; Floudas, Christodoulos A
2012-01-23
Reaction mappings are of fundamental importance to researchers studying the mechanisms of chemical reactions and analyzing biochemical pathways. We have developed an automated method based on integer linear optimization (ILP) to identify optimal reaction mappings that minimize the number of bond changes. An alternate objective function is also proposed that minimizes the number of bond order changes. In contrast to previous approaches, our method produces mappings that respect stereochemistry. We also show how to locate multiple reaction mappings efficiently and determine which of those mappings correspond to distinct reaction mechanisms by automatically detecting molecular symmetries. We demonstrate our techniques through a number of computational studies on the GRI-Mech, KEGG LIGAND, and BioPath databases. The computational studies indicate that 99% of the 8078 reactions tested can be addressed within 1 CPU hour. The proposed framework has been incorporated into the Web tool DREAM ( http://selene.princeton.edu/dream/ ), which is freely available to the scientific community. PMID:22098204
NASA Astrophysics Data System (ADS)
Abuturab, Muhammad Rafiq
2015-11-01
A novel gyrator wavelet transform based non-linear multiple single-channel information fusion and authentication scheme is introduced. In this technique, each user channel is normalized, phase encoded, and modulated by a random phase function, and then multiplexed into a single-channel user ciphertext. Next, the secret channel of the corresponding user is phase encoded, modulated by a random phase function, and gyrator transformed, and then multiplexed into a single-channel secret ciphertext. The user ciphertext and secret ciphertext are multiplied to obtain a single-channel multiplex image, which is then inverse gyrator transformed. The resultant spectrum is phase- and amplitude-truncated to obtain the encrypted image and the asymmetric key, respectively. The encrypted image is then single-level 2-D discrete wavelet transformed. The information is decomposed into LL, HL, LH, and HH sub-bands. This process is repeated to obtain three sets of four sub-bands from three different images. Next, the individual sub-bands of each encrypted image are fused to obtain four fused sub-bands. Finally, the four fused sub-bands are inverse single-level 2-D discrete wavelet transformed to obtain the final encrypted image. The main advantage of the proposed system is that using multiple individual decryption keys (authentication key, asymmetric key, secret keys, and sub-band keys) for each user not only expands the key space but also supplies non-linear keys to control the system security. Moreover, the orders of the gyrator transform provide extra degrees of freedom. Theoretical analysis and numerical simulation results support the proposed method.
Small pitch fringe projection method with multiple linear fiber arrays for 3D shape measurement
NASA Astrophysics Data System (ADS)
Hayashi, Takumi; Fujigaki, Motoharu; Murata, Yorinobu
2014-07-01
3-D shape measurement systems using contactless methods are required for quality inspection of metal molds and electronic parts in industrial fields. A grating projection method with phase shifting has the advantages of high precision and high speed. Recently, the size of a BGA (ball grid array) has become smaller, so the pitch of the grating pattern projected onto the specimen should be smaller as well. In the conventional method, the fringe pattern is projected using an imaging lens. The focal depth becomes smaller in the case of reduced projection, so it is difficult to project a grating pattern with small pitch onto an object at large incident angles. The authors recently proposed a light source stepping method using a linear LED device. It is easy to shrink the projected grating pitch because this projection method does not use an imaging lens. The pitch of the projected grating depends on the width of the light source, so there is a limit to shrinking the projected grating pitch set by the size of the LED chip. In this paper, a small pitch fringe projection method with multiple linear fiber arrays for 3D shape measurement is proposed. The width of the fiber array is 30 μm, an order of magnitude smaller than the width of the LED chip. An experimental result of 3-D shape measurement with small pitch projection at large incident angles is shown.
NASA Astrophysics Data System (ADS)
Jia, Jingfei; Kim, Hyun K.; Hielscher, Andreas H.
2015-12-01
It is well known that the radiative transfer equation (RTE) provides more accurate tomographic results than its diffusion approximation (DA). However, RTE-based tomographic reconstruction codes have limited applicability in practice due to their high computational cost. In this article, we propose a new efficient method for solving the RTE forward problem with multiple light sources in an all-at-once manner instead of solving it for each source separately. To this end, we introduce a novel linear solver called the block biconjugate gradient stabilized method (block BiCGStab) that makes full use of the information shared between different right-hand sides to accelerate solution convergence. Two parallelized block BiCGStab methods are proposed for additional acceleration when the number of available threads is limited. We evaluate the performance of this algorithm with numerical simulation studies involving the Delta-Eddington approximation to the scattering phase function. The results show that the single-threaded block RTE solver proposed here reduces computation time by a factor of 1.5-3 as compared to the traditional sequential solution method, and the parallel block solver by a factor of 1.5 as compared to the traditional parallel sequential method. This block linear solver is, moreover, independent of the discretization schemes and preconditioners used; thus further acceleration and higher accuracy can be expected when it is combined with other existing discretization schemes or preconditioners.
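The multiple right-hand-side setting can be sketched with the conventional per-source baseline the paper improves on. The matrix below is a generic sparse stand-in (not an actual RTE discretization), and scipy's standard single-vector BiCGStab is used; the paper's block variant would process all columns jointly.

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import bicgstab

# Baseline: one sparse system solved once per light source with scipy's
# single-vector BiCGStab. The block BiCGStab of the paper instead processes
# all right-hand sides at once, sharing Krylov information between them.
n = 200
A = diags([-1.0, 4.0, -1.0], [-1, 0, 1], shape=(n, n), format="csr")

rng = np.random.default_rng(0)
B = rng.standard_normal((n, 4))        # one column per light source

# Sequential baseline: solve A x = b separately for each source.
X = np.column_stack([bicgstab(A, B[:, j])[0] for j in range(B.shape[1])])
print(np.max(np.abs(A @ X - B)))       # residual over all four solves
```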
Multiple regression technique for Pth degree polynomials with and without linear cross products
NASA Technical Reports Server (NTRS)
Davis, J. W.
1973-01-01
A multiple regression technique was developed by which the nonlinear behavior of specified independent variables can be related to a given dependent variable. The polynomial expression can be of Pth degree and can incorporate N independent variables. Two cases are treated so that mathematical models can be studied both with and without linear cross products. The resulting surface fits can be used to summarize trends for a given phenomenon and provide a mathematical relationship for subsequent analysis. To implement this technique, separate computer programs were developed for the case without linear cross products and for the case incorporating such cross products; these evaluate the various constants in the model regression equation. In addition, the significance of the estimated regression equation is considered, and the standard deviation, the F statistic, the maximum absolute percent error, and the average of the absolute values of the percent error are evaluated. The computer programs and their manner of utilization are described. Sample problems are included to illustrate the use and capability of the technique; these show the output formats and typical plots comparing computer results to each set of input data.
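The two cases can be sketched with ordinary least squares; the surface, variable count, and coefficients below are illustrative, not taken from the report.

```python
import numpy as np

# Fit a 2nd-degree polynomial in two independent variables, once without and
# once with the linear cross product term x1*x2. The known surface below
# contains a cross product, so only the second model can fit it exactly.
rng = np.random.default_rng(42)
x1 = rng.uniform(-1, 1, 50)
x2 = rng.uniform(-1, 1, 50)
y = 1.0 + 2.0 * x1 - 3.0 * x2 + 0.5 * x1**2 + 1.5 * x1 * x2

# Design matrix without cross products: [1, x1, x2, x1^2, x2^2]
X_plain = np.column_stack([np.ones_like(x1), x1, x2, x1**2, x2**2])
# Design matrix with the linear cross product x1*x2 appended
X_cross = np.column_stack([X_plain, x1 * x2])

coef_plain, *_ = np.linalg.lstsq(X_plain, y, rcond=None)
coef_cross, *_ = np.linalg.lstsq(X_cross, y, rcond=None)

resid_plain = np.max(np.abs(X_plain @ coef_plain - y))
resid_cross = np.max(np.abs(X_cross @ coef_cross - y))
print(resid_plain, resid_cross)   # the cross-product fit is essentially exact
```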
Accounting for data errors discovered from an audit in multiple linear regression.
Shepherd, Bryan E; Yu, Chang
2011-09-01
A data coordinating team performed onsite audits and discovered discrepancies between the data sent to the coordinating center and that recorded at sites. We present statistical methods for incorporating audit results into analyses. This can be thought of as a measurement error problem, where the distribution of errors is a mixture with a point mass at 0. If the error rate is nonzero, then even if the mean of the discrepancy between the reported and correct values of a predictor is 0, naive estimates of the association between two continuous variables will be biased. We consider scenarios where there are (1) errors in the predictor, (2) errors in the outcome, and (3) possibly correlated errors in the predictor and outcome. We show how to incorporate the error rate and magnitude, estimated from a random subset (the audited records), to compute unbiased estimates of association and proper confidence intervals. We then extend these results to multiple linear regression where multiple covariates may be incorrect in the database and the rate and magnitude of the errors may depend on study site. We study the finite sample properties of our estimators using simulations, discuss some practical considerations, and illustrate our methods with data from 2815 HIV-infected patients in Latin America, of whom 234 had their data audited using a sequential auditing plan. PMID:21281274
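The bias described above can be illustrated with a small simulation. The correction used below is the classical attenuation (regression-calibration) adjustment with the error variance estimated from the audited subset; it is a hedged stand-in for the idea, not necessarily the authors' exact estimator.

```python
import numpy as np

# Simulate a predictor recorded with a point-mass-at-zero error mixture,
# audit a random subset, and correct the attenuated naive slope.
rng = np.random.default_rng(1)
n = 100_000
x = rng.standard_normal(n)                    # true predictor
y = 2.0 * x + rng.standard_normal(n)          # outcome, true slope = 2

# Database values: 20% of records carry an error (mean zero, sd 1),
# the rest are recorded exactly (the point mass at 0).
err = rng.standard_normal(n) * (rng.random(n) < 0.2)
w = x + err                                    # predictor as recorded

naive = np.cov(w, y)[0, 1] / np.var(w)         # attenuated naive slope

# Audit a random subset to estimate the error variance, then correct.
audit = rng.choice(n, 5_000, replace=False)
var_u = np.var(w[audit] - x[audit])
corrected = naive * np.var(w) / (np.var(w) - var_u)
print(naive, corrected)                        # naive is biased toward zero
```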
NASA Astrophysics Data System (ADS)
Li, Jun; Li, Huan; Long, Libing; Liao, Guisheng; Griffiths, Hugh
2013-12-01
A novel scheme to achieve three-dimensional (3D) target location in bistatic radar systems is evaluated. The proposed scheme exploits the additional information available in bistatic radar, namely the transmit angles, to estimate the 3D coordinates of targets by using multiple-input multiple-output techniques with a uniform circular array on transmit and a uniform linear array on receive. The transmit azimuth and elevation angles and the receive cone angle of the targets are first extracted from the receive data, and the 3D coordinates are then calculated on the basis of these angles. The geometric dilution of precision, which is based on the root Cramér-Rao bound of the angles, is derived to evaluate the performance bound of the proposed scheme. Further, an ESPRIT based algorithm is developed to estimate the 3D coordinates of the targets. The advantages of this scheme are that the hardware of the receive array is reduced and the 3D coordinates of the targets can be estimated in the absence of range information in bistatic radar. Simulations and analysis show that the proposed scheme has the potential to achieve good performance with low-frequency radar.
Multiple functional linear model for association analysis of RNA-seq with imaging
Jiang, Junhai; Lin, Nan; Guo, Shicheng; Chen, Jinyun; Xiong, Momiao
2015-01-01
Emerging integrative analysis of genomic and anatomical imaging data, which has not been well developed, provides invaluable information for the holistic discovery of the genomic structure of disease and has the potential to open a new avenue for discovering novel disease susceptibility genes that cannot be identified if the data are analyzed separately. A key issue in the success of imaging and genomic data analysis is how to reduce their dimensions. Most previous methods for imaging information extraction and RNA-seq data reduction do not explore imaging spatial information and often ignore gene expression variation at the genomic positional level. To overcome these limitations, we extend functional principal component analysis from one dimension to two dimensions (2DFPCA) for representing imaging data and develop a multiple functional linear model (MFLM) in which functional principal scores of images are taken as multiple quantitative traits and the RNA-seq profile across a gene is taken as a function predictor for assessing the association of gene expression with images. The developed method has been applied to image and RNA-seq data of ovarian cancer and kidney renal clear cell carcinoma (KIRC) studies. We identified 24 and 84 genes whose expressions were associated with imaging variations in the ovarian cancer and KIRC studies, respectively. Our results showed that many genes significantly associated with images were not differentially expressed, but revealed their morphological and metabolic functions. The results also demonstrated that the peaks of the estimated regression coefficient function in the MFLM often allowed the discovery of splicing sites and multiple isoforms of gene expressions. PMID:26753102
Feng, Danqi; Xie, Heng; Qian, Lifen; Bai, Qinhong; Sun, Junqiang
2015-06-29
We experimentally demonstrate a novel approach for microwave frequency measurement utilizing the birefringence effect in a highly non-linear fiber (HNLF). A detailed theoretical analysis is presented to implement an adjustable measurement range and resolution. By stimulating a complementary polarization-domain interferometer pair in the HNLF, a mathematical expression that relates the microwave frequency to an amplitude comparison function is developed. We carry out a proof-of-concept experiment. A frequency measurement range of 2.5-30 GHz with a measurement error within 0.5 GHz is achieved, except for the 16-17.5 GHz band. This method is all-optical and requires no high-speed electronic components. PMID:26191769
NASA Astrophysics Data System (ADS)
Treuer, H.; Hoevels, M.; Luyken, K.; Gierich, A.; Kocher, M.; Müller, R.-P.; Sturm, V.
2000-08-01
We have developed a densitometric method for measuring the isocentric accuracy and the accuracy of marking the isocentre position for linear accelerator based radiosurgery with circular collimators and room lasers. Isocentric shots are used to determine the accuracy of marking the isocentre position with room lasers and star shots are used to determine the wobble of the gantry and table rotation movement, the effect of gantry sag, the stereotactic collimator alignment, and the minimal distance between gantry and table rotation axes. Since the method is based on densitometric measurements, beam spot stability is implicitly tested. The method developed is also suitable for quality assurance and has proved to be useful in optimizing isocentric accuracy. The method is simple to perform and only requires a film box and film scanner for instrumentation. Thus, the method has the potential to become widely available and may therefore be useful in standardizing the description of linear accelerator based radiosurgical systems.
Sun, Wei; Huang, Guo H; Lv, Ying; Li, Gongchen
2012-06-01
To tackle nonlinear economies-of-scale (EOS) effects in interval-parameter constraints for a representative waste management problem, an inexact piecewise-linearization-based fuzzy flexible programming (IPFP) model is developed. In IPFP, interval parameters for waste amounts and transportation/operation costs can be quantified; aspiration levels for net system costs, as well as tolerance intervals for both capacities of waste treatment facilities and waste generation rates, can be reflected; and the nonlinear EOS effects transformed from the objective function to the constraints can be approximated. An interactive algorithm is proposed for solving the IPFP model, which in nature is an interval-parameter mixed-integer quadratically constrained programming model. To demonstrate the IPFP's advantages, two alternative models are developed to compare their performances. One is a conventional linear-regression-based inexact fuzzy programming model (IPFP2) and the other is an IPFP model with all right-hand sides of fuzzy constraints being the corresponding interval numbers (IPFP3). The comparison results between IPFP and IPFP2 indicate that the optimized waste amounts would have similar patterns in both models. However, when dealing with EOS effects in constraints, IPFP2 may underestimate the net system costs while IPFP can estimate the costs more accurately. The comparison results between IPFP and IPFP3 indicate that their solutions would be significantly different. The decreased system uncertainties in IPFP's solutions demonstrate its effectiveness for providing more satisfactory interval solutions than IPFP3. Following its first application to waste management, the IPFP can be potentially applied to other environmental problems under multiple complexities. PMID:22370050
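The piecewise linearization at the heart of IPFP can be illustrated with a small sketch; the concave cost curve and breakpoints below are invented for illustration and are not taken from the study.

```python
import numpy as np

# Approximate a concave economies-of-scale cost curve C(Q) = a * Q**b
# (0 < b < 1) by linear segments between breakpoints, so the curve can
# enter a linear or quadratically constrained program.
a, b = 50.0, 0.6

def cost(q):
    return a * np.asarray(q, dtype=float) ** b

breakpoints = np.array([0.0, 50.0, 150.0, 300.0, 500.0])  # waste amounts

def piecewise_cost(q):
    """Linear interpolation of cost() between the breakpoints."""
    return np.interp(q, breakpoints, cost(breakpoints))

# The piecewise model is exact at the breakpoints and, for a concave curve,
# only slightly underestimates between them, unlike a single global linear
# regression fitted over the whole range.
print(cost(200.0), piecewise_cost(200.0))
```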
Optimization of end-members used in multiple linear regression geochemical mixing models
NASA Astrophysics Data System (ADS)
Dunlea, Ann G.; Murray, Richard W.
2015-11-01
Tracking marine sediment provenance (e.g., of dust, ash, hydrothermal material, etc.) provides insight into contemporary ocean processes and helps construct paleoceanographic records. In a simple system with only a few end-members that can be easily quantified by a unique chemical or isotopic signal, chemical ratios and normative calculations can help quantify the flux of sediment from the few sources. In a more complex system (e.g., each element comes from multiple sources), more sophisticated mixing models are required. MATLAB codes published in Pisias et al. solidified the foundation for application of a Constrained Least Squares (CLS) multiple linear regression technique that can use many elements and several end-members in a mixing model. However, rigorous sensitivity testing to check the robustness of the CLS model is time and labor intensive. MATLAB codes provided in this paper reduce the time and labor involved and facilitate finding a robust and stable CLS model. By quickly comparing the goodness of fit between thousands of different end-member combinations, users are able to identify trends in the results that reveal the CLS solution uniqueness and the end-member composition precision required for a good fit. Users can also rapidly check that they have the appropriate number and type of end-members in their model. In the end, these codes improve the user's confidence that the final CLS model(s) they select are the most reliable solutions. These advantages are demonstrated by application of the codes in two case studies of well-studied datasets (Nazca Plate and South Pacific Gyre).
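A minimal sketch of the constrained least squares idea, using scipy's non-negative least squares in place of the published MATLAB codes; the element and end-member compositions below are made up for illustration.

```python
import numpy as np
from scipy.optimize import nnls

# Given the elemental compositions of assumed end-members, find nonnegative
# mixing fractions that best reproduce a bulk sediment composition.
end_members = np.array([        # rows: elements, cols: end-members
    [60.0,  1.0, 50.0],         # e.g. a SiO2-like element
    [15.0,  2.0,  1.0],         # e.g. an Al2O3-like element
    [ 5.0, 40.0,  2.0],         # e.g. an Fe-rich hydrothermal tracer
])
true_frac = np.array([0.7, 0.1, 0.2])
sample = end_members @ true_frac          # synthetic bulk composition

frac, resid = nnls(end_members, sample)   # fractions constrained >= 0
frac = frac / frac.sum()                  # renormalize to sum to one
print(frac, resid)
```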
NASA Astrophysics Data System (ADS)
Montanari, A.
2006-12-01
This contribution introduces a statistically based approach for uncertainty assessment in hydrological modeling, in an optimality context. Indeed, in several real-world applications, the user needs to select the model that is deemed to be the best possible choice according to a given goodness-of-fit criterion. In this case, it is extremely important to assess the model uncertainty, intended as the range around the model output within which the measured hydrological variable is expected to fall with a given probability. This indication allows the user to quantify the risk associated with a decision that is based on the model response. The technique proposed here is carried out by inferring the probability distribution of the hydrological model error through a non-linear multiple regression approach, depending on an arbitrary number of selected conditioning variables. These may include the current and previous model output as well as internal state variables of the model. The purpose is to indirectly relate the model error to the sources of uncertainty, through the conditioning variables. The method can be applied to any model of arbitrary complexity, including distributed approaches. The probability distribution of the model error is derived in the Gaussian space, through a meta-Gaussian approach. The normal quantile transform is applied in order to make the marginal probability distributions of the model error and the conditioning variables Gaussian. Then the above marginal probability distributions are related through the multivariate Gaussian distribution, whose parameters are estimated via multiple regression. Application of the inverse of the normal quantile transform allows the user to derive the confidence limits of the model output for an assigned significance level. The proposed technique is valid under statistical assumptions that are essentially those conditioning the validity of the multiple regression in the Gaussian space. Statistical tests
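The normal quantile transform used to Gaussianize the marginals can be sketched as follows; the skewed "model errors" are synthetic.

```python
import numpy as np
from scipy.stats import norm, rankdata

# Normal quantile transform (NQT): map each value to the standard normal
# quantile of its empirical plotting position, making the marginal
# distribution Gaussian by construction.
rng = np.random.default_rng(7)
errors = rng.exponential(2.0, size=1000) - 1.0   # skewed model errors

def nqt(x):
    """Rank-based Gaussianization with Weibull plotting positions."""
    ranks = rankdata(x)                  # 1..n, ties averaged
    p = ranks / (len(x) + 1.0)           # plotting positions in (0, 1)
    return norm.ppf(p)

z = nqt(errors)
print(z.mean(), z.std())   # close to standard normal by construction
```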
NASA Astrophysics Data System (ADS)
Ahunov, Roman R.; Kuksenko, Sergey P.; Gazizov, Talgat R.
2016-06-01
A multiple solution of linear algebraic systems with dense matrices by iterative methods is considered. To accelerate the process, recomputing of the preconditioning matrix is used. An a priori condition for the recomputing, based on the change of the arithmetic mean of the solution time during the multiple solution, is proposed. To confirm the effectiveness of the proposed approach, numerical experiments using the iterative methods BiCGStab and CGS for four different sets of matrices on two examples of microstrip structures are carried out. For the solution of 100 linear systems, an acceleration of up to 1.6 times, compared to the approach without recomputing, is obtained.
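The recomputing condition can be sketched as follows. The matrices and drift model are synthetic stand-ins (not the microstrip structures of the paper), and sparse ILU with BiCGStab stands in for its solver and preconditioner pair.

```python
import time

import numpy as np
from scipy.sparse import diags, eye
from scipy.sparse.linalg import LinearOperator, bicgstab, spilu

# Solve a sequence of slowly drifting systems, reuse one ILU preconditioner,
# and recompute it only when the current solve time exceeds the running
# mean of earlier solve times (the a priori condition of the paper).
n = 400
base = diags([-1.0, 4.0, -1.0], [-1, 0, 1], shape=(n, n), format="csc")
rng = np.random.default_rng(3)
b = rng.standard_normal(n)

times = []
ilu = spilu(base)                                   # initial preconditioner
for k in range(20):
    A = (base + 0.05 * k * eye(n, format="csc")).tocsc()  # drifting system
    M = LinearOperator((n, n), matvec=ilu.solve)
    t0 = time.perf_counter()
    x, info = bicgstab(A, b, M=M)
    dt = time.perf_counter() - t0
    if times and dt > np.mean(times):               # recomputing condition
        ilu = spilu(A)                              # refresh preconditioner
    times.append(dt)
print(info, len(times))
```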
A Mixed Integer Linear Program for Solving a Multiple Route Taxi Scheduling Problem
NASA Technical Reports Server (NTRS)
Montoya, Justin Vincent; Wood, Zachary Paul; Rathinam, Sivakumar; Malik, Waqar Ahmad
2010-01-01
Aircraft movements on taxiways at busy airports often create bottlenecks. This paper introduces a mixed integer linear program to solve a Multiple Route Aircraft Taxi Scheduling Problem. The outputs of the model are optimal taxi schedules, which include routing decisions for taxiing aircraft. The model extends an existing single route formulation to include routing decisions. An efficient comparison framework compares the multi-route formulation and the single route formulation. The multi-route model is exercised for east side airport surface traffic at Dallas/Fort Worth International Airport to determine whether any arrival taxi time savings can be achieved by allowing arrivals to have two taxi routes: a route that crosses an active departure runway and a perimeter route that avoids the crossing. Results indicate that the multi-route formulation yields reduced arrival taxi times over the single route formulation only when a perimeter taxiway is used. In conditions where the departure aircraft are given an optimal and fixed takeoff sequence, cumulative arrival taxi time savings in the multi-route formulation can be as high as 3.6 hours over the single route formulation. If the departure sequence is not optimal, the multi-route formulation yields smaller taxi time savings over the single route formulation, but the average arrival taxi time is still significantly decreased.
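The routing-choice part of such a formulation can be sketched as a toy mixed integer linear program; the taxi times and the runway-crossing capacity below are invented for illustration, and the paper's model additionally schedules taxi times.

```python
import numpy as np
from scipy.optimize import Bounds, LinearConstraint, milp

# Two arriving aircraft each pick either the runway-crossing route (5 min)
# or the longer perimeter route (8 min); at most one aircraft may cross
# the active departure runway.
# Decision variables: [a1_cross, a1_perim, a2_cross, a2_perim], all binary.
c = np.array([5.0, 8.0, 5.0, 8.0])     # taxi time of each (aircraft, route)

A = np.array([
    [1, 1, 0, 0],                      # aircraft 1 takes exactly one route
    [0, 0, 1, 1],                      # aircraft 2 takes exactly one route
    [1, 0, 1, 0],                      # at most one runway crossing
])
cons = LinearConstraint(A, lb=[1, 1, 0], ub=[1, 1, 1])

res = milp(c, constraints=cons, integrality=np.ones(4), bounds=Bounds(0, 1))
print(res.fun, res.x)                  # optimal total taxi time: 13.0
```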
Multiple Use One-Sided Hypotheses Testing in Univariate Linear Calibration
NASA Technical Reports Server (NTRS)
Krishnamoorthy, K.; Kulkarni, Pandurang M.; Mathew, Thomas
1996-01-01
Consider a normally distributed response variable, related to an explanatory variable through the simple linear regression model. Data obtained on the response variable, corresponding to known values of the explanatory variable (i.e., calibration data), are to be used for testing hypotheses concerning unknown values of the explanatory variable. We consider the problem of testing an unlimited sequence of one-sided hypotheses concerning the explanatory variable, using the corresponding sequence of values of the response variable and the same set of calibration data. This is the situation of multiple use of the calibration data. The tests derived in this context are characterized by two types of uncertainties: one uncertainty associated with the sequence of values of the response variable, and a second uncertainty associated with the calibration data. We derive tests based on a condition that incorporates both of these uncertainties. The solution has practical applications in the decision limit problem. We illustrate our results using an example dealing with the estimation of blood alcohol concentration based on breath estimates of the alcohol concentration. In the example, the problem is to test whether the unknown blood alcohol concentration of an individual exceeds a threshold that is safe for driving.
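The underlying inverse prediction step can be sketched as follows. The calibration numbers are invented for illustration, and this is the classical single-use estimate only; the paper's multiple-use procedure additionally controls uncertainty across an unlimited sequence of such tests, which this sketch does not do.

```python
import numpy as np

# Fit the calibration line (breath reading vs. known blood alcohol), then
# invert it to estimate an unknown blood alcohol level from a new breath
# reading and compare it to a legal threshold.
rng = np.random.default_rng(11)
x_cal = np.linspace(0.00, 0.20, 25)                      # known blood alcohol
y_cal = 1.1 * x_cal + 0.005 + rng.normal(0, 0.004, 25)   # breath readings

b1, b0 = np.polyfit(x_cal, y_cal, 1)     # slope and intercept of calibration

y_new = 0.105                            # new breath reading
x_hat = (y_new - b0) / b1                # classical inverse estimate
print(x_hat, x_hat > 0.08)               # compare to a 0.08 threshold
```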
Zheng, Jialin; Zhuang, Wei; Yan, Nian; Kou, Gang; Peng, Hui; McNally, Clancy; Erichsen, David; Cheloha, Abby; Herek, Shelley; Shi, Chris
2004-01-01
The ability to identify neuronal damage in the dendritic arbor during HIV-1-associated dementia (HAD) is crucial for designing specific therapies for the treatment of HAD. To study this process, we utilized a computer-based image analysis method to quantitatively assess HIV-1 viral protein gp120 and glutamate-mediated individual neuronal damage in cultured cortical neurons. Changes in the number of neurites, arbors, branch nodes, cell body area, and average arbor lengths were determined and a database was formed (http://dm.ist.unomaha.edu/database.htm). We further proposed a two-class model of multiple criteria linear programming (MCLP) to classify such HIV-1-mediated neuronal dendritic and synaptic damages. Given certain classes, including treatments with brain-derived neurotrophic factor (BDNF), glutamate, gp120 or non-treatment controls from our in vitro experimental systems, we used the two-class MCLP model to determine the data patterns between classes in order to gain insight about neuronal dendritic damages. This knowledge can be applied in principle to the design and study of specific therapies for the prevention or reversal of neuronal damage associated with HAD. Finally, the MCLP method was compared with a well-known artificial neural network algorithm to test for the relative potential of different data mining applications in HAD research. PMID:15365193
NASA Astrophysics Data System (ADS)
Urrutia, Jackie D.; Tampis, Razzcelle L.; Mercado, Joseph; Baygan, Aaron Vito M.; Baccay, Edcon B.
2016-02-01
The objective of this research is to formulate a mathematical model for the Philippines' Real Gross Domestic Product (Real GDP). The following factors are considered: Consumers' Spending (x1), Government's Spending (x2), Capital Formation (x3) and Imports (x4) as the independent variables that can influence the Real GDP of the Philippines (y). The researchers used the normal estimation equation in matrix form to create the model for Real GDP and used α = 0.01. The researchers analyzed quarterly data from 1990 to 2013. The data were acquired from the National Statistical Coordination Board (NSCB), resulting in a total of 96 observations for each variable. The data underwent a logarithmic transformation, particularly the dependent variable (y), to satisfy all the assumptions of multiple linear regression analysis. The mathematical model for Real GDP was formulated using matrices through MATLAB. Based on the results, only three of the independent variables are significant to the dependent variable, namely Consumers' Spending (x1), Capital Formation (x3) and Imports (x4), and hence can predict Real GDP (y). The regression analysis shows a coefficient of determination of 98.7%, i.e., the independent variables explain 98.7% of the variation in the dependent variable. With a 97.6% result in the paired t-test, the predicted values obtained from the model showed no significant difference from the actual values of Real GDP. This research will be essential in appraising forthcoming changes and in aiding the Government in implementing policies for the development of the economy.
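The normal estimation equation in matrix form, β = (XᵀX)⁻¹Xᵀy, can be sketched with synthetic stand-in data (not the NSCB series):

```python
import numpy as np

# Solve the normal equations for a multiple linear regression in matrix
# form, as in the study's MATLAB formulation. Data are synthetic.
rng = np.random.default_rng(5)
n = 96                                   # 96 quarterly observations
X = np.column_stack([
    np.ones(n),                          # intercept
    rng.normal(100, 10, n),              # stand-in for consumers' spending
    rng.normal(40, 5, n),                # stand-in for capital formation
    rng.normal(60, 8, n),                # stand-in for imports
])
beta_true = np.array([0.5, 0.8, 0.3, -0.2])
y = X @ beta_true + rng.normal(0, 0.1, n)

beta_hat = np.linalg.solve(X.T @ X, X.T @ y)   # normal equations
r2 = 1 - np.sum((y - X @ beta_hat) ** 2) / np.sum((y - y.mean()) ** 2)
print(beta_hat.round(2), round(r2, 3))
```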
NASA Technical Reports Server (NTRS)
Tiwari, Surendra N.; Kathong, Monchai
1987-01-01
The feasibility of the multiple grid technique is investigated by solving linear hyperbolic equations for simple two- and three-dimensional cases. The results are compared with exact solutions and with those obtained from single grid calculations. It is demonstrated that the technique works reasonably well when the two grid systems contain grid cells of comparable sizes. The study indicates that use of the multiple grid does not introduce any significant error and that it can be used to attack more complex problems.
ERIC Educational Resources Information Center
Moss-Morris, Rona; Dennison, Laura; Landau, Sabine; Yardley, Lucy; Silber, Eli; Chalder, Trudie
2013-01-01
Objective: The aims were (a) to test the effectiveness of a nurse-led cognitive behavioral therapy (CBT) program to assist adjustment in the early stages of multiple sclerosis (MS) and (b) to determine moderators of treatment including baseline distress, social support (SS), and treatment preference. Method: Ninety-four ambulatory people with MS…
NASA Astrophysics Data System (ADS)
Shu, Yuqin; Lam, Nina S. N.
2011-01-01
Detailed estimates of carbon dioxide emissions at fine spatial scales are critical to both modelers and decision makers dealing with global warming and climate change. Globally, traffic-related emissions of carbon dioxide are growing rapidly. This paper presents a new method based on a multiple linear regression model to disaggregate traffic-related CO2 emission estimates from the parish-level scale to a 1 × 1 km grid scale. Considering the allocation factors (population density, urban area, income, road density) together, we used correlation and regression analysis to determine the relationship between these factors and traffic-related CO2 emissions, and developed the best-fit model. The method was applied to downscale the traffic-related CO2 emission values by parish (i.e. county) for the State of Louisiana into 1-km² grid cells. Among the four parishes with the highest traffic-related CO2 emissions, East Baton Rouge has both the largest area with above-average CO2 emissions and the smallest area with no CO2 emissions, while Orleans has the highest CO2 emissions per unit area. The results reveal that high CO2 emissions are concentrated in the dense road networks of urban areas with high population density, while low CO2 emissions are distributed across rural areas with low population density and sparse road networks. The proposed method can be used to identify emission "hot spots" at fine scale and is considered more accurate and less time-consuming than previous methods.
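The downscaling step can be sketched as spreading a parish total over its grid cells in proportion to regression-predicted intensities. All cell values and weights below are hypothetical illustrations, not the paper's fitted model or Louisiana data.

```python
import numpy as np

# Hypothetical allocation-factor values for four 1-km2 cells in one parish
pop_density = np.array([50.0, 500.0, 1200.0, 80.0])   # persons per km2
road_density = np.array([0.2, 1.5, 3.0, 0.4])         # km of road per km2

# Illustrative linear combination standing in for the fitted regression
predicted = 0.6 * pop_density + 40.0 * road_density
weights = predicted / predicted.sum()  # normalize so cell shares sum to 1

parish_total = 1000.0                  # parish-level CO2 emissions (kt)
cell_emissions = parish_total * weights
```

Normalizing the predicted intensities guarantees that the disaggregated cell values sum exactly back to the known parish total, which is the key consistency constraint in this kind of downscaling.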
NASA Astrophysics Data System (ADS)
Indei, Tsutomu; Takimoto, Jun-ichi
2010-11-01
We have developed a single-chain theory that describes the dynamics of associating polymer chains carrying multiple associative groups (or stickers) in the transient network formed by themselves, and studied the linear viscoelastic properties of this network. It is shown that if the average number N̄ of stickers per chain associated with network junctions is large, a terminal relaxation time τA proportional to τX N̄² appears, where τX is the interval during which an associated sticker returns to its equilibrium position through one or more dissociation steps. In this lower-frequency regime ω < 1/τX, the moduli are well described in terms of the Rouse model with the longest relaxation time τA. A large value of N̄ is realized for chains carrying many stickers whose rate of association with network junctions is much larger than the dissociation rate. This associative Rouse behavior stems from the association/dissociation processes of stickers and is different from the ordinary Rouse behavior in the higher-frequency regime, which originates from the thermal segmental motion between stickers. If N̄ is not large, the dynamic shear moduli are well described in terms of the Maxwell model characterized by the single relaxation time τX in the moderate- and lower-frequency regimes. Thus, as N̄ increases, a transition occurs in the viscoelastic relaxation behavior for ω < 1/τX from the Maxwell type to the Rouse type. All these results are obtained under the affine deformation assumption for junction points. We also studied the effect of junction fluctuations away from affine motion on the plateau modulus by introducing a virtual spring for bound stickers. It is shown that the plateau modulus is not affected by the junction fluctuations.
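The single-relaxation-time Maxwell model referenced above has closed-form dynamic moduli, G'(ω) = G₀(ωτ)²/(1+(ωτ)²) and G''(ω) = G₀ωτ/(1+(ωτ)²). A short numerical sketch with illustrative parameter values:

```python
import numpy as np

def maxwell_moduli(omega, G0, tau):
    """Storage (G') and loss (G'') moduli of a single-mode Maxwell model."""
    wt = omega * tau
    return G0 * wt**2 / (1 + wt**2), G0 * wt / (1 + wt**2)

omega = np.logspace(-3, 3, 7)  # rad/s, spanning both regimes
Gp, Gpp = maxwell_moduli(omega, G0=1.0, tau=1.0)
```

At ωτ = 1 the two moduli cross at G₀/2; below the crossover the response is liquid-like (G'' > G'), above it solid-like (G' > G'').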
lp-lq penalty for sparse linear and sparse multiple kernel multitask learning.
Rakotomamonjy, Alain; Flamary, Rémi; Gasso, Gilles; Canu, Stéphane
2011-08-01
Recently, there has been much interest in the multitask learning (MTL) problem with the constraint that tasks should share a common sparsity profile. Such a problem can be addressed through a regularization framework where the regularizer induces a joint-sparsity pattern between task decision functions. We follow this principled framework and focus on l(p)-l(q) (with 0 ≤ p ≤ 1 and 1 ≤ q ≤ 2) mixed norms as sparsity-inducing penalties. Our motivation for addressing this larger class of penalties is to adapt the penalty to the problem at hand, leading to better performance and better sparsity patterns. For solving the problem in the general multiple kernel case, we first derive a variational formulation of the l(1)-l(q) penalty, which helps us propose an alternating optimization algorithm. Although very simple, the latter algorithm provably converges to the global minimum of the l(1)-l(q) penalized problem. For the linear case, we extend existing work on accelerated proximal gradient methods to this penalty. Our contribution in this context is to provide an efficient scheme for computing the l(1)-l(q) proximal operator. Then, for the more general case with p < 1, we solve the resulting nonconvex problem through a majorization-minimization approach. The resulting algorithm is an iterative scheme which, at each iteration, solves a weighted l(1)-l(q) sparse MTL problem. Empirical evidence from a toy dataset and real-world datasets dealing with brain-computer interface single-trial electroencephalogram classification and protein subcellular localization shows the benefit of the proposed approaches and algorithms. PMID:21813358
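For the special case q = 2, the l(1)-l(q) proximal operator reduces to row-wise group soft-thresholding, sketched minimally below; the paper's scheme for general q in (1, 2] is more involved than this special case.

```python
import numpy as np

def prox_l1_l2(W, lam):
    """Proximal operator of lam * sum_j ||W[j, :]||_2.
    Rows index features, columns index tasks; each row is shrunk toward
    zero by lam, and zeroed entirely if its norm falls below lam."""
    norms = np.linalg.norm(W, axis=1, keepdims=True)
    scale = np.maximum(0.0, 1.0 - lam / np.maximum(norms, 1e-12))
    return scale * W

W = np.array([[3.0, 4.0],    # row norm 5: shrunk to norm 4
              [0.3, 0.4]])   # row norm 0.5 < lam: zeroed (joint sparsity)
W_prox = prox_l1_l2(W, lam=1.0)
```

Zeroing an entire row at once is exactly what produces the shared sparsity profile across tasks: a feature is either kept for all tasks or discarded for all of them.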
CCD MEASUREMENTS OF DOUBLE AND MULTIPLE STARS AT NAO ROZHEN : ORBITS AND LINEAR FITS OF FIVE PAIRS
Cvetkovic, Z.; Pavlovic, R.; Damljanovic, G.; Boeva, S.
2011-09-15
Using the 2 m telescope of the Bulgarian National Astronomical Observatory at Rozhen, observations of 145 double or multiple stars were carried out during three nights on 2010 September 7-9. This is the fifth series of measurements of CCD frames of double and multiple stars obtained at Rozhen. In this paper, we present the results for the position angle and angular separation for 202 pairs and residuals for 45 pairs with published orbital elements or linear solutions. These observations have angular separations in the range from 1.″150 to 196.″372, with a median angular separation of 57.″906. Three linear solutions are presented for the first time and three orbits are recalculated (one pair has both a linear fit and an orbital solution).
Shi, Yun; Xu, Peiliang; Peng, Junhuan; Shi, Chuang; Liu, Jingnan
2014-01-01
Modern observation technology has verified that measurement errors can be proportional to the true values of the measurements, as with GPS, VLBI baselines and LiDAR. Observational models of this type are called multiplicative error models. This paper extends the work of Xu and Shimada, published in 2000, on multiplicative error models to analytical error analysis of quantities of practical interest and to estimates of the variance of unit weight. We analytically derive the variance-covariance matrices of the three least squares (LS) adjustments, the adjusted measurements and the corrections of measurements in multiplicative error models. For quality evaluation, we construct five estimators for the variance of unit weight in association with the three LS adjustment methods. Although LiDAR measurements are contaminated with multiplicative random errors, LiDAR-based digital elevation models (DEMs) have been constructed as if the errors were additive. We simulate a model landslide, assumed to be surveyed with LiDAR, and investigate the effect of LiDAR-type multiplicative measurement errors on DEM construction and on the estimate of landslide mass volume from the constructed DEM. PMID:24434880
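A toy illustration of the multiplicative-error setting, assuming a simple linear model whose noise scales with the true values. The two-pass weighted least squares below is a generic sketch of how such error structure can be accounted for, not the paper's three specific LS adjustment methods.

```python
import numpy as np

rng = np.random.default_rng(1)
A = np.column_stack([np.ones(50), np.linspace(0.0, 10.0, 50)])
x_true = np.array([5.0, 2.0])
y_true = A @ x_true
sigma = 0.02
y = y_true * (1.0 + sigma * rng.normal(size=50))  # errors proportional to truth

# First pass: ordinary LS ignores the multiplicative structure
x_ols = np.linalg.lstsq(A, y, rcond=None)[0]

# Second pass: reweight with variances proportional to squared fitted values
w = 1.0 / (sigma**2 * (A @ x_ols) ** 2)
x_wls = np.linalg.solve(A.T @ (w[:, None] * A), A.T @ (w * y))
```

The reweighting downweights large measurements, whose absolute errors are larger under the multiplicative model, which is the essential difference from treating the errors as additive.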
Worachartcheewan, Apilak; Nantasenamat, Chanin; Owasirikul, Wiwat; Monnor, Teerawat; Naruepantawart, Orapan; Janyapaisarn, Sayamon; Prachayasittikul, Supaluk; Prachayasittikul, Virapong
2014-02-12
A data set of 1-adamantylthiopyridine analogs (1-19) with antioxidant activity, comprising 2,2-diphenyl-1-picrylhydrazyl (DPPH) and superoxide dismutase (SOD) activities, was used for constructing quantitative structure-activity relationship (QSAR) models. Molecular structures were geometrically optimized at the B3LYP/6-31g(d) level and subjected to further molecular descriptor calculation using the Dragon software. Multiple linear regression (MLR) was employed for the development of QSAR models using 3 significant descriptors (i.e. Mor29e, F04[N-N] and GATS5v) for predicting the DPPH activity and 2 essential descriptors (i.e. EEig06r and Mor06v) for predicting the SOD activity. These molecular descriptors account for the effects and positions of substituent groups (R) on the 1-adamantylthiopyridine ring. The results showed that a polar substituent group of high atomic electronegativity (R = CO2H) afforded high DPPH activity, while a substituent of high atomic van der Waals volume such as R = Br gave high SOD activity. Leave-one-out cross-validation (LOO-CV) and an external test set were used for model validation. The correlation coefficient (QCV) and root mean squared error (RMSECV) of the LOO-CV set for predicting DPPH activity were 0.5784 and 8.3440, respectively, while QExt and RMSEExt of the external test set were 0.7353 and 4.2721, respectively. Furthermore, the QCV and RMSECV values of the LOO-CV set for predicting SOD activity were 0.7549 and 5.6380, respectively. The QSAR model equations were then used to predict the SOD activity of tested compounds, which was subsequently verified experimentally; the experimental activity was more potent than the predicted activity. Structure-activity relationships of the significant descriptors governing antioxidant activity are also discussed. The QSAR models investigated herein are anticipated to be useful in the rational design and development of novel compounds with antioxidant activity. PMID
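The leave-one-out statistics reported above (QCV-type correlation and RMSECV) can be computed for any MLR model with a routine like the following; the data in the sanity check are synthetic, so the numbers are illustrative only.

```python
import numpy as np

def loo_q2(X, y):
    """Leave-one-out cross-validated Q^2 and RMSE_CV for an MLR model
    fitted by the normal equations (X must include an intercept column)."""
    n = len(y)
    preds = np.empty(n)
    for i in range(n):
        mask = np.arange(n) != i            # drop observation i
        Xi, yi = X[mask], y[mask]
        beta = np.linalg.solve(Xi.T @ Xi, Xi.T @ yi)
        preds[i] = X[i] @ beta              # predict the held-out point
    press = np.sum((y - preds) ** 2)
    q2 = 1.0 - press / np.sum((y - y.mean()) ** 2)
    return q2, np.sqrt(press / n)

# Sanity check: exactly linear data should give Q^2 close to 1
rng = np.random.default_rng(2)
X = np.column_stack([np.ones(20), rng.normal(size=(20, 3))])
y = X @ np.array([1.0, 2.0, 0.5, -1.0])
q2, rmse_cv = loo_q2(X, y)
```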
Hu, L.; Zhang, Z.G.; Mouraux, A.; Iannetti, G.D.
2015-01-01
Transient sensory, motor or cognitive events elicit not only phase-locked event-related potentials (ERPs) in the ongoing electroencephalogram (EEG), but also induce non-phase-locked modulations of ongoing EEG oscillations. These modulations can be detected when single-trial waveforms are analysed in the time-frequency domain, and consist of stimulus-induced decreases (event-related desynchronization, ERD) or increases (event-related synchronization, ERS) of synchrony in the activity of the underlying neuronal populations. ERD and ERS reflect changes in the parameters that control oscillations in neuronal networks and, depending on the frequency at which they occur, represent neuronal mechanisms involved in cortical activation, inhibition and binding. ERD and ERS are commonly estimated by averaging the time-frequency decompositions of single trials. However, their trial-to-trial variability, which can reflect physiologically important information, is lost by across-trial averaging. Here, we aim to (1) develop novel approaches to explore single-trial parameters (including latency, frequency and magnitude) of ERP/ERD/ERS; (2) disclose the relationship between estimated single-trial parameters and other experimental factors (e.g., perceived intensity). We found that (1) stimulus-elicited ERP/ERD/ERS can be correctly separated using principal component analysis (PCA) decomposition with Varimax rotation on the single-trial time-frequency distributions; (2) time-frequency multiple linear regression with dispersion term (TF-MLRd) enhances the signal-to-noise ratio of ERP/ERD/ERS in single trials, and provides an unbiased estimation of their latency, frequency, and magnitude at the single-trial level; (3) these estimates can be meaningfully correlated with each other and with other experimental factors at the single-trial level (e.g., perceived stimulus intensity and ERP magnitude). The methods described in this article allow exploring fully non-phase-locked stimulus-induced cortical…
CCD MEASUREMENTS OF DOUBLE AND MULTIPLE STARS AT NAO ROZHEN AND ASV IN 2011. FIVE LINEAR SOLUTIONS
Pavlovic, R.; Cvetkovic, Z.; Vince, O.; Stojanovic, M.; Boeva, S.
2013-09-15
Using the 2 m telescope of the Bulgarian National Astronomical Observatory at Rozhen, observations of 222 double or multiple stars were carried out during three nights in 2011. This is the sixth series of measurements of CCD frames of double and multiple stars obtained at Rozhen. Also in 2011, using the 0.6 m telescope of the Serbian Astronomical Station on the mountain of Vidojevica, observations of 208 double or multiple stars were carried out during six nights. This is the first series of measurements of CCD frames of double and multiple stars obtained at this station. In this paper, we present the results for the position angle and angular separation for 337 pairs and residuals for 72 pairs with published orbital elements or linear solutions. These observations have angular separations in the range from 1.″37 to 172.″81, with a median angular separation of 7.″66. We also present the recalculated linear solutions for four pairs and one linear solution that has been calculated for the first time.
CCD Measurements of Double and Multiple Stars at NAO Rozhen and ASV in 2012. Four Linear Solutions
NASA Astrophysics Data System (ADS)
Cvetković, Z.; Pavlović, R.; Boeva, S.
2015-05-01
Using the 2 m telescope of the Bulgarian National Astronomical Observatory at Rozhen, observations of 246 double or multiple stars were carried out during six nights in 2012. This is the seventh series of measurements of CCD frames of double and multiple stars obtained at Rozhen. Also in 2012, using the 0.6 m telescope of the Serbian Astronomical Station on the mountain of Vidojevica, observations of 117 double or multiple stars were carried out during five nights. This is the second series of measurements of CCD frames of double and multiple stars obtained at this station. In this paper we present the results for the position angle and angular separation for 453 pairs and residuals for 105 pairs with published orbital elements or linear solutions. These observations have angular separations in the range from 1.″50 to 178.″12, with a median angular separation of 8.″13. We also present the recalculated linear solution for one pair and three linear solutions that have been calculated for the first time. Based on observations with the 2 m RCC telescope of Rozhen National Astronomical Observatory operated by the Institute of Astronomy, Bulgarian Academy of Sciences and with the 0.6 m telescope of the Astronomical Station Vidojevica operated by the Astronomical Observatory of Belgrade.
NASA Astrophysics Data System (ADS)
Azoug, Seif Eddine; Bouguezel, Saad
2016-01-01
In this paper, a novel opto-digital image encryption technique is proposed by introducing a new non-linear preprocessing and using the multiple-parameter discrete fractional Fourier transform (MPDFrFT). The non-linear preprocessing is performed digitally on the input image in the spatial domain using a piecewise linear chaotic map (PLCM) coupled with the bitwise exclusive OR (XOR). The resulting image is multiplied by a random phase mask before applying the MPDFrFT to whiten the image. Then, a chaotic permutation is performed on the output of the MPDFrFT using another PLCM different from the one used in the spatial domain. Finally, another MPDFrFT is applied to obtain the encrypted image. The parameters of the PLCMs together with the multiple fractional orders of the MPDFrFTs constitute the secret key for the proposed cryptosystem. Computer simulation results and security analysis are presented to show the robustness of the proposed opto-digital image encryption technique and the great importance of the new non-linear preprocessing introduced to enhance the security of the cryptosystem and overcome the problem of linearity encountered in the existing permutation-based opto-digital image encryption schemes.
NASA Astrophysics Data System (ADS)
Gheorghe, Gheorghe I.; Dontu, Octavian
2008-03-01
The paper treats high-precision micro technologies for the automated generation of linear incremental network masks using the photocomposition method with multiple micro-photographic reductions and high-sensitivity laser microsystems, for the manufacture of micro-sensors and micro-transducers for micro-displacements, with applications in industrial and metrological laboratories. These laser micro technologies allow the automated generation of incremental network masks with an incremental step of 0.1 μm, ensuring the necessary accuracy according to European and international standards, as well as the realization of linear incremental photoelectric rules (divisor and vernier marks) as ultra-precise components of micro-sensors and micro-transducers for micro-displacements.
Uncertainty due to non-linearity in radiation thermometers calibrated by multiple fixed points
Yamaguchi, Y.; Yamada, Y.
2013-09-11
A new method to estimate the uncertainty due to non-linearity is described on the basis of the n = 3 scheme. The expression for the uncertainty is mathematically derived by applying the random walk method. The expression is simple and requires only the temperatures of the fixed points and a relative uncertainty value for each flux-doubling derived from the non-linearity measurement. We also present an example of the method, in which the uncertainty of temperature measurement by a radiation thermometer is calculated on the basis of a non-linearity measurement.
NASA Astrophysics Data System (ADS)
Krak, Michael D.; Dreyer, Jason T.; Singh, Rajendra
2016-03-01
A vehicle clutch damper is intentionally designed to contain multiple discontinuous non-linearities, such as multi-staged springs, clearances, pre-loads, and multi-staged friction elements. The main purpose of this practical torsional device is to transmit a wide range of torque while isolating torsional vibration between an engine and transmission. Improved understanding of the dynamic behavior of the device could be facilitated by laboratory measurement, and thus a refined vibratory experiment is proposed. The experiment is conceptually described as a single degree of freedom non-linear torsional system that is excited by an external step torque. The single torsional inertia (consisting of a shaft and torsion arm) is coupled to ground through parallel production clutch dampers, which are characterized by quasi-static measurements provided by the manufacturer. Other experimental objectives address physical dimensions, system actuation, flexural modes, instrumentation, and signal processing issues. Typical measurements show that the step response of the device is characterized by three distinct non-linear regimes (double-sided impact, single-sided impact, and no-impact). Each regime is directly related to the non-linear features of the device and can be described by peak angular acceleration values. Predictions of a simplified single degree of freedom non-linear model verify that the experiment performs well and as designed. Accordingly, the benchmark measurements could be utilized to validate non-linear models and simulation codes, as well as characterize dynamic parameters of the device including its dissipative properties.
Caballero, Julio; Fernández, Michael
2006-01-01
Antifungal activity was modeled for a set of 96 heterocyclic ring derivatives (2,5,6-trisubstituted benzoxazoles, 2,5-disubstituted benzimidazoles, 2-substituted benzothiazoles and 2-substituted oxazolo(4,5-b)pyridines) using multiple linear regression (MLR) and Bayesian-regularized artificial neural network (BRANN) techniques. Inhibitory activity against Candida albicans (log(1/C)) was correlated with 3D descriptors encoding the chemical structures of the heterocyclic compounds. Training and test sets were chosen by means of k-means clustering. The most appropriate variables for linear and nonlinear modeling were selected using a genetic algorithm (GA) approach. In addition to the MLR equation (MLR-GA), two nonlinear models were built: model BRANN, employing the linear variable subset, and an optimum model BRANN-GA, obtained by a hybrid method that combined the BRANN and GA approaches. The linear model fit the training set (n = 80) with r2 = 0.746, while BRANN and BRANN-GA gave higher values of r2 = 0.889 and r2 = 0.937, respectively. Beyond the improvement in training set fitting, the BRANN-GA model was superior to the others, describing 87% of the test set (n = 16) variance compared with 78% and 81% for the MLR-GA and BRANN models, respectively. Our quantitative structure-activity relationship study suggests that the distributions of atomic mass, volume and polarizability have relevant relationships with the antifungal potency of the compounds studied. Furthermore, the ability of the six nonlinearly selected variables to differentiate the data was demonstrated when the total data set was well distributed in a Kohonen self-organizing neural network (KNN). PMID:16205958
NASA Astrophysics Data System (ADS)
Cover, Keith S.
It is widely believed that one of the best ways to proceed when analysing data is to generate estimates which fit the data. However, when the relationship between the unknown model and the data is linear for highly underdetermined systems, it is common practice to find estimates with good linear resolution with no regard for fitting the data. For example, windowed Fourier transforms produce estimates that have good linear resolution but do not fit the data. Surprisingly, many researchers do not seem to be explicitly aware of this fact. This thesis presents a theoretical basis for linear resolution which demonstrates that, for a wide range of problems, algorithms which produce estimates with good linear resolution can be a more powerful and convenient way of presenting the information in the data than models that fit the data. Linear resolution was also applied to two outstanding problems in linear inverse theory. The first was the problem of truncation artifacts in magnetic resonance imaging (MRI). Truncation artifacts were heavily suppressed or eliminated by the choice of one of two novel Fourier transform windows. Complete elimination of truncation artifacts generally led to unexpectedly blurry images. Heavy suppression seemed to be the best compromise between truncation artifacts and blurriness. The second problem was estimating the relaxation distribution of a multiexponential system from its decay curve. This is an example where hundreds of papers have been written on the subject, yet almost no one has made a substantial effort to apply linear resolution. I found the application to be very successful. As an example, the algorithm was applied to the decay of MRI data from multiple sclerosis patients in an attempt to differentiate between various pathologies.
Duthie, A Bradley; Bocedi, Greta; Reid, Jane M
2016-09-01
Polyandry is often hypothesized to evolve to allow females to adjust the degree to which they inbreed. Multiple factors might affect such evolution, including inbreeding depression, direct costs, constraints on male availability, and the nature of polyandry as a threshold trait. Complex models are required to evaluate when evolution of polyandry to adjust inbreeding is predicted to arise. We used a genetically explicit individual-based model to track the joint evolution of inbreeding strategy and polyandry defined as a polygenic threshold trait. Evolution of polyandry to avoid inbreeding only occurred given strong inbreeding depression, low direct costs, and severe restrictions on initial versus additional male availability. Evolution of polyandry to prefer inbreeding only occurred given zero inbreeding depression and direct costs, and given similarly severe restrictions on male availability. However, due to its threshold nature, phenotypic polyandry was frequently expressed even when strongly selected against and hence maladaptive. Further, the degree to which females adjusted inbreeding through polyandry was typically very small, and often reflected constraints on male availability rather than adaptive reproductive strategy. Evolution of polyandry solely to adjust inbreeding might consequently be highly restricted in nature, and such evolution cannot necessarily be directly inferred from observed magnitudes of inbreeding adjustment. PMID:27464756
NASA Astrophysics Data System (ADS)
Cvetković, Z.; Pavlović, R.; Boeva, S.
2016-03-01
Using the 2 m telescope of the Bulgarian National Astronomical Observatory at Rozhen, observations of 271 double or multiple stars were carried out during seven nights in 2013 and 2014. This is the eighth series of measurements of CCD frames of double and multiple stars obtained at Rozhen. Also in 2013 and 2014, using the 0.6 m telescope of the Serbian Astronomical Station on the mountain of Vidojevica, observations of 343 double or multiple stars were carried out during 21 nights. This is the third series of measurements of CCD frames of double and multiple stars obtained at this station. In this paper, we present the results for the position angle and angular separation for 721 pairs and residuals for 126 pairs with published orbital elements or linear solutions. These observations have angular separations in the range from 1.″24 to 202.″30, with a median angular separation of 7.″17. We also present eight linear solutions that have been calculated for the first time. Based on observations with the 2 m RCC telescope of the Rozhen National Astronomical Observatory operated by the Institute of Astronomy, Bulgarian Academy of Sciences and with the 0.6 m telescope of Astronomical Station Vidojevica operated by the Astronomical Observatory of Belgrade.
Co-existence of circular and multiple linear amplicons in methotrexate-resistant Leishmania.
Olmo, A; Arrebola, R; Bernier, V; González-Pacanowska, D; Ruiz-Pérez, L M
1995-01-01
Circular and linear amplicons were analyzed in detail in Leishmania tropica cells resistant to methotrexate (MTX). Both types of elements presented sequences related to the H locus and coexisted in resistant cells. The linear amplicons appeared first during the selection process (at 10 μM MTX) and varied in size and structure in cells exposed to increasing concentrations of drug. The circular element was evident at higher concentrations (50 μM) and was the major amplified DNA in cells resistant to 1000 μM MTX, while the level of amplification of the linear elements remained low. The extrachromosomal DNAs were unstable in the absence of drug and their disappearance coincided with an increase in sensitivity to MTX. Mapping of the minichromosomes and the circular element showed that they were all constituted by inverted duplications. The circular amplicon contained an inverted repeat derived from the H locus that encompassed the pteridine reductase gene (PTR1) responsible for MTX resistance. The amplified segment in the linear amplicons was longer and included the pgpB and pgpC genes that encode P-glycoproteins of unknown function previously characterized in different Leishmania species. PMID:7659507
Point Estimates and Confidence Intervals for Variable Importance in Multiple Linear Regression
ERIC Educational Resources Information Center
Thomas, D. Roland; Zhu, PengCheng; Decady, Yves J.
2007-01-01
The topic of variable importance in linear regression is reviewed, and a measure first justified theoretically by Pratt (1987) is examined in detail. Asymptotic variance estimates are used to construct individual and simultaneous confidence intervals for these importance measures. A simulation study of their coverage properties is reported, and an…
Linear regulator design for stochastic systems by a multiple time scales method
NASA Technical Reports Server (NTRS)
Teneketzis, D.; Sandell, N. R., Jr.
1976-01-01
A hierarchically-structured, suboptimal controller for a linear stochastic system composed of fast and slow subsystems is considered. The controller is optimal in the limit as the separation of time scales of the subsystems becomes infinite. The methodology is illustrated by design of a controller to suppress the phugoid and short period modes of the longitudinal dynamics of the F-8 aircraft.
ERIC Educational Resources Information Center
Rueger, Sandra Yu; Malecki, Christine Kerres; Demaray, Michelle Kilpatrick
2010-01-01
The current study investigated gender differences in the relationship between sources of perceived support (parent, teacher, classmate, friend, school) and psychological and academic adjustment in a sample of 636 (49% male) middle school students. Longitudinal data were collected at two time points in the same school year. The study provided…
ERIC Educational Resources Information Center
Stuive, Ilse; Kiers, Henk A. L.; Timmerman, Marieke E.
2009-01-01
A common question in test evaluation is whether an a priori assignment of items to subtests is supported by empirical data. If the analysis results indicate the assignment of items to subtests under study is not supported by data, the assignment is often adjusted. In this study the authors compare two methods on the quality of their suggestions to…
NASA Astrophysics Data System (ADS)
Carniti, P.; Cassina, L.; Gotti, C.; Maino, M.; Pessina, G.
2016-07-01
In this work we present ALDO, an adjustable low drop-out linear regulator designed in AMS 0.35 μm CMOS technology. It is specifically tailored for use in the upgraded LHCb RICH detector in order to reduce the power supply noise for the front end readout chip (CLARO). ALDO is designed with radiation-tolerant solutions, such as an all-MOS band-gap voltage reference and layout techniques that enable operation in harsh environments like High Energy Physics accelerators. It is capable of driving up to 200 mA while keeping an adequate power supply filtering capability over a very wide frequency range, from 10 Hz up to 100 MHz. This property allows it to suppress the noise and high frequency spikes that could be generated, for example, by a DC/DC regulator. ALDO also shows a very low noise of 11.6 μV RMS over the same frequency range. Its output is protected with over-current and short detection circuits for safe integration in tightly packed environments. Design solutions and measurements of the first prototype are presented.
Isolating and Examining Sources of Suppression and Multicollinearity in Multiple Linear Regression
ERIC Educational Resources Information Center
Beckstead, Jason W.
2012-01-01
The presence of suppression (and multicollinearity) in multiple regression analysis complicates interpretation of predictor-criterion relationships. The mathematical conditions that produce suppression in regression analysis have received considerable attention in the methodological literature but until now nothing in the way of an analytic…
Confidence Intervals for an Effect Size Measure in Multiple Linear Regression
ERIC Educational Resources Information Center
Algina, James; Keselman, H. J.; Penfield, Randall D.
2007-01-01
The increase in the squared multiple correlation coefficient (ΔR²) associated with a variable in a regression equation is a commonly used measure of importance in regression analysis. The coverage probability that an asymptotic and percentile bootstrap confidence interval includes Δρ² was investigated. As expected,…
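The ΔR² measure described above is straightforward to compute from two nested least-squares fits; a minimal NumPy sketch on synthetic data (all names and values are illustrative, not from the study):

```python
import numpy as np

def r_squared(X, y):
    """R^2 of an ordinary least-squares fit of y on X (intercept included)."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    tss = (y - y.mean()) @ (y - y.mean())
    return 1.0 - (resid @ resid) / tss

def delta_r2(X, y, j):
    """Increase in R^2 attributable to predictor column j (Delta R^2):
    R^2 of the full model minus R^2 of the model with column j removed."""
    return r_squared(X, y) - r_squared(np.delete(X, j, axis=1), y)

# synthetic example: y depends strongly on column 0, weakly on 1, not on 2
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = 2.0 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=200)
d0 = delta_r2(X, y, 0)   # substantial importance
d2 = delta_r2(X, y, 2)   # negligible importance
```

Constructing confidence intervals for these point estimates, as in the article, additionally requires an asymptotic variance estimate or a bootstrap over the rows of (X, y).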
ERIC Educational Resources Information Center
Gagne, Phill; Furlow, Carolyn F.
2009-01-01
Simulation researchers are sometimes faced with the need to use multiple statistical software packages in the process of conducting their research, potentially having to go between software packages manually. This can be a tedious and time-consuming process that generally motivates researchers to use fewer replications in their simulations than…
ERIC Educational Resources Information Center
Beyranevand, Matthew L.
2010-01-01
Although it is difficult to find any current literature that does not encourage use of multiple representations in mathematics classrooms, there has been very limited research that compared such practice to student achievement level on standardized tests. This study examined the associations between students' achievement levels and their (a)…
O'Regan, Christopher; Ghement, Isabella; Eyawo, Oghenowede; Guyatt, Gordon H; Mills, Edward J
2009-01-01
Background Comparing the effectiveness of interventions is now a requirement for regulatory approval in several countries. It also aids in clinical and public health decision-making. However, in the absence of head-to-head randomized trials (RCTs), determining the relative effectiveness of interventions is challenging. Several methodological options are now available. We aimed to determine the comparative validity of adjusted indirect comparisons of RCTs versus the mixed treatment comparison approach. Methods Using systematic searching, we identified all meta-analyses evaluating more than 3 interventions for a similar disease state with binary outcomes. We abstracted data from each clinical trial, including population size and outcomes. We conducted a fixed effects meta-analysis of each intervention versus the mutual comparator and then applied the adjusted indirect comparison. We conducted a mixed treatment meta-analysis on all trials and compared the point estimates and 95% confidence/credible intervals (CIs/CrIs) to determine important differences. Results We included data from 7 reviews that met our inclusion criteria, allowing a total of 51 comparisons. According to the a priori consistency rule, we found 2 examples where the comparisons were statistically significant under the mixed treatment comparison but not under the adjusted indirect comparison, and 1 example of the reverse. We found 6 examples where the direction of effect differed according to the indirect comparison method chosen, and 9 examples where the confidence intervals were importantly different between approaches. Conclusion In most analyses, the adjusted indirect comparison yields estimates of relative effectiveness equal to the mixed treatment comparison. In less complex indirect comparisons, where all studies share a mutual comparator, both approaches yield similar results. As comparisons become more complex, the mixed treatment comparison may be favoured. PMID:19772573
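For a single pair of trials sharing a common comparator C, the adjusted indirect comparison of A versus B reduces to differencing the two log odds ratios and adding their variances (the Bucher method); a sketch with hypothetical numbers, not data from the review:

```python
import math

def adjusted_indirect(lor_ac, se_ac, lor_bc, se_bc):
    """Bucher adjusted indirect comparison of A vs B via common comparator C.
    Inputs are log odds ratios (A vs C, B vs C) and their standard errors.
    Returns the indirect log OR, its SE, and a 95% confidence interval."""
    lor_ab = lor_ac - lor_bc
    se_ab = math.sqrt(se_ac ** 2 + se_bc ** 2)
    ci = (lor_ab - 1.96 * se_ab, lor_ab + 1.96 * se_ab)
    return lor_ab, se_ab, ci

# hypothetical summary statistics for illustration
lor, se, (lo, hi) = adjusted_indirect(-0.40, 0.15, -0.10, 0.20)
```

A mixed treatment comparison instead fits all trials jointly (typically in a Bayesian framework), which is why the two approaches can diverge as the evidence network grows more complex.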
NASA Astrophysics Data System (ADS)
Eghnam, Karam M.; Sheta, Alaa F.
2008-06-01
Development of accurate models is necessary in critical applications such as prediction. In this paper, a solution to the stock prediction problem of the Barents Sea capelin is introduced using Artificial Neural Network (ANN) and Multiple Linear Regression (MLR) models. The capelin stock in the Barents Sea is one of the largest in the world, normally maintaining a fishery with annual catches of up to 3 million tons. The capelin stock problem has an impact on fish stock development. The proposed prediction model was developed using ANNs with their weights adapted using a Genetic Algorithm (GA). The proposed model was compared to the traditional linear MLR model. The results showed that the ANN-GA model produced an overall accuracy 21% better than the MLR model.
NASA Technical Reports Server (NTRS)
Barrett, C. A.
1985-01-01
Multiple linear regression analysis was used to determine an equation for estimating hot corrosion attack for a series of Ni-base cast turbine alloys. The U transform, i.e., U = sin^-1[(%A/100)^(1/2)], was shown to give the best estimate of the dependent variable, y. A complete second-degree equation is described for the "centered" weight chemistries for the elements Cr, Al, Ti, Mo, W, Cb, Ta, and Co. In addition, linear terms for the minor elements C, B, and Zr were added for a basic 47-term equation. The best reduced equation, with essentially 13 terms, was determined by the stepwise selection method. The Cr term was found to be the most important, accounting for 60 percent of the explained variability in hot corrosion attack.
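Reading the garbled "U transform" notation as the standard arcsine-square-root variance-stabilizing transform (an assumption, not the paper's stated definition), it can be sketched as:

```python
import math

def u_transform(pct_attack):
    """Arcsine-square-root variance-stabilizing transform of a percentage,
    U = arcsin(sqrt(%A / 100)).
    Assumed reading of the paper's 'U transform'; %A is percent attack."""
    return math.asin(math.sqrt(pct_attack / 100.0))

u = u_transform(25.0)  # arcsin(sqrt(0.25)) = arcsin(0.5) = pi/6
```

The transformed U would then serve as the dependent variable y in the stepwise multiple regression on the centered alloy chemistries.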
Wang, Shuo; Cao, Yang
2015-01-01
Random effect in cellular systems is an important topic in systems biology and often simulated with Gillespie’s stochastic simulation algorithm (SSA). Abridgment refers to model reduction that approximates a group of reactions by a smaller group with fewer species and reactions. This paper presents a theoretical analysis, based on comparison of the first exit time, for the abridgment on a linear chain reaction model motivated by systems with multiple phosphorylation sites. The analysis shows that if the relaxation time of the fast subsystem is much smaller than the mean firing time of the slow reactions, the abridgment can be applied with little error. This analysis is further verified with numerical experiments for models of bistable switch and oscillations in which linear chain system plays a critical role. PMID:26263559
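A minimal Gillespie SSA for a linear chain of first-order conversions, the kind of model analyzed above, can be sketched as follows (the chain length, rates and molecule counts are illustrative, not taken from the paper):

```python
import random

def ssa_linear_chain(x0, rates, t_end, rng=random.Random(1)):
    """Gillespie SSA for a linear chain S0 -> S1 -> ... -> Sn with
    first-order rate constants; returns molecule counts at time t_end."""
    x = list(x0)
    t = 0.0
    while True:
        # propensities of the conversion reactions S_i -> S_{i+1}
        props = [rates[i] * x[i] for i in range(len(rates))]
        a0 = sum(props)
        if a0 == 0.0:          # nothing left to fire
            return x
        t += rng.expovariate(a0)   # exponential waiting time
        if t > t_end:
            return x
        # select which reaction fires, proportionally to its propensity
        r = rng.random() * a0
        for i, p in enumerate(props):
            r -= p
            if r <= 0.0:
                x[i] -= 1
                x[i + 1] += 1
                break

# 100 molecules passing through a 3-species chain, run well past relaxation
final = ssa_linear_chain([100, 0, 0], [1.0, 1.0], t_end=50.0)
```

Abridgment in the sense of the paper would replace the intermediate species by a single effective conversion; the first-exit-time comparison quantifies the error of that reduction.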
About the multiple linear regressions applied in studying the solvatochromic effects.
Dorohoi, Dana-Ortansa
2010-03-01
Statistical analysis is applied to study solvatochromic effects using the solvent parameters (regressors) that influence the spectral shifts in electronic spectra. The analysis indicates how to eliminate the non-significant parameters and the aberrant points (for which supplemental interactions were neglected in the theories used) from the data submitted to multiple linear regression. A BASIC program allows these steps to be followed one by one. To exemplify the regression procedure, the wavenumbers of the maximum pi-pi* absorption band of three benzene derivatives in various solvents were used. PMID:20089443
NASA Technical Reports Server (NTRS)
Robinson, D. M.; Fales, C. L., Jr.; Skolaut, M. W., Jr.
1985-01-01
An estimate of the wavelength accuracy of a laser wavemeter is performed for a system consisting of a multiple-beam Fizeau interferometer and a linear photosensor array readout. The analysis consists of determining the fringe position errors which result when various noise sources are included in the fringe forming and detection process. Two methods of estimating the fringe centers are considered: (1) maximum pixel current location, and (2) average pixel location for two detectors with nearly equal output currents. Wavelength error results for these two methods are compared for some typical wavemeter parameters.
Brenner, Meredith H.; Cai, Dawen; Swanson, Joel A.; Ogilvie, Jennifer P.
2013-01-01
Imaging multiple fluorescent proteins (FPs) by two-photon microscopy has numerous applications for studying biological processes in thick and live samples. Here we demonstrate a setup utilizing a single broadband laser and a phase-only pulse-shaper to achieve imaging of three FPs (mAmetrine, TagRFPt, and mKate2) in live mammalian cells. Phase-shaping to achieve selective excitation of the FPs in combination with post-imaging linear unmixing enables clean separation of the fluorescence signal of each FP. This setup also benefits from low overall cost and simple optical alignment, enabling easy adaptation in a regular biomedical research laboratory. PMID:23938572
Czekaj, L.; Horodecki, P.; Korbicz, J. K.; Chhajlany, R. W.
2010-08-15
Superadditivity effects of communication capacities are known in the case of discrete variable quantum channels. We describe the continuous variable analog of one of these effects in the framework of Gaussian multiple access channels (MACs). Classically, superadditivity-type effects are strongly restricted: For example, adding resources to one sender is never advantageous to other senders in sending their respective information to the receiver. We show that this rule can be surpassed using quantum resources, giving rise to a type of truly quantum superadditivity. This is illustrated here for two examples of experimentally feasible Gaussian MACs.
A Versatile Multiple Target Detection System Based on DNA Nano-assembled Linear FRET Arrays
NASA Astrophysics Data System (ADS)
Li, Yansheng; Du, Hongwu; Wang, Wenqian; Zhang, Peixun; Xu, Liping; Wen, Yongqiang; Zhang, Xueji
2016-05-01
DNA molecules have been utilized both as powerful synthetic building blocks to create nanoscale architectures and as programmable templates for the assembly of biosensors. In this paper, a versatile, scalable and multiplex detection system is reported, based on extending fluorescence resonance energy transfer (FRET) cascades along linear DNA assemblies. Seven combinations of three kinds of targets are successfully detected through changes in the fluorescence spectra arising from three-step FRET or non-FRET continuity mechanisms. This nano-assembled FRET-based nanowire is highly significant for the development of rapid, simple and sensitive detection systems. The method used here could be extended to a general platform for multiplex detection through multi-step FRET processes.
High resolution, multiple-energy linear sweep detector for x-ray imaging
Perez-Mendez, Victor; Goodman, Claude A.
1996-01-01
Apparatus for generating plural electrical signals in a single scan in response to incident X-rays received from an object. Each electrical signal represents an image of the object at a different range of energies of the incident X-rays. The apparatus comprises a first X-ray detector, a second X-ray detector stacked upstream of the first X-ray detector, and an X-ray absorber stacked upstream of the first X-ray detector. The X-ray absorber provides an energy-dependent absorption of the incident X-rays before they are incident at the first X-ray detector, but provides no absorption of the incident X-rays before they are incident at the second X-ray detector. The first X-ray detector includes a linear array of first pixels, each of which produces an electrical output in response to the incident X-rays in a first range of energies. The first X-ray detector also includes a circuit that generates a first electrical signal in response to the electrical output of each of the first pixels. The second X-ray detector includes a linear array of second pixels, each of which produces an electrical output in response to the incident X-rays in a second range of energies, broader than the first range of energies. The second X-ray detector also includes a circuit that generates a second electrical signal in response to the electrical output of each of the second pixels.
High resolution, multiple-energy linear sweep detector for x-ray imaging
Perez-Mendez, V.; Goodman, C.A.
1996-08-20
Apparatus is disclosed for generating plural electrical signals in a single scan in response to incident X-rays received from an object. Each electrical signal represents an image of the object at a different range of energies of the incident X-rays. The apparatus comprises a first X-ray detector, a second X-ray detector stacked upstream of the first X-ray detector, and an X-ray absorber stacked upstream of the first X-ray detector. The X-ray absorber provides an energy-dependent absorption of the incident X-rays before they are incident at the first X-ray detector, but provides no absorption of the incident X-rays before they are incident at the second X-ray detector. The first X-ray detector includes a linear array of first pixels, each of which produces an electrical output in response to the incident X-rays in a first range of energies. The first X-ray detector also includes a circuit that generates a first electrical signal in response to the electrical output of each of the first pixels. The second X-ray detector includes a linear array of second pixels, each of which produces an electrical output in response to the incident X-rays in a second range of energies, broader than the first range of energies. The second X-ray detector also includes a circuit that generates a second electrical signal in response to the electrical output of each of the second pixels. 12 figs.
NASA Astrophysics Data System (ADS)
Joshi, Deepti; St-Hilaire, André; Daigle, Anik; Ouarda, Taha B. M. J.
2013-04-01
This study attempts to compare the performance of two statistical downscaling frameworks in downscaling hydrological indices (descriptive statistics) characterizing the low flow regimes of three rivers in Eastern Canada - Moisie, Romaine and Ouelle. The statistical models selected are Relevance Vector Machine (RVM), an implementation of Sparse Bayesian Learning, and the Automated Statistical Downscaling tool (ASD), an implementation of Multiple Linear Regression. Inputs to both frameworks involve climate variables significantly (α = 0.05) correlated with the indices. These variables were processed using Canonical Correlation Analysis and the resulting canonical variate scores were used as input to RVM to estimate the selected low flow indices. In ASD, the significantly correlated climate variables were subjected to backward stepwise predictor selection and the selected predictors were subsequently used to estimate the selected low flow indices using Multiple Linear Regression. With respect to the correlation between climate variables and the selected low flow indices, it was observed that all indices are influenced primarily by wind components (vertical, zonal and meridional) and humidity variables (specific and relative humidity). The downscaling performance of the framework involving RVM was found to be better than that of ASD in terms of Relative Root Mean Square Error, Relative Mean Absolute Bias and Coefficient of Determination. In all cases, the former resulted in less variability of the performance indices between calibration and validation sets, implying better generalization ability than the latter.
Abdel-Rehim, A M; Stathopoulos, Andreas; Orginos, Kostas
2014-08-01
The technique that was used to build the EigCG algorithm for sparse symmetric linear systems is extended to the nonsymmetric case using the BiCG algorithm. We show that, similarly to the symmetric case, we can build an algorithm that is capable of computing a few smallest magnitude eigenvalues and their corresponding left and right eigenvectors of a nonsymmetric matrix using only a small window of the BiCG residuals while simultaneously solving a linear system with that matrix. For a system with multiple right-hand sides, we give an algorithm that computes incrementally more eigenvalues while solving the first few systems and then uses the computed eigenvectors to deflate BiCGStab for the remaining systems. Our experiments on various test problems, including Lattice QCD, show the remarkable ability of EigBiCG to compute spectral approximations with accuracy comparable to that of the unrestarted, nonsymmetric Lanczos. Furthermore, our incremental EigBiCG followed by appropriately restarted and deflated BiCGStab provides a competitive method for systems with multiple right-hand sides.
Performance of MBE-4: An experimental multiple beam induction linear accelerator for heavy ions
Warwick, A.I.; Fessenden, T.J.; Keefe, D.; Kim, C.H.; Meuth, H.
1988-06-01
An experimental induction linac, called MBE-4, has been constructed to demonstrate acceleration and current amplification of multiple heavy ion beams. This work is part of a program to study the use of such an accelerator as a driver for heavy ion inertial fusion. MBE-4 is 16 m long and accelerates four space-charge-dominated beams of singly-charged cesium ions, in this case from 200 keV to 700 keV, amplifying the current in each beam from 10 mA by a factor of nine. Construction of the experiment was completed late in 1987 and we present the results of detailed measurements of the longitudinal beam dynamics. Of particular interest is the contribution of acceleration errors to the growth of current fluctuations and to the longitudinal emittance. The effectiveness of the longitudinal focusing, accomplished by means of the controlled time dependence of the accelerating fields, is also discussed. 4 refs., 5 figs., 1 tab.
ERIC Educational Resources Information Center
Thatcher, Greg W.; Henson, Robin K.
This study examined research in training and development to determine effect size reporting practices. It focused on the reporting of corrected effect sizes in research articles using multiple regression analyses. When possible, researchers calculated corrected effect sizes and determined whether the associated shrinkage could have impacted researcher…
Jahandideh, Sepideh; Jahandideh, Samad; Asadabadi, Ebrahim Barzegari; Askarian, Mehrdad; Movahedi, Mohammad Mehdi; Hosseini, Somayyeh; Jahandideh, Mina
2009-11-15
Prediction of the amount of hospital waste production will be helpful in the storage, transportation and disposal aspects of hospital waste management. Based on this fact, two predictive models, artificial neural networks (ANNs) and multiple linear regression (MLR), were applied to predict the rate of medical waste generation in total and by type (sharp, infectious and general). In this study, a 5-fold cross-validation procedure on a database containing a total of 50 hospitals of Fars province (Iran) was used to verify the performance of the models. Three performance measures, MAR, RMSE and R², were used to evaluate the models. The MLR, as a conventional model, obtained poor prediction performance measure values. However, MLR identified hospital capacity and bed occupancy as the more significant parameters. On the other hand, ANNs, a more powerful model that had not previously been applied to predicting the rate of medical waste generation, showed high performance measure values, especially an R² value of 0.99, confirming the good fit of the data. Such satisfactory results could be attributed to the non-linear nature of ANNs in problem solving, which provides the opportunity to relate independent variables to dependent ones non-linearly. In conclusion, the obtained results showed that our ANN-based modelling approach is very promising and may play a useful role in developing a better cost-effective strategy for waste management in the future.
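The three performance measures used in the study (MAR, RMSE and R²) follow standard definitions; a generic sketch, not code from the paper:

```python
import math

def metrics(y_true, y_pred):
    """Mean absolute residual (MAR), root mean square error (RMSE)
    and coefficient of determination (R^2) for model evaluation."""
    n = len(y_true)
    resid = [t - p for t, p in zip(y_true, y_pred)]
    mar = sum(abs(r) for r in resid) / n
    rmse = math.sqrt(sum(r * r for r in resid) / n)
    mean = sum(y_true) / n
    ss_tot = sum((t - mean) ** 2 for t in y_true)
    r2 = 1.0 - sum(r * r for r in resid) / ss_tot
    return mar, rmse, r2

# toy example: observed vs predicted waste generation rates
mar, rmse, r2 = metrics([1.0, 2.0, 3.0, 4.0], [1.1, 1.9, 3.2, 3.8])
```

In a k-fold cross-validation setup like the study's, these metrics would be computed on each held-out fold and then averaged.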
Hargrove, Levi J; Scheme, Erik J; Englehart, Kevin B; Hudgins, Bernard S
2010-02-01
This paper describes a novel pattern recognition based myoelectric control system that uses parallel binary classification and class specific thresholds. The system was designed with an intuitive configuration interface, similar to existing conventional myoelectric control systems. The system was assessed quantitatively with a classification error metric and functionally with a clothespin test implemented in a virtual environment. For each case, the proposed system was compared to a state-of-the-art pattern recognition system based on linear discriminant analysis and a conventional myoelectric control scheme with mode switching. These assessments showed that the proposed control system had a higher classification error (p < 0.001) but yielded a more controllable myoelectric control system (p < 0.001) as measured through a clothespin usability test implemented in a virtual environment. Furthermore, the system was computationally simple and applicable for real-time embedded implementation. This work provides the basis for a clinically viable pattern recognition based myoelectric control system which is robust, easily configured, and highly usable. PMID:20071277
Rousselot, J M; Peslin, R; Duvivier, C
1992-07-01
A potentially useful method to monitor respiratory mechanics in artificially ventilated patients consists of analyzing the relationship between tracheal pressure (P), lung volume (V), and gas flow (V̇) by multiple linear regression (MLR) using a suitable model. Contrary to other methods, it does not require any particular flow waveform and, therefore, may be used with any ventilator. This approach was evaluated in three neonates and seven young children admitted to an intensive care unit for respiratory disorders of various etiologies. P and V̇ were measured and digitized at a sampling rate of 40 Hz for periods of 20-48 s. After correction of P for the non-linear resistance of the endotracheal tube, the data were first analyzed with the usual linear monoalveolar model: P = P0 + E·V + R·V̇, where E and R are total respiratory elastance and resistance, and P0 is the static recoil pressure at end-expiration. A good fit of the model to the data was seen in five of the ten children. P0, E, and R were reproducible within cycles and consistent with the patient's age and condition; the data obtained with two ventilatory modes were highly correlated. In the five instances in which the simple model did not fit the data well, the data were reanalyzed with more sophisticated models allowing for mechanical non-homogeneity or for non-linearity of R or E. While several models substantially improved the fit, physiologically meaningful results were only obtained when R was allowed to change with lung volume. We conclude that the MLR method is adequate to monitor respiratory mechanics, even when the usual model is inadequate. PMID:1437330
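The monoalveolar model fit described above amounts to an ordinary least-squares regression of pressure on volume and flow; a sketch on synthetic data (the parameter values and signal ranges are invented for illustration, not patient data):

```python
import numpy as np

# Fit P = P0 + E*V + R*Vdot by multiple linear regression,
# the single-compartment model used to monitor respiratory mechanics.
rng = np.random.default_rng(0)
V = rng.uniform(0.0, 0.05, 200)       # lung volume above FRC (L), hypothetical
Vdot = rng.uniform(-0.1, 0.1, 200)    # gas flow (L/s), hypothetical
# synthetic pressure with "true" P0 = 2 cmH2O, E = 300 cmH2O/L, R = 30 cmH2O/(L/s)
P = 2.0 + 300.0 * V + 30.0 * Vdot + rng.normal(0.0, 0.1, 200)

# design matrix [1, V, Vdot]; least squares recovers P0, E, R
A = np.column_stack([np.ones_like(V), V, Vdot])
(P0, E, R), *_ = np.linalg.lstsq(A, P, rcond=None)
```

The non-linear extensions mentioned in the abstract (e.g. volume-dependent resistance) would simply add terms such as V·V̇ to the design matrix.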
The BL-QMR algorithm for non-Hermitian linear systems with multiple right-hand sides
Freund, R.W.
1996-12-31
Many applications require the solution of multiple linear systems that have the same coefficient matrix, but differ in their right-hand sides. Instead of applying an iterative method to each of these systems individually, it is potentially much more efficient to employ a block version of the method that generates iterates for all the systems simultaneously. However, it is quite intricate to develop robust and efficient block iterative methods. In particular, a key issue in the design of block iterative methods is the need for deflation. The iterates for the different systems that are produced by a block method will, in general, converge at different stages of the block iteration. An efficient and robust block method needs to be able to detect and then deflate converged systems. Each such deflation reduces the block size, and thus the block method needs to be able to handle varying block sizes. For block Krylov-subspace methods, deflation is also crucial in order to delete linearly and almost linearly dependent vectors in the underlying block Krylov sequences. An added difficulty arises for Lanczos-type block methods for non-Hermitian systems, since they involve two different block Krylov sequences. In these methods, deflation can now occur independently in both sequences, and consequently, the block sizes in the two sequences may become different in the course of the iteration, even though they were identical at the beginning. We present a block version of Freund and Nachtigal's quasi-minimal residual method for the solution of non-Hermitian linear systems with single right-hand sides.
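The payoff of treating all right-hand sides together, which motivates block methods such as BL-QMR, can be illustrated with a dense direct solve; this sketch only shows the shared-coefficient-matrix idea, not the paper's iterative Krylov algorithm:

```python
import numpy as np

# One non-Hermitian coefficient matrix, several right-hand sides:
# solving AX = B column-by-column would repeat all work on A,
# while a combined solve amortizes it across the systems.
rng = np.random.default_rng(0)
n = 50
A = rng.normal(size=(n, n)) + n * np.eye(n)   # well-conditioned, non-Hermitian
B = rng.normal(size=(n, 4))                   # 4 right-hand sides as columns

X = np.linalg.solve(A, B)                     # one factorization, all systems
residual = np.linalg.norm(A @ X - B)          # should be near machine precision
```

In the iterative setting the analogue of the shared factorization is the shared block Krylov subspace, and the deflation discussed above is what keeps that subspace well-conditioned as individual systems converge.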
Dubose, F.
2012-02-21
In nuclear material processing facilities, it is often necessary to balance the competing demands of accuracy and throughput. While passive neutron multiplicity counting is the preferred method for relatively fast assays of plutonium, the presence of low-Z impurities (fluorine, beryllium, etc.) rapidly erodes the assay precision of passive neutron counting techniques, frequently resulting in unacceptably large total measurement uncertainties. Conversely, while calorimeters are immune to these impurity effects, the long count times required for high accuracy can be a hindrance to efficiency. The higher uncertainties in passive neutron measurements of impure material are driven by the resulting large (>>2) α-values, defined as the (α,n)-to-spontaneous-fission neutron emission ratio. To counter impurity impacts for high-α materials, a known-α approach may be adopted. In this method, α is determined for a single item using a combination of gamma-ray and calorimetric measurements. Because calorimetry is based on heat output, rather than a statistical distribution of emitted neutrons, an α-value determined in this way is far more accurate than one determined from passive neutron counts. This fixed α value can be used in conventional multiplicity analysis for any plutonium-bearing item having the same chemical composition and isotopic distribution as the original. With the results of a single calorimeter/passive neutron/gamma-ray measurement, these subsequent items can then be assayed with high precision and accuracy in a relatively short time, despite the presence of impurities. A calorimeter-based known-α multiplicity analysis technique is especially useful when requiring rapid, high accuracy, high precision measurements of multiple plutonium bearing items having a common source. The technique has therefore found numerous applications at the Savannah River Site. In each case, a plutonium (or mixed U/Pu) bearing item is divided
NASA Astrophysics Data System (ADS)
Soares dos Santos, T.; Mendes, D.; Rodrigues Torres, R.
2016-01-01
Several studies have been devoted to dynamic and statistical downscaling for analysis of both climate variability and climate change. This paper introduces an application of artificial neural networks (ANNs) and multiple linear regression (MLR) by principal components to estimate rainfall in South America. This method is proposed for downscaling monthly precipitation time series over South America for three regions: the Amazon; northeastern Brazil; and the La Plata Basin, which is one of the regions of the planet that will be most affected by the climate change projected for the end of the 21st century. The downscaling models were developed and validated using CMIP5 model output and observed monthly precipitation. We used general circulation model (GCM) experiments for the 20th century (RCP historical; 1970-1999) and two scenarios (RCP 2.6 and 8.5; 2070-2100). The model test results indicate that the ANNs significantly outperform the MLR downscaling of monthly precipitation variability.
NASA Astrophysics Data System (ADS)
dos Santos, T. S.; Mendes, D.; Torres, R. R.
2015-08-01
Several studies have been devoted to dynamic and statistical downscaling for analysis of both climate variability and climate change. This paper introduces an application of artificial neural networks (ANN) and multiple linear regression (MLR) by principal components to estimate rainfall in South America. This method is proposed for downscaling monthly precipitation time series over South America for three regions: the Amazon, Northeastern Brazil and the La Plata Basin, which is one of the regions of the planet that will be most affected by the climate change projected for the end of the 21st century. The downscaling models were developed and validated using CMIP5 model output and observed monthly precipitation. We used GCM experiments for the 20th century (RCP historical; 1970-1999) and two scenarios (RCP 2.6 and 8.5; 2070-2100). The model test results indicate that the ANN significantly outperforms the MLR downscaling of monthly precipitation variability.
Soboyejo, W.O.; Soboyejo, A.B.O.; Ni, Y.; Mercer, C.
1997-12-31
In a recent paper, Mercer and Soboyejo demonstrated the Hall-Petch dependence of basic room- and elevated-temperature (815 C) mechanical properties (0.2% offset strength, ultimate tensile strength, plastic elongation to failure and fracture toughness) on the average equiaxed/lamellar grain size. Simple Hall-Petch behavior was shown to occur in a wide range of extruded duplex {alpha}{sub 2}+{gamma} alloys (Ti-48Al, Ti-48Al-1.4Mn, Ti-48Al-2Mn and Ti-48Al-1.5Cr). As in steels and other materials, simple Hall-Petch equations were derived for the above properties. However, the Hall-Petch equations did not include the effect of other variables that can affect the basic mechanical properties of gamma alloys. Multiple linear regression equations for the prediction of the combined effects of several variables (alloying, microstructure and temperature) on basic mechanical properties are presented in this paper.
Yu, Donghai; Du, Ruobing; Xiao, Ji-Chang
2016-07-01
Ninety-six acidic phosphorus-containing molecules with pKa values from 1.88 to 6.26 were collected and divided into training and test sets by random sampling. Structural parameters were obtained by density functional theory calculations on the molecules. The relationship between the experimental pKa values and the structural parameters was obtained by multiple linear regression fitting on the training set, and tested with the test set; the R(2) values were 0.974 and 0.966 for the training and test sets, respectively. This regression equation, which quantitatively describes the influence of structural parameters on pKa and can be used to predict pKa values of similar structures, is significant for the design of new acidic phosphorus-containing extractants. © 2016 Wiley Periodicals, Inc. PMID:27218266
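The fit-and-validate workflow described above (fit an MLR on the training set, then report R(2) on both sets) can be sketched as follows. The descriptors and pKa values below are synthetic stand-ins, not the paper's DFT-derived data; the 72/24 split mirrors the paper's random sampling.

```python
import numpy as np

def fit_mlr(X, y):
    """Ordinary least squares with an intercept; returns coefficients."""
    A = np.column_stack([np.ones(len(X)), X])
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    return beta

def r_squared(X, y, beta):
    """Coefficient of determination of the fitted model on (X, y)."""
    pred = np.column_stack([np.ones(len(X)), X]) @ beta
    ss_res = np.sum((y - pred) ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

# synthetic stand-in for the 96 molecules' descriptors and pKa values
rng = np.random.default_rng(1)
X = rng.normal(size=(96, 3))
pka = 4.0 + 0.8 * X[:, 0] - 0.5 * X[:, 1] + 0.3 * X[:, 2] + rng.normal(scale=0.1, size=96)

idx = rng.permutation(96)            # random sampling into train/test
train, test = idx[:72], idx[72:]
beta = fit_mlr(X[train], pka[train])
r2_train = r_squared(X[train], pka[train], beta)
r2_test = r_squared(X[test], pka[test], beta)
```

A held-out R(2) close to the training R(2), as in the paper, indicates the linear relationship generalizes rather than overfits.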
Rafiei, Hamid; Khanzadeh, Marziyeh; Mozaffari, Shahla; Bostanifar, Mohammad Hassan; Avval, Zhila Mohajeri; Aalizadeh, Reza; Pourbasheer, Eslam
2016-01-01
Quantitative structure-activity relationship (QSAR) study has been employed for predicting the inhibitory activities of Hepatitis C virus (HCV) NS5B polymerase inhibitors. A data set consisting of 72 compounds was selected, and then different types of molecular descriptors were calculated. The whole data set was split into a training set (80 % of the dataset) and a test set (20 % of the dataset) using principal component analysis. The stepwise (SW) and the genetic algorithm (GA) techniques were used as variable selection tools. The multiple linear regression method was then used to linearly correlate the selected descriptors with inhibitory activities. Several validation techniques, including leave-one-out and leave-group-out cross-validation and the Y-randomization method, were used to evaluate the internal capability of the derived models. The external prediction ability of the derived models was further analyzed using modified r2, concordance correlation coefficient values and the Golbraikh and Tropsha acceptable model criteria. Based on the derived results (GA-MLR), some new insights toward the molecular structural requirements for obtaining better inhibitory activity were obtained. PMID:27065774
2012-01-01
Background: Multiple imputation is often used for missing data. When a model contains as covariates more than one function of a variable, it is not obvious how best to impute missing values in these covariates. Consider a regression with outcome Y and covariates X and X2. In 'passive imputation' a value X* is imputed for X and then X2 is imputed as (X*)2. A recent proposal is to treat X2 as 'just another variable' (JAV) and impute X and X2 under multivariate normality.
Methods: We use simulation to investigate the performance of three methods that can easily be implemented in standard software: 1) linear regression of X on Y to impute X then passive imputation of X2; 2) the same regression but with predictive mean matching (PMM); and 3) JAV. We also investigate the performance of analogous methods when the analysis involves an interaction, and study the theoretical properties of JAV. The application of the methods when complete or incomplete confounders are also present is illustrated using data from the EPIC Study.
Results: JAV gives consistent estimation when the analysis is linear regression with a quadratic or interaction term and X is missing completely at random. When X is missing at random, JAV may be biased, but this bias is generally less than for passive imputation and PMM. Coverage for JAV was usually good when bias was small. However, in some scenarios with a more pronounced quadratic effect, bias was large and coverage poor. When the analysis was logistic regression, JAV's performance was sometimes very poor. PMM generally improved on passive imputation, in terms of bias and coverage, but did not eliminate the bias.
Conclusions: Given the current state of available software, JAV is the best of a set of imperfect imputation methods for linear regression with a quadratic or interaction effect, but should not be used for logistic regression. PMID:22489953
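The passive-versus-JAV contrast can be illustrated with a deliberately simplified single-imputation sketch. This is not the paper's procedure: proper multiple imputation repeats the draws, and true JAV imputes X and X2 jointly under multivariate normality, whereas here each is drawn independently from its own regression on Y, purely to show the structural difference between squaring an imputed X and imputing X2 as its own variable.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 5000
x = rng.normal(size=n)
y = 1.0 + x + x**2 + rng.normal(scale=0.5, size=n)   # quadratic analysis model

miss = rng.random(n) < 0.4          # X missing completely at random
obs = ~miss
x_obs = x.copy()
x_obs[miss] = np.nan

def ols(A, target):
    beta, *_ = np.linalg.lstsq(A, target, rcond=None)
    return beta

def impute_from_y(v):
    """Regress the incomplete variable on Y among complete cases,
    then draw imputations from the fitted line plus residual noise."""
    A = np.column_stack([np.ones(obs.sum()), y[obs]])
    b = ols(A, v[obs])
    resid_sd = np.std(v[obs] - A @ b)
    out = v.copy()
    out[miss] = b[0] + b[1] * y[miss] + rng.normal(scale=resid_sd, size=miss.sum())
    return out

# passive imputation: impute X, then square the imputed values
x_pass = impute_from_y(x_obs)
x2_pass = x_pass**2

# "just another variable": impute X and X^2 as separate variables
x_jav = impute_from_y(x_obs)
x2_jav = impute_from_y(np.where(obs, x**2, np.nan))

def quad_coef(xc, x2c):
    """Coefficient on the squared term in the analysis model Y ~ X + X^2."""
    return ols(np.column_stack([np.ones(n), xc, x2c]), y)[2]

b_pass = quad_coef(x_pass, x2_pass)   # true value is 1.0
b_jav = quad_coef(x_jav, x2_jav)
```

Comparing `b_pass` and `b_jav` against the true coefficient of 1.0 illustrates the bias question the paper studies by full simulation.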
NASA Astrophysics Data System (ADS)
Zhao, Yao-kun; Li, Bin; Zhang, Fan
2014-11-01
The sun sensor is a key device in a satellite's attitude determination system. It acquires the satellite's attitude information by measuring the sun light direction. Compared with area-array CMOS sun sensors, the linear CMOS sun sensor has the advantages of low power consumption, light weight and a relatively simple algorithm. Considering pixel number, power consumption and output efficiency, most sun sensors equipped with a single photosensitive unit have a (±60°)×(±60°) field of view (FOV). Satellites usually use multiple sun sensors to obtain a semi-sphere field of view in total, to meet the need of attitude measurement in all directions. Considering the need for large-scale FOV measurement and a high integration level, this paper proposes a semi-sphere FOV sun sensor whose coverage area can be (±90°)×(±90°). A prototype has been made and the calibration of the key component has been conducted. By integrating four photosensitive units, the semi-sphere FOV sun sensor is achieved; as a result, the demand for high integration in a micro-satellite device can be met. The photosensitive unit consists of an N-shape slit mask and a linear CMOS image sensor. An N-shape slit model is established to acquire biaxial sun angles by analyzing the shift of 3 peak values in the image of the linear sensor. An embedded system has been designed and developed, in which the MCU controls the four photosensitive units. Calibration of one photosensitive unit, which is the key step in the whole calibration of the semi-sphere FOV sun sensor, has been conducted. As a result of the symmetry of the N-shape slit, the initial position of the linear image sensor can be fixed. Due to installation error and machining deviation, a centroid algorithm and a data-gridding technique are adopted to improve the accuracy. Experiments show that a single photosensitive unit can reach an angle accuracy of 0.1625°. Consequently, for this significant component of the sun sensor, initial calibration ensures
Multiple linear regression analysis
NASA Technical Reports Server (NTRS)
Edwards, T. R.
1980-01-01
Program rapidly selects best-suited set of coefficients. User supplies only vectors of independent and dependent data and specifies confidence level required. Program uses stepwise statistical procedure for relating minimal set of variables to set of observations; final regression contains only most statistically significant coefficients. Program is written in FORTRAN IV for batch execution and has been implemented on NOVA 1200.
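The stepwise procedure the program implements can be sketched as forward selection with a partial-F entry test. This is a minimal Python illustration, not the original FORTRAN IV program; the `f_enter` threshold stands in for the user-specified confidence level.

```python
import numpy as np

def forward_stepwise(X, y, f_enter=4.0):
    """Forward stepwise selection: repeatedly add the column of X that
    most reduces the residual sum of squares, stopping when the partial
    F-statistic of the best candidate falls below f_enter."""
    n, p = X.shape
    selected = []
    design = np.ones((n, 1))                      # intercept-only model
    resid = y - design @ np.linalg.lstsq(design, y, rcond=None)[0]
    sse = float(resid @ resid)
    while len(selected) < p:
        best = None
        for j in range(p):
            if j in selected:
                continue
            trial = np.column_stack([design, X[:, j]])
            beta, *_ = np.linalg.lstsq(trial, y, rcond=None)
            r = y - trial @ beta
            sse_j = float(r @ r)
            df = n - trial.shape[1]
            f = (sse - sse_j) / (sse_j / df) if sse_j > 0 else np.inf
            if best is None or f > best[1]:
                best = (j, f, trial, sse_j)
        if best is None or best[1] < f_enter:
            break                                  # no significant candidate left
        j, _, design, sse = best
        selected.append(j)
    beta, *_ = np.linalg.lstsq(design, y, rcond=None)
    return selected, beta

# toy data: y depends on columns 0 and 2 only
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))
y = 3.0 * X[:, 0] - 2.0 * X[:, 2] + rng.normal(scale=0.1, size=200)
cols, beta = forward_stepwise(X, y)
```

The final regression keeps only the statistically significant coefficients, as the program description states.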
Kokaly, R.F.; Clark, R.N.
1999-01-01
We develop a new method for estimating the biochemistry of plant material using spectroscopy. Normalized band depths calculated from the continuum-removed reflectance spectra of dried and ground leaves were used to estimate their concentrations of nitrogen, lignin, and cellulose. Stepwise multiple linear regression was used to select wavelengths in the broad absorption features centered at 1.73 µm, 2.10 µm, and 2.30 µm that were highly correlated with the chemistry of samples from eastern U.S. forests. Band depths of absorption features at these wavelengths were found to also be highly correlated with the chemistry of four other sites. A subset of data from the eastern U.S. forest sites was used to derive linear equations that were applied to the remaining data to successfully estimate their nitrogen, lignin, and cellulose concentrations. Correlations were highest for nitrogen (R2 from 0.75 to 0.94). The consistent results indicate the possibility of establishing a single equation capable of estimating the chemical concentrations in a wide variety of species from the reflectance spectra of dried leaves. The extension of this method to remote sensing was investigated. The effects of leaf water content, sensor signal-to-noise and bandpass, atmospheric effects, and background soil exposure were examined. Leaf water was found to be the greatest challenge to extending this empirical method to the analysis of fresh whole leaves and complete vegetation canopies. The influence of leaf water on reflectance spectra must be removed to within 10%. Other effects were reduced by continuum removal and normalization of band depths. If the effects of leaf water can be compensated for, it might be possible to extend this method to remote sensing data acquired by imaging spectrometers to give estimates of nitrogen, lignin, and cellulose concentrations over large areas for use in ecosystem studies.
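The band-depth computation at the heart of this method (divide the spectrum by a straight-line continuum drawn across the feature's shoulders, then normalize by the depth at the band centre) can be sketched as follows. The Gaussian absorption feature and shoulder wavelengths below are synthetic placeholders, not the study's leaf spectra.

```python
import numpy as np

def continuum_removed(wavelength, reflectance, left, right):
    """Divide a spectrum by the straight-line continuum drawn between
    the reflectance values at the feature's shoulders (left, right)."""
    i0 = int(np.argmin(np.abs(wavelength - left)))
    i1 = int(np.argmin(np.abs(wavelength - right)))
    slope = (reflectance[i1] - reflectance[i0]) / (wavelength[i1] - wavelength[i0])
    continuum = reflectance[i0] + slope * (wavelength - wavelength[i0])
    return reflectance / continuum

# synthetic spectrum with a Gaussian absorption feature centred at 2.10 um
wl = np.linspace(2.0, 2.2, 201)
refl = 0.6 - 0.15 * np.exp(-((wl - 2.10) / 0.02) ** 2)

cr = continuum_removed(wl, refl, 2.0, 2.2)
band_depth = 1.0 - cr.min()          # depth of the absorption at its centre
nbd = (1.0 - cr) / band_depth        # normalized band depth at each channel
```

The normalized band depths `nbd` at selected wavelengths are what the stepwise regression relates to nitrogen, lignin, and cellulose concentrations.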
NASA Astrophysics Data System (ADS)
Lee, C. Y.; Tippett, M. K.; Sobel, A. H.; Camargo, S. J.
2014-12-01
We are working towards the development of a new statistical-dynamical downscaling system to study the influence of climate on tropical cyclones (TCs). The first step is development of an appropriate model for TC intensity as a function of environmental variables. We approach this issue with a stochastic model consisting of a multiple linear regression model (MLR) for 12-hour intensity forecasts as a deterministic component, and a random error generator as a stochastic component. Similar to the operational Statistical Hurricane Intensity Prediction Scheme (SHIPS), MLR relates the surrounding environment to storm intensity, but with only essential predictors calculated from monthly-mean NCEP reanalysis fields (potential intensity, shear, etc.) and from persistence. The deterministic MLR is developed with data from 1981-1999 and tested with data from 2000-2012 for the Atlantic, Eastern North Pacific, Western North Pacific, Indian Ocean, and Southern Hemisphere basins. While the global MLR's skill is comparable to that of the operational statistical models (e.g., SHIPS), the distribution of the predicted maximum intensity from deterministic results has a systematic low bias compared to observations; the deterministic MLR creates almost no storms with intensities greater than 100 kt. The deterministic MLR can be significantly improved by adding the stochastic component, based on the distribution of random forecasting errors from the deterministic model compared to the training data. This stochastic component may be thought of as representing the component of TC intensification that is not linearly related to the environmental variables. We find that in order for the stochastic model to accurately capture the observed distribution of maximum storm intensities, the stochastic component must be auto-correlated across 12-hour time steps. This presentation also includes a detailed discussion of the distributions of other TC-intensity related quantities, as well as the inter
Martin, L; Mezcua, M; Ferrer, C; Gil Garcia, M D; Malato, O; Fernandez-Alba, A R
2013-01-01
The main objective of this work was to establish a mathematical function that correlates pesticide residue levels in apple juice with the levels of the pesticides applied on the raw fruit, taking into account some of their physicochemical properties such as water solubility, the octanol/water partition coefficient, the organic carbon partition coefficient, vapour pressure and density. A mixture of 12 pesticides was applied to an apple tree; apples were collected 10 days after application. After harvest, apples were treated with a mixture of three post-harvest pesticides and the fruits were then processed in order to obtain apple juice following a routine industrial process. The pesticide residue levels in the apple samples were analysed using two multi-residue methods based on LC-MS/MS and GC-MS/MS. The concentration of pesticides was determined in samples derived from the different steps of processing. The processing factors (the coefficient between the residue level in the processed commodity and the residue level in the commodity to be processed) obtained for the full juicing process were found to vary among the different pesticides studied. In order to investigate the relationships between the levels of pesticide residue found in apple juice samples and their physicochemical properties, principal component analysis (PCA) was performed using two sets of samples (one of them using experimental data obtained in this work and the other including data taken from the literature). In both cases, a correlation was found between the processing factors of the pesticides in the apple juice and the negative logarithms (base 10) of the water solubility, octanol/water partition coefficient and organic carbon partition coefficient. The linear correlation between these physicochemical properties and the processing factor was established using a multiple linear regression technique. PMID:23281800
NASA Astrophysics Data System (ADS)
Yan, Zhixiang; Lin, Ge; Ye, Yang; Wang, Yitao; Yan, Ru
2014-06-01
Flavonoids are one of the largest classes of plant secondary metabolites, serving a variety of functions in plants and associated with a number of health benefits for humans. Typically, they are co-identified with many other secondary metabolites using untargeted metabolomics. The limited data quality of the untargeted workflow calls for a shift from a breadth-first to a depth-first screening strategy when a specific biosynthetic pathway is the focus. Here we introduce a generic multiple reaction monitoring (MRM)-based approach for flavonoid profiling in plants using a hybrid triple quadrupole linear ion trap (QTrap) mass spectrometer. The approach includes four steps: (1) preliminary profiling of major aglycones by multiple ion monitoring triggered enhanced product ion scan (MIM-EPI); (2) glycone profiling by precursor ion triggered EPI scan (PI-EPI) of major aglycones; (3) comprehensive aglycone profiling by combining MIM-EPI and neutral loss triggered EPI scan (NL-EPI) of major glycones; (4) in-depth flavonoid profiling by MRM-EPI with elaborated MRM transitions. Particularly, incorporation of the NH3 loss and sugar elimination proved to be very informative and confirmative for flavonoid screening. This approach was applied to profiling flavonoids in Astragali radix (Huangqi), a famous herb widely used for medicinal and nutritional purposes in China. In total, 421 flavonoids were tentatively characterized, among which fewer than 40 have been previously reported in this medicinal plant. This MRM-based approach provides the versatility and sensitivity required for flavonoid profiling in plants and serves as a useful tool for plant metabolomics.
Boulet, Sebastien; Boudot, Elsa; Houel, Nicolas
2016-05-01
Back pain is a common reason for consultation in primary healthcare clinical practice, and has effects on daily activities and posture. Relationships between the whole spine and upright posture, however, remain unknown. The aim of this study was to identify the relationship between each spinal curve and centre of pressure position as well as velocity for healthy subjects. Twenty-one male subjects performed quiet stance in natural position. Each upright posture was then recorded using an optoelectronics system (Vicon Nexus) synchronized with two force plates. At each moment, polynomial interpolations of markers attached on the spine segment were used to compute cervical lordosis, thoracic kyphosis and lumbar lordosis angle curves. The mean of centre of pressure position and velocity was then computed. Multiple stepwise linear regression analysis showed that the position and velocity of centre of pressure associated with each part of the spinal curves were the best predictors of the lumbar lordosis angle (R(2)=0.45; p=1.65×10⁻¹⁰) and the thoracic kyphosis angle (R(2)=0.54; p=4.89×10⁻¹³) of healthy subjects in quiet stance. This study showed the relationships between each of the cervical, thoracic and lumbar curvatures and the centre of pressure's fluctuation during free quiet standing, using non-invasive full spinal curve exploration. PMID:26970888
Schilling, K.E.; Wolter, C.F.
2005-01-01
Nineteen variables, including precipitation, soils and geology, land use, and basin morphologic characteristics, were evaluated to develop Iowa regression models to predict total streamflow (Q), base flow (Qb), storm flow (Qs) and base flow percentage (%Qb) in gauged and ungauged watersheds in the state. Discharge records from a set of 33 watersheds across the state for the 1980 to 2000 period were separated into Qb and Qs. Multiple linear regression found that 75.5 percent of long term average Q was explained by rainfall, sand content, and row crop percentage variables, whereas 88.5 percent of Qb was explained by these three variables plus permeability and floodplain area variables. Qs was explained by average rainfall and %Qb was a function of row crop percentage, permeability, and basin slope variables. Regional regression models developed for long term average Q and Qb were adapted to annual rainfall and showed good correlation between measured and predicted values. Combining the regression model for Q with an estimate of mean annual nitrate concentration, a map of potential nitrate loads in the state was produced. Results from this study have important implications for understanding geomorphic and land use controls on streamflow and base flow in Iowa watersheds and similar agriculture dominated watersheds in the glaciated Midwest. (JAWRA) (Copyright © 2005).
Flentie, Kelly N; Stallings, Christina L; Turk, John; Minnaard, Adriaan J; Hsu, Fong-Fu
2016-01-01
Both phthiocerol/phthiodiolone dimycocerosate (PDIM) and phenolic glycolipids are abundant virulence lipids in the cell wall of various pathogenic mycobacteria, which can synthesize a wide range of complex high-molecular-mass lipids. In this article, we describe a linear ion-trap MS(n) mass spectrometric approach for the structural study of PDIMs, which were desorbed as the [M + Li](+) and [M + NH(4)](+) ions by ESI. We also applied a charge-switch strategy, following alkaline hydrolysis of the PDIM to release mycocerosic acids, to convert the mycocerosic acid substituents to their N-(4-aminomethylphenyl) pyridinium (AMPP) derivatives and analyze them as M(+) ions. The structural information from MS(n) on the [M + Li](+) and [M + NH(4)](+) molecular species and on the M(+) ions of the mycocerosic acid-AMPP derivatives permits elucidation of the complex structures of PDIMs in Mycobacterium tuberculosis biofilm and differentiation of the phthiocerol and phthiodiolone lipid families; complete structure identification can be achieved, including the phthiocerol and phthiodiolone backbones and the mycocerosic acid substituents with the locations of their multiple methyl side chains. PMID:26574042
Zimmer, Christoph; Sahle, Sven
2015-10-01
Estimating model parameters from experimental data is a crucial technique for working with computational models in systems biology. Since stochastic models are increasingly important, parameter estimation methods for stochastic modelling are also of increasing interest. This study presents an extension to the 'multiple shooting for stochastic systems (MSS)' method for parameter estimation. The transition probabilities of the likelihood function are approximated with normal distributions. Means and variances are calculated with a linear noise approximation on the interval between succeeding measurements. Because the system is only approximated on intervals that are short in comparison with the total observation horizon, the method can deal with effects of the intrinsic stochasticity. The study presents scenarios in which the extension is essential for successfully estimating the parameters and scenarios in which the extension is of modest benefit. Furthermore, it compares the estimation results with reversible jump techniques, showing that the approximation does not lead to a loss of accuracy. Since the method is not based on stochastic simulations or approximative sampling of distributions, its computational speed is comparable with conventional least-squares parameter estimation methods. PMID:26405142
Shabri, Ani; Samsudin, Ruhaidah
2014-01-01
Crude oil prices play a significant role in the global economy and are a key input into option pricing formulas, portfolio allocation, and risk measurement. In this paper, a hybrid model integrating wavelet transforms and multiple linear regression (MLR) is proposed for crude oil price forecasting. In this model, the Mallat wavelet transform is first selected to decompose an original time series into several subseries with different scales. Then, principal component analysis (PCA) is used in processing the subseries data in MLR for crude oil price forecasting. Particle swarm optimization (PSO) is used to select the optimal parameters of the MLR model. To assess the effectiveness of this model, the daily West Texas Intermediate (WTI) crude oil market has been used as the case study. The time series prediction performance of the WMLR model is compared with the MLR, ARIMA, and GARCH models using various statistical measures. The experimental results show that the proposed model outperforms the individual models in forecasting the crude oil price series. PMID:24895666
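The decompose-then-regress idea can be sketched minimally, assuming a one-level Haar transform in place of the paper's Mallat decomposition, ordinary least squares in place of the PSO-tuned MLR, and a synthetic random-walk series in place of WTI prices.

```python
import numpy as np

def haar_level1(x):
    """One-level Haar transform: approximation and detail subseries."""
    x = x[: len(x) // 2 * 2]
    a = (x[0::2] + x[1::2]) / np.sqrt(2)   # low-frequency trend
    d = (x[0::2] - x[1::2]) / np.sqrt(2)   # high-frequency detail
    return a, d

def lagged_design(series, lags):
    """Design matrix regressing series[t] on its previous `lags` values."""
    rows = [series[i : i + lags] for i in range(len(series) - lags)]
    return np.asarray(rows), series[lags:]

# synthetic stand-in for a daily price series
rng = np.random.default_rng(7)
price = np.cumsum(rng.normal(size=400)) + 50.0
a, d = haar_level1(price)

# fit one MLR per subseries on its own lags, then forecast one step ahead
forecasts = []
for sub in (a, d):
    X, y = lagged_design(sub, lags=3)
    A = np.column_stack([np.ones(len(X)), X])
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    forecasts.append(beta[0] + beta[1:] @ sub[-3:])

# inverse one-level Haar: recover the next pair of raw-series values
next_even = (forecasts[0] + forecasts[1]) / np.sqrt(2)
next_odd = (forecasts[0] - forecasts[1]) / np.sqrt(2)
```

Forecasting each subseries separately and then inverting the transform is the essential structure of the hybrid WMLR model; the paper additionally uses PCA on the subseries and PSO to tune the regression.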
Sage, Cindy
2015-01-01
The 'informational content' of Earth's electromagnetic signaling is like a set of operating instructions for human life. These environmental cues are dynamic and involve exquisitely low inputs (intensities) of critical frequencies with which all life on Earth evolved. Circadian and other temporal biological rhythms depend on these fluctuating electromagnetic inputs to direct gene expression, cell communication and metabolism, neural development, brainwave activity, neural synchrony, a diversity of immune functions, sleep and wake cycles, behavior and cognition. Oscillation is also a universal phenomenon, and biological systems of the heart, brain and gut are dependent on the cooperative actions of cells that function according to principles of non-linear, coupled biological oscillations for their synchrony. They are dependent on exquisitely timed cues from the environment at vanishingly small levels. Altered 'informational content' of environmental cues can swamp natural electromagnetic cues and result in dysregulation of normal biological rhythms that direct growth, development, metabolism and repair mechanisms. Pulsed electromagnetic fields (PEMF) and radiofrequency radiation (RFR) can have the devastating biological effects of disrupting homeostasis and desynchronizing normal biological rhythms that maintain health. Non-linear, weak field biological oscillations govern body electrophysiology, organize cell and tissue functions and maintain organ systems. Artificial bioelectrical interference can give false information (disruptive signaling) sufficient to affect critical pacemaker cells (of the heart, gut and brain) and desynchronize functions of these important cells that orchestrate function and maintain health. Chronic physiological stress undermines homeostasis whether it is chemically induced or electromagnetically induced (or both exposures are simultaneous contributors). This can eventually break down adaptive biological responses critical to health
Tsuruta, S; Misztal, I; Aguilar, I; Lawlor, T J
2011-08-01
Currently, the USDA uses a single-trait (ST) model with several intermediate steps to obtain genomic evaluations for US Holsteins. In this study, genomic evaluations for 18 linear type traits were obtained with a multiple-trait (MT) model using a unified single-step procedure. Phenotypic type data on up to 18 traits were available for 4,813,726 Holsteins, and single nucleotide polymorphism markers from the Illumina BovineSNP50 genotyping BeadChip (Illumina Inc., San Diego, CA) were available on 17,293 bulls. Genomic predictions were computed with several genomic relationship matrices (G) that assumed different allele frequencies: equal, base, current, and current scaled. Computations were carried out with ST and MT models. Procedures were compared by coefficients of determination (R(2)) and by regression of 2004 predictions of bulls with no daughters in 2004 on daughter deviations of those bulls in 2009. Predictions for 2004 also included parent averages without the use of genomic information. The R(2) for parent averages ranged from 10 to 34% for ST models and from 12 to 35% for MT models. The average R(2) for all G were 34 and 37% for ST and MT models, respectively. All of the regression coefficients were <1.0, indicating that estimated breeding values in 2009 of 1,307 genotyped young bulls' parents tended to be biased. The average regression coefficients ranged from 0.74 to 0.79 and from 0.75 to 0.80 for ST and MT models, respectively. When the weight for the inverse of the numerator relationship matrix (A(-1)) for genotyped animals was reduced from 1 to 0.7, R(2) remained almost identical while the regression coefficients increased by 0.11-0.26 and 0.12-0.23 for ST and MT models, respectively. The ST models required about 5 s per iteration, whereas MT models required 3 (6) min per iteration for the regular (genomic) model. The MT single-step approach is feasible for 18 linear type traits in US Holstein cattle. Accuracy for genomic evaluation increases when
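Genomic relationship matrices of the kind compared here are commonly built with VanRaden's construction G = ZZ'/(2Σ p_j(1−p_j)), where Z centres the 0/1/2 genotype codes by twice the allele frequency. A sketch with simulated genotypes follows (the real study used 17,293 genotyped bulls); the "current" allele-frequency variant is assumed here to mean frequencies estimated from the genotyped animals themselves.

```python
import numpy as np

def genomic_relationship(M, p):
    """VanRaden-style genomic relationship matrix from a genotype matrix M
    (individuals x markers, coded 0/1/2) and allele frequencies p."""
    Z = M - 2.0 * p                      # centre each marker by allele frequency
    denom = 2.0 * np.sum(p * (1.0 - p))  # scales G to be analogous to pedigree A
    return Z @ Z.T / denom

# simulated genotypes for a small toy population
rng = np.random.default_rng(11)
n_animals, n_snp = 20, 500
p_true = rng.uniform(0.1, 0.9, size=n_snp)
M = rng.binomial(2, p_true, size=(n_animals, n_snp)).astype(float)

# "current" frequencies: estimated from the genotyped animals themselves
p_cur = M.mean(axis=0) / 2.0
G = genomic_relationship(M, p_cur)
```

Substituting equal (0.5) or base-population frequencies for `p_cur` yields the other G variants the study compares.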
Sharma, P; Titus, A H; Qu, B; Huang, Y; Wang, W; Kuhls-Gilcrist, A; Cartwright, A N; Bednarek, D R; Rudin, S
2010-01-01
We describe a custom multiple-module multiplexer integrated circuit (MMMIC) that enables the combination of discrete electron-multiplying charge-coupled device (EMCCD) based imaging modules to improve medical imaging systems. It is highly desirable to have flexible imaging systems that provide high spatial resolution over a specific region of interest (ROI) and a field of view (FOV) large enough to encompass areas of clinical interest. Also, such systems should be dynamic, i.e. able to maintain a specified acquisition bandwidth irrespective of the size of the imaged FOV. The MMMIC achieves these goals by 1) multiplexing the outputs of an array of imaging modules to enable a larger FOV, 2) enabling a number of binning modes for adjustable high spatial resolution, and 3) enabling selection of a subset of modules in the array to achieve ROI imaging at a predetermined display bandwidth. The MMMIC design also allows multiple MMMICs to be connected to control larger arrays. The prototype MMMIC was designed and fabricated in the ON-SEMI 0.5 μm CMOS process through MOSIS (www.mosis.org). It has three 12-bit inputs, a single 12-bit output, three input enable bits, and one output enable, so that one MMMIC can control the output from three discrete imager arrays. The modular design of the MMMIC enables four identical chips, connected in a two-stage sequential arrangement, to read out a 3×3 collection of individual imaging modules. The first stage comprises three MMMICs (each connected to three of the individual imaging modules), and the second stage is a single MMMIC whose 12-bit output is then sent via a CameraLink interface to the system computer. The prototype MMMIC was successfully tested using digital outputs from two EMCCD-based detectors to be used in an x-ray imaging array detector system. Finally, we show how the MMMIC can be used to extend an imaging system to an arbitrary (M×N) array of imaging modules, enabling a large FOV along with ROI imaging
NASA Astrophysics Data System (ADS)
Wendt, L.; Gross, C.; McGuire, P. C.; Combe, J.-P.; Neukum, G.
2009-04-01
Juventae Chasma, just north of Valles Marineris on Mars, contains several light-toned deposits (LTD), one of which is labelled mound B. Based on IR data from the imaging spectrometer OMEGA on Mars Express, [1] suggested kieserite for the lower part and gypsum for the upper part of the mound. In this study, we analyzed NIR data from the Compact Reconnaissance Imaging Spectrometer CRISM on MRO with the Multiple-Endmember Linear Spectral Unmixing Model MELSUM developed by Combe et al. [2]. We used CRISM data product FRT00009C0A from 1 to 2.6 µm. A novel, time-dependent volcano-scan technique [3] was applied to remove absorption bands related to CO2 much more effectively than the volcano-scan technique [4] that has been applied to CRISM and OMEGA data so far. In classic SMA, a solution for the measured spectrum is calculated by a linear combination of all input spectra (which may come from a spectral library or from the image itself) at once. This can lead to negative coefficients, which have no physical meaning. MELSUM avoids this by calculating a solution for each possible combination of a subset of the reference spectra, with the maximum number of library spectra in the subset defined by the user. The solution with the lowest residual to the input spectrum is returned. We used MELSUM in a first step as a similarity measure within the image by using averaged spectra from the image itself as input to MELSUM. This showed that three spectral units are enough to describe the variability in the data to first order: a lower light-toned unit, an upper light-toned unit and a dark-toned unit. We then chose 34 laboratory spectra of sulfates, mafic minerals and iron oxides plus a spectrum for H2O ice as reference spectra for the unmixing of averaged spectra for each of these spectral regions. The best fit for the dark material was a combination of olivine, pyroxene and ice (present as cloud in the atmosphere and not on the surface). In agreement with [5], the lower unit was
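The exhaustive-subset strategy described for MELSUM (solve least squares for every combination of up to k library spectra, discard solutions with negative coefficients, keep the lowest residual) can be sketched as follows; the toy endmember library is random, not CRISM data.

```python
import itertools
import numpy as np

def subset_unmix(spectrum, library, max_members=3):
    """Try every combination of up to max_members library spectra, solve
    least squares for each, reject solutions with negative abundances
    (no physical meaning), and keep the lowest-residual combination."""
    best = (np.inf, None, None)
    for k in range(1, max_members + 1):
        for combo in itertools.combinations(range(library.shape[1]), k):
            A = library[:, combo]
            coef, *_ = np.linalg.lstsq(A, spectrum, rcond=None)
            if (coef < 0).any():          # negative abundances are unphysical
                continue
            resid = float(np.sum((spectrum - A @ coef) ** 2))
            if resid < best[0]:
                best = (resid, combo, coef)
    return best

# toy library of 5 "endmember" spectra over 50 spectral channels
rng = np.random.default_rng(3)
lib = rng.random((50, 5))
truth = 0.6 * lib[:, 1] + 0.4 * lib[:, 3]   # mixture of endmembers 1 and 3
resid, combo, coef = subset_unmix(truth, lib)
```

Unlike classic SMA, which fits all endmembers at once, the winning subset here identifies the contributing endmembers with non-negative abundances.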
Day, Stephanie; Tselios, Theodore; Androutsou, Maria-Eleni; Tapeinou, Anthi; Frilligou, Irene; Stojanovska, Lily; Matsoukas, John; Apostolopoulos, Vasso
2015-01-01
Multiple sclerosis (MS) is a serious autoimmune demyelinating disease leading to loss of neurological function. The design and synthesis of various altered peptide ligands of immunodominant epitopes of myelin proteins, to alter the autoimmune response, is a promising therapeutic approach for MS. In this study, linear and cyclic peptide analogs based on the myelin basic protein 83–99 (MBP83–99) immunodominant epitope, conjugated to reduced mannan via the (KG)5 and keyhole limpet hemocyanin (KLH) bridges, respectively, were evaluated for their biological/immunological profiles in SJL/J mice. Of all the peptide analogs tested, linear MBP83–99(F91) and linear MBP83–99(Y91) conjugated to reduced mannan via a (KG)5 linker, and cyclic MBP83–99(F91) conjugated to reduced mannan via a KLH linker, yielded the best immunological profiles and constitute novel candidates for further immunotherapeutic studies against MS in animal models and in human clinical trials. PMID:26082772
Brasquet, C.; Bourges, B.; Le Cloirec, P.
1999-12-01
The adsorption of 55 organic compounds is carried out onto a recently discovered adsorbent, activated carbon cloth. Isotherms are modeled using the classical Freundlich model, and the large database generated allows qualitative assumptions about the adsorption mechanism. However, to confirm these assumptions, a quantitative structure-property relationship methodology is used to assess the correlations between an adsorbability parameter (expressed using the Freundlich parameter K) and topological indices related to the compounds' molecular structure (molecular connectivity indices, MCI). This correlation is set up by means of two different statistical tools, multiple linear regression (MLR) and neural network (NN). A principal component analysis is carried out to generate new and uncorrelated variables. It enables the relations between the MCI to be analyzed, but the multiple linear regression assessed using the principal components (PCs) has a poor statistical quality and introduces high order PCs, too inaccurate for an explanation of the adsorption mechanism. The correlations are thus set up using the original variables (MCI), and both statistical tools, multiple linear regression and neural network, are compared from a descriptive and predictive point of view. To compare the predictive ability of both methods, a test database of 10 organic compounds is used.
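The Freundlich isotherm used above, q = K·C^(1/n), is commonly fitted by linearizing it as log q = log K + (1/n) log C and running ordinary least squares in log-log space. A minimal sketch of that fit follows; `fit_freundlich` is a hypothetical helper name, not code from the study.

```python
import math

def fit_freundlich(C, q):
    """Fit q = K * C**(1/n) by ordinary least squares on the linearized
    form log q = log K + (1/n) * log C. Returns (K, 1/n)."""
    x = [math.log(c) for c in C]
    y = [math.log(v) for v in q]
    n = len(x)
    xbar = sum(x) / n
    ybar = sum(y) / n
    slope = (sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y))
             / sum((xi - xbar) ** 2 for xi in x))
    intercept = ybar - slope * xbar
    return math.exp(intercept), slope  # K = exp(intercept), 1/n = slope
```

For synthetic data generated with K = 3 and 1/n = 0.5, the fit recovers both parameters; with real isotherm data the log transform also down-weights errors at high concentration, which is why nonlinear fitting is sometimes preferred.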
NASA Astrophysics Data System (ADS)
Fujita, Shigetaka; Harima, Takashi
2016-03-01
The mean flowfield of a linear array of multiple rectangular jets run through transversely by a two-dimensional jet has been investigated experimentally. The object of this experiment is to control both the velocity scale and the length scale of the multiple rectangular jets using a two-dimensional jet. This nozzle exit shape was adopted because the authors had previously reported that the cruciform nozzle strongly promoted the inward secondary flows on both jet axes. The aspect ratio of the rectangular nozzle used in this experiment was 12.5. The Reynolds number based on the nozzle width d and the exit mean velocity Ue (≅ 39 m / s) was kept constant at 25000. Longitudinal mean velocity was measured using an X-array hot-wire probe (lh = 3.1 μm in diameter, dh = 0.6 mm effective length : dh / lh = 194) operated by linearized constant temperature anemometers (DANTEC), and the spanwise and the lateral mean velocities were measured using a yaw meter. The signals from the anemometers were passed through low-pass filters and sampled using an A/D converter. The processing of the signals was done by a personal computer. Acquisition time of the signals was usually 60 seconds. The experiment revealed that the magnitude of the inward secondary flows on both the y and z axes in the upstream region of the present jet was promoted by the two-dimensional jet running transversely, perpendicular to the multiple rectangular jets; as a result, the potential core length on the x axis of the present jet extended 2.3 times longer than that of the multiple rectangular jets, and the half-velocity width on the rectangular jet axis of the present jet was suppressed to 41% less than that of the multiple rectangular jets.
Azadi, Sama; Karimi-Jashni, Ayoub
2016-02-01
Predicting the mass of solid waste generation plays an important role in integrated solid waste management plans. In this study, the performance of two predictive models, Artificial Neural Network (ANN) and Multiple Linear Regression (MLR) was verified to predict mean Seasonal Municipal Solid Waste Generation (SMSWG) rate. The accuracy of the proposed models is illustrated through a case study of 20 cities located in Fars Province, Iran. Four performance measures, MAE, MAPE, RMSE and R were used to evaluate the performance of these models. The MLR, as a conventional model, showed poor prediction performance. On the other hand, the results indicated that the ANN model, as a non-linear model, has a higher predictive accuracy when it comes to prediction of the mean SMSWG rate. As a result, in order to develop a more cost-effective strategy for waste management in the future, the ANN model could be used to predict the mean SMSWG rate. PMID:26482809
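The four performance measures named above (MAE, MAPE, RMSE and Pearson's R) have standard definitions that can be written directly; this is a generic sketch of those formulas, not code from the study.

```python
import math

def mae(obs, pred):
    """Mean absolute error."""
    return sum(abs(o - p) for o, p in zip(obs, pred)) / len(obs)

def mape(obs, pred):
    """Mean absolute percentage error (observations must be non-zero)."""
    return 100.0 * sum(abs((o - p) / o) for o, p in zip(obs, pred)) / len(obs)

def rmse(obs, pred):
    """Root mean squared error."""
    return math.sqrt(sum((o - p) ** 2 for o, p in zip(obs, pred)) / len(obs))

def pearson_r(obs, pred):
    """Pearson correlation coefficient between observed and predicted values."""
    n = len(obs)
    mo, mp = sum(obs) / n, sum(pred) / n
    cov = sum((o - mo) * (p - mp) for o, p in zip(obs, pred))
    so = math.sqrt(sum((o - mo) ** 2 for o in obs))
    sp = math.sqrt(sum((p - mp) ** 2 for p in pred))
    return cov / (so * sp)
```

MAE and RMSE share the units of the target (e.g. tons of waste), MAPE is scale-free, and R measures linear association only, which is why papers like this one report all four together.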
Pulse shape adjustment for the SLC damping ring kickers
Mattison, T.; Cassel, R.; Donaldson, A.; Fischer, H.; Gough, D.
1991-05-01
The difficulties with damping ring kickers that prevented operation of the SLAC Linear Collider in full multiple bunch mode have been overcome by shaping the current pulse to compensate for imperfections in the magnets. The risetime was improved by a peaking capacitor, with a tunable inductor to provide a locally flat pulse. The pulse was flattened by an adjustable droop inductor. Fine adjustment was provided by pulse forming line tuners driven by stepping motors. Further risetime improvement will be obtained by a saturating ferrite pulse sharpener. 4 refs., 3 figs.
Mahani, Mohamad Khayatzadeh; Chaloosi, Marzieh; Maragheh, Mohamad Ghanadi; Khanchi, Ali Reza; Afzali, Daryoush
2007-09-01
The oral acute in vivo toxicity of 32 amine and amide drugs was related to their structure-dependent properties. Genetic algorithm-partial least-squares and stepwise variable selection were applied to select meaningful descriptors. Multiple linear regression (MLR), artificial neural network (ANN) and partial least squares (PLS) models were created with the selected descriptors. The predictive ability of all three models was evaluated and compared on a set of five drugs which were not used in the modeling steps. Average errors of 0.168, 0.169 and 0.259 were obtained for MLR, ANN and PLS, respectively. PMID:17878584
Bao, J Y
1991-04-01
The commonly used microforceps have a much greater opening distance and spring resistance than needed. A piece of plastic ring or rubber band can be used to adjust the opening distance and reduce most of the spring resistance, making the user feel more comfortable and less fatigued. PMID:2051437
Harry, Herbert H.
1989-01-01
Apparatus and method for the adjustment and alignment of shafts in high power devices. A plurality of adjacent rotatable angled cylinders are positioned between a base and the shaft to be aligned, which, when rotated, introduce an axial offset. The apparatus is electrically conductive and constructed of a structurally rigid material. The angled cylinders allow a shaft, such as the center conductor in a pulse line machine, to be offset in any desired alignment position within the range of the apparatus.
Warwick, A.I.; Gough, D.E.; Meuth, H.
1988-11-01
A small-scale experimental accelerator called MBE-4 has been constructed to demonstrate the principle of a current-amplifying induction linac for multiple beams of heavy ions. Four beams of Cs{sup 1+}, initially at 200 keV and each with a current of 10 mA have been accelerated and amplified to a kinetic energy of 700 keV and currents of 90 mA apiece. Transverse focusing is achieved by means of electrostatic quadrupoles; longitudinally the current is amplified and the beam bunch is held together against the space charge forces by special time-dependent accelerating fields. We report on the methods developed for designing and implementing the accelerating pulses and on measurements of the transverse and longitudinal emittance of the accelerated beams. Current fluctuations and the longitudinal emittance are initially almost zero and increase as acceleration errors are accumulated. We discuss the final longitudinal emittance and the current fluctuations in the experiment in terms of their acceptability for a large heavy-ion-fusion driver. 17 refs., 23 figs., 3 tabs.
Lunøe, Kristoffer; Martínez-Sierra, Justo Giner; Gammelgaard, Bente; Alonso, J Ignacio García
2012-03-01
The analytical methodology for the in vivo study of selenium metabolism using two enriched selenium isotopes has been modified, allowing for the internal correction of spectral interferences and mass bias both for total selenium and speciation analysis. The method is based on the combination of an already described dual-isotope procedure with a new data treatment strategy based on multiple linear regression. A metabolically enriched isotope ((77)Se) is given orally to the test subject and a second isotope ((74)Se) is employed for quantification. In our approach, all possible polyatomic interferences occurring in the measurement of the isotope composition of selenium by collision cell quadrupole ICP-MS are taken into account and their relative contribution calculated by multiple linear regression after minimisation of the residuals. As a result, all spectral interferences and mass bias are corrected internally allowing the fast and independent quantification of natural abundance selenium ((nat)Se) and enriched (77)Se. In this sense, the calculation of the tracer/tracee ratio in each sample is straightforward. The method has been applied to study the time-related tissue incorporation of (77)Se in male Wistar rats while maintaining the (nat)Se steady-state conditions. Additionally, metabolically relevant information such as selenoprotein synthesis and selenium elimination in urine could be studied using the proposed methodology. In this case, serum proteins were separated by affinity chromatography while reverse phase was employed for urine metabolites. In both cases, (74)Se was used as a post-column isotope dilution spike. The application of multiple linear regression to the whole chromatogram allowed us to calculate the contribution of bromine hydride, selenium hydride, argon polyatomics and mass bias on the observed selenium isotope patterns. By minimising the square sum of residuals for the whole chromatogram, internal correction of spectral interferences and mass
ERIC Educational Resources Information Center
Li, Yuan H.; Yang, Yu N.; Tompkins, Leroy J.; Modarresi, Shahpar
2005-01-01
The statistical technique, "Zero-One Linear Programming," that has successfully been used to create multiple tests with similar characteristics (e.g., item difficulties, test information and test specifications) in the area of educational measurement, was deemed to be a suitable method for creating multiple sets of matched samples to be used as…
NASA Technical Reports Server (NTRS)
Whitlock, C. H., III
1977-01-01
Constituents with linear radiance gradients with concentration may be quantified from signals which contain nonlinear atmospheric and surface reflection effects for both homogeneous and non-homogeneous water bodies, provided accurate data can be obtained and the nonlinearities are constant with wavelength. Statistical parameters must be used which give an indication of bias as well as total squared error to ensure that an equation with an optimum combination of bands is selected. It is concluded that the effect of error in upwelled radiance measurements is to reduce the accuracy of the least squares fitting process and to increase the number of points required to obtain a satisfactory fit. The problem of obtaining a multiple regression equation that is extremely sensitive to error is discussed.
NASA Astrophysics Data System (ADS)
Hsu, Fong-Fu; Lodhi, Irfan J.; Turk, John; Semenkovich, Clay F.
2014-08-01
We describe a linear ion-trap (LIT) multiple-stage (MSn) mass spectrometric approach towards differentiation of alkylacyl, alk-1-enylacyl- and diacyl-glycerophosphocholines (PCs) as the [M - 15]- ions desorbed by electrospray ionization (ESI) in the negative-ion mode. The MS4 mass spectra of the [M - 15 - R2'CH = CO]- ions originated from the three PC subfamilies are readily distinguishable, resulting in unambiguous distinction of the lipid classes. This method is applied to two alkyl ether rich PC mixtures isolated from murine bone marrow neutrophils and kidney, respectively, to explore its utility in the characterization of complex PC mixtures of biological origin, resulting in the realization of the detailed structures of the PC species, including various classes and many minor isobaric isomers.
NASA Technical Reports Server (NTRS)
Goodrich, John W.
1995-01-01
Two methods for developing high order single step explicit algorithms on symmetric stencils with data on only one time level are presented. Examples are given for the convection and linearized Euler equations with up to eighth order accuracy in both space and time in one space dimension, and up to sixth order in two space dimensions. The method of characteristics is generalized to nondiagonalizable hyperbolic systems by using exact local polynomial solutions of the system, and the resulting exact propagator methods automatically incorporate the correct multidimensional wave propagation dynamics. Multivariate Taylor or Cauchy-Kowalevskaya expansions are also used to develop algorithms. Both of these methods can be applied to obtain algorithms of arbitrarily high order for hyperbolic systems in multiple space dimensions. Cross derivatives are included in the local approximations used to develop the algorithms in this paper in order to obtain high order accuracy, and improved isotropy and stability. Efficiency in meeting global error bounds is an important criterion for evaluating algorithms, and the higher order algorithms are shown to be up to several orders of magnitude more efficient even though they are more complex. Stable high order boundary conditions for the linearized Euler equations are developed in one space dimension, and demonstrated in two space dimensions.
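The lowest-order instance of this single-step Taylor-expansion construction is the classical Lax-Wendroff scheme for the 1-D convection equation u_t + c u_x = 0: expand u(t+Δt) = u - cΔt·u_x + (cΔt)²/2·u_xx and replace the spatial derivatives with central differences on a symmetric stencil. The sketch below shows that second-order case only (the paper carries the same idea to eighth order); the function name and the periodic-grid assumption are illustrative.

```python
def lax_wendroff_step(u, c, dt, dx):
    """One explicit, single-step, one-time-level update for u_t + c u_x = 0
    on a periodic grid: u(t+dt) = u - c*dt*u_x + (c*dt)**2/2 * u_xx,
    with both derivatives approximated by central differences."""
    nu = c * dt / dx  # Courant number; |nu| <= 1 for stability
    n = len(u)
    return [u[i]
            - 0.5 * nu * (u[(i + 1) % n] - u[(i - 1) % n])
            + 0.5 * nu * nu * (u[(i + 1) % n] - 2.0 * u[i] + u[(i - 1) % n])
            for i in range(n)]
```

A quick sanity check: at Courant number exactly 1 the scheme reduces to an exact shift of the grid data by one cell per step, which is the correct propagator for pure convection.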
Wang, Yazhou; Li, Zong; Shao, Pengyu; Hao, Shilei; Wang, Wei; Yang, Qian; Wang, Bochu
2014-11-01
We have developed a novel drug delivery system with a swelling core for differential release of multiple drugs by emulsion electrospinning, in which the aqueous phase is composed of polyvinyl alcohol and the oil phase consists of poly(ε-caprolactone). The microscopy images indicate that the W/O nanofibers with swelling core structure are successfully prepared and the model drugs, Rhodamine B and bovine serum albumin, were encapsulated in the fibers. The in vitro drug release study demonstrated that this core-sheath structure could significantly alleviate the initial drug burst release and provided a differential diffusion pathway for release. The maximum accumulated release of bovine serum albumin was postponed due to the presence of sodium citrate and different types of polyvinyl alcohol. This study would provide a basis for optimization of encapsulation conditions to control the release of multiple agents and ultimately be applied in cancer chemotherapy. PMID:25280686
NASA Astrophysics Data System (ADS)
Guo, P.; Huang, G. H.; Li, Y. P.
2010-01-01
In this study, an inexact fuzzy-chance-constrained two-stage mixed-integer linear programming (IFCTIP) approach is developed for flood diversion planning under multiple uncertainties. A concept of the distribution with fuzzy boundary interval probability is defined to address multiple uncertainties expressed as integration of intervals, fuzzy sets and probability distributions. IFCTIP integrates the inexact programming, two-stage stochastic programming, integer programming and fuzzy-stochastic programming within a general optimization framework. IFCTIP incorporates the pre-regulated water-diversion policies directly into its optimization process to analyze various policy scenarios; each scenario has a different economic penalty when the promised targets are violated. More importantly, it can facilitate dynamic programming for decisions of capacity-expansion planning under fuzzy-stochastic conditions. IFCTIP is applied to a flood management system. Solutions from IFCTIP provide desired flood diversion plans with a minimized system cost and a maximized safety level. The results indicate that reasonable solutions are generated for objective function values and decision variables, thus a number of decision alternatives can be generated under different levels of flood flows.
Sun, Xiaowei; Li, Wei; Xie, Yulei; Huang, Guohe; Dong, Changjuan; Yin, Jianguang
2016-11-01
A model based on economic structure adjustment and pollutants mitigation was proposed and applied in Urumqi. Best-worst case analysis and scenarios analysis were performed in the model to guarantee the parameters' accuracy, and to analyze the effect of changes of emission reduction styles. Results indicated that pollutant-mitigations of the electric power industry, the iron and steel industry, and traffic relied mainly on technological transformation measures, engineering transformation measures and structure emission reduction measures, respectively; pollutant-mitigations of the cement industry relied mainly on structure emission reduction measures and technological transformation measures; pollutant-mitigations of the thermal industry relied mainly on all four mitigation measures. They also indicated that structure emission reduction was a better measure for pollutants mitigation in Urumqi. The iron and steel industry contributed greatly to SO2, NOx and PM (particulate matter) emission reduction and should be given special attention in pollutants emission reduction. In addition, the scale of the iron and steel industry should be reduced with the decrease of SO2 mitigation amounts. The scales of traffic and the electric power industry should be reduced with the decrease of NOx mitigation amounts, and the scales of the cement industry and the iron and steel industry should be reduced with the decrease of PM mitigation amounts. The study can provide references of pollutants mitigation schemes to decision-makers for regional economic and environmental development in the 12th Five-Year Plan on National Economic and Social Development of Urumqi. PMID:27454097
Depuydt, Christophe E; Thys, Sofie; Beert, Johan; Jonckheere, Jef; Salembier, Geert; Bogers, Johannes J
2016-11-01
Persistent high-risk human papillomavirus (HPV) infection is strongly associated with development of high-grade cervical intraepithelial neoplasia or cancer (CIN3+). In single type infections, serial type-specific viral-load measurements predict the natural history of the infection. In infections with multiple HPV-types, the individual type-specific viral-load profile could distinguish progressing HPV-infections from regressing infections. A case-cohort natural history study was established using samples from untreated women with multiple HPV-infections who developed CIN3+ (n = 57) or cleared infections (n = 88). Enriched cell pellets from liquid-based cytology samples were subjected to a clinically validated real-time qPCR-assay (18 HPV-types). Using serial type-specific viral-load measurements (≥3) we calculated HPV-specific slopes and coefficient of determination (R(2) ) by linear regression. For each woman slopes and R(2) were used to calculate which HPV-induced processes were ongoing (progression, regression, serial transient, transient). In transient infections with multiple HPV-types, each single HPV-type generated similar increasing (0.27copies/cell/day) and decreasing (-0.27copies/cell/day) viral-load slopes. In CIN3+, at least one of the HPV-types had a clonal progressive course (R(2) ≥ 0.85; 0.0025copies/cell/day). In selected CIN3+ cases (n = 6), immunostaining detecting type-specific HPV 16, 31, 33, 58 and 67 RNA showed an even staining in clonal populations (CIN3+), whereas in transient virion-producing infections the RNA-staining was less in the basal layer compared to the upper layer where cells were ready to desquamate and release newly-formed virions. RNA-hybridization patterns matched the calculated ongoing processes measured by R(2) and slope in serial type-specific viral-load measurements preceding the biopsy. In women with multiple HPV-types, serial type-specific viral-load measurements predict the natural history of the
A comparative evaluation of methods of adjusting GPA for differences in grade assignment practices.
Lei, Pui-Wa; Bassiri, Dina; Schulz, E Matthew
2003-01-01
Numerous methods have been proposed for constructing an adjusted grade point average (adjusted-GPA) that controls for differences in grading standards across college courses and departments. Compared to the raw GPA, adjusted-GPA measures are generally more predictable from preadmissions variables, such as standardized tests and high school achievement. Relative rankings of students on adjusted-GPA measures are also more consistent with their relative standings within courses. This study compared the performance of 4 polytomous IRT and 3 linear models for constructing adjusted-GPA measures. Unlike previous studies, the regression weights of predictor variables and the course parameter estimates used to compute adjusted-GPA were cross-validated. Adjusted-GPA retained noticeable advantages over raw GPA on cross-validation. The largest advantages were seen in the multiple correlation of adjusted-GPA with preadmission variables, when adjusted-GPA was constructed with the rating scale and partial credit IRT models. The cross-validity of adjusted-GPA was the weakest with the graded response model. PMID:12700432
Rudashevskaya, Elena L; Breitwieser, Florian P; Huber, Marie L; Colinge, Jacques; Müller, André C; Bennett, Keiryn L
2013-02-01
The identification and validation of cross-linked peptides by mass spectrometry remains a daunting challenge for protein-protein cross-linking approaches when investigating protein interactions. This includes the fragmentation of cross-linked peptides in the mass spectrometer per se and, following database searching, the matching of the molecular masses of the fragment ions to the correct cross-linked peptides. The hybrid linear trap quadrupole (LTQ) Orbitrap Velos combines the speed of the tandem mass spectrometry (MS/MS) duty cycle with high mass accuracy, and these features were utilized in the current study to substantially improve the confidence in the identification of cross-linked peptides. An MS/MS method termed multiple and sequential data acquisition method (MSDAM) was developed. Preliminary optimization of the MS/MS settings was performed with a synthetic peptide (TP1) cross-linked with bis[sulfosuccinimidyl] suberate (BS(3)). On the basis of these results, MSDAM was created and assessed on the BS(3)-cross-linked bovine serum albumin (BSA) homodimer. MSDAM applies a series of multiple sequential fragmentation events with a range of different normalized collision energies (NCE) to the same precursor ion. The combination of a series of NCE enabled a considerable improvement in the quality of the fragmentation spectra for cross-linked peptides, and ultimately aided in the identification of the sequences of the cross-linked peptides. Concurrently, MSDAM provides confirmatory evidence from the formation of reporter ion fragments, which reduces the false positive rate of incorrectly assigned cross-linked peptides. PMID:23301806
Herrig, Ilona M; Böer, Simone I; Brennholt, Nicole; Manz, Werner
2015-11-15
Since rivers are typically subject to rapid changes in microbiological water quality, tools are needed to allow timely water quality assessment. A promising approach is the application of predictive models. In our study, we developed multiple linear regression (MLR) models in order to predict the abundance of the fecal indicator organisms Escherichia coli (EC), intestinal enterococci (IE) and somatic coliphages (SC) in the Lahn River, Germany. The models were developed on the basis of an extensive set of environmental parameters collected during a 12-months monitoring period. Two models were developed for each type of indicator: 1) an extended model including the maximum number of variables significantly explaining variations in indicator abundance and 2) a simplified model reduced to the three most influential explanatory variables, thus obtaining a model which is less resource-intensive with regard to required data. Both approaches have the ability to model multiple sites within one river stretch. The three most important predictive variables in the optimized models for the bacterial indicators were NH4-N, turbidity and global solar irradiance, whereas chlorophyll a content, discharge and NH4-N were reliable model variables for somatic coliphages. Depending on indicator type, the extended mode models also included the additional variables rainfall, O2 content, pH and chlorophyll a. The extended mode models could explain 69% (EC), 74% (IE) and 72% (SC) of the observed variance in fecal indicator concentrations. The optimized models explained the observed variance in fecal indicator concentrations to 65% (EC), 70% (IE) and 68% (SC). Site-specific efficiencies ranged up to 82% (EC) and 81% (IE, SC). Our results suggest that MLR models are a promising tool for a timely water quality assessment in the Lahn area. PMID:26318647
NASA Astrophysics Data System (ADS)
Campbell, John L.; Heirwegh, Christopher M.; Ganly, Brianna
2016-09-01
Spectra from the laboratory and flight versions of the Curiosity rover's alpha particle X-ray spectrometer were fitted with an in-house version of GUPIX, revealing departures from linear behavior of the energy-channel relationships in the low X-ray energy region where alpha particle PIXE is the dominant excitation mechanism. The apparent energy shifts for the lightest elements present were attributed in part to multiple ionization satellites and in part to issues within the detector and/or the pulse processing chain. No specific issue was identified, but the second of these options was considered to be the more probable. Approximate corrections were derived and then applied within the GUAPX code which is designed specifically for quantitative evaluation of APXS spectra. The quality of fit was significantly improved. The peak areas of the light elements Na, Mg, Al and Si were changed by only a few percent in most spectra. The changes for elements with higher atomic number were generally smaller, with a few exceptions. Overall, the percentage peak area changes are much smaller than the overall uncertainties in derived concentrations, which are largely attributable to the effects of rock heterogeneity. The magnitude of the satellite contributions suggests the need to incorporate these routinely in accelerator-based PIXE using helium beams.
Hu, L.; Liang, M.; Mouraux, A.; Wise, R. G.; Hu, Y.
2011-01-01
Across-trial averaging is a widely used approach to enhance the signal-to-noise ratio (SNR) of event-related potentials (ERPs). However, across-trial variability of ERP latency and amplitude may contain physiologically relevant information that is lost by across-trial averaging. Hence, we aimed to develop a novel method that uses 1) wavelet filtering (WF) to enhance the SNR of ERPs and 2) a multiple linear regression with a dispersion term (MLRd) that takes into account shape distortions to estimate the single-trial latency and amplitude of ERP peaks. Using simulated ERP data sets containing different levels of noise, we provide evidence that, compared with other approaches, the proposed WF+MLRd method yields the most accurate estimate of single-trial ERP features. When applied to a real laser-evoked potential data set, the WF+MLRd approach provides reliable estimation of single-trial latency, amplitude, and morphology of ERPs and thereby allows performing meaningful correlations at single-trial level. We obtained three main findings. First, WF significantly enhances the SNR of single-trial ERPs. Second, MLRd effectively captures and measures the variability in the morphology of single-trial ERPs, thus providing an accurate and unbiased estimate of their peak latency and amplitude. Third, intensity of pain perception significantly correlates with the single-trial estimates of N2 and P2 amplitude. These results indicate that WF+MLRd can be used to explore the dynamics between different ERP features, behavioral variables, and other neuroimaging measures of brain activity, thus providing new insights into the functional significance of the different brain processes underlying the brain responses to sensory stimuli. PMID:21880936
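The core of the MLRd idea, regressing each single trial on a template waveform and shape-correction terms, can be illustrated at first order: since m(t - L) ≈ m(t) - L·m'(t) for a small latency shift L, regressing a trial on the template and its temporal derivative yields an amplitude estimate and a latency-shift estimate. This is a simplified sketch, not the published MLRd (which also models dispersion/width); `single_trial_fit` is a hypothetical name.

```python
def single_trial_fit(trial, template):
    """Least-squares fit of one trial to a template m and its temporal
    derivative dm: trial ~ a*m + b*dm. For small shifts,
    m(t - L) ~ m(t) - L*m'(t), so amplitude ~ a and latency shift ~ -b/a."""
    n = len(template)
    dm = [0.0] * n
    for i in range(1, n - 1):                       # central-difference derivative
        dm[i] = 0.5 * (template[i + 1] - template[i - 1])
    # 2x2 normal equations for the coefficients [a, b]
    mm = sum(x * x for x in template)
    md = sum(x * y for x, y in zip(template, dm))
    dd = sum(y * y for y in dm)
    my = sum(x * y for x, y in zip(template, trial))
    dy = sum(x * y for x, y in zip(dm, trial))
    det = mm * dd - md * md
    a = (dd * my - md * dy) / det
    b = (mm * dy - md * my) / det
    return a, -b / a  # (amplitude, estimated latency shift in samples)
```

Applied to a trial that is the template scaled by 3, the fit returns amplitude 3 and zero shift; applied to a trial shifted by half a sample (to first order), it returns the correct fractional latency shift.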
Lewis, Myles J.; Vyse, Simon; Shields, Adrian M.; Boeltz, Sebastian; Gordon, Patrick A.; Spector, Timothy D.; Lehner, Paul J.; Walczak, Henning; Vyse, Timothy J.
2015-01-01
UBE2L3 is associated with increased susceptibility to numerous autoimmune diseases, but the underlying mechanism is unexplained. By using data from a genome-wide association study of systemic lupus erythematosus (SLE), we observed a single risk haplotype spanning UBE2L3, consistently aligned across multiple autoimmune diseases, associated with increased UBE2L3 expression in B cells and monocytes. rs140490 in the UBE2L3 promoter region showed the strongest association. UBE2L3 is an E2 ubiquitin-conjugating enzyme, specially adapted to function with HECT and RING-in-between-RING (RBR) E3 ligases, including HOIL-1 and HOIP, components of the linear ubiquitin chain assembly complex (LUBAC). Our data demonstrate that UBE2L3 is the preferred E2 conjugating enzyme for LUBAC in vivo, and UBE2L3 is essential for LUBAC-mediated activation of NF-κB. By accurately quantifying NF-κB translocation in primary human cells from healthy individuals stratified by rs140490 genotype, we observed that the autoimmune disease risk UBE2L3 genotype was correlated with basal NF-κB activation in unstimulated B cells and monocytes and regulated the sensitivity of NF-κB to CD40 stimulation in B cells and TNF stimulation in monocytes. The UBE2L3 risk allele correlated with increased circulating plasmablast and plasma cell numbers in SLE individuals, consistent with substantially elevated UBE2L3 protein levels in plasmablasts and plasma cells. These results identify key immunological consequences of the UBE2L3 autoimmune risk haplotype and highlight an important role for UBE2L3 in plasmablast and plasma cell development. PMID:25640675
NASA Astrophysics Data System (ADS)
Barbu, N.; Cuculeanu, V.; Stefan, S.
2015-08-01
The aim of this study is to investigate the relationship between the frequency of very warm days (TX90p) in Romania and large-scale atmospheric circulation for winter (December-February) and summer (June-August) between 1962 and 2010. In order to achieve this, two catalogues from COST733Action were used to derive daily circulation types. Seasonal occurrence frequencies of the circulation types were calculated and have been utilized as predictors within the multiple linear regression model (MLRM) for the estimation of winter and summer TX90p values for 85 synoptic stations covering the entire Romania. A forward selection procedure has been utilized to find adequate predictor combinations and those predictor combinations were tested for collinearity. The performance of the MLRMs has been quantified based on the explained variance. Furthermore, the leave-one-out cross-validation procedure was applied and the root-mean-squared error skill score was calculated at station level in order to obtain reliable evidence of MLRM robustness. From this analysis, it can be stated that the MLRM performance is higher in winter compared to summer. This is due to the annual cycle of incoming insolation and to the local factors such as orography and surface albedo variations. The MLRM performances exhibit distinct variations between regions with high performance in wintertime for the eastern and southern part of the country and in summertime for the western part of the country. One can conclude that the MLRM generally captures quite well the TX90p variability and reveals the potential for statistical downscaling of TX90p values based on circulation types.
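The leave-one-out cross-validation with an RMSE skill score described above can be sketched for a single-predictor regression: each observation is held out in turn, the model is refit, and the skill score compares the model's prediction errors against a reference forecast (here the training-sample mean). This is a generic illustration, not the study's MLRM; the function names and the choice of reference are assumptions.

```python
import math

def simple_ols(x, y):
    """Slope and intercept of an ordinary least-squares line."""
    n = len(x)
    xb, yb = sum(x) / n, sum(y) / n
    slope = (sum((a - xb) * (b - yb) for a, b in zip(x, y))
             / sum((a - xb) ** 2 for a in x))
    return slope, yb - slope * xb

def loo_rmsess(x, y):
    """Leave-one-out cross-validated RMSE skill score:
    RMSESS = 1 - RMSE_model / RMSE_reference,
    where the reference forecast is the training-sample mean."""
    errs_model, errs_ref = [], []
    for i in range(len(x)):
        xt, yt = x[:i] + x[i + 1:], y[:i] + y[i + 1:]   # hold out sample i
        m, c = simple_ols(xt, yt)
        errs_model.append((m * x[i] + c - y[i]) ** 2)
        errs_ref.append((sum(yt) / len(yt) - y[i]) ** 2)
    rmse_m = math.sqrt(sum(errs_model) / len(errs_model))
    rmse_r = math.sqrt(sum(errs_ref) / len(errs_ref))
    return 1.0 - rmse_m / rmse_r
```

A score of 1 means perfect out-of-sample prediction, 0 means no better than the reference forecast, and negative values mean worse than the reference, which is why the score is a stricter robustness check than in-sample explained variance.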
Tatituri, Raju Venkata Veera; Brenner, Michael B.; Turk, John; Hsu, Fong-Fu
2013-01-01
The cell wall of the pathogenic bacterium Streptococcus pneumoniae (S. pneumoniae) contains glucopyranosyl diacylglycerol (GlcDAG) and galactoglucopyranosyldiacylglycerol (GalGlcDAG). The specific GlcDAG with a vaccenic acid substituent at sn-2 was recently identified as another glycolipid antigen family recognized by invariant natural killer T cells (iNKT cells). Here, we describe a linear ion-trap (LIT) multiple-stage (MSn) mass spectrometric approach toward structural analysis of GalGlcDAG and GlcDAG. Structural information derived from MSn (n = 2,3) on the [M + Li]+ adduct ions desorbed by electrospray ionization (ESI) affords identification of the fatty acid substituents, assignment of the fatty acyl groups on the glycerol backbone, and the location of the double bond along the fatty acyl chain. The identification of the fatty acyl groups and determination of their regiospecificity were confirmed by MSn (n = 2,3) on the [M + NH4]+ ions. We establish the structures of GalGlcDAG and GlcDAG isolated from S. pneumoniae, in which the major species contain a 16:1- or 18:1-fatty acid substituent mainly at sn-2, with the double bond of the fatty acid located at ω-7 (n-7). More than one isomer was found for each mass in the family. This mass spectrometric approach provides a simple method for structure identification of this important lipid family that would be very difficult to define using traditional methods. PMID:22282097
NASA Astrophysics Data System (ADS)
Ibanez, C. A. G.; Carcellar, B. G., III; Paringit, E. C.; Argamosa, R. J. L.; Faelga, R. A. G.; Posilero, M. A. V.; Zaragosa, G. P.; Dimayacyac, N. A.
2016-06-01
Diameter-at-breast-height (DBH) estimation is a prerequisite in various allometric equations estimating important forestry indices like stem volume, basal area, biomass and carbon stock. LiDAR technology can directly obtain many forest parameters, except DBH, from the behavior and characteristics of the point cloud, which are unique to different forest classes. An extensive tree inventory was done on a two-hectare established sample plot in Mt. Makiling, Laguna for a natural growth forest. Coordinates, height, and canopy cover were measured, and species were identified for comparison with LiDAR derivatives. Multiple linear regression was used to derive LiDAR-derived DBH by relating field-derived DBH to 27 LiDAR-derived parameters at 20 m, 10 m, and 5 m grid resolutions. To determine the best combination of parameters for DBH estimation, all possible combinations of parameters were generated and evaluated automatically using Python scripts, with regression-related libraries such as NumPy, SciPy, and scikit-learn. The combination that yields the highest r-squared (coefficient of determination) and the lowest AIC (Akaike's Information Criterion) and BIC (Bayesian Information Criterion) was selected as the best equation. The best equation uses 11 parameters at 10 m grid size, with an r-squared of 0.604, an AIC of 154.04, and a BIC of 175.08. Combinations of parameters may differ among forest classes in further studies. Additional statistical tests, such as the Kaiser-Meyer-Olkin (KMO) coefficient and Bartlett's Test of Sphericity (BTS), can be supplemented to help determine the correlation among parameters.
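The exhaustive combination search with AIC/BIC scoring can be sketched as follows. The data are synthetic and the variable names hypothetical (this is not the study's dataset), but the bookkeeping — enumerate subsets, fit OLS, compare information criteria — mirrors the stated procedure:

```python
import numpy as np
from itertools import combinations

def ols_aic_bic(X, y):
    # Fit OLS with intercept; return (r2, AIC, BIC) under Gaussian errors.
    n, k = X.shape
    A = np.column_stack([np.ones(n), X])
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    rss = np.sum((y - A @ beta) ** 2)
    r2 = 1 - rss / np.sum((y - y.mean()) ** 2)
    p = k + 1                                      # parameters incl. intercept
    ll = -0.5 * n * (np.log(2 * np.pi * rss / n) + 1)
    return r2, 2 * p - 2 * ll, p * np.log(n) - 2 * ll

def best_subset(X, y, max_size=3):
    # Exhaustively score every predictor combination; lowest AIC wins.
    best = None
    for size in range(1, max_size + 1):
        for cols in combinations(range(X.shape[1]), size):
            r2, aic, bic = ols_aic_bic(X[:, list(cols)], y)
            if best is None or aic < best[1]:
                best = (cols, aic, bic, r2)
    return best

rng = np.random.default_rng(1)
lidar = rng.normal(size=(200, 8))                  # 200 plots x 8 LiDAR metrics
dbh = 30 + 4 * lidar[:, 1] + 2 * lidar[:, 5] + rng.normal(0, 1, 200)

cols, aic, bic, r2 = best_subset(lidar, dbh)
```

Scoring by lowest BIC instead of AIC only changes the comparison key; BIC's log(n) penalty favors smaller subsets.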
NASA Astrophysics Data System (ADS)
OMEGA Science Team; Combe, J.-Ph.; Le Mouélic, S.; Sotin, C.; Gendrin, A.; Mustard, J. F.; Le Deit, L.; Launeau, P.; Bibring, J.-P.; Gondet, B.; Langevin, Y.; Pinet, P.; OMEGA Science Team
2008-05-01
The mineralogical composition of the Martian surface is investigated by a Multiple-Endmember Linear Spectral Unmixing Model (MELSUM) of the Observatoire pour la Minéralogie, l'Eau, les Glaces et l'Activité (OMEGA) imaging spectrometer onboard Mars Express. OMEGA has fully covered the surface of the red planet at medium to low resolution (2-4 km per pixel). Several areas have been imaged at a resolution up to 300 m per pixel. One difficulty in the data processing is to extract the mineralogical composition, since rocks are mixtures of several components. MELSUM is an algorithm that selects the best linear combination of spectra among the families of minerals available in a reference library. The best fit of the observed spectrum on each pixel is calculated by the same unmixing equation used in the classical Spectral Mixture Analysis (SMA). This study shows the importance of the choice of the input library, which in our case contains 24 laboratory spectra (endmembers) of minerals covering the diversity of the mineral families that may be found on the Martian surface. The analysis is restricted to the 1.0-2.5 μm wavelength range. Grain size variations and atmospheric scattering by aerosols induce changes in overall albedo level and continuum slopes. Synthetic flat and pure slope spectra have therefore been included in the input mineral spectral endmember library in order to take these effects into account. The selection process for the endmembers is a systematic exploration of the whole set of combinations of four components plus the straight-line spectra. When negative coefficients occur, the results are discarded. This strategy is successfully tested on the terrestrial Cuprite site (Nevada, USA), for which extensive ground observations exist. It is then applied to different areas on Mars including Syrtis Major, Aram Chaos and Olympia Undae near the North Polar Cap. MELSUM on Syrtis Major reveals a region dominated by mafic minerals, with the oldest crustal regions
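A toy version of the MELSUM selection loop — exhaustive combinations of library endmembers plus flat and slope spectra, discarding any fit with negative mineral coefficients and keeping the lowest residual — might look like this. Two endmembers per combination are used here instead of the paper's four, and the library is random placeholder data, purely for illustration:

```python
import numpy as np
from itertools import combinations

def unmix(spectrum, library, n_mix=2):
    # Try every n_mix-endmember combination plus flat and slope spectra;
    # keep the non-negative solution with the smallest residual.
    nb = len(spectrum)
    flat = np.ones(nb)                      # albedo-level term
    slope = np.linspace(0.0, 1.0, nb)       # continuum-slope term
    best = None
    for combo in combinations(range(library.shape[0]), n_mix):
        A = np.column_stack([library[list(combo)].T, flat, slope])
        coef, *_ = np.linalg.lstsq(A, spectrum, rcond=None)
        if np.any(coef[:n_mix] < 0):        # discard physically meaningless fits
            continue
        resid = np.linalg.norm(A @ coef - spectrum)
        if best is None or resid < best[2]:
            best = (combo, coef, resid)
    return best

# Toy library of 5 "mineral" spectra over 20 channels (hypothetical data)
rng = np.random.default_rng(2)
lib = np.abs(rng.normal(1.0, 0.3, size=(5, 20)))
truth = 0.6 * lib[0] + 0.4 * lib[3] + 0.05  # mixture of endmembers 0 and 3

combo, coef, resid = unmix(truth, lib)
```

The flat spectrum absorbs the constant offset here, exactly as the synthetic flat and slope endmembers absorb albedo and aerosol effects in the paper.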
Crestani, Marco G; Hickey, Anne K; Gao, Xinfeng; Pinter, Balazs; Cavaliere, Vincent N; Ito, Jun-Ichi; Chen, Chun-Hsing; Mindiola, Daniel J
2013-10-01
The transient titanium neopentylidyne, [(PNP)Ti≡C(t)Bu] (A; PNP(-)≡N[2-P(i)Pr2-4-methylphenyl]2(-)), dehydrogenates ethane to ethylene at room temperature over 24 h, by sequential 1,2-CH bond addition and β-hydrogen abstraction to afford [(PNP)Ti(η(2)-H2C═CH2)(CH2(t)Bu)] (1). Intermediate A can also dehydrogenate propane to propene, albeit not cleanly, as well as linear and volatile alkanes C4-C6 to form isolable α-olefin complexes of the type [(PNP)Ti(η(2)-H2C═CHR)(CH2(t)Bu)] (R = CH3 (2), CH2CH3 (3), (n)Pr (4), and (n)Bu (5)). Complexes 1-5 can be independently prepared from [(PNP)Ti═CH(t)Bu(OTf)] and the corresponding alkylating reagents, LiCH2CHR (R = H, CH3 (unstable), CH2CH3, (n)Pr, and (n)Bu). Olefin complexes 1 and 3-5 have all been characterized by a diverse array of multinuclear NMR spectroscopic experiments including (1)H-(31)P HOESY, and in the case of the α-olefin adducts 2-5, formation of mixtures of two diastereomers (each with their corresponding pair of enantiomers) has been unequivocally established. The latter has been spectroscopically elucidated by NMR via C-H coupled and decoupled (1)H-(13)C multiplicity edited gHSQC, (1)H-(31)P HMBC, and dqfCOSY experiments. Heavier linear alkanes (C7 and C8) are also dehydrogenated by A to form [(PNP)Ti(η(2)-H2C═CH(n)Pentyl)(CH2(t)Bu)] (6) and [(PNP)Ti(η(2)-H2C═CH(n)Hexyl)(CH2(t)Bu)] (7), respectively; these species are unstable but can exchange with ethylene (1 atm) to form 1 and the free α-olefin. Complex 1 exchanges with D2C═CD2 with concomitant release of H2C═CH2. In addition, deuterium incorporation is observed in the neopentyl ligand as a result of this process. Cyclohexane and methylcyclohexane can also be dehydrogenated by transient A, and in the case of cyclohexane, ethylene (1 atm) can trap the [(PNP)Ti(CH2(t)Bu)] fragment to form 1. Dehydrogenation of the alkane is not rate-determining since pentane and pentane-d12 can be dehydrogenated to 4 and 4-d12 with comparable
Electrical characterization of special purpose linear microcircuits
NASA Astrophysics Data System (ADS)
Kulpinski, J. S.; Simonsen, T.; Carrozza, L.; Mossman, R.; Dunn, J.
1980-05-01
This report covers the work performed by General Electric Ordnance Systems pertaining to the electrical characterization and specification of linear microcircuits. The period of the report is August 1978 to December 1979. This technical report is divided into chapters covering specific device types with electrical characterization results. The following device types/families were characterized: Adjustable Positive Voltage Regulators, Adjustable Negative Voltage Regulators, Precision BiFET Op Amps, Multiple BiFET Op Amps, 12-Bit A/D Converters, 12-Bit D/A Converters, Precision Voltage References, and Precision Sample/Hold Amplifiers. Data obtained during device characterization are published in handbook form, obtainable under separate cover from this document. Samples of data sheets, histograms, and plots are included in this report, however.
Resistors Improve Ramp Linearity
NASA Technical Reports Server (NTRS)
Kleinberg, L. L.
1982-01-01
Simple modification to bootstrap ramp generator gives more linear output over longer sweep times. New circuit adds just two resistors, one of which is adjustable. Modification cancels nonlinearities due to variations in load on charging capacitor and due to changes in charging current as the voltage across capacitor increases.
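The nonlinearity the bootstrap circuit cancels can be quantified numerically: a plain RC charging curve is exponential (the charging current falls as the capacitor voltage rises), while an ideal bootstrapped generator holds the charging current constant, giving a straight ramp. A sketch with hypothetical component values, not taken from the circuit described above:

```python
import numpy as np

# Charging a capacitor through a resistor gives an exponential, not a ramp:
#   V(t) = Vs * (1 - exp(-t / RC)).
# A bootstrap generator holds the voltage across R roughly constant, so the
# charging current (and hence dV/dt) stays fixed.
Vs, R, C = 10.0, 10e3, 1e-6            # hypothetical: 10 V supply, 10 kOhm, 1 uF
t = np.linspace(0, 2e-3, 201)          # 2 ms sweep

v_simple = Vs * (1 - np.exp(-t / (R * C)))
v_boot = np.clip(Vs * t / (R * C), 0, Vs)   # idealized constant-current ramp

def linearity_error(v, t):
    # Worst-case deviation from the best straight-line fit, as a fraction of span.
    a, b = np.polyfit(t, v, 1)
    return np.max(np.abs(v - (a * t + b))) / (v.max() - v.min())

err_simple = linearity_error(v_simple, t)
err_boot = linearity_error(v_boot, t)
```

Even over a sweep of only 0.2 RC, the plain exponential ramp shows a linearity error on the order of a percent, which is what the bootstrap (and its cancellation of load- and current-dependent effects) removes.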
Li, L; Kleinman, K; Gillman, M W
2014-12-01
We implemented six confounding adjustment methods: (1) covariate-adjusted regression, (2) propensity score (PS) regression, (3) PS stratification, (4) PS matching with two calipers, (5) inverse probability weighting and (6) doubly robust estimation to examine the associations between the body mass index (BMI) z-score at 3 years and two separate dichotomous exposure measures: exclusive breastfeeding v. formula only (n=437) and cesarean section v. vaginal delivery (n=1236). Data were drawn from a prospective pre-birth cohort study, Project Viva. The goal is to demonstrate the necessity of, usefulness of, and approaches to multiple confounding adjustment methods for analyzing observational data. Unadjusted (univariate) and covariate-adjusted linear regression associations of breastfeeding with BMI z-score were -0.33 (95% CI -0.53, -0.13) and -0.24 (-0.46, -0.02), respectively. The other approaches resulted in smaller n (204-276) because of poor overlap of covariates, but CIs were of similar width except for inverse probability weighting (75% wider) and PS matching with a wider caliper (76% wider). Point estimates ranged widely, however, from -0.01 to -0.38. For cesarean section, because of better covariate overlap, the covariate-adjusted regression estimate (0.20) was remarkably robust to all adjustment methods, and the widths of the 95% CIs differed less than in the breastfeeding example. Choice of covariate adjustment method can matter. Lack of overlap in covariate structure between exposed and unexposed participants in observational studies can lead to erroneous covariate-adjusted estimates and confidence intervals. We recommend inspecting covariate overlap and using multiple confounding adjustment methods. Similar results bring reassurance. Contradictory results suggest issues with either the data or the analytic method. PMID:25171142
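As an illustration of one of the six listed methods, here is a minimal inverse-probability-weighting sketch on simulated confounded data (not the Project Viva data; the propensity model is a bare-bones logistic fit, and all effect sizes are invented):

```python
import numpy as np

def logistic_fit(X, y, iters=500, lr=0.1):
    # Minimal logistic regression by gradient ascent; returns fitted
    # propensity scores P(treated | X).
    A = np.column_stack([np.ones(len(y)), X])
    w = np.zeros(A.shape[1])
    for _ in range(iters):
        p = 1 / (1 + np.exp(-A @ w))
        w += lr * A.T @ (y - p) / len(y)
    return 1 / (1 + np.exp(-A @ w))

rng = np.random.default_rng(3)
n = 5000
conf = rng.normal(size=n)                        # a confounder
treat = (rng.random(n) < 1 / (1 + np.exp(-conf))).astype(float)
outcome = 0.5 * conf - 0.3 * treat + rng.normal(0, 1, n)   # true effect: -0.3

# Naive comparison is biased because the confounder drives both treatment
# and outcome.
naive = outcome[treat == 1].mean() - outcome[treat == 0].mean()

# IPW: weight each subject by the inverse probability of the treatment
# actually received, then compare weighted means (Hajek estimator).
ps = logistic_fit(conf.reshape(-1, 1), treat)
w = treat / ps + (1 - treat) / (1 - ps)
ipw = (np.sum(w * treat * outcome) / np.sum(w * treat)
       - np.sum(w * (1 - treat) * outcome) / np.sum(w * (1 - treat)))
```

The naive contrast lands far from the true effect, while the weighted contrast recovers it; the abstract's caution about poor covariate overlap corresponds to propensity scores near 0 or 1, which blow up the weights and widen the confidence interval.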
ADJUSTABLE DOUBLE PULSE GENERATOR
Gratian, J.W.; Gratian, A.C.
1961-08-01
A modulator pulse source having adjustable pulse width and adjustable pulse spacing is described. The generator consists of a cross-coupled multivibrator having adjustable time constant circuitry in each leg, an adjustable differentiating circuit in the output of each leg, a mixing and rectifying circuit for combining the differentiated pulses and generating in its output a resultant sequence of negative pulses, and a final amplifying circuit for inverting and square-topping the pulses. (AEC)
Adjustable sutures in children.
Engel, J Mark; Guyton, David L; Hunter, David G
2014-06-01
Although adjustable sutures are considered a standard technique in adult strabismus surgery, most surgeons are hesitant to attempt the technique in children, who are believed to be unlikely to cooperate for postoperative assessment and adjustment. Interest in using adjustable sutures in pediatric patients has increased with the development of surgical techniques specific to infants and children. This workshop briefly reviews the literature supporting the use of adjustable sutures in children and presents the approaches currently used by three experienced strabismus surgeons. PMID:24924284
Ratios as a size adjustment in morphometrics.
Albrecht, G H; Gelvin, B R; Hartman, S E
1993-08-01
Simple ratios in which a measurement variable is divided by a size variable are commonly used but known to be inadequate for eliminating size correlations from morphometric data. Deficiencies in the simple ratio can be alleviated by incorporating regression coefficients describing the bivariate relationship between the measurement and size variables. Recommendations have included: 1) subtracting the regression intercept to force the bivariate relationship through the origin (intercept-adjusted ratios); 2) exponentiating either the measurement or the size variable using an allometry coefficient to achieve linearity (allometrically adjusted ratios); or 3) both subtracting the intercept and exponentiating (fully adjusted ratios). These three strategies for deriving size-adjusted ratios imply different data models for describing the bivariate relationship between the measurement and size variables (i.e., the linear, simple allometric, and full allometric models, respectively). Algebraic rearrangement of the equation associated with each data model leads to a correctly formulated adjusted ratio whose expected value is constant (i.e., size correlation is eliminated). Alternatively, simple algebra can be used to derive an expected value function for assessing whether any proposed ratio formula is effective in eliminating size correlations. Some published ratio adjustments were incorrectly formulated as indicated by expected values that remain a function of size after ratio transformation. Regression coefficients incorporated into adjusted ratios must be estimated using least-squares regression of the measurement variable on the size variable. Use of parameters estimated by any other regression technique (e.g., major axis or reduced major axis) results in residual correlations between size and the adjusted measurement variable. Correctly formulated adjusted ratios, whose parameters are estimated by least-squares methods, do control for size correlations. The size-adjusted
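The allometric adjustment described above can be demonstrated numerically: for data following the simple allometric model Y = aX^b, the simple ratio Y/X remains correlated with size, while the allometrically adjusted ratio Y/X^b (with b estimated by least squares of the measurement on size, as the abstract requires) has a constant expected value. A sketch on simulated data; all parameter values are invented:

```python
import numpy as np

rng = np.random.default_rng(4)
size = np.exp(rng.normal(0, 0.4, 2000))          # size variable X
b_true = 1.6                                     # allometry coefficient
measure = 2.0 * size ** b_true * np.exp(rng.normal(0, 0.05, 2000))

# Estimate the allometry coefficient by least squares on the log-log scale
# (regression of the measurement variable on the size variable).
slope, intercept = np.polyfit(np.log(size), np.log(measure), 1)

simple_ratio = measure / size                    # Y/X: still size-correlated
adjusted_ratio = measure / size ** slope         # Y/X^b: correlation removed

r_simple = np.corrcoef(size, simple_ratio)[0, 1]
r_adjusted = np.corrcoef(size, adjusted_ratio)[0, 1]
```

Checking the expected value of a proposed ratio this way — does it stay a function of size after transformation? — is exactly the diagnostic the abstract recommends.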
Komiyama, Hideaki; Adachi, Chihaya; Yasuda, Takuma
2016-01-01
Solution-processable star-shaped and linear π-conjugated oligomers consisting of an electron-donating tetrathienoanthracene (TTA) core and electron-accepting diketopyrrolopyrrole (DPP) arms, namely, TTA-DPP4 and TTA-DPP2, were designed and synthesized. Based on density functional theory calculations, the star-shaped TTA-DPP4 has a larger oscillator strength than the linear TTA-DPP2 and, consequently, better photoabsorption properties over a wide range of visible wavelengths. The photovoltaic properties of organic solar cells based on TTA-DPP4 and TTA-DPP2 with a fullerene derivative were evaluated by varying the thickness of the bulk heterojunction active layer. As a result of the enhanced visible absorption of the star-shaped π-conjugated structure, better photovoltaic performance was obtained with relatively thin active layers (40-60 nm). PMID:27559398
Yao, Ming; Ma, Li; Humphreys, W Griffith; Zhu, Mingshe
2008-10-01
A novel LC/MS/MS method that uses multiple ion monitoring (MIM) as a survey scan to trigger the acquisition of enhanced product ions (EPI) on a hybrid quadrupole-linear ion trap mass spectrometer (Q TRAP) was developed for drug metabolite identification. In the MIM experiment, multiple predicted metabolite ions were monitored in both Q1 and Q3. The collision energy in Q2 was set to a low value to minimize fragmentation. Results from analyzing ritonavir metabolites in rat hepatocytes demonstrate that MIM-EPI was capable of targeting a larger number of metabolites regardless of their fragmentation and retained sensitivity and duty cycle similar to multiple reaction monitoring (MRM)-EPI. MIM-based scanning methods were shown to be particularly useful in several applications. First, MIM-EPI enabled the sensitive detection and MS/MS acquisition of up to 100 predicted metabolites. Second, MIM-MRM-EPI was better than MRM-EPI in the analysis of metabolites that undergo either predictable or unpredictable fragmentation pathways. Finally, a combination of MIM-EPI and full-scan MS (EMS), as an alternative to EMS-EPI, was well suited for routine in vitro metabolite profiling. Overall, MIM-EPI significantly enhanced the metabolite identification capability of the hybrid triple quadrupole-linear ion trap LC/MS. PMID:18416441
NASA Astrophysics Data System (ADS)
Sidorin, Anatoly
2010-01-01
In linear accelerators the particles are accelerated by either electrostatic fields or oscillating radio frequency (RF) fields. Accordingly, linear accelerators are divided into three large groups: electrostatic, induction and RF accelerators. An overview of the different types of accelerators is given. Stability of longitudinal and transverse motion in RF linear accelerators is briefly discussed. The methods of beam focusing in linacs are described.
Carmody, Karen Appleyard; Haskett, Mary E.; Loehman, Jessisca; Rose, Roderick A
2015-01-01
Childhood physical abuse predicts emotional/behavioral, self-regulatory, and social problems. Yet factors from multiple ecological levels contribute to children’s adjustment. The purpose of this study was to examine the degree to which the social-emotional adjustment of physically abused children in first grade would be predicted by a set of child-, parent-, and family-level predictors in kindergarten. Drawing on a short-term longitudinal study of 92 physically abused children and their primary caregivers, the current study used linear regression to examine early childhood child (i.e., gender, IQ, child perceptions of maternal acceptance), parent (i.e., parental mental health), and family relationship (i.e., sensitive parenting, hostile parenting, family conflict) factors as predictors of first grade internalizing and externalizing symptomatology, emotion dysregulation, and negative peer interactions. We used a multi-method, multi-informant approach to measuring predictors and children’s adjustment. Internalizing symptomatology was significantly predicted by child IQ, parental mental health, and family conflict. Externalizing symptomatology and emotion dysregulation were predicted by child IQ. Although a large proportion of variance in measures of adjustment was accounted for by the set of predictors, few individual variables were unique predictors of child adjustment. Variability in the predictors of adjustment for physically abused children underscores the need for individualized treatment approaches. PMID:26401095
Longitudinal Adjustment Trajectories of International Students and Their Predictors
ERIC Educational Resources Information Center
Hirai, Reiko
2013-01-01
Despite the increasing number of international students in U.S. universities, the course of adjustment of international students has not been adequately tested and only one study to date has examined multiple trajectories of international students' adjustment. Therefore, the first goal of the current study was to explore multiple trajectories of…
NASA Astrophysics Data System (ADS)
Saito, Yoshiyuki; Yasuhara, Masakatsu; Mabuchi, Yuichi; Matsushima, Tohlu; Hisakado, Takashi; Wada, Osami
An EMC macro-model for LSIs, named the LECCS-core model, is under development for simulating high-frequency noise in power supply currents. In this paper, the conventional LECCS-core model is extended by adding resistances in the ground connection of an LSI, in order to separate the core block and the analog block. The model parameters are identified using symbolic analysis and least-squares optimization. Using this new model, the transfer impedances between different power supply pins can be simulated accurately. Additionally, we derived the equivalent internal current sources using the model and confirmed that they were improved. In conclusion, we confirmed that the configuration of the linear equivalent circuit and our modeling method can be applied widely to microcontrollers with the same block configuration.
NASA Technical Reports Server (NTRS)
Bielawa, R. L.
1976-01-01
The differential equations of motion for the lateral and torsional deformations of a nonlinearly twisted rotor blade in steady flight conditions together with those additional aeroelastic features germane to composite bearingless rotors are derived. The differential equations are formulated in terms of uncoupled (zero pitch and twist) vibratory modes with exact coupling effects due to finite, time variable blade pitch and, to second order, twist. Also presented are derivations of the fully coupled inertia and aerodynamic load distributions, automatic pitch change coupling effects, structural redundancy characteristics of the composite bearingless rotor flexbeam - torque tube system in bending and torsion, and a description of the linearized equations appropriate for eigensolution analyses. Three appendixes are included presenting material appropriate to the digital computer program implementation of the analysis, program G400.
NASA Astrophysics Data System (ADS)
Koloc, Z.; Korf, J.; Kavan, P.
The adjustment (modification) concerns gear chains transmitting motion between sprocket wheels on parallel shafts. The purpose of the chain gear adjustment is to remove unwanted effects by using a chain guide on the links (a sliding guide rail) that ensures a smooth fit of the chain rollers into the wheel tooth gaps.
Adjustment to Recruit Training.
ERIC Educational Resources Information Center
Anderson, Betty S.
The thesis examines problems of adjustment encountered by new recruits entering the military services. Factors affecting adjustment are discussed: the recruit training staff and environment, recruit background characteristics, the military's image, the changing values and motivations of today's youth, and the recruiting process. Sources of…
NASA Astrophysics Data System (ADS)
Decin, L.; Cox, N. L. J.; Royer, P.; Van Marle, A. J.; Vandenbussche, B.; Ladjal, D.; Kerschbaum, F.; Ottensamer, R.; Barlow, M. J.; Blommaert, J. A. D. L.; Gomez, H. L.; Groenewegen, M. A. T.; Lim, T.; Swinyard, B. M.; Waelkens, C.; Tielens, A. G. G. M.
2012-12-01
Context: The interaction between stellar winds and the interstellar medium (ISM) can create complex bow shocks. The photometers on board the Herschel Space Observatory are ideally suited to studying the morphologies of these bow shocks. Aims: We aim to study the circumstellar environment and wind-ISM interaction of the nearest red supergiant, Betelgeuse. Methods: Herschel PACS images at 70, 100, and 160 μm and SPIRE images at 250, 350, and 500 μm were obtained by scanning the region around Betelgeuse. These data were complemented with ultraviolet GALEX data, near-infrared WISE data, and radio 21 cm GALFA-HI data. The observational properties of the bow shock structure were deduced from the data and compared with hydrodynamical simulations. Results: The infrared Herschel images of the environment around Betelgeuse are spectacular, showing the occurrence of multiple arcs at ~6-7' from the central target and the presence of a linear bar at ~9'. Remarkably, no large-scale instabilities are seen in the outer arcs and linear bar. The dust temperature in the outer arcs varies between 40 and 140 K, with the linear bar having the same colour temperature as the arcs. The inner envelope shows clear evidence of a non-homogeneous clumpy structure (beyond 15''), probably related to the giant convection cells of the outer atmosphere. The non-homogeneous distribution of the material even persists until the collision with the ISM. A strong variation in brightness of the inner clumps at a radius of ~2' suggests a drastic change in mean gas and dust density ~32 000 yr ago. Using hydrodynamical simulations, we try to explain the observed morphology of the bow shock around Betelgeuse. Conclusions: Different hypotheses, based on observational and theoretical constraints, are formulated to explain the origin of the multiple arcs and the linear bar and the fact that no large-scale instabilities are visible in the bow shock region. We infer that the two main ingredients for explaining
Chua, Alicia S.; Egorova, Svetlana; Anderson, Mark C.; Polgar-Turcsanyi, Mariann; Chitnis, Tanuja; Weiner, Howard L.; Guttmann, Charles R.G.; Bakshi, Rohit; Healy, Brian C.
2015-01-01
Magnetic resonance imaging (MRI) of the brain provides important outcome measures in the longitudinal evaluation of disease activity and progression in MS subjects. Two common measures derived from brain MRI scans are the brain parenchymal fraction (BPF) and T2 hyperintense lesion volume (T2LV), and these measures are routinely assessed longitudinally in clinical trials and observational studies. When measuring each outcome longitudinally, observed changes may be potentially confounded by variability in MRI acquisition parameters between scans. In order to accurately model longitudinal change, the acquisition parameters should thus be considered in statistical models. In this paper, several models for including protocol as well as individual MRI acquisition parameters in linear mixed models were compared using a large dataset of 3453 longitudinal MRI scans from 1341 subjects enrolled in the CLIMB study, and model fit indices were compared across the models. The model that best explained the variance in BPF data was a random intercept and random slope with protocol specific residual variance along with the following fixed-effects: baseline age, baseline disease duration, protocol and study time. The model that best explained the variance in T2LV was a random intercept and random slope along with the following fixed-effects: baseline age, baseline disease duration, protocol and study time. In light of these findings, future studies pertaining to BPF and T2LV outcomes should carefully account for the protocol factors within longitudinal models to ensure that the disease trajectory of MS subjects can be assessed more accurately. PMID:26199872
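Under the stated selection, the best-fitting BPF model can be written out explicitly. The notation below is ours, not the paper's: a random intercept and random slope per subject, fixed effects for baseline age, baseline disease duration, protocol and study time, and a residual variance that differs by acquisition protocol:

```latex
\begin{aligned}
\mathrm{BPF}_{ij} &= \beta_0 + \beta_1\,\mathrm{age}_i + \beta_2\,\mathrm{dur}_i
  + \boldsymbol{\beta}_3^{\top}\mathbf{protocol}_{ij} + \beta_4\, t_{ij}
  + b_{0i} + b_{1i}\, t_{ij} + \varepsilon_{ij},\\
(b_{0i},\, b_{1i})^{\top} &\sim \mathcal{N}(\mathbf{0},\, \mathbf{G}), \qquad
\varepsilon_{ij} \sim \mathcal{N}\!\bigl(0,\, \sigma^2_{p(ij)}\bigr),
\end{aligned}
```

where subject i is scanned at study time t_ij, and σ²_{p(ij)} is the residual variance specific to the protocol p under which scan j of subject i was acquired. The best T2LV model differs only in dropping the protocol-specific residual variance (a single σ²).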
ERIC Educational Resources Information Center
Walkiewicz, T. A.; Newby, N. D., Jr.
1972-01-01
A discussion of linear collisions between two or three objects is related to a junior-level course in analytical mechanics. The theoretical discussion uses a geometrical approach that treats elastic and inelastic collisions from a unified point of view. Experiments with a linear air track are described. (Author/TS)
McKenzie, K.R.
1959-07-01
An electrode support which permits accurate alignment and adjustment of the electrode in a plurality of planes and about a plurality of axes in a calutron is described. The support will align the slits in the electrode with the slits of an ionizing chamber so as to provide for the egress of ions. The support comprises an insulator, a leveling plate carried by the insulator and having diametrically opposed attaching screws screwed to the plate and the insulator and diametrically opposed adjusting screws for bearing against the insulator, and an electrode associated with the plate for adjustment therewith.
Kautter, John; Pope, Gregory C.
2004-01-01
The authors document the development of the CMS frailty adjustment model, a Medicare payment approach that adjusts payments to a Medicare managed care organization (MCO) according to the functional impairment of its community-residing enrollees. Beginning in 2004, this approach is being applied to certain organizations, such as Program of All-Inclusive Care for the Elderly (PACE), that specialize in providing care to the community-residing frail elderly. In the future, frailty adjustment could be extended to more Medicare managed care organizations. PMID:25372243
Article mounting and position adjustment stage
Cutburth, Ronald W.; Silva, Leonard L.
1988-01-01
An improved adjustment and mounting stage of the type used for the detection of laser beams is disclosed. A ring sensor holder has locating pins on a first side thereof which are positioned within a linear keyway in a surrounding housing for permitting reciprocal movement of the ring along the keyway. A rotatable ring gear is positioned within the housing on the other side of the ring from the linear keyway and includes an oval keyway which drives the ring along the linear keyway upon rotation of the gear. Motor-driven single-stage and dual (x, y) stage adjustment systems are disclosed which are of compact construction and include a large laser transmission hole.
Launch-rated kinematic mirror mount with six-degree-of-freedom adjustments
NASA Astrophysics Data System (ADS)
Sawyer, Kevin A.; Hurley, Barbara N.; Brindos, Raymond R.; Wong, James
1999-09-01
A kinematic, fully adjustable, six-degree-of-freedom mirror mount has been developed for a space-based optical system. The optics vary in size from 5 to 10 inches and weigh up to 1.75 kg. Many of the optics require multiple degrees of freedom for alignment, and all elements need to be held to micron tolerances during orbit. The mount design described herein provides three-axis linear motions of at least three millimeters and multiple degrees of tilt. Each mount weighs approximately the same as its optic and exhibits gravity deflections of less than 0.0002 radian. Natural frequencies for even the largest mirror mounts in the system are greater than 100 Hz. A unique feature of the mount design is the ability to easily adjust the mirror from behind without the need for complex jigs or tooling. The mirror mount is entirely self-contained and is mechanically locked after final adjustments are made. A motion algorithm based on hexapod simulator control laws has been adopted to calculate the leg adjustments required to perform the mirror motions of tip, tilt, yaw, focus, and the two lateral shifts.
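The hexapod-style leg computation mentioned above reduces to a rigid-body transform of the platform joints followed by six point-to-point distances. A minimal sketch in Python, assuming a hypothetical six-joint geometry; the actual mount's joint coordinates and control laws are not given in the abstract:

```python
import math

def rotation(tip, tilt, yaw):
    # 3x3 rotation matrix, composed as Rz(yaw) @ Ry(tilt) @ Rx(tip)
    cx, sx = math.cos(tip), math.sin(tip)
    cy, sy = math.cos(tilt), math.sin(tilt)
    cz, sz = math.cos(yaw), math.sin(yaw)
    return [
        [cz * cy, cz * sy * sx - sz * cx, cz * sy * cx + sz * sx],
        [sz * cy, sz * sy * sx + cz * cx, sz * sy * cx - cz * sx],
        [-sy,     cy * sx,                cy * cx],
    ]

def leg_lengths(base_pts, plat_pts, pose):
    """Leg lengths needed to realize a desired mirror pose.

    base_pts, plat_pts: six (x, y, z) joints in the base / platform frames.
    pose: (tip, tilt, yaw, dx, dy, dz) -- angles in radians, shifts in mm.
    """
    tip, tilt, yaw, dx, dy, dz = pose
    R = rotation(tip, tilt, yaw)
    lengths = []
    for (bx, by, bz), p in zip(base_pts, plat_pts):
        # platform joint expressed in the base frame
        wx = sum(R[0][j] * p[j] for j in range(3)) + dx
        wy = sum(R[1][j] * p[j] for j in range(3)) + dy
        wz = sum(R[2][j] * p[j] for j in range(3)) + dz
        lengths.append(math.dist((wx, wy, wz), (bx, by, bz)))
    return lengths
```

Inverting this map (pose from leg lengths) is what the simulator control laws handle; for the small adjustment ranges quoted above, a linearized inverse is typically adequate.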
Yao, Ming; Ma, Li; Duchoslav, Eva; Zhu, Mingshe
2009-06-01
Multiple ion monitoring (MIM)-dependent acquisition with a triple quadrupole-linear ion trap mass spectrometer (Q-trap) was previously developed for drug metabolite profiling. In the analysis, multiple predicted metabolite ions are monitored in both Q1 and Q3 regardless of their fragmentations. The collision energy in Q2 is set to a low value to minimize fragmentation. Once an expected metabolite is detected by MIM, enhanced product ion (EPI) spectral acquisition of the metabolite is triggered. MIM-EPI retains sensitivity and selectivity similar to those of multiple reaction monitoring (MRM)-EPI in the analysis of in vitro metabolites. Here we present an improved approach utilizing MIM-EPI for data acquisition and multiple data mining techniques for detection of metabolite ions and recovery of their MS/MS spectra. The postacquisition data processing tools included extracted ion chromatographic analysis, product ion filtering and neutral loss filtering. The effectiveness of this approach was evaluated by analyzing oxidative metabolites of indinavir and glutathione (GSH) conjugates of clozapine and 4-ethylphenol in liver microsome incubations. Results showed that the MIM-EPI-based data mining approach allowed for comprehensive detection of metabolites based on predicted protonated molecules, product ions or neutral losses without predetermination of the parent drug MS/MS spectra. Additionally, it enabled metabolite detection and MS/MS acquisition in a single injection. This approach is potentially useful in high-throughput screening of metabolic soft spots and reactive metabolites at the drug discovery stage. PMID:19418486
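Neutral loss filtering, one of the post-acquisition data mining tools mentioned above, can be illustrated with a small sketch. The spectra, masses, and tolerance below are hypothetical, and singly charged precursors are assumed:

```python
def neutral_loss_filter(spectra, loss, tol=0.01):
    """Return precursors whose MS/MS spectrum contains a fragment at
    (precursor m/z - loss), within a mass tolerance.

    spectra: {precursor_mz: [fragment_mz, ...]} -- singly charged assumed.
    loss: neutral loss mass (Da) to screen for.
    tol: absolute m/z tolerance for matching fragments.
    """
    hits = []
    for precursor, fragments in spectra.items():
        target = precursor - loss
        if any(abs(f - target) <= tol for f in fragments):
            hits.append(precursor)
    return hits
```

The same loop with `target` fixed to a characteristic fragment mass, rather than offset from each precursor, gives product ion filtering.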
Remotely Adjustable Hydraulic Pump
NASA Technical Reports Server (NTRS)
Kouns, H. H.; Gardner, L. D.
1987-01-01
Outlet pressure adjusted to match varying loads. Electrohydraulic servo has positioned sleeve in leftmost position, adjusting outlet pressure to maximum value. Sleeve in equilibrium position, with control land covering control port. For lowest pressure setting, sleeve shifted toward right by increased pressure on sleeve shoulder from servovalve. Pump used in aircraft and robots, where hydraulic actuators repeatedly turned on and off, changing pump load frequently and over wide range.
Herath, Sanvidha C. K.; Pathirana, Pubudu N.
2013-01-01
This paper investigates the linear separation requirements for Angle-of-Arrival (AoA) and range sensors, in order to achieve the optimal performance in estimating the position of a target from multiple and typically noisy sensor measurements. We analyse the sensor-target geometry in terms of the Cramér–Rao inequality and the corresponding Fisher information matrix, in order to characterize localization performance with respect to the linear spatial distribution of sensors. In this paper we consider both fixed and adjustable linear sensor arrays. PMID:24036585
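The Fisher-information analysis of sensor-target geometry can be sketched for the range-only case. Under the standard assumption of i.i.d. Gaussian range noise (an assumption of this sketch, not a detail taken from the abstract), the 2-D FIM is (1/σ²) Σᵢ uᵢuᵢᵀ, where uᵢ is the unit vector from sensor i to the target; collinear sensor-target geometries make it singular:

```python
import math

def range_fim(sensors, target, sigma=1.0):
    """2x2 Fisher information matrix for range-only localization of a 2-D
    target, assuming i.i.d. Gaussian range noise with standard deviation sigma.
    FIM = (1/sigma^2) * sum_i u_i u_i^T, u_i the unit sensor-to-target vector."""
    fxx = fxy = fyy = 0.0
    for sx, sy in sensors:
        dx, dy = target[0] - sx, target[1] - sy
        r = math.hypot(dx, dy)
        ux, uy = dx / r, dy / r
        fxx += ux * ux
        fxy += ux * uy
        fyy += uy * uy
    s2 = sigma ** 2
    return [[fxx / s2, fxy / s2], [fxy / s2, fyy / s2]]

def crlb_trace(fim):
    """Trace of the inverse FIM: a lower bound on total position variance.
    Used as the scalar objective when optimizing sensor placement."""
    det = fim[0][0] * fim[1][1] - fim[0][1] * fim[1][0]
    return (fim[0][0] + fim[1][1]) / det
```

Minimizing `crlb_trace` over the sensor positions along a line is one concrete way to pose the adjustable-linear-array placement problem.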
Weighted triangulation adjustment
Anderson, Walter L.
1969-01-01
The variation of coordinates method is employed to perform a weighted least-squares adjustment of horizontal survey networks. Geodetic coordinates are required for each fixed and adjustable station. A preliminary inverse geodetic position computation is made for each observed line. Weights associated with each observed equation for direction, azimuth, and distance are applied in the formation of the normal equations in the least-squares adjustment. The number of normal equations that may be solved is twice the number of new stations and less than 150. When the normal equations are solved, shifts are produced at adjustable stations. Previously computed correction factors are applied to the shifts and a most probable geodetic position is found for each adjustable station. Final azimuths and distances are computed. These may be written onto magnetic tape for subsequent computation of state plane or grid coordinates. Input consists of punch cards containing project identification, program options, and position and observation information. Results listed include preliminary and final positions, residuals, observation equations, solution of the normal equations showing magnitudes of shifts, and a plot of each adjusted and fixed station. During processing, data sets containing irrecoverable errors are rejected and the type of error is listed. The computer resumes processing of additional data sets. Other conditions cause warning errors to be issued, and processing continues with the current data set.
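The formation and solution of the normal equations described above is ordinary weighted least squares: with observation-equation coefficients A, misclosures b, and weight matrix W = diag(w), the station shifts x solve (AᵀWA)x = AᵀWb. A minimal sketch; the matrix sizes and weights below are illustrative, not the program's actual card format:

```python
def weighted_least_squares(A, b, w):
    """Solve the normal equations (A^T W A) x = A^T W b for the shifts x.

    A: list of observation-equation coefficient rows, b: misclosures,
    w: observation weights. Plain Gaussian elimination with partial
    pivoting -- adequate for the small systems of a survey network.
    """
    m, n = len(A), len(A[0])
    # form normal matrix N = A^T W A and right-hand side t = A^T W b
    N = [[sum(w[k] * A[k][i] * A[k][j] for k in range(m)) for j in range(n)]
         for i in range(n)]
    t = [sum(w[k] * A[k][i] * b[k] for k in range(m)) for i in range(n)]
    # forward elimination on the augmented system [N | t]
    M = [row + [ti] for row, ti in zip(N, t)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    # back substitution
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x
```

Each row of A would come from linearizing a direction, azimuth, or distance observation about the preliminary station positions; the solved x are the shifts applied to the adjustable stations.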
González-Díaz, Humberto; Arrasate, Sonia; Gómez-SanJuan, Asier; Sotomayor, Nuria; Lete, Esther; Besada-Porto, Lina; Ruso, Juan M
2013-01-01
In general, perturbation methods start with a known exact solution of a problem and add "small" variation terms in order to approach a solution for a related problem with no known exact solution. Perturbation theory has been widely used in almost all areas of science. Bohr's quantum model, Heisenberg's matrix mechanics, Feynman diagrams, and Poincaré's chaos model or "butterfly effect" in complex systems are examples of perturbation theories. On the other hand, the study of Quantitative Structure-Property Relationships (QSPR) in molecular complex systems is an ideal area for the application of perturbation theory. There are several problems with exact experimental solutions (new chemical reactions, physicochemical properties, drug activity and distribution, metabolic networks, etc.) in public databases like CHEMBL. However, in all these cases, we have an even larger list of related problems without known solutions. We need to know the change in all these properties after a perturbation of initial boundary conditions; that is, when we test large sets of similar, but different, compounds and/or chemical reactions under slightly different conditions (temperature, time, solvents, enzymes, assays, protein targets, tissues, partition systems, organisms, etc.). However, to the best of our knowledge, there is no QSPR general-purpose perturbation theory to solve this problem. In this work, we first review general aspects and applications of both perturbation theory and QSPR models. Second, we formulate a general-purpose perturbation theory for multiple-boundary QSPR problems. Last, we develop three new QSPR-Perturbation theory models. The first model correctly classifies >100,000 pairs of intra-molecular carbolithiations with 75-95% Accuracy (Ac), Sensitivity (Sn), and Specificity (Sp). The model predicts probabilities of variations in the yield and enantiomeric excess of reactions due to at least one perturbation in boundary conditions (solvent, temperature