Robust LOD scores for variance component-based linkage analysis.
Blangero, J; Williams, J T; Almasy, L
2000-01-01
The variance component method is now widely used for linkage analysis of quantitative traits. Although this approach offers many advantages, the importance of the underlying assumption of multivariate normality of the trait distribution within pedigrees has not been studied extensively. Simulation studies have shown that traits with leptokurtic distributions yield linkage test statistics that exhibit excessive Type I error when analyzed naively. We derive analytical formulae relating the deviation from the expected asymptotic distribution of the lod score to the kurtosis and total heritability of the quantitative trait. A simple correction constant yields a robust lod score for any deviation from normality and for any pedigree structure, and effectively eliminates the problem of inflated Type I error due to misspecification of the underlying probability model in variance component-based linkage analysis.
Nutrient Pumping/Advection by Propagating Rossby Waves in the Kuroshio Extension
2010-01-01
sea-elevation changes or SLA variance levels are a maximum as eddies and meanders cross a mean route. This boundary in terms of Chl-a levels (lower ... and elevated Chl-a levels) is south of the KE jet. Kuroshio Extension meanders and rings carry different water types across a mean Kuroshio Extension ... Fig. 5A). The ring or eddy currents may also redistribute the surface Chl-a levels, drawing out plumes of locally increased Chl-a from regions of
Trunk Muscle Attributes are Associated with Balance and Mobility in Older Adults: A Pilot Study
Suri, Pradeep; Kiely, Dan K.; Leveille, Suzanne G.; Frontera, Walter R.; Bean, Jonathan F.
2010-01-01
Objective To determine whether trunk muscle attributes are associated with balance and mobility performance among mobility-limited older adults. Design Cross-sectional analysis of data from a randomized clinical trial. Setting Outpatient rehabilitation research center. Participants Community-dwelling older adults (N=70; mean age 75.9 y) with mobility limitations as defined by the Short Physical Performance Battery (SPPB). Methods Independent variables included physiologic measures of trunk extension strength, trunk flexion strength, trunk extension endurance, and leg press strength. All measures were well tolerated by the study subjects, without any associated injuries or adverse events. The association of each physiologic measure with each outcome was examined using separate multivariate models to calculate the partial variance (R2) of each trunk and extremity measure. Main Outcome Measurements Balance, measured by the Berg Balance Scale (BBS) and Unipedal Stance Test (UST), and mobility performance, as measured by the SPPB. Results Trunk extension endurance (partial R2=.14, p=.02) and leg press strength (partial R2=.14, p=.003) accounted for the greatest amount of the variance in SPPB performance. Trunk extension endurance (partial R2=.17, p=.007) accounted for the greatest amount of the variance in BBS performance. Trunk extension strength (R2=.09, p=.03) accounted for the greatest amount of the variance in UST performance. The variance explained by trunk extension endurance equaled or exceeded the variance explained by limb strength across all three performance outcomes. Conclusions Trunk endurance and strength can be safely measured in mobility-limited older adults, and are associated with both balance and mobility performance. Trunk endurance and trunk strength are physiologic attributes worthy of targeting in the rehabilitative care of mobility-limited older adults. PMID:19854420
Trunk muscle attributes are associated with balance and mobility in older adults: a pilot study.
Suri, Pradeep; Kiely, Dan K; Leveille, Suzanne G; Frontera, Walter R; Bean, Jonathan F
2009-10-01
To determine whether trunk muscle attributes are associated with balance and mobility performance among mobility-limited older adults. Cross-sectional analysis of data from a randomized clinical trial. Outpatient rehabilitation research center. Community-dwelling older adults (N = 70; mean age 75.9 years) with mobility limitations as defined by the Short Physical Performance Battery (SPPB). Independent variables included physiologic measures of trunk extension strength, trunk flexion strength, trunk extension endurance, and leg press strength. All measures were well tolerated by the study subjects, without any associated injuries or adverse events. The association of each physiologic measure with each outcome was examined by the use of separate multivariate models to calculate the partial variance (R(2)) of each trunk and extremity measure. Balance was measured by the Berg Balance Scale (BBS) and Unipedal Stance Test (UST), and mobility performance by the SPPB. Trunk extension endurance (partial R(2) = .14, P = .02) and leg press strength (partial R(2) = .14, P = .003) accounted for the greatest amount of the variance in SPPB performance. Trunk extension endurance (partial R(2) = .17, P = .007) accounted for the greatest amount of the variance in BBS performance. Trunk extension strength (R(2) = .09, P = .03) accounted for the greatest amount of the variance in UST performance. The variance explained by trunk extension endurance equaled or exceeded the variance explained by limb strength across all three performance outcomes. Trunk endurance and strength can be safely measured in mobility-limited older adults and are associated with both balance and mobility performance. Trunk endurance and trunk strength are physiologic attributes worthy of targeting in the rehabilitative care of mobility-limited older adults.
Design and analysis of three-arm trials with negative binomially distributed endpoints.
Mütze, Tobias; Munk, Axel; Friede, Tim
2016-02-20
A three-arm clinical trial design with an experimental treatment, an active control, and a placebo control, commonly referred to as the gold standard design, enables testing of non-inferiority or superiority of the experimental treatment compared with the active control. In this paper, we propose methods for designing and analyzing three-arm trials with negative binomially distributed endpoints. In particular, we develop a Wald-type test with a restricted maximum-likelihood variance estimator for testing non-inferiority or superiority. For this test, sample size and power formulas as well as optimal sample size allocations are derived. The performance of the proposed test is assessed in an extensive simulation study with regard to type I error rate, power, sample size, and sample size allocation. For the purpose of comparison, Wald-type statistics with a sample variance estimator and an unrestricted maximum-likelihood estimator are included in the simulation study. We found that the proposed Wald-type test with a restricted variance estimator performed well across the considered scenarios and is therefore recommended for application in clinical trials. The methods proposed are motivated and illustrated by a recent clinical trial in multiple sclerosis. The R package ThreeArmedTrials, which implements the methods discussed in this paper, is available on CRAN. Copyright © 2015 John Wiley & Sons, Ltd.
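The shape of such a Wald-type rate-ratio test can be sketched in a few lines. This is only an illustration: it uses a simple moment-based (sample) variance via the delta method rather than the restricted maximum-likelihood variance estimator the paper develops, and the counts, rates, and non-inferiority margin below are invented.

```python
import math, random

def neg_binom(mean, shape, rng):
    # Negative binomial count via a gamma-Poisson mixture
    lam = rng.gammavariate(shape, mean / shape)
    L, k, p = math.exp(-lam), 0, 1.0
    while True:                      # Knuth's Poisson inversion (fine for small means)
        p *= rng.random()
        if p <= L:
            return k
        k += 1

def wald_noninferiority(x_exp, x_act, margin):
    """Wald-type statistic for H0: rate_E / rate_C >= margin (fewer events = better).
    Uses a moment-based variance, not the paper's restricted ML estimator."""
    m_e = sum(x_exp) / len(x_exp)
    m_c = sum(x_act) / len(x_act)
    v_e = sum((x - m_e) ** 2 for x in x_exp) / (len(x_exp) - 1)
    v_c = sum((x - m_c) ** 2 for x in x_act) / (len(x_act) - 1)
    # Delta method: Var(log mean) ~ Var(X) / (n * mean^2)
    se = math.sqrt(v_e / (len(x_exp) * m_e ** 2) + v_c / (len(x_act) * m_c ** 2))
    return (math.log(m_e / m_c) - math.log(margin)) / se

rng = random.Random(7)
exp_arm = [neg_binom(0.8, 1.0, rng) for _ in range(200)]   # experimental arm
act_arm = [neg_binom(1.0, 1.0, rng) for _ in range(200)]   # active control arm
z = wald_noninferiority(exp_arm, act_arm, margin=1.3)
print(round(z, 2))  # z < -1.645 would support non-inferiority at one-sided 5%
```

For the actual restricted-ML test, sample size formulas, and allocation rules, the ThreeArmedTrials package on CRAN implements the paper's methods.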
NASA Astrophysics Data System (ADS)
Reis, D. S.; Stedinger, J. R.; Martins, E. S.
2005-10-01
This paper develops a Bayesian approach to analysis of a generalized least squares (GLS) regression model for regional analyses of hydrologic data. The new approach allows computation of the posterior distributions of the parameters and the model error variance using a quasi-analytic approach. Two regional skew estimation studies illustrate the value of the Bayesian GLS approach for regional statistical analysis of a shape parameter and demonstrate that regional skew models can be relatively precise, with effective record lengths in excess of 60 years. With Bayesian GLS the marginal posterior distribution of the model error variance, and the corresponding mean and variance of the parameters, can be computed directly. This provides a simple but important extension of the regional GLS regression procedures popularized by Tasker and Stedinger (1989), one that matters most when the model error variance is small relative to the sampling error in the at-site estimator.
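The GLS machinery underlying this approach can be illustrated with a small sketch: the model error variance enters the diagonal weights Lambda_ii = sigma2_model + sampling variance, and here a crude grid profile of the GLS log-likelihood stands in for the paper's quasi-analytic posterior computation. All data values below are made up for illustration.

```python
import math

def gls_fit(X, y, samp_var, sigma2_model):
    """GLS estimate for design columns (intercept, x) with diagonal
    covariance Lambda_ii = sigma2_model + sampling variance of site i."""
    w = [1.0 / (sigma2_model + v) for v in samp_var]
    s00 = sum(w)
    s01 = sum(wi * xi for wi, (one, xi) in zip(w, X))
    s11 = sum(wi * xi * xi for wi, (one, xi) in zip(w, X))
    t0 = sum(wi * yi for wi, yi in zip(w, y))
    t1 = sum(wi * xi * yi for wi, (one, xi), yi in zip(w, X, y))
    det = s00 * s11 - s01 * s01
    return (s11 * t0 - s01 * t1) / det, (s00 * t1 - s01 * t0) / det

def gls_loglik(X, y, samp_var, sigma2_model):
    # Gaussian log-likelihood at the GLS fit for a given model error variance
    b0, b1 = gls_fit(X, y, samp_var, sigma2_model)
    ll = 0.0
    for (one, xi), yi, v in zip(X, y, samp_var):
        lam = sigma2_model + v
        r = yi - (b0 + b1 * xi)
        ll += -0.5 * (math.log(2.0 * math.pi * lam) + r * r / lam)
    return ll

# Toy regional-skew setting: at-site skew estimates with known sampling variances
X = [(1.0, x) for x in [0.1, 0.4, 0.5, 0.9, 1.3, 1.7]]
y = [0.05, 0.20, 0.18, 0.42, 0.55, 0.75]
samp_var = [0.09, 0.12, 0.08, 0.15, 0.10, 0.11]

# Profile over a sigma2_model grid (a crude stand-in for the posterior computation)
grid = [i * 0.01 for i in range(51)]
best = max(grid, key=lambda s2: gls_loglik(X, y, samp_var, s2))
print(round(best, 2), tuple(round(b, 3) for b in gls_fit(X, y, samp_var, best)))
```

A full Bayesian treatment would integrate over sigma2_model with a prior rather than profile it, which is exactly where the quasi-analytic posterior of the paper comes in.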
Streamflow record extension using power transformations and application to sediment transport
NASA Astrophysics Data System (ADS)
Moog, Douglas B.; Whiting, Peter J.; Thomas, Robert B.
1999-01-01
To obtain a representative set of flow rates for a stream, it is often desirable to fill in missing data or extend measurements to a longer time period by correlation to a nearby gage with a longer record. Linear least squares regression of the logarithms of the flows is a traditional and still common technique. However, its purpose is to generate optimal estimates of each day's discharge, rather than the population of discharges, for which it tends to underestimate variance. Maintenance-of-variance-extension (MOVE) equations [Hirsch, 1982] were developed to correct this bias. This study replaces the logarithmic transformation by the more general Box-Cox scaled power transformation, generating a more linear, constant-variance relationship for the MOVE extension. Combining the Box-Cox transformation with the MOVE extension is shown to improve accuracy in estimating order statistics of flow rate, particularly for the nonextreme discharges which generally govern cumulative transport over time. This advantage is illustrated by prediction of cumulative fractions of total bed load transport.
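The combination described above, a scaled power transform followed by a MOVE-type mean-and-variance match, can be sketched as follows. The flow values and the Box-Cox lambda are invented for illustration; in practice lambda would be chosen to linearize the relationship and stabilize variance.

```python
import math, statistics

def boxcox(x, lam):
    # Box-Cox scaled power transform; lam = 0 recovers the log transform
    return math.log(x) if lam == 0 else (x ** lam - 1.0) / lam

def inv_boxcox(z, lam):
    return math.exp(z) if lam == 0 else (lam * z + 1.0) ** (1.0 / lam)

def move_extend(base_concurrent, site_concurrent, base_extra, lam):
    """MOVE.1-style extension on Box-Cox transformed flows: match the mean and
    standard deviation in the transformed space, then back-transform."""
    tx = [boxcox(v, lam) for v in base_concurrent]
    ty = [boxcox(v, lam) for v in site_concurrent]
    mx, my = statistics.mean(tx), statistics.mean(ty)
    sx, sy = statistics.stdev(tx), statistics.stdev(ty)
    cov = sum((a - mx) * (b - my) for a, b in zip(tx, ty))
    slope = (1.0 if cov >= 0 else -1.0) * sy / sx
    return [inv_boxcox(my + slope * (boxcox(v, lam) - mx), lam) for v in base_extra]

base = [12.0, 30.0, 8.0, 55.0, 21.0, 40.0, 17.0, 26.0]   # long-record gage, concurrent period
site = [5.0, 14.0, 3.0, 25.0, 9.0, 18.0, 7.0, 11.0]      # short-record gage, concurrent period
extended = move_extend(base, site, [60.0, 10.0], lam=0.25)
print([round(v, 1) for v in extended])
```

Feeding the concurrent base flows back through the fit reproduces the site's transformed-scale standard deviation exactly, which is the variance-maintenance property that ordinary least squares regression lacks.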
Estimation of population size using open capture-recapture models
McDonald, T.L.; Amstrup, Steven C.
2001-01-01
One of the most important needs for wildlife managers is an accurate estimate of population size. Yet, for many species, including most marine species and large mammals, accurate and precise estimation of numbers is one of the most difficult of all research challenges. Open-population capture-recapture models have proven useful in many situations to estimate survival probabilities but typically have not been used to estimate population size. We show that open-population models can be used to estimate population size by developing a Horvitz-Thompson-type estimate of population size and an estimator of its variance. Our population size estimate keys on the probability of capture at each trap occasion and therefore is quite general and can be made a function of external covariates measured during the study. Here we define the estimator and investigate its bias, variance, and variance estimator via computer simulation. Computer simulations make extensive use of real data taken from a study of polar bears (Ursus maritimus) in the Beaufort Sea. The population size estimator is shown to be useful because it was negligibly biased in all situations studied. The variance estimator is shown to be useful in all situations, but caution is warranted in cases of extreme capture heterogeneity.
ERIC Educational Resources Information Center
Penfield, Randall D.; Algina, James
2006-01-01
One approach to measuring unsigned differential test functioning is to estimate the variance of the differential item functioning (DIF) effect across the items of the test. This article proposes two estimators of the DIF effect variance for tests containing dichotomous and polytomous items. The proposed estimators are direct extensions of the…
Federal Register 2010, 2011, 2012, 2013, 2014
2013-01-15
... Public Service Company; Notice of Application for Temporary Variance of License Article 403 and...: Extension of temporary variance of license article 403. b. Project No: 12514-056. c. Date Filed: November 28... submit brief comments up to 6,000 characters, without prior registration, using the eComment system at...
Austin, Peter C
2016-12-30
Propensity score methods are used to reduce the effects of observed confounding when using observational data to estimate the effects of treatments or exposures. A popular method of using the propensity score is inverse probability of treatment weighting (IPTW). When using this method, a weight is calculated for each subject that is equal to the inverse of the probability of receiving the treatment that was actually received. These weights are then incorporated into the analyses to minimize the effects of observed confounding. Previous research has found that these methods result in unbiased estimation when estimating the effect of treatment on survival outcomes. However, conventional methods of variance estimation were shown to result in biased estimates of standard error. In this study, we conducted an extensive set of Monte Carlo simulations to examine different methods of variance estimation when using a weighted Cox proportional hazards model to estimate the effect of treatment. We considered three variance estimation methods: (i) a naïve model-based variance estimator; (ii) a robust sandwich-type variance estimator; and (iii) a bootstrap variance estimator. We considered estimation of both the average treatment effect and the average treatment effect in the treated. We found that the use of a bootstrap estimator resulted in approximately correct estimates of standard errors and confidence intervals with the correct coverage rates. The other estimators resulted in biased estimates of standard errors and confidence intervals with incorrect coverage rates. Our simulations were informed by a case study examining the effect of statin prescribing on mortality. © 2016 The Authors. Statistics in Medicine published by John Wiley & Sons Ltd.
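A minimal sketch of the two ingredients, ATE-style IPTW weights and a bootstrap standard error, is given below. To stay self-contained it uses a weighted difference in means rather than the weighted Cox model of the paper, and it treats the propensity scores as known; the simulated data and true effect of 1.0 are illustrative assumptions.

```python
import math, random

def iptw_ate(treat, outcome, ps):
    # ATE weights: 1/e(x) for treated subjects, 1/(1 - e(x)) for controls
    w = [1.0 / p if t else 1.0 / (1.0 - p) for t, p in zip(treat, ps)]
    num1 = sum(wi * y for wi, t, y in zip(w, treat, outcome) if t)
    den1 = sum(wi for wi, t in zip(w, treat) if t)
    num0 = sum(wi * y for wi, t, y in zip(w, treat, outcome) if not t)
    den0 = sum(wi for wi, t in zip(w, treat) if not t)
    return num1 / den1 - num0 / den0

def bootstrap_se(treat, outcome, ps, reps=500, seed=3):
    # Nonparametric bootstrap: resample subjects, re-estimate, take the SD
    rng = random.Random(seed)
    n = len(treat)
    ests = []
    for _ in range(reps):
        idx = [rng.randrange(n) for _ in range(n)]
        ests.append(iptw_ate([treat[i] for i in idx],
                             [outcome[i] for i in idx],
                             [ps[i] for i in idx]))
    m = sum(ests) / reps
    return (sum((e - m) ** 2 for e in ests) / (reps - 1)) ** 0.5

# Simulated data with confounding through x (true treatment effect = 1.0)
rng = random.Random(11)
n = 300
x = [rng.gauss(0.0, 1.0) for _ in range(n)]
ps = [1.0 / (1.0 + math.exp(-xi)) for xi in x]            # propensity, taken as known
treat = [1 if rng.random() < p else 0 for p in ps]
outcome = [2.0 * xi + 1.0 * t + rng.gauss(0.0, 1.0) for xi, t in zip(x, treat)]
est = iptw_ate(treat, outcome, ps)
se = bootstrap_se(treat, outcome, ps)
print(round(est, 2), round(se, 2))
```

The same resampling loop carries over unchanged when the inner estimator is a weighted Cox model, which is the setting in which the paper finds the bootstrap to give correctly calibrated intervals.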
Xu, Hang; Merryweather, Andrew; Bloswick, Donald; Mao, Qi; Wang, Tong
2015-01-01
Marker placement can be a significant source of error in biomechanical studies of human movement. Toe marker placement error is amplified by footwear, since placement of the toe marker on the shoe relies only on an approximation of the underlying anatomical landmarks. Three total knee replacement subjects were recruited, and three self-selected-speed gait trials per subject were collected. The height variation between toe and heel markers of four types of footwear was evaluated from the resulting joint kinematics and muscle forces computed in OpenSim. The reference condition was defined as equal vertical height of the toe and heel markers. The results showed that the residual variances for joint kinematics had an approximately linear relationship with toe marker placement error for the lower limb joints. Ankle dorsiflexion/plantarflexion is most sensitive to toe marker placement error. The influence of toe marker placement error is generally larger for hip flexion/extension and rotation than for hip abduction/adduction and knee flexion/extension. The muscle forces responded to the residual variance of joint kinematics to varying degrees, depending on the muscle's function for specific joint kinematics. This study demonstrates the importance of evaluating marker error in joint kinematics and muscle forces when interpreting clinical gait analyses and treatment interventions.
Testing Interaction Effects without Discarding Variance.
ERIC Educational Resources Information Center
Lopez, Kay A.
Analysis of variance (ANOVA) and multiple regression are two of the most commonly used methods of data analysis in behavioral science research. Although ANOVA was intended for use with experimental designs, educational researchers have used ANOVA extensively in aptitude-treatment interaction (ATI) research. This practice tends to make researchers…
Distribution of distances between DNA barcode labels in nanochannels close to the persistence length
NASA Astrophysics Data System (ADS)
Reinhart, Wesley F.; Reifenberger, Jeff G.; Gupta, Damini; Muralidhar, Abhiram; Sheats, Julian; Cao, Han; Dorfman, Kevin D.
2015-02-01
We obtained experimental extension data for barcoded E. coli genomic DNA molecules confined in nanochannels from 40 nm to 51 nm in width. The resulting data set consists of 1 627 779 measurements of the distance between fluorescent probes on 25 407 individual molecules. The probability density for the extension between labels is negatively skewed, and the magnitude of the skewness is relatively insensitive to the distance between labels. The two Odijk theories for DNA confinement bracket the mean extension and its variance, consistent with the scaling arguments underlying the theories. We also find that a harmonic approximation to the free energy, obtained directly from the probability density for the distance between barcode labels, leads to substantial quantitative error in the variance of the extension data. These results suggest that a theory for DNA confinement in such channels must account for the anharmonic nature of the free energy as a function of chain extension.
Non-Gaussian Distribution of DNA Barcode Extension In Nanochannels Using High-throughput Imaging
NASA Astrophysics Data System (ADS)
Sheats, Julian; Reinhart, Wesley; Reifenberger, Jeff; Gupta, Damini; Muralidhar, Abhiram; Cao, Han; Dorfman, Kevin
2015-03-01
We present experimental data for the extension of internal segments of highly confined DNA using a high-throughput experimental setup. Barcode-labeled E. coli genomic DNA molecules were imaged at a high areal density in square nanochannels with sizes ranging from 40 nm to 51 nm in width. Over 25,000 molecules were used to obtain more than 1,000,000 measurements for genomic distances between 2,500 bp and 100,000 bp. The distribution of extensions has positive excess kurtosis and is skewed left due to weak backfolding in the channel. As a result, the two Odijk theories for the chain extension and variance bracket the experimental data. We also compared the data to the predictions of a harmonic approximation for the confinement free energy and show that it produces a substantial error in the variance. These results suggest an inherent error associated with any statistical analysis of barcoded DNA that relies on harmonic models for chain extension. Present address: Department of Chemical and Biological Engineering, Princeton University.
Federal Register 2010, 2011, 2012, 2013, 2014
2011-01-19
..., Regulations, and Variances, 1100 Wilson Boulevard, Room 2350, Arlington, VA 22209-3939. (4) Hand Delivery or Courier: MSHA, Office of Standards, Regulations, and Variances, 1100 Wilson Boulevard, Room 2350... CONTACT: Mario Distasio, Chief of the Economic Analysis Division, Office of Standards, Regulations, and...
Ahlborn, W; Tuz, H J; Uberla, K
1990-03-01
In cohort studies the Mantel-Haenszel estimator OR_MH is computed from sample data and used as a point estimator of relative risk. Test-based confidence intervals are estimated with the help of the asymptotically chi-squared distributed MH statistic χ²_MHS. The Mantel extension chi-squared is used as a test statistic for a dose-response relationship. Both test statistics, the Mantel-Haenszel chi-squared as well as the Mantel extension chi-squared, assume homogeneity of risk across strata, which is rarely present. An extended nonparametric statistic proposed by Terpstra, based on the Mann-Whitney statistic, likewise assumes homogeneity of risk across strata. We have earlier defined four risk measures RR_kj (k = 1, 2, ..., 4) in the population and considered their estimates and the corresponding asymptotic distributions. To overcome the homogeneity assumption we use the delta method to obtain "test-based" confidence intervals. Because the four risk measures RR_kj are presented as functions of four weights g_ik, we give the asymptotic variances of these risk estimators in closed form, also as functions of the weights g_ik, together with approximations to these variances. For testing a dose-response relationship we propose a new class of χ²(1)-distributed global measures G_k and the corresponding global chi-squared test. In contrast to the Mantel extension chi-squared, homogeneity of risk across strata need not be assumed. These global test statistics are of the Wald type for composite hypotheses. (ABSTRACT TRUNCATED AT 250 WORDS)
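As a reference point for the estimator being discussed, the classical stratified Mantel-Haenszel odds ratio takes only a few lines; the 2x2 tables below are invented for illustration. Note that this pooled estimate is the quantity whose interpretation rests on the across-strata homogeneity assumption that the paper works to relax.

```python
def mantel_haenszel_or(strata):
    """strata: list of 2x2 tables (a, b, c, d) = (exposed cases, exposed noncases,
    unexposed cases, unexposed noncases), one tuple per stratum."""
    num = sum(a * d / (a + b + c + d) for a, b, c, d in strata)
    den = sum(b * c / (a + b + c + d) for a, b, c, d in strata)
    return num / den

tables = [(10, 90, 5, 95),     # stratum 1, e.g. low dose
          (30, 70, 18, 82)]    # stratum 2, e.g. high dose
print(round(mantel_haenszel_or(tables), 2))  # → 1.99
```

With a single stratum the formula reduces to the ordinary odds ratio ad/bc, which is a quick sanity check on any implementation.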
Analysis of variance in investigations on anisotropy of Cu ore deposits
NASA Astrophysics Data System (ADS)
Namysłowska-Wilczyńska, B.
1986-10-01
The problem of variability of copper grades and ore thickness in the Lubin copper ore deposit in southwestern Poland is presented. Results of a statistical analysis of variations in ledge parameters, carried out for three exploited regions of the mine representing different types of lithological profile, show considerable differences. Variability of copper grades occurs in vertical profiles as well as along the lateral extent of the field (the copper-bearing series). Against the background of a complex, well-substantiated description of the spatial variability in the Lubin deposit, a methodology is presented that has been applied for the determination of homogeneous ore blocks. The method is a two-factor (cross) analysis of variance with the special tests of Tukey, Scheffe and Duncan. Blocks of homogeneous sandstone ore have dimensions of up to 160,000 m2 and 60,000 m2 in the case of the Cu content parameter, and 200,000 m2 and 10,000 m2 for the thickness parameter.
A comparison of four streamflow record extension techniques
Hirsch, Robert M.
1982-01-01
One approach to developing time series of streamflow, which may be used for simulation and optimization studies of water resources development activities, is to extend an existing gage record in time by exploiting the interstation correlation between the station of interest and some nearby (long-term) base station. Four methods of extension are described, and their properties are explored. The methods are regression (REG), regression plus noise (RPN), and two new methods, maintenance of variance extension types 1 and 2 (MOVE.1, MOVE.2). MOVE.1 is equivalent to a method which is widely used in psychology, biometrics, and geomorphology and which has been called by various names, e.g., 'line of organic correlation,' 'reduced major axis,' 'unique solution,' and 'equivalence line.' The methods are examined for bias and standard error of estimate of moments and order statistics, and an empirical examination is made of the preservation of historic low-flow characteristics using 50-year-long monthly records from seven streams. The REG and RPN methods are shown to have serious deficiencies as record extension techniques. MOVE.2 is shown to be marginally better than MOVE.1, according to the various comparisons of bias and accuracy.
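The variance-shrinkage contrast between REG and MOVE.1 can be seen directly in a short sketch; the concurrent-period flow values are invented for illustration.

```python
import statistics

def reg_extend(x, y, x_new):
    # Ordinary least squares (REG): slope = r * (sy/sx), so estimates
    # shrink the variance of y by a factor of r^2
    mx, my = statistics.mean(x), statistics.mean(y)
    sxx = sum((a - mx) ** 2 for a in x)
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    slope = sxy / sxx
    return [my + slope * (v - mx) for v in x_new]

def move1_extend(x, y, x_new):
    # MOVE.1 ("line of organic correlation"): slope = sign(r) * sy/sx,
    # which reproduces the variance of y exactly
    mx, my = statistics.mean(x), statistics.mean(y)
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    slope = (1.0 if sxy >= 0 else -1.0) * statistics.stdev(y) / statistics.stdev(x)
    return [my + slope * (v - mx) for v in x_new]

x = [10.0, 14.0, 9.0, 22.0, 17.0, 12.0]   # base station, concurrent period
y = [4.0, 7.0, 3.0, 9.0, 8.0, 6.0]        # short-record station
print(round(statistics.stdev(reg_extend(x, y, x)), 2),
      round(statistics.stdev(move1_extend(x, y, x)), 2),
      round(statistics.stdev(y), 2))
```

Because |r| < 1 for real station pairs, REG estimates are always less variable than the observed record, which is exactly the deficiency the MOVE methods were designed to remove.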
Comparing the Effectiveness of SPSS and EduG Using Different Designs for Generalizability Theory
ERIC Educational Resources Information Center
Teker, Gulsen Tasdelen; Guler, Nese; Uyanik, Gulden Kaya
2015-01-01
Generalizability theory (G theory) provides a broad conceptual framework for social sciences such as psychology and education, and a comprehensive construct for numerous measurement events by using analysis of variance, a strong statistical method. G theory, as an extension of both classical test theory and analysis of variance, is a model which…
ERIC Educational Resources Information Center
Proger, Barton B.; And Others
Many researchers assume that unequal cell frequencies in analysis of variance (ANOVA) designs result from poor planning. However, there are several valid reasons why one might have to analyze an unequal-n data matrix. The present study reviewed four categories of methods for treating unequal-n matrices by ANOVA: (a) unaltered data (least-squares…
NASA Astrophysics Data System (ADS)
Reynders, Edwin P. B.; Langley, Robin S.
2018-08-01
The hybrid deterministic-statistical energy analysis method has proven to be a versatile framework for modeling built-up vibro-acoustic systems. The stiff system components are modeled deterministically, e.g., using the finite element method, while the wave fields in the flexible components are modeled as diffuse. In the present paper, the hybrid method is extended such that not only the ensemble mean and variance of the harmonic system response can be computed, but also of the band-averaged system response. This variance represents the uncertainty that is due to the assumption of a diffuse field in the flexible components of the hybrid system. The developments start with a cross-frequency generalization of the reciprocity relationship between the total energy in a diffuse field and the cross spectrum of the blocked reverberant loading at the boundaries of that field. By making extensive use of this generalization in a first-order perturbation analysis, explicit expressions are derived for the cross-frequency and band-averaged variance of the vibrational energies in the diffuse components and for the cross-frequency and band-averaged variance of the cross spectrum of the vibro-acoustic field response of the deterministic components. These expressions are extensively validated against detailed Monte Carlo analyses of coupled plate systems in which diffuse fields are simulated by randomly distributing small point masses across the flexible components, and good agreement is found.
Mechanical factors relate to pain in knee osteoarthritis.
Maly, Monica R; Costigan, Patrick A; Olney, Sandra J
2008-07-01
Pain experienced by people with knee osteoarthritis is related to psychosocial factors and damage to articular tissues and/or the pain pathway itself. Mechanical factors have been speculated to trigger this pain experience; yet mechanics have not been identified as a source of pain in this population. The purpose of this study was to identify whether mechanics could explain variance in pain intensity in people with knee osteoarthritis. Data from 53 participants with physician-diagnosed knee osteoarthritis (mean age=68.5 years; standard deviation=8.6 years) were analyzed. Pain intensity was reported on the Western Ontario and McMaster Universities Osteoarthritis Index. Mechanical measures included weight-bearing varus-valgus alignment, body mass index and isokinetic quadriceps torque. Gait analysis captured the range of adduction-abduction angle, range of flexion-extension angle and external knee adduction moment during level walking. Pain intensity was significantly related to the dynamic range of flexion-extension during gait and body mass index. A total of 29% of the variance in pain intensity was explained by mechanical variables. The range of flexion-extension explained 18% of variance in pain intensity. Body mass index added 11% to the model. The knee adduction moment was unrelated to pain intensity. The findings support that mechanical factors are related to knee osteoarthritis pain. Because limitations in flexion-extension range of motion and body size are modifiable factors, future research could examine whether interventions targeting these mechanics would facilitate pain management.
ERIC Educational Resources Information Center
Brockmann, Frank
2011-01-01
State testing programs today are more extensive than ever, and their results are required to serve more purposes and high-stakes decisions than one might have imagined. Assessment results are used to hold schools, districts, and states accountable for student performance and to help guide a multitude of important decisions. This report describes…
Variance in binary stellar population synthesis
NASA Astrophysics Data System (ADS)
Breivik, Katelyn; Larson, Shane L.
2016-03-01
In the years preceding LISA, Milky Way compact binary population simulations can be used to inform the science capabilities of the mission. Galactic population simulation efforts generally focus on high fidelity models that require extensive computational power to produce a single simulated population for each model. Each simulated population represents an incomplete sample of the functions governing compact binary evolution, thus introducing variance from one simulation to another. We present a rapid Monte Carlo population simulation technique that can simulate thousands of populations in less than a week, thus allowing a full exploration of the variance associated with a binary stellar evolution model.
Studying Variance in the Galactic Ultra-compact Binary Population
NASA Astrophysics Data System (ADS)
Larson, Shane L.; Breivik, Katelyn
2017-01-01
In the years preceding LISA, Milky Way compact binary population simulations can be used to inform the science capabilities of the mission. Galactic population simulation efforts generally focus on high fidelity models that require extensive computational power to produce a single simulated population for each model. Each simulated population represents an incomplete sample of the functions governing compact binary evolution, thus introducing variance from one simulation to another. We present a rapid Monte Carlo population simulation technique that can simulate thousands of populations on week-long timescales, thus allowing a full exploration of the variance associated with a binary stellar evolution model.
2012-04-30
tool that provides a means of balancing capability development against cost and interdependent risks through the use of modern portfolio theory ... (Focardi, 2007; Tutuncu & Cornuejols, 2007) that are extensions of modern portfolio and control theory. The reformulation allows for possible changes ... Acquisition: Wave Model context • An Investment Portfolio Approach – Mean-Variance Approach – Mean-Variance: A Robust Version • Concept
Likelihood-Based Random-Effect Meta-Analysis of Binary Events.
Amatya, Anup; Bhaumik, Dulal K; Normand, Sharon-Lise; Greenhouse, Joel; Kaizar, Eloise; Neelon, Brian; Gibbons, Robert D
2015-01-01
Meta-analysis has been used extensively for evaluation of efficacy and safety of medical interventions. Its advantages and utilities are well known. However, recent studies have raised questions about the accuracy of the commonly used moment-based meta-analytic methods in general and for rare binary outcomes in particular. The issue is further complicated for studies with heterogeneous effect sizes. Likelihood-based mixed-effects modeling provides an alternative to moment-based methods such as inverse-variance weighted fixed- and random-effects estimators. In this article, we compare and contrast different mixed-effect modeling strategies in the context of meta-analysis. Their performance in estimation and testing of overall effect and heterogeneity are evaluated when combining results from studies with a binary outcome. Models that allow heterogeneity in both baseline rate and treatment effect across studies have low type I and type II error rates, and their estimates are the least biased among the models considered.
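For context, the moment-based machinery that the paper critiques, inverse-variance pooling with a DerSimonian-Laird between-study variance, fits in a few lines. The study effects and variances below are hypothetical; the likelihood-based mixed-effects models the paper recommends replace this closed-form moment estimator with iterative ML fitting.

```python
def fixed_effect(effects, variances):
    # Inverse-variance weighted pooled estimate and its variance
    w = [1.0 / v for v in variances]
    est = sum(wi * e for wi, e in zip(w, effects)) / sum(w)
    return est, 1.0 / sum(w)

def dl_tau2(effects, variances):
    # DerSimonian-Laird moment estimator of the between-study variance
    w = [1.0 / v for v in variances]
    est, _ = fixed_effect(effects, variances)
    q = sum(wi * (e - est) ** 2 for wi, e in zip(w, effects))
    c = sum(w) - sum(wi * wi for wi in w) / sum(w)
    return max(0.0, (q - (len(effects) - 1)) / c)

def random_effect(effects, variances):
    # Random-effects pooling: widen each study's variance by tau^2
    t2 = dl_tau2(effects, variances)
    return fixed_effect(effects, [v + t2 for v in variances])

log_or = [-0.4, -0.1, -0.9, 0.2, -0.5]   # hypothetical study log odds ratios
var = [0.04, 0.09, 0.25, 0.16, 0.06]     # hypothetical within-study variances
print(tuple(round(v, 3) for v in random_effect(log_or, var)))
```

With rare binary events the within-study variances themselves become unstable, which is one reason the moment-based route breaks down in the settings the abstract describes.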
Power law analysis of the human microbiome.
Ma, Zhanshan Sam
2015-11-01
Taylor's (1961, Nature, 189:732) power law, a power function (V = am^b) describing the scaling relationship between the mean and variance of population abundances of organisms, has been found to govern the population abundance distributions of single species in both space and time in macroecology. It is regarded as one of few generalities in ecology, and its parameter b has been widely applied to characterize spatial aggregation (i.e. heterogeneity) and temporal stability of single-species populations. Here, we test its applicability to bacterial populations in the human microbiome using extensive data sets generated by the US-NIH Human Microbiome Project (HMP). We further propose extending Taylor's power law from the population to the community level, and accordingly introduce four types of power-law extensions (PLEs): type I PLE for community spatial aggregation (heterogeneity), type II PLE for community temporal aggregation (stability), type III PLE for mixed-species population spatial aggregation (heterogeneity) and type IV PLE for mixed-species population temporal aggregation (stability). Our results show that fittings to the four PLEs with HMP data were statistically extremely significant and their parameters are ecologically sound, hence confirming the validity of the power law at both the population and community levels. These findings not only provide a powerful tool to characterize the aggregations of population and community in both time and space, offering important insights into community heterogeneity in space and/or stability in time, but also underscore the three general properties of power laws (scale invariance, no average and universality) and their specific manifestations in our four PLEs. © 2015 John Wiley & Sons Ltd.
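In practice the two parameters of Taylor's law are usually estimated by ordinary least squares on log-transformed mean-variance pairs, since log V = log a + b log m. A minimal sketch on synthetic data (illustrative only, not the HMP analysis):

```python
import numpy as np

def fit_taylor_power_law(means, variances):
    """Fit V = a * m**b by OLS on log-transformed data:
    log V = log a + b * log m."""
    logm, logv = np.log(means), np.log(variances)
    b, loga = np.polyfit(logm, logv, 1)  # slope, intercept
    return np.exp(loga), b

# Synthetic check: 30 "populations" whose variance follows V = 2 * m**1.8
rng = np.random.default_rng(0)
m = rng.uniform(1, 100, 30)
v = 2.0 * m**1.8 * np.exp(rng.normal(0, 0.05, 30))  # small lognormal scatter
a_hat, b_hat = fit_taylor_power_law(m, v)
print(a_hat, b_hat)
```

The fitted slope b_hat recovers the aggregation parameter b that the abstract describes as characterizing spatial or temporal heterogeneity.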
Kang, Le; Chen, Weijie; Petrick, Nicholas A.; Gallas, Brandon D.
2014-01-01
The area under the receiver operating characteristic (ROC) curve (AUC) is often used as a summary index of the diagnostic ability in evaluating biomarkers when the clinical outcome (truth) is binary. When the clinical outcome is right-censored survival time, the C index, motivated as an extension of AUC, has been proposed by Harrell as a measure of concordance between a predictive biomarker and the right-censored survival outcome. In this work, we investigate methods for statistical comparison of two diagnostic or predictive systems, which could be either two biomarkers or two fixed algorithms, in terms of their C indices. We adopt a U-statistics-based C estimator that is asymptotically normal and develop a nonparametric analytical approach to estimate the variance of the C estimator and the covariance of two C estimators. A z-score test is then constructed to compare the two C indices. We validate our one-shot nonparametric method via simulation studies in terms of the type I error rate and power. We also compare our one-shot method with resampling methods including the jackknife and the bootstrap. Simulation results show that the proposed one-shot method provides almost unbiased variance estimations and has satisfactory type I error control and power. Finally, we illustrate the use of the proposed method with an example from the Framingham Heart Study. PMID:25399736
Analytical pricing formulas for hybrid variance swaps with regime-switching
NASA Astrophysics Data System (ADS)
Roslan, Teh Raihana Nazirah; Cao, Jiling; Zhang, Wenjun
2017-11-01
The problem of pricing discretely-sampled variance swaps under stochastic volatility, stochastic interest rate and regime-switching is considered in this paper. The Heston stochastic volatility model structure is extended by adding the Cox-Ingersoll-Ross (CIR) stochastic interest rate model. In addition, the parameters of the model are permitted to have transitions following a continuous-time, observable Markov chain process. This hybrid model can be used to describe certain macroeconomic conditions, for example the changing phases of the business cycle. The outcome of our regime-switching hybrid model is presented in terms of analytical pricing formulas for variance swaps.
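As background for what such a contract settles on (a sketch of the payoff quantity, not the paper's pricing model): a discretely-sampled variance swap pays on the annualized realized variance of log returns over the sampling dates. A minimal computation, with an annualization factor of 252 trading days assumed for illustration:

```python
import numpy as np

def realized_variance(prices, annualization=252):
    """Discretely-sampled realized variance: the quantity a variance swap
    settles on, (AF/N) * sum of squared log returns."""
    log_returns = np.diff(np.log(prices))
    return annualization * np.mean(log_returns**2)

# A constant 1% daily log return gives realized variance 252 * 0.01**2
prices = 100.0 * np.exp(0.01 * np.arange(253))
rv = realized_variance(prices)
print(rv)
```

The pricing problem in the abstract is to compute the fair strike of this quantity in expectation under the hybrid Heston-CIR regime-switching dynamics, which the sketch above does not attempt.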
Isokinetic Extension Strength Is Associated With Single-Leg Vertical Jump Height.
Fischer, Felix; Blank, Cornelia; Dünnwald, Tobias; Gföller, Peter; Herbst, Elmar; Hoser, Christian; Fink, Christian
2017-11-01
Isokinetic strength testing is an important tool in the evaluation of the physical capacities of athletes as well as for decision making regarding return to sports after anterior cruciate ligament (ACL) reconstruction in both athletes and the lay population. However, isokinetic testing is time consuming and requires special testing equipment. A single-jump test, regardless of leg dominance, may provide information regarding knee extension strength through the use of correlation analysis of jump height and peak torque of isokinetic muscle strength. Cross-sectional study; Level of evidence, 3. A total of 169 patients who underwent ACL reconstruction were included in this study. Isokinetic testing was performed on the injured and noninjured legs. Additionally, a single-leg countermovement jump was performed to assess jump height using a jump accelerometer sensor. Extension strength values were used to assess the association between isokinetic muscle strength and jump height. The sample consisted of 60 female (mean age, 20.8 ± 8.3 years; mean weight, 61.7 ± 6.5 kg; mean height, 167.7 ± 5.3 cm) and 109 male (mean age, 23.2 ± 7.7 years; mean weight, 74.6 ± 10.2 kg; mean height, 179.9 ± 6.9 cm) patients. Bivariate correlation analysis showed an association (r = 0.56, P < .001) between jump height and isokinetic extension strength on the noninvolved side as well as an association (r = 0.52, P < .001) for the involved side. Regression analysis showed that in addition to jump height (beta = 0.49, P < .001), sex (beta = -0.17, P = .008) and body mass index (beta = 0.37, P < .001) affected isokinetic strength. The final model explained 51.1% of the variance in isokinetic muscle strength, with jump height having the strongest impact (beta = 0.49, P < .001) and explaining 31.5% of the variance. Initial analysis showed a strong association between isokinetic strength and jump height.
The study population encompassed various backgrounds, skill levels, and activity profiles, which might have affected the outcome. Even after controlling for age and sex, isokinetic strength was still moderately associated with jump height. Therefore, the jump technique and type of sport should be considered in future research.
Jongerling, Joran; Laurenceau, Jean-Philippe; Hamaker, Ellen L
2015-01-01
In this article we consider a multilevel first-order autoregressive [AR(1)] model with random intercepts, random autoregression, and random innovation variance (i.e., the level 1 residual variance). Including random innovation variance is an important extension of the multilevel AR(1) model for two reasons. First, between-person differences in innovation variance are important from a substantive point of view, in that they capture differences in sensitivity and/or exposure to unmeasured internal and external factors that influence the process. Second, using simulation methods we show that modeling the innovation variance as fixed across individuals, when it should be modeled as a random effect, leads to biased parameter estimates. Additionally, we use simulation methods to compare maximum likelihood estimation to Bayesian estimation of the multilevel AR(1) model and investigate the trade-off between the number of individuals and the number of time points. We provide an empirical illustration by applying the extended multilevel AR(1) model to daily positive affect ratings from 89 married women over the course of 42 consecutive days.
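The extended model described above is straightforward to simulate. The sketch below generates data from a multilevel AR(1) with random intercepts, random autoregression, and random innovation variance; all parameter values (and the log-normal distribution assumed for the innovation SDs) are illustrative, not the article's estimates:

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate_multilevel_ar1(n_persons=89, n_days=42):
    """Simulate a multilevel AR(1) in which each person gets a random
    intercept, a random autoregression phi, and a random (log-normal)
    innovation variance. Hyperparameters are illustrative only."""
    mu = rng.normal(5.0, 1.0, n_persons)                        # random intercepts
    phi = np.clip(rng.normal(0.3, 0.1, n_persons), -0.9, 0.9)   # random AR(1) coefficients
    sigma = np.exp(rng.normal(0.0, 0.3, n_persons))             # random innovation SDs
    y = np.empty((n_persons, n_days))
    y[:, 0] = mu
    for t in range(1, n_days):
        # person-specific innovation variance: the key extension
        y[:, t] = mu + phi * (y[:, t - 1] - mu) + rng.normal(0.0, sigma)
    return y, mu, phi, sigma

y, mu, phi, sigma = simulate_multilevel_ar1()
print(y.shape)  # (89, 42)
```

The dimensions mirror the empirical illustration in the abstract (89 women, 42 days); treating sigma as fixed across persons when it in fact varies is the misspecification the authors show leads to biased estimates.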
Head repositioning accuracy in patients with whiplash-associated disorders.
Feipel, Veronique; Salvia, Patrick; Klein, Helene; Rooze, Marcel
2006-01-15
Controlled study measuring head repositioning error (HRE) using an electrogoniometric device. To compare HRE in neutral position, axial rotation and complex postures of patients with whiplash-associated disorders (WAD) to that of control subjects. The presence of kinesthetic alterations in patients with WAD is controversial. In 26 control subjects and 29 patients with WAD (aged 22-74 years), head kinematics was sampled using a 3-dimensional electrogoniometer mounted using a harness and a helmet. All tasks were performed in a seated position. The repositioning tasks included neutral repositioning after maximal flexion-extension, eyes open and blindfolded, repositioning at 50 degrees of axial rotation, and repositioning at 50 degrees of axial rotation combined with 20 degrees of ipsilateral bending. The flexion-extension, ipsilateral bending, and axial rotation components of HRE were considered. A multiple-way repeated-measures analysis of variance was used to compare tasks and groups. The WAD group displayed a reduced flexion-extension range (P = 1.9 × 10^-4), and larger HRE during flexion-extension and repositioning tasks (P = 0.009) than controls. Neither group nor task affected maximal motion velocity. Neutral HRE of the flexion-extension component was larger in the blindfolded condition (P = 0.03). Ipsilateral bending and axial rotation HRE components were smaller than the flexion-extension component (P = 7.1 × 10^-23). For pure rotation repositioning, axial rotation HRE was significantly larger than flexion-extension and ipsilateral bending repositioning error (P = 3.0 × 10^-23). The ipsilateral bending component of HRE was significantly larger for combined tasks than for pure rotation tasks (P = 0.004). In patients with WAD, range of motion and head repositioning accuracy were reduced. However, the differences were small. Vision suppression and task type influenced HRE.
Variance partitioning of stream diatom, fish, and invertebrate indicators of biological condition
Zuellig, Robert E.; Carlisle, Daren M.; Meador, Michael R.; Potapova, Marina
2012-01-01
Stream indicators used to make assessments of biological condition are influenced by many possible sources of variability. To examine this issue, we used multiple-year and multiple-reach diatom, fish, and invertebrate data collected from 20 least-disturbed and 46 developed stream segments between 1993 and 2004 as part of the US Geological Survey National Water Quality Assessment Program. We used a variance-component model to summarize the relative and absolute magnitude of 4 variance components (among-site, among-year, site × year interaction, and residual) in indicator values (observed/expected ratio [O/E] and regional multimetric indices [MMI]) among assemblages and between basin types (least-disturbed and developed). We used multiple-reach samples to evaluate discordance in site assessments of biological condition caused by sampling variability. Overall, patterns in variance partitioning were similar among assemblages and basin types with one exception. Among-site variance dominated the relative contribution to the total variance (64–80% of total variance), residual variance (sampling variance) accounted for more variability (8–26%) than interaction variance (5–12%), and among-year variance was always negligible (0–0.2%). The exception to this general pattern was for invertebrates at least-disturbed sites where variability in O/E indicators was partitioned between among-site and residual (sampling) variance (among-site = 36%, residual = 64%). This pattern was not observed for fish and diatom indicators (O/E and regional MMI). We suspect that unexplained sampling variability is what largely remained after the invertebrate indicators (O/E predictive models) had accounted for environmental differences among least-disturbed sites. The influence of sampling variability on discordance of within-site assessments was assemblage or basin-type specific. 
Discordance among assessments was nearly 2× greater in developed basins (29–31%) than in least-disturbed sites (15–16%) for invertebrates and diatoms, whereas discordance among assessments based on fish did not differ between basin types (least-disturbed = 16%, developed = 17%). Assessments made using invertebrate and diatom indicators from a single reach disagreed with other samples collected within the same stream segment nearly ⅓ of the time in developed basins, compared to ⅙ for all other cases.
Anthropometry as a predictor of high speed performance.
Caruso, J F; Ramey, E; Hastings, L P; Monda, J K; Coday, M A; McLagan, J; Drummond, J
2009-07-01
To assess anthropometry as a predictor of high-speed performance, subjects performed four seated knee- and hip-extension workouts with their left leg on an inertial exercise trainer (Impulse Technologies, Newnan, GA). Workouts, done exclusively in either the tonic or phasic contractile mode, entailed two one-minute sets separated by a 90-second rest period and yielded three performance variables: peak force, average force and work. Subjects provided the following anthropometric data: height, weight, body mass index, as well as total, upper and lower left leg lengths. Multiple regression was used to determine how much of the variance in each performance variable anthropometry could predict. Anthropometry explained a modest (R2=0.27-0.43) yet significant degree of variance from inertial exercise trainer workouts. Anthropometry was a better predictor of peak force variance from phasic workouts, while it accounted for a significant degree of average force and work variance solely from tonic workouts. Future research should identify variables that account for the unexplained variance in high-speed exercise performance.
Undergraduate Navigator Training Attrition Study
1975-11-01
stabilization. The Masculinity-Femininity Scale (SVIB), significant at the .05 level, contributed 1.73% to the predicted variance. High scores (those... Do you have extensive experience in athletic competition? If so, what sport(s) and what kind of... machinery? For example, farm equipment, construction equipment.
Identifying the Source of Misfit in Item Response Theory Models.
Liu, Yang; Maydeu-Olivares, Alberto
2014-01-01
When an item response theory model fails to fit adequately, the items for which the model provides a good fit and those for which it does not must be determined. To this end, we compare the performance of several fit statistics for item pairs with known asymptotic distributions under maximum likelihood estimation of the item parameters: (a) a mean and variance adjustment to bivariate Pearson's X², (b) a bivariate subtable analog to Reiser's (1996) overall goodness-of-fit test, (c) a z statistic for the bivariate residual cross product, and (d) Maydeu-Olivares and Joe's (2006) M2 statistic applied to bivariate subtables. The unadjusted Pearson's X² with heuristically determined degrees of freedom is also included in the comparison. For binary and ordinal data, our simulation results suggest that the z statistic has the best Type I error and power behavior among all the statistics under investigation when the observed information matrix is used in its computation. However, if one has to use the cross-product information, the mean and variance adjusted X² is recommended. We illustrate the use of pairwise fit statistics in 2 real-data examples and discuss possible extensions of the current research in various directions.
Kang, Le; Chen, Weijie; Petrick, Nicholas A; Gallas, Brandon D
2015-02-20
The area under the receiver operating characteristic curve is often used as a summary index of the diagnostic ability in evaluating biomarkers when the clinical outcome (truth) is binary. When the clinical outcome is right-censored survival time, the C index, motivated as an extension of area under the receiver operating characteristic curve, has been proposed by Harrell as a measure of concordance between a predictive biomarker and the right-censored survival outcome. In this work, we investigate methods for statistical comparison of two diagnostic or predictive systems, which could be either two biomarkers or two fixed algorithms, in terms of their C indices. We adopt a U-statistics-based C estimator that is asymptotically normal and develop a nonparametric analytical approach to estimate the variance of the C estimator and the covariance of two C estimators. A z-score test is then constructed to compare the two C indices. We validate our one-shot nonparametric method via simulation studies in terms of the type I error rate and power. We also compare our one-shot method with resampling methods including the jackknife and the bootstrap. Simulation results show that the proposed one-shot method provides almost unbiased variance estimations and has satisfactory type I error control and power. Finally, we illustrate the use of the proposed method with an example from the Framingham Heart Study. Copyright © 2014 John Wiley & Sons, Ltd.
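Given two C estimates together with their estimated variances and covariance (however obtained), the comparison reduces to a z score. A minimal sketch with made-up numbers; the paper's actual contribution is the nonparametric estimation of the variance and covariance terms, which is not reproduced here:

```python
import math

def c_index_z_test(c1, c2, var1, var2, cov):
    """Two-sided z test comparing two correlated C indices:
    z = (C1 - C2) / sqrt(var1 + var2 - 2*cov)."""
    se = math.sqrt(var1 + var2 - 2.0 * cov)
    z = (c1 - c2) / se
    # two-sided p value via the standard normal CDF (math.erf)
    p = 2.0 * (1.0 - 0.5 * (1.0 + math.erf(abs(z) / math.sqrt(2.0))))
    return z, p

# Illustrative inputs only (not Framingham values)
z, p = c_index_z_test(0.72, 0.68, 0.0004, 0.0005, 0.0002)
print(z, p)
```

Note the covariance term: because both C indices are computed on the same subjects, ignoring it would misstate the standard error of the difference.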
Near-wall modelling of compressible turbulent flows
NASA Technical Reports Server (NTRS)
So, Ronald M. C.
1990-01-01
Work was carried out to formulate near-wall models for the equations governing the transport of the temperature-variance and its dissipation rate. With these equations properly modeled, a foundation is laid for their extension together with the heat-flux equations to compressible flows. This extension is carried out in a manner similar to that used to extend the incompressible near-wall Reynolds-stress models to compressible flows. The methodology used to accomplish the extension of the near-wall Reynolds-stress models is examined and the actual extension of the models for the Reynolds-stress equations and the near-wall dissipation-rate equation to compressible flows is given. Then the formulation of the near-wall models for the equations governing the transport of the temperature variance and its dissipation rate is discussed. Finally, a sample calculation of a flat plate compressible turbulent boundary-layer flow with adiabatic wall boundary condition and a free-stream Mach number of 2.5 using a two-equation near-wall closure is presented. The results show that the near-wall two-equation closure formulated for compressible flows is quite valid and the calculated properties are in good agreement with measurements. Furthermore, the near-wall behavior of the turbulence statistics and structure parameters is consistent with that found in incompressible flows.
On the impact of relatedness on SNP association analysis.
Gross, Arnd; Tönjes, Anke; Scholz, Markus
2017-12-06
When testing for SNP (single nucleotide polymorphism) associations in related individuals, observations are not independent. Simple linear regression assuming independent normally distributed residuals results in an increased type I error, and the power of the test is also affected in a more complicated manner. Inflation of type I error is often successfully corrected by genomic control. However, this reduces the power of the test when relatedness is of concern. In the present paper, we derive explicit formulae to investigate how heritability and strength of relatedness contribute to variance inflation of the effect estimate of the linear model. Further, we study the consequences of variance inflation on hypothesis testing and compare the results with those of genomic control correction. We apply the developed theory to the publicly available HapMap trio data (N=129), the Sorbs (a self-contained population with N=977 characterised by a cryptic relatedness structure) and synthetic family studies with different sample sizes (ranging from N=129 to N=999) and different degrees of relatedness. We derive explicit and easy-to-apply approximation formulae to estimate the impact of relatedness on the variance of the effect estimate of the linear regression model. Variance inflation increases with increasing heritability. Relatedness structure also impacts the degree of variance inflation, as shown for the example family structures. Variance inflation is smallest for HapMap trios, followed by a synthetic family study corresponding to the trio data but with larger sample size than HapMap. Next strongest inflation is observed for the Sorbs, and finally, for a synthetic family study with a more extreme relatedness structure but with similar sample size as the Sorbs. Type I error increases rapidly with increasing inflation. However, for smaller significance levels, power increases with increasing inflation, while the opposite holds for larger significance levels.
When genomic control is applied, type I error is preserved while power decreases rapidly with increasing variance inflation. Stronger relatedness as well as higher heritability result in increased variance of the effect estimate of simple linear regression analysis. While type I error rates are generally inflated, the behaviour of power is more complex, since power can be increased or reduced depending on relatedness and the heritability of the phenotype. Genomic control cannot be recommended to deal with inflation due to relatedness. Although it preserves type I error, the loss in power can be considerable. We provide a simple formula for estimating variance inflation given the relatedness structure and the heritability of a trait of interest. As a rule of thumb, variance inflation below 1.05 does not require correction and simple linear regression analysis is still appropriate.
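For reference, the genomic control correction discussed above divides each 1-df association chi-square by an inflation factor estimated from the genome-wide median. A minimal sketch with simulated statistics (the uniform inflation factor of 1.3 is an illustrative assumption, not a value from the paper):

```python
import numpy as np

def genomic_control_lambda(chi2_stats):
    """Genomic-control inflation factor: median of the observed 1-df
    chi-square association statistics divided by the median of the
    chi-square(1) distribution (~0.4549)."""
    return np.median(chi2_stats) / 0.4549

rng = np.random.default_rng(1)
clean = rng.chisquare(df=1, size=100_000)   # statistics under the null
inflated = 1.3 * clean                      # uniform variance inflation of 1.3
lam = genomic_control_lambda(inflated)
corrected = inflated / lam                  # GC-corrected statistics
print(lam)
```

Dividing by lambda restores the null median (preserving type I error), but, as the abstract argues, it also shrinks genuinely associated statistics, which is where the power loss comes from.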
Biotic and abiotic dynamics of a high solid-state anaerobic digestion box-type container system.
Walter, Andreas; Probst, Maraike; Hinterberger, Stephan; Müller, Horst; Insam, Heribert
2016-03-01
A solid-state anaerobic digestion box-type container system for biomethane production was observed in 12 three-week batch fermentations. Reactor performance was monitored using physico-chemical analysis and the methanogenic community was identified using ANAEROCHIP-microarrays and quantitative PCR. A resilient community was found in all batches, despite variations in inoculum to substrate ratio, feedstock quality, and fluctuating reactor conditions. The consortia were dominated by mixotrophic Methanosarcina that were accompanied by hydrogenotrophic Methanobacterium, Methanoculleus, and Methanocorpusculum. The relationship between biotic and abiotic variables was investigated using bivariate correlation analysis and univariate analysis of variance. High amounts of biogas were produced in batches with high copy numbers of Methanosarcina. High copy numbers of Methanocorpusculum and extensive percolation, however, were found to negatively correlate with biogas production. Supporting these findings, a negative correlation was detected between Methanocorpusculum and Methanosarcina. Based on these results, this study suggests Methanosarcina as an indicator for well-functioning reactor performance. Copyright © 2016 Elsevier Ltd. All rights reserved.
Relating the Hadamard Variance to MCS Kalman Filter Clock Estimation
NASA Technical Reports Server (NTRS)
Hutsell, Steven T.
1996-01-01
The Global Positioning System (GPS) Master Control Station (MCS) currently makes significant use of the Allan Variance. This two-sample variance equation has proven excellent as a handy, understandable tool, both for time domain analysis of GPS cesium frequency standards, and for fine tuning the MCS's state estimation of these atomic clocks. The Allan Variance does not explicitly converge for the noise types of alpha less than or equal to minus 3 and can be greatly affected by frequency drift. Because GPS rubidium frequency standards exhibit non-trivial aging and aging noise characteristics, the basic Allan Variance analysis must be augmented in order to (a) compensate for a dynamic frequency drift, and (b) characterize two additional noise types, specifically alpha = minus 3, and alpha = minus 4. As the GPS program progresses, we will utilize a larger percentage of rubidium frequency standards than ever before. Hence, GPS rubidium clock characterization will require more attention than ever before. The three-sample variance, commonly referred to as a renormalized Hadamard Variance, is unaffected by linear frequency drift, converges for alpha greater than minus 5, and thus has utility for modeling noise in GPS rubidium frequency standards. This paper demonstrates the potential of Hadamard Variance analysis in GPS operations, and presents an equation that relates the Hadamard Variance to the MCS's Kalman filter process noises.
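A minimal sketch of the three-sample idea: the Hadamard variance is built from second differences of fractional-frequency data, so a linear frequency drift cancels exactly. The non-overlapping estimator below (at a fixed averaging time, on simulated white-FM noise) is illustrative only, not the MCS implementation:

```python
import numpy as np

def hadamard_variance(y):
    """Non-overlapping estimate of the three-sample (Hadamard) variance
    from fractional-frequency data y at a fixed averaging time:
    H = <(y[i+2] - 2*y[i+1] + y[i])**2> / 6.
    The second difference annihilates linear frequency drift."""
    d2 = y[2:] - 2.0 * y[1:-1] + y[:-2]
    return np.mean(d2**2) / 6.0

rng = np.random.default_rng(7)
y = rng.normal(0.0, 1e-12, 1000)     # white FM noise, illustrative level
drift = 1e-13 * np.arange(1000)      # linear frequency drift (aging)
print(np.isclose(hadamard_variance(y), hadamard_variance(y + drift)))  # True
```

This drift immunity is exactly the property that makes the statistic attractive for rubidium standards with non-trivial aging, where the two-sample Allan variance would be biased by the drift.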
Effects of the forearm support band on wrist extensor muscle fatigue.
Knebel, P T; Avery, D W; Gebhardt, T L; Koppenhaver, S L; Allison, S C; Bryan, J M; Kelly, A
1999-11-01
A crossover experimental design with repeated measures. To determine whether the forearm support band alters wrist extensor muscle fatigue. Fatigue of the wrist extensor muscles is thought to be a contributing factor in the development of lateral epicondylitis. The forearm support band is purported to reduce or prevent symptoms of lateral epicondylitis but the mechanism of action is unknown. Fifty unimpaired subjects (36 men, 14 women; mean age = 29 +/- 6 years) were tested with and without a forearm support band before and after a fatiguing bout of exercise. Peak wrist extension isometric force, peak isometric grip force, and median power spectral frequency for wrist extensor electromyographic activity were measured before and after exercise and with and without the forearm support band. A 2 x 2 repeated measures multivariate analysis of variance was used to analyze the data, followed by univariate analysis of variance and Tukey's multiple comparison tests. Peak wrist extension isometric force, peak grip isometric force, and median power spectral frequency were all reduced after exercise. However, there was a significant reduction in peak grip isometric force and peak wrist extension isometric force values for the with-forearm support band condition (grip force 28%, wrist extension force 26%) compared to the without-forearm support band condition (grip force 18%, wrist extension force 15%). Wearing the forearm support band increased the rate of fatigue in unimpaired individuals. Our findings do not support the premise that wearing the forearm support band reduces muscle fatigue in the wrist extensors.
NASA Astrophysics Data System (ADS)
Burton, S. P.; Ferrare, R. A.; Vaughan, M.; Hostetler, C. A.; Rogers, R. R.; Hair, J. W.; Cook, A. L.; Harper, D. B.
2013-12-01
Knowledge of aerosol type is important for source attribution and for determining the magnitude and assessing the consequences of aerosol radiative forcing. The NASA Langley Research Center airborne High Spectral Resolution Lidar (HSRL-1) has acquired considerable datasets of both aerosol extensive parameters (e.g. aerosol optical depth) and intensive parameters (e.g. aerosol depolarization ratio, lidar ratio) that can be used to infer aerosol type. An aerosol classification methodology has been used extensively to classify HSRL-1 aerosol measurements of different aerosol types including dust, smoke, urban pollution, and marine aerosol. However, atmospheric aerosol is frequently not a single pure type, but instead occurs as a mixture of types, and this mixing affects the optical and radiative properties of the aerosol. Here we present a comprehensive and unified set of rules for characterizing external mixtures using several key aerosol intensive parameters: extinction-to-backscatter ratio (i.e. lidar ratio), backscatter color ratio, and depolarization ratio. Our mixing rules apply not just to the scalar values of aerosol intensive parameters, but to multi-dimensional normal distributions with variance in each measurement dimension. We illustrate the applicability of the mixing rules using examples of HSRL-1 data where mixing occurred between different aerosol types, including advected Saharan dust mixed with the marine boundary layer in the Caribbean Sea and locally generated dust mixed with urban pollution in the Mexico City surroundings. For each of these cases we infer a time-height cross section of mixing ratio along the flight track and we partition aerosol extinction into portions attributed to the two pure types. 
Since multiple aerosol intensive parameters are measured and included in these calculations, the techniques can also be used for cases without significant depolarization (unlike similar work by earlier researchers), and so a third example of a mixture of smoke plus marine aerosol is also explored.
NASA Astrophysics Data System (ADS)
Yu, Kai; Dong, Changming; King, Gregory P.
2017-06-01
We investigate mesoscale turbulence (10-1000 km) in the ocean winds over the Kuroshio Extension (28°N-40°N, 140°E-180°E) using the QuikSCAT data set (November 1999 to October 2009). We calculate the second-order (D_jj) and third-order (D_jjj) structure functions and the spatial variance (V_j) as a function of scale r (j = L, T denotes, respectively, the longitudinal (divergent) and transverse (vortical) component). The most interesting results of the analysis follow. Although both V_j(r) and D_jj(r) measure the turbulent kinetic energy (TKE), we find that V_j(r) is the more robust measure. The spatial variance density (dV_j/dr) has a broad peak near 450 km (close to the midlatitude Rossby radius of deformation). On interannual time scales, TKE correlates well with the El Niño 3.4 index. According to turbulence theory, the kinetic energy cascades downscale (upscale) if D_LLL(r) (also the skewness S_L = D_LLL / D_LL^(3/2)) is negative (positive). Our results for the Kuroshio Extension are consistent with a downscale cascade (indicating convergence dominates). Furthermore, classical turbulence theory predicts that S_L = -0.3, independent of r; however, we find S_L varies strongly with r, from -4 at small scales to -0.3 at large scales. This nonclassical behavior implies strong scale interaction, which we attribute to the rapid, and sometimes explosive, growth of storms in the region through baroclinic instability. Finally, we find that S_T (a measure of cyclonic/anticyclonic asymmetry) is positive (cyclonic) and also varies strongly with r, from 4 at small scales to 0.5 at large scales. New turbulence models are needed to explain these results, and they will benefit numerical weather prediction and climate modeling.
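The structure-function diagnostics used above are simple to compute from a velocity record. The sketch below estimates the second- and third-order structure functions and the skewness S_L = D_LLL / D_LL^(3/2) for a synthetic one-dimensional signal (a correlated random walk, purely illustrative, with lag r in grid points rather than kilometres):

```python
import numpy as np

def structure_functions(u, r):
    """Second- and third-order longitudinal structure functions at lag r
    for a 1-D velocity sample u, plus the skewness
    S_L = D_LLL / D_LL**(3/2)."""
    du = u[r:] - u[:-r]                  # velocity increments at separation r
    d2, d3 = np.mean(du**2), np.mean(du**3)
    return d2, d3, d3 / d2**1.5

rng = np.random.default_rng(3)
u = np.cumsum(rng.normal(size=10_000))   # synthetic correlated signal
d2, d3, skew = structure_functions(u, r=10)
print(d2, skew)
```

For this symmetric synthetic signal the skewness is near zero; in the QuikSCAT analysis the sign and scale dependence of S_L are what diagnose the direction of the energy cascade.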
Designing Measurement Studies under Budget Constraints: Controlling Error of Measurement and Power.
ERIC Educational Resources Information Center
Marcoulides, George A.
1995-01-01
A methodology is presented for minimizing the mean error variance-covariance component in studies with resource constraints. The method is illustrated using a one-facet multivariate design. Extensions to other designs are discussed. (SLD)
Edwards, Rufus D; Smith, Kirk R; Zhang, Junfeng; Ma, Yuqing
2003-01-01
Residential energy use in developing countries has traditionally been associated with combustion devices of poor energy efficiency, which have been shown to produce substantial health-damaging pollution, contributing significantly to the global burden of disease, and greenhouse gas (GHG) emissions. Precision of these estimates in China has been hampered by limited data on stove use and fuel consumption in residences. In addition limited information is available on variability of emissions of pollutants from different stove/fuel combinations in typical use, as measurement of emission factors requires measurement of multiple chemical species in complex burn cycle tests. Such measurements are too costly and time consuming for application in conjunction with national surveys. Emissions of most of the major health-damaging pollutants (HDP) and many of the gases that contribute to GHG emissions from cooking stoves are the result of the significant portion of fuel carbon that is diverted to products of incomplete combustion (PIC) as a result of poor combustion efficiencies. The approximately linear increase in emissions of PIC with decreasing combustion efficiencies allows development of linear models to predict emissions of GHG and HDP intrinsically linked to CO2 and PIC production, and ultimately allows the prediction of global warming contributions from residential stove emissions. A comprehensive emissions database of three burn cycles of 23 typical fuel/stove combinations tested in a simulated village house in China has been used to develop models to predict emissions of HDP and global warming commitment (GWC) from cooking stoves in China, that rely on simple survey information on stove and fuel use that may be incorporated into national surveys. Stepwise regression models predicted 66% of the variance in global warming commitment (CO2, CO, CH4, NOx, TNMHC) per 1 MJ delivered energy due to emissions from these stoves if survey information on fuel type was available. 
If, in addition, stove type is known, stepwise regression models predicted 73% of the variance. Integrated assessment of policies to change stove or fuel type requires that implications for environmental impacts, energy efficiency, global warming, and human exposures to HDP emissions can be evaluated. Frequently, this involves measurement of TSP or CO as the major HDPs. Incorporating this information into the models predicted 79% and 78% of the variance in GWC, respectively. Clearly, however, making multiple measurements in conjunction with a national survey would be both expensive and time-consuming. Thus, models to predict HDP using simple survey information, with measurement of either CO/CO2 or TSP/CO2 to predict emission factors for the other HDP, have been derived. Stepwise regression models predicted 65% of the variance in emissions of total suspended particulate as grams of carbon (TSPC) per 1 MJ delivered if survey information on fuel and stove type was available, and 74% if the CO/CO2 ratio was measured. Similarly, stepwise regression models predicted 76% of the variance in COC emissions per MJ delivered with survey information on stove and fuel type, and 85% if the TSPC/CO2 ratio was measured. Ultimately, with international agreements on emissions trading frameworks, similar models based on extensive databases of the fate of fuel carbon during combustion from representative household stoves would provide a mechanism for computing greenhouse credits in the residential sector as part of clean development mechanism frameworks and for monitoring compliance with control regimes.
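The "percent of variance" figures quoted above are coefficients of determination (R²). A minimal sketch of that computation, with made-up emission values rather than the study's data:

```python
def r_squared(y, y_hat):
    """Fraction of the variance in y explained by predictions y_hat."""
    y_bar = sum(y) / len(y)
    ss_res = sum((yi - fi) ** 2 for yi, fi in zip(y, y_hat))
    ss_tot = sum((yi - y_bar) ** 2 for yi in y)
    return 1.0 - ss_res / ss_tot

# Perfect predictions explain all variance; predicting the mean explains none.
emissions = [2.1, 3.4, 1.8, 4.0]   # hypothetical GWC per MJ delivered
assert r_squared(emissions, emissions) == 1.0
```

A stepwise procedure would add survey predictors (fuel type, stove type) one at a time, keeping those that raise this statistic materially.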
USDA-ARS's Scientific Manuscript database
We proposed a method to estimate the error variance among non-replicated genotypes, and thus to estimate the genetic parameters, by using replicated controls. We derived formulas to estimate sampling variances of the genetic parameters. Computer simulation indicated that the proposed methods of estimatin...
Goetschius, John; Hart, Joseph M
2016-01-01
When returning to physical activity, patients with a history of anterior cruciate ligament reconstruction (ACL-R) often experience limitations in knee-joint function that may be due to chronic impairments in quadriceps motor control. Assessment of knee-extension torque variability may demonstrate underlying impairments in quadriceps motor control in patients with a history of ACL-R. To identify differences in maximal isometric knee-extension torque variability between knees that have undergone ACL-R and healthy knees and to determine the relationship between knee-extension torque variability and self-reported knee function in patients with a history of ACL-R. Descriptive laboratory study. Laboratory. A total of 53 individuals with primary, unilateral ACL-R (age = 23.4 ± 4.9 years, height = 1.7 ± 0.1 m, mass = 74.6 ± 14.8 kg) and 50 individuals with no history of substantial lower extremity injury or surgery who served as controls (age = 23.3 ± 4.4 years, height = 1.7 ± 0.1 m, mass = 67.4 ± 13.2 kg). Torque variability, strength, and central activation ratio (CAR) were calculated from 3-second maximal knee-extension contraction trials (90° of flexion) with a superimposed electrical stimulus. All participants completed the International Knee Documentation Committee (IKDC) Subjective Knee Evaluation Form, and we determined the number of months after surgery. Group differences were assessed using independent-samples t tests. Correlation coefficients were calculated among torque variability, strength, CAR, months after surgery, and IKDC scores. Torque variability, strength, CAR, and months after surgery were regressed on IKDC scores using stepwise, multiple linear regression. Torque variability was greater and strength, CAR, and IKDC scores were lower in the ACL-R group than in the control group (P < .05). Torque variability and strength were correlated with IKDC scores (P < .05). Torque variability, strength, and CAR were correlated with each other (P < .05). 
Torque variability alone accounted for 14.3% of the variance in IKDC scores. The combination of torque variability and number of months after surgery accounted for 21% of the variance in IKDC scores. Strength and CAR were excluded from the regression model. Knee-extension torque variability was moderately associated with IKDC scores in patients with a history of ACL-R. Torque variability combined with months after surgery predicted 21% of the variance in IKDC scores in these patients.
Isokinetic Extension Strength Is Associated With Single-Leg Vertical Jump Height
Fischer, Felix; Blank, Cornelia; Dünnwald, Tobias; Gföller, Peter; Herbst, Elmar; Hoser, Christian; Fink, Christian
2017-01-01
Background: Isokinetic strength testing is an important tool in the evaluation of the physical capacities of athletes as well as for decision making regarding return to sports after anterior cruciate ligament (ACL) reconstruction in both athletes and the lay population. However, isokinetic testing is time consuming and requires special testing equipment. Hypothesis: A single-jump test, regardless of leg dominance, may provide information regarding knee extension strength through the use of correlation analysis of jump height and peak torque of isokinetic muscle strength. Study Design: Cross-sectional study; Level of evidence, 3. Methods: A total of 169 patients who underwent ACL reconstruction were included in this study. Isokinetic testing was performed on the injured and noninjured legs. Additionally, a single-leg countermovement jump was performed to assess jump height using a jump accelerometer sensor. Extension strength values were used to assess the association between isokinetic muscle strength and jump height. Results: The sample consisted of 60 female (mean age, 20.8 ± 8.3 years; mean weight, 61.7 ± 6.5 kg; mean height, 167.7 ± 5.3 cm) and 109 male (mean age, 23.2 ± 7.7 years; mean weight, 74.6 ± 10.2 kg; mean height, 179.9 ± 6.9 cm) patients. Bivariate correlation analysis showed an association (r = 0.56, P < .001) between jump height and isokinetic extension strength on the noninvolved side as well as an association (r = 0.52, P < .001) for the involved side. Regression analysis showed that in addition to jump height (beta = 0.49, P < .001), sex (beta = –0.17, P = .008) and body mass index (beta = 0.37, P < .001) affected isokinetic strength. The final model explained 51.1% of the variance in isokinetic muscle strength, with jump height having the strongest impact (beta = 0.49, P < .001) and explaining 31.5% of the variance. Conclusion: Initial analysis showed a strong association between isokinetic strength and jump height. 
The study population encompassed various backgrounds, skill levels, and activity profiles, which might have affected the outcome. Even after controlling for age and sex, isokinetic strength was still moderately associated with jump height. Therefore, the jump technique and type of sport should be considered in future research. PMID:29147670
A Versatile Omnibus Test for Detecting Mean and Variance Heterogeneity
Bailey, Matthew; Kauwe, John S. K.; Maxwell, Taylor J.
2014-01-01
Recent research has revealed loci that display variance heterogeneity through various means such as biological disruption, linkage disequilibrium (LD), gene-by-gene (GxG), or gene-by-environment (GxE) interaction. We propose a versatile likelihood ratio test that allows joint testing for mean and variance heterogeneity (LRTMV) or either effect alone (LRTM or LRTV) in the presence of covariates. Using extensive simulations for our method and others, we found that all parametric tests were sensitive to non-normality regardless of any trait transformations. Coupling our test with the parametric bootstrap solves this issue. Using simulations and empirical data from a known mean-only functional variant, we demonstrate how LD can produce variance-heterogeneity loci (vQTL) in a predictable fashion based on differential allele frequencies, high D’ and relatively low r² values. We propose that a joint test for mean and variance heterogeneity is more powerful than a variance-only test for detecting vQTL. This takes advantage of loci that also have mean effects without sacrificing much power to detect variance-only effects. We discuss using vQTL as an approach to detect gene-by-gene interactions and also how vQTL are related to relationship loci (rQTL), how both can create prior hypotheses for each other, and how they reveal the relationships between traits and possibly between components of a composite trait. PMID:24482837
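A minimal two-group sketch of a joint mean-variance likelihood-ratio statistic under normality. The published LRTMV additionally supports covariates and is coupled with a parametric bootstrap; this illustration omits both and simply refers the statistic to a chi-square distribution with 2 degrees of freedom:

```python
import math

def mle_var(xs, mu=None):
    """Maximum-likelihood variance (divisor n) about mu or the sample mean."""
    if mu is None:
        mu = sum(xs) / len(xs)
    return sum((x - mu) ** 2 for x in xs) / len(xs)

def lrt_mean_variance(x, y):
    """2*(logL_full - logL_null) for H0: equal means and equal variances,
    assuming normal data in each group; compare to chi-square with 2 df."""
    pooled = list(x) + list(y)
    s0 = mle_var(pooled)              # null: one common mean and variance
    sx, sy = mle_var(x), mle_var(y)   # alternative: separate means/variances
    return (len(pooled) * math.log(s0)
            - len(x) * math.log(sx) - len(y) * math.log(sy))

# Identical groups show no heterogeneity: the statistic is 0.
assert abs(lrt_mean_variance([1.0, 2.0, 3.0], [1.0, 2.0, 3.0])) < 1e-12
```

Groups differing in mean or spread push the statistic above the 5% critical value of 5.99, which is exactly the sensitivity to non-normality the abstract warns about when the bootstrap is skipped.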
Methods to estimate the between‐study variance and its uncertainty in meta‐analysis†
Jackson, Dan; Viechtbauer, Wolfgang; Bender, Ralf; Bowden, Jack; Knapp, Guido; Kuss, Oliver; Higgins, Julian PT; Langan, Dean; Salanti, Georgia
2015-01-01
Meta‐analyses are typically used to estimate the overall mean of an outcome of interest. However, inference about between‐study variability, which is typically modelled using a between‐study variance parameter, is usually an additional aim. The DerSimonian and Laird method, currently widely used by default to estimate the between‐study variance, has long been challenged. Our aim is to identify known methods for estimation of the between‐study variance and its corresponding uncertainty, and to summarise the simulation and empirical evidence that compares them. We identified 16 estimators for the between‐study variance, seven methods to calculate confidence intervals, and several comparative studies. Simulation studies suggest that, for both dichotomous and continuous data, the estimator proposed by Paule and Mandel and, for continuous data, the restricted maximum likelihood estimator are better alternatives for estimating the between‐study variance. Based on the scenarios and results presented in the published studies, we recommend the Q‐profile method and the alternative approach based on a ‘generalised Cochran between‐study variance statistic’ to compute corresponding confidence intervals around the resulting estimates. Our recommendations are based on a qualitative evaluation of the existing literature and expert consensus. Evidence‐based recommendations require an extensive simulation study where all methods would be compared under the same scenarios. © 2015 The Authors. Research Synthesis Methods published by John Wiley & Sons Ltd. PMID:26332144
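Two of the estimators under discussion can be sketched in a few lines for effect estimates y_i with within-study variances v_i; the data below are illustrative, not from the review. DerSimonian-Laird is closed-form, while Paule-Mandel solves an estimating equation, here by bisection:

```python
def dersimonian_laird(y, v):
    """Closed-form moment estimator of the between-study variance tau^2."""
    w = [1.0 / vi for vi in v]
    mu = sum(wi * yi for wi, yi in zip(w, y)) / sum(w)
    q = sum(wi * (yi - mu) ** 2 for wi, yi in zip(w, y))
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    return max(0.0, (q - (len(y) - 1)) / c)

def paule_mandel(y, v, hi=100.0, iters=200):
    """Solve sum_i w_i(tau^2)*(y_i - mu)^2 = k - 1 by bisection,
    where w_i = 1/(v_i + tau^2) and mu is the weighted mean."""
    def excess(tau2):
        w = [1.0 / (vi + tau2) for vi in v]
        mu = sum(wi * yi for wi, yi in zip(w, y)) / sum(w)
        return sum(wi * (yi - mu) ** 2 for wi, yi in zip(w, y)) - (len(y) - 1)
    if excess(0.0) <= 0.0:      # Q <= k-1: estimate truncated at zero
        return 0.0
    lo = 0.0
    for _ in range(iters):
        mid = (lo + hi) / 2.0
        if excess(mid) > 0.0:   # statistic decreases in tau^2
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

# With equal within-study variances the two estimators coincide.
y, v = [0.1, 0.3, 0.5, 0.7], [0.01, 0.01, 0.01, 0.01]
assert abs(dersimonian_laird(y, v) - paule_mandel(y, v)) < 1e-6
```

REML and the Q-profile interval need iterative likelihood or chi-square profiling and are not reproduced here.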
Observed spatiotemporal variability of boundary-layer turbulence over flat, heterogeneous terrain
NASA Astrophysics Data System (ADS)
Maurer, V.; Kalthoff, N.; Wieser, A.; Kohler, M.; Mauder, M.; Gantner, L.
2016-02-01
In the spring of 2013, extensive measurements with multiple Doppler lidar systems were performed. The instruments were arranged in a triangle with edge lengths of about 3 km in a moderately flat, agriculturally used terrain in northwestern Germany. For 6 mostly cloud-free convective days, vertical velocity variance profiles were calculated. Weighted-averaged surface fluxes proved to be more appropriate than data from individual sites for scaling the variance profiles; but even then, the scatter of profiles was mostly larger than the statistical error. The scatter could not be explained by mean wind speed or stability, whereas time periods with significantly increased variance contained broader thermals. Periods with an elevated maximum of the variance profiles could also be related to broad thermals. Moreover, statistically significant spatial differences of variance were found. They were not influenced by the existing surface heterogeneity. Instead, thermals were preserved between two sites when the travel time was shorter than the large-eddy turnover time. At the same time, no thermals passed for more than 2 h at a third site that was located perpendicular to the mean wind direction in relation to the first two sites. Organized structures of turbulence with subsidence prevailing in the surroundings of thermals can thus partly explain significant spatial variance differences existing for several hours. Therefore, the representativeness of individual variance profiles derived from measurements at a single site cannot be assumed.
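Scaling variance profiles by surface fluxes, as described above, is conventionally done with the Deardorff convective velocity scale w* = (g z_i <w'θ'> / θ)^(1/3); a sketch with illustrative midday values, not measurements from the campaign:

```python
def convective_velocity_scale(g, zi, heat_flux, theta):
    """Deardorff w* from boundary-layer depth zi [m], kinematic surface
    heat flux <w'theta'> [K m/s], and mean potential temperature theta [K]."""
    return (g * zi * heat_flux / theta) ** (1.0 / 3.0)

# Illustrative values: zi = 1200 m, <w'theta'> = 0.15 K m/s, theta = 300 K.
w_star = convective_velocity_scale(9.81, 1200.0, 0.15, 300.0)
scaled_variance = 0.8 / w_star ** 2   # hypothetical variance of 0.8 m^2/s^2
assert 1.0 < w_star < 3.0             # typical convective values, order 2 m/s
```

Dividing each measured vertical-velocity variance profile by w*² is what makes profiles from days with different surface forcing comparable.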
Centralization of symptoms and lumbar range of motion in patients with low back pain.
Bybee, Ronald F; Olsen, Denise L; Cantu-Boncser, Gloria; Allen, Heather Condie; Byars, Allyn
2009-05-01
This quasi-experimental repeated measures study examined the relationship between centralization of symptoms and lumbar flexion and extension range of motion (ROM) in patients with low back pain. Rapid and lasting changes in lumbar ROM have been noted with centralization of symptoms. However, no study has objectively measured the changes in lumbar ROM occurring with centralization. Forty-two adult subjects (mean age, 45.68 years; SD=15.76 years) with low back pain and associated lower extremity symptoms were followed by McKenzie trained physical therapists. Subjects' lumbar ROM was measured at the beginning and end of each patient visit by using double inclinometers, and pain location was documented. Subjects were grouped as 1) centralized, 2) centralizing, or 3) noncentralized for comparisons of symptom and ROM changes. Data were analyzed by using multivariate analysis of variance and one-way analysis of variance. Significance was set at 0.05. A significant difference was found between initial and final mean extension ROM in the centralized and centralizing groups (p=0.003). No significant difference was found in the noncentralized group (p>0.05). Subjects (n=23) who demonstrated a change in pain location during the initial visit also showed a significant (p<0.001) change in extension ROM, whereas patients with no change in pain location (n=19) did not (p=0.848). Lumbar extension ROM increased as centralization occurred.
Asundi, Krishna; Johnson, Peter W; Dennerlein, Jack T
2012-01-01
To determine the number of direct measurements needed to obtain a representative estimate of typing force and wrist kinematics, continuous measures of keyboard reaction force and wrist joint angle were collected at the workstation of 22 office workers while they completed their own work over three days, six hours per day. Typing force and wrist kinematics during keyboard, mouse and idle activities were calculated for each hour of measurement along with variance in measurements between subjects and between day and hour within subjects. Variance in measurements between subjects was significantly greater than variance in measurements between days and hours within subjects. Therefore, we concluded a single, one-hour period of continuous measures is sufficient to identify differences in typing force and wrist kinematics between subjects. Within subjects, day and hour of measurement had a significant effect on some measures and thus should be accounted for when comparing measures within a subject. The dose response relationship between exposure to computer related biomechanical risk factors and musculoskeletal disorders is poorly understood due to the difficulty and cost of direct measures. This study demonstrates a single hour of direct continuous measures is sufficient to identify differences in wrist kinematics and typing force between individuals.
Kappa statistic for clustered matched-pair data.
Yang, Zhao; Zhou, Ming
2014-07-10
The kappa statistic is widely used to assess the agreement between two procedures in independent matched-pair data. For matched-pair data collected in clusters, on the basis of the delta method and sampling techniques, we propose a nonparametric variance estimator for the kappa statistic that requires no assumptions about the within-cluster correlation structure or the underlying distribution. The results of an extensive Monte Carlo simulation study demonstrate that the proposed kappa statistic provides consistent estimation and the proposed variance estimator behaves reasonably well for at least a moderately large number of clusters (e.g., K ≥50). Compared with the variance estimator ignoring dependence within a cluster, the proposed variance estimator performs better in maintaining the nominal coverage probability when the intra-cluster correlation is fair (ρ ≥0.3), with more pronounced improvement when ρ is further increased. To illustrate the practical application of the proposed estimator, we analyze two real data examples of clustered matched-pair data. Copyright © 2014 John Wiley & Sons, Ltd.
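For reference, the ordinary (non-clustered) kappa from a 2x2 table of paired ratings can be computed as below; the paper's actual contribution, the cluster-robust variance estimator, is not reproduced here:

```python
def cohens_kappa(a, b, c, d):
    """Kappa for a 2x2 table of paired ratings:
    a = both yes, b = yes/no, c = no/yes, d = both no."""
    n = a + b + c + d
    p_obs = (a + d) / n                                   # observed agreement
    p_exp = ((a + b) * (a + c) + (c + d) * (b + d)) / n ** 2  # chance agreement
    return (p_obs - p_exp) / (1.0 - p_exp)

# Example: 70% raw agreement, 50% expected by chance -> kappa = 0.4.
assert abs(cohens_kappa(20, 5, 10, 15) - 0.4) < 1e-12
```

With clustered pairs the point estimate is formed the same way, but its variance must account for within-cluster dependence, which is what the proposed estimator supplies.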
An Analysis of Variance Approach for the Estimation of Response Time Distributions in Tests
ERIC Educational Resources Information Center
Attali, Yigal
2010-01-01
Generalizability theory and analysis of variance methods are employed, together with the concept of objective time pressure, to estimate response time distributions and the degree of time pressure in timed tests. By estimating response time variance components due to person, item, and their interaction, and fixed effects due to item types and…
Cavalié, Olivier; Vernotte, François
2016-04-01
The Allan variance was introduced 50 years ago for analyzing the stability of frequency standards. In addition to its metrological interest, it may also be considered an estimator of the large trends of the power spectral density (PSD) of frequency deviation. For instance, the Allan variance is able to discriminate different types of noise characterized by different power laws in the PSD. The Allan variance has also been used in fields other than time and frequency metrology: for more than 20 years, it has been used in accelerometry, geophysics, geodesy, astrophysics, and even finance. However, it seems that up to now it has been applied exclusively to time series analysis. We propose here to use the Allan variance on spatial data. Interferometric synthetic aperture radar (InSAR) is used in geophysics to image ground displacements in space [over the synthetic aperture radar (SAR) image spatial coverage] and in time, thanks to the regular SAR image acquisitions by dedicated satellites. The main limitation of the technique is the atmospheric disturbances that affect the radar signal while traveling from the sensor to the ground and back. In this paper, we propose to use the Allan variance for analyzing spatial data from InSAR measurements. The Allan variance was computed in XY mode as well as in radial mode for detecting different types of behavior at different spatial scales, in the same way as the different types of noise versus the integration time in the classical time and frequency application. We found that the radial Allan variance is the more appropriate way to obtain an estimator insensitive to the spatial axis, and we applied it to SAR data acquired over eastern Turkey for the period 2003-2011. The spatial Allan variance allowed us to characterize noise features classically found in InSAR, such as phase decorrelation, which produces white noise, and atmospheric delays, which behave like a random-walk signal. 
We finally applied the spatial Allan variance to an InSAR time series to detect when the geophysical signal, here the ground motion, emerges from the noise.
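A minimal sketch of the non-overlapped Allan variance at averaging factor m; for the spatial application described above, the sample index runs along a spatial axis instead of time:

```python
def allan_variance(data, m):
    """Non-overlapped Allan variance at averaging factor m: half the mean
    squared difference of consecutive m-sample averages."""
    k = len(data) // m
    means = [sum(data[i * m:(i + 1) * m]) / m for i in range(k)]
    diffs = [(means[i + 1] - means[i]) ** 2 for i in range(k - 1)]
    return 0.5 * sum(diffs) / len(diffs)

# A pure linear drift y_i = i gives consecutive block means spaced m apart,
# so the Allan variance equals m^2 / 2.
assert allan_variance(list(range(8)), 2) == 2.0
```

Plotting this quantity against m on log-log axes reveals the power-law slopes that discriminate white noise from random-walk behavior, the diagnostic used in the paper.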
Akiyama, Tsuyoshi; Tsuda, Hitoshi; Matsumoto, Satoko; Miyake, Yuko; Kawamura, Yoshiya; Noda, Toshie; Akiskal, Kareen K; Akiskal, Hagop S
2005-03-01
In Japan, Kraepelin's descriptions on four "fundamental states" of manic depressive illness, the concepts of schizoid temperament by Kretschmer and obsessional and melancholic type temperament by Shimoda and Tellenbach have been widely accepted. This research investigates the construct validity of these temperaments through factor analysis. TEMPS-A measured depressive, cyclothymic, hyperthymic and irritable temperaments and MPT rigidity, esoteric and isolation subscales measured, respectively, melancholic type and schizoid temperaments. Factor analysis was implemented with TEMPS-A alone and TEMPS-A and MPT combined data. With TEMPS-A alone analysis, Factor 1 included 1 depressive, 11 cyclothymic and 12 irritable temperament items with a factor loading higher than 0.4; Factor 2 included 1 depressive and 10 hyperthymic temperament items; and Factor 3 included 2 depressive temperament items only. With TEMPS-A and MPT combined data, Factor 1 included 3 depressive, 11 cyclothymic and 5 irritable temperament items with a factor loading higher than 0.4 (interpreted as the central cyclothymic tendency for all affective temperaments along Kretschmerian lines and accounting for 11.7% of the variance); Factor 2 included 6 hyperthymic temperament items (6.22% of variance); Factor 3 included 1 cyclothymic, 7 irritable and 1 schizoid temperament items (interpreted as the irritable temperament and accounting for 3.24% of the variance); Factor 4 included 1 depressive temperament and 5 melancholic type items (interpreted as the latter, accounting for 2.66% of the variance); Factor 5 included 5 depressive temperament items, along interpersonal sensitivity and passivity lines, and accounting for 2.31% of the variance; and Factor 6 included 4 schizoid temperament items accounting for 2.07% of the variance. We did not use the Kasahara scale, which some believe to better capture the Japanese melancholic type. Sample was 70% male. 
These analyses confirm the factor validity of depressive, hyperthymic, cyclothymic and irritable temperaments (TEMPS-A), as well as the melancholic type and the schizoid temperament (MPT). Traits of the depressive and melancholic types emerge as rather distinct. Indeed, our results permit the delineation of an interpersonally sensitive type that "gives in to others" as the core features of the depressive temperament; this is to be contrasted with the higher functioning, perfectionistic, work-oriented melancholic type. Mood dysregulation is represented by the largest number of traits in this population. Contrary to a widely held belief that the melancholic type with its devotion to work and to others is the signature temperament in Japan, cyclothymic traits account for the largest variance in this nonclinical population. Hyperthymic temperament, melancholic type and schizoid temperaments appear largely independent of mood dysregulation. In this Japanese population, TEMPS-A may identify temperament constructs more comprehensively when implemented with melancholic type and schizoid temperament question items added to it. The proposed new Japanese Temperament and Personality (JTP) Scale has self-rated items divided into six subscales.
Steele, James; Bruce-Low, Stewart; Smith, Dave; Jessop, David; Osborne, Neil
2014-12-01
Chronic low back pain is a multifactorial condition with many dysfunctions including gait variability. The lumbar spine and its musculature are involved during gait and in chronic low back pain the lumbar extensors are often deconditioned. It was therefore of interest to examine relationships between lumbar kinematic variability during gait, with pain, disability and isolated lumbar extension strength in participants with chronic low back pain. Twenty four participants with chronic low back pain were assessed for lumbar kinematics during gait, isolated lumbar extension strength, pain, and disability. Angular displacement and kinematic waveform pattern and offset variability were examined. Angular displacement and kinematic waveform pattern and offset variability differed across movement planes; displacement was highest and similar in frontal and transverse planes, and pattern variability and offset variability higher in the sagittal plane compared to frontal and transverse planes which were similar. Spearman's correlations showed significant correlations between transverse plane pattern variability and isolated lumbar extension strength (r=-.411) and disability (r=.401). However, pain was not correlated with pattern variability in any plane. The r(2) values suggested 80.5% to 86.3% of variance was accounted for by other variables. Considering the lumbar extensors role in gait, the relationship between both isolated lumbar extension strength and disability with transverse plane pattern variability suggests that gait variability may result in consequence of lumbar extensor deconditioning or disability accompanying chronic low back pain. However, further study should examine the temporality of these relationships and other variables might account for the unexplained variance. Copyright © 2014 Elsevier Ltd. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Clarke, Peter; Varghese, Philip; Goldstein, David
We extend a variance reduced discrete velocity method developed at UT Austin [1, 2] to gas mixtures with large mass ratios and flows with trace species. The mixture is stored as a collection of independent velocity distribution functions, each with a unique grid in velocity space. Different collision types (A-A, A-B, B-B, etc.) are treated independently, and the variance reduction scheme is formulated with different equilibrium functions for each separate collision type. The individual treatment of species enables increased focus on species important to the physics of the flow, even if the important species are present in trace amounts. The method is verified through comparisons to Direct Simulation Monte Carlo computations, and the computational workload per time step is investigated for the variance reduced method.
Co-evolution of payoff strategy and interaction strategy in prisoner's dilemma game
NASA Astrophysics Data System (ADS)
Zhang, Kangjie; Cheng, Hongyan
2016-11-01
Co-evolutionary dynamical models, providing a realistic paradigm for investigating complex systems, have been extensively studied. In this paper, the co-evolution of payoff strategy and interaction strategy is studied. Starting with an initial Gaussian distribution of the payoff strategy r with mean u and variance q, we focus on the final distribution of the payoff strategy. We find that the final distribution of the payoff strategy may display different structures depending on the parameters. In the ranges u < - 1 and u > 3, the distribution displays a single-peak structure which is symmetric about r = u. The distribution manifests itself as a double-peak structure in the range - 1 < u < 3, although a fake three-peak structure shows up in the range 1 < u < 2. Explanations of the formation of the different types of payoff strategy distributions are presented.
Federal Register 2010, 2011, 2012, 2013, 2014
2011-04-06
... final actions to both issue a site-specific treatment variance to U.S. Ecology Nevada (USEN) in Beatty... me? This action applies only to U.S. Ecology Nevada located in Beatty, Nevada and to Chemical Waste... This Variance A. U.S. Ecology Nevada Petition B. What Type and How Much Waste Will be Subject to This...
ERIC Educational Resources Information Center
Lix, Lisa M.; And Others
1996-01-01
Meta-analytic techniques were used to summarize the statistical robustness literature on Type I error properties of alternatives to the one-way analysis of variance "F" test. The James (1951) and Welch (1951) tests performed best under violations of the variance homogeneity assumption, although their use is not always appropriate. (SLD)
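The two-sample special case of the Welch (1951) approach, the Welch t statistic with Welch-Satterthwaite degrees of freedom, illustrates why these tests tolerate violations of variance homogeneity: both the standard error and the df are built from the separate group variances rather than a pooled one. A sketch, not code from the review:

```python
import math

def welch_t(x, y):
    """Welch's two-sample t statistic and Welch-Satterthwaite df,
    which do not assume equal group variances."""
    nx, ny = len(x), len(y)
    mx, my = sum(x) / nx, sum(y) / ny
    vx = sum((xi - mx) ** 2 for xi in x) / (nx - 1)   # unbiased variances
    vy = sum((yi - my) ** 2 for yi in y) / (ny - 1)
    se2 = vx / nx + vy / ny
    t = (mx - my) / math.sqrt(se2)
    df = se2 ** 2 / ((vx / nx) ** 2 / (nx - 1) + (vy / ny) ** 2 / (ny - 1))
    return t, df

t, df = welch_t([1.0, 2.0, 3.0, 4.0], [2.0, 4.0, 6.0, 8.0])
assert t < 0 and 3.0 <= df <= 6.0   # df lies between min(n)-1 and n1+n2-2
```

The James (1951) test generalizes the same idea to k groups with a corrected critical value.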
Integrating mean and variance heterogeneities to identify differentially expressed genes.
Ouyang, Weiwei; An, Qiang; Zhao, Jinying; Qin, Huaizhen
2016-12-06
In functional genomics studies, tests on mean heterogeneity have been widely employed to identify differentially expressed genes with distinct mean expression levels under different experimental conditions. Variance heterogeneity (i.e., the difference between condition-specific variances) of gene expression levels is simply neglected or calibrated for as an impediment. The mean heterogeneity in the expression level of a gene reflects one aspect of its distribution alteration, and variance heterogeneity induced by condition change may reflect another aspect. Change in condition may alter both the mean and some higher-order characteristics of the distributions of expression levels of susceptible genes. In this report, we put forth the concept of mean-variance differentially expressed (MVDE) genes, whose expression means and variances are sensitive to the change in experimental condition. We mathematically proved the null independence of existent mean heterogeneity tests and variance heterogeneity tests. Based on this independence, we proposed an integrative mean-variance test (IMVT) to combine gene-wise mean heterogeneity and variance heterogeneity induced by condition change. The IMVT outperformed its competitors under comprehensive simulations of normality and Laplace settings. For moderate samples, the IMVT well controlled type I error rates, as did the existent mean heterogeneity tests (the Welch t test (WT) and the moderated Welch t test (MWT)) and the procedure of separate tests on mean and variance heterogeneities (SMVT), but the likelihood ratio test (LRT) severely inflated type I error rates. In the presence of variance heterogeneity, the IMVT appeared noticeably more powerful than all the valid mean heterogeneity tests. Application to the gene profiles of peripheral circulating B cells raised solid evidence of informative variance heterogeneity. 
After adjusting for background data structure, the IMVT replicated previous discoveries and identified novel experiment-wide significant MVDE genes. Our results indicate tremendous potential gain of integrating informative variance heterogeneity after adjusting for global confounders and background data structure. The proposed informative integration test better summarizes the impacts of condition change on expression distributions of susceptible genes than do the existent competitors. Therefore, particular attention should be paid to explicitly exploit the variance heterogeneity induced by condition change in functional genomics analysis.
Gray, Brian R.; Gitzen, Robert A.; Millspaugh, Joshua J.; Cooper, Andrew B.; Licht, Daniel S.
2012-01-01
Variance components may play multiple roles (cf. Cox and Solomon 2003). First, the magnitudes and relative magnitudes of the variances of random factors may have important scientific and management value in their own right. For example, variation in levels of invasive vegetation among and within lakes may suggest causal agents that operate at both spatial scales – a finding that may be important for scientific and management reasons. Second, variance components may also be of interest when they affect the precision of means and covariate coefficients. For example, the effect of water depth on the probability of aquatic plant presence may vary by lake in a study of multiple lakes. This variation will affect the precision of the average depth-presence association. Third, variance component estimates may be used when designing studies, including monitoring programs. For example, to estimate the numbers of years and of samples per year required to meet long-term monitoring goals, investigators need estimates of within-year and among-year variances. Other chapters in this volume (Chapters 7, 8, and 10) as well as extensive external literature outline a framework for applying estimates of variance components to the design of monitoring efforts. For example, a series of papers with an ecological monitoring theme examined the relative importance of multiple sources of variation, including variation in means among sites, years, and site-years, for the purposes of temporal trend detection and estimation (Larsen et al. 2004, and references therein).
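For a balanced one-way layout (e.g. several samples per site), the within- and among-group variance components can be estimated by the classical method of moments; a sketch under that balanced-design assumption, not code from the chapter:

```python
def variance_components(groups):
    """Method-of-moments estimates for a balanced one-way random-effects
    layout (e.g. sites as groups): returns (within, among) variances."""
    a, n = len(groups), len(groups[0])
    grand = sum(sum(g) for g in groups) / (a * n)
    means = [sum(g) / n for g in groups]
    msb = n * sum((m - grand) ** 2 for m in means) / (a - 1)   # between-group MS
    msw = sum((x - m) ** 2
              for g, m in zip(groups, means) for x in g) / (a * (n - 1))
    return msw, max(0.0, (msb - msw) / n)   # truncate negative estimates at 0

within, among = variance_components([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0], [7.0, 8.0, 9.0]])
assert within == 1.0 and abs(among - 26 / 3) < 1e-12
```

Unbalanced designs and crossed site-year effects call for mixed-model (e.g. REML) estimation as discussed in the chapters cited above.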
ERIC Educational Resources Information Center
Wang, Yan; Rodríguez de Gil, Patricia; Chen, Yi-Hsin; Kromrey, Jeffrey D.; Kim, Eun Sook; Pham, Thanh; Nguyen, Diep; Romano, Jeanine L.
2017-01-01
Various tests to check the homogeneity of variance assumption have been proposed in the literature, yet there is no consensus as to their robustness when the assumption of normality does not hold. This simulation study evaluated the performance of 14 tests for the homogeneity of variance assumption in one-way ANOVA models in terms of Type I error…
Itoh, Gento; Ishii, Hideyuki; Kato, Haruyasu; Nagano, Yasuharu; Hayashi, Hiroteru; Funasaki, Hiroki
2018-01-01
Some studies have listed motions that may cause Osgood-Schlatter disease, but none have quantitatively assessed the load such motions place on the tibial tubercle. This study aimed to quantitatively identify the load on the tibial tubercle through a biomechanical approach using various motions that may cause Osgood-Schlatter disease, and to compare the load between motions. Eight healthy male subjects were included. They performed 4 types of kicks with a soccer ball, 2 types of runs, 2 types of squats, 2 types of jump landings, 2 types of stops, 1 type of turn, and 1 cutting motion. The angular impulse was calculated for knee extension moments ≥1.0 Nm/kg, ≥1.5 Nm/kg, ≥2.0 Nm/kg, and ≥2.5 Nm/kg. After analysis of variance, post-hoc tests were used for pairwise comparisons between all groups. The motion with the highest mean angular impulse of knee extension moment ≥1.0 Nm/kg was the single-leg landing after a jump, followed by the cutting motion. At ≥1.5 Nm/kg, ≥2.0 Nm/kg, and ≥2.5 Nm/kg, the cutting motion was highest, followed by the jump with a single-leg landing. These two motions impose a large load and are associated with a higher risk of developing Osgood-Schlatter disease. The mean angular impulse of the 2 types of runs was small at all thresholds. High- and low-risk motions can be assessed in further detail if future studies quantify the load and number of repetitions that may cause Osgood-Schlatter disease while considering age and developmental stage. Scheduled training regimens that balance load on the tibial tubercle by following a training day of many load-intensive motions with low-load motions may prevent athletes from developing Osgood-Schlatter disease and increase their participation in sports.
ERIC Educational Resources Information Center
Shemesh, Michal; Lazarowitz, Reuven
This study investigated: (1) whether boys and girls master formal reasoning tasks to the same degree at the same age; (2) if the variance of boys' and girls' performance in formal tasks could be predicted by the same cognitive learning abilities; and (3) what are the main and interactional effects of age, sex, and school type on the variance of…
Hydrogeological characterization of flow system in a karstic aquifer, Seymareh dam, Iran
NASA Astrophysics Data System (ADS)
Behrouj Peely, Ahmad; Mohammadi, Zargham; Raeisi, Ezzatollah; Solgi, Khashayar; Mosavi, Mohammad J.; Kamali, Majid
2018-07-01
In order to determine the characteristics of the flow system in a karstic aquifer, an extensive hydrogeological study, including a dye tracing test, was conducted. The aquifer is situated on the left abutment of Seymareh Dam, in the Ravandi Anticline, and discharges through more than 50 springs on the southern flank. The flow system in the aquifer is mainly controlled by the reservoir of Seymareh Dam: time variations of the spring discharge and of the water table in the observation wells were highly correlated with the reservoir water level. Based on the dye tracing test, the average groundwater velocity ranges from 0.2 to more than 14 m/h. The probable flow paths were differentiated into two groups: those in the northern flank and those in the southern flank of the Ravandi Anticline. The type of groundwater flow along each proposed flow path is classified as diffuse or conduit, considering the groundwater velocity and the shape of the breakthrough curves. An index based on the relationship between groundwater velocity and hydraulic gradient is proposed for differentiating diffuse and conduit flow systems. The dominant geometry of the flow routes (e.g., conduit diameter and fracture aperture) is estimated for the groundwater flow paths toward the springs. Based on velocity variations and the coefficient of variation of the water table and spring discharge in map view, a major karst conduit has probably developed in the aquifer. This research emphasizes the value of an extensive hydrogeological study for characterizing the flow system in a karst aquifer.
Analysis of signal-dependent sensor noise on JPEG 2000-compressed Sentinel-2 multi-spectral images
NASA Astrophysics Data System (ADS)
Uss, M.; Vozel, B.; Lukin, V.; Chehdi, K.
2017-10-01
The processing chain of Sentinel-2 MultiSpectral Instrument (MSI) data involves filtering and compression stages that modify MSI sensor noise. As a result, the noise in Sentinel-2 Level-1C data distributed to users is processed noise. We demonstrate that the processed noise variance model is bivariate: noise variance depends on image intensity (due to the signal-dependency of photon-counting detectors) and on signal-to-noise ratio (SNR; due to filtering/compression). To provide information on processed noise parameters, which is missing from Sentinel-2 metadata, we propose using a blind noise parameter estimation approach. Existing methods are restricted to univariate noise models; therefore, we propose an extension of the existing vcNI+fBm blind noise parameter estimation method to the multivariate noise model, mvcNI+fBm, and apply it to each band of Sentinel-2A data. The results clearly demonstrate that noise variance is affected by filtering/compression for SNR below about 15. Processed noise variance is reduced by a factor of 2 - 5 in homogeneous areas compared with the noise variance at high SNR values. Estimates of the noise variance model parameters are provided for each Sentinel-2A band. The Sentinel-2A MSI Level-1C noise models obtained in this paper could be useful for end users and researchers working in a variety of remote sensing applications.
Testing Components of a Self-Management Theory in Adolescents With Type 1 Diabetes Mellitus.
Verchota, Gwen; Sawin, Kathleen J
The role of self-management in adolescents with type 1 diabetes mellitus is not well understood. The purpose of the research was to examine the relationship of key individual and family self-management theory context and process variables to proximal (self-management behaviors) and distal (hemoglobin A1c and diabetes-specific health-related quality of life) outcomes in adolescents with type 1 diabetes. A correlational, cross-sectional study was conducted to identify factors contributing to outcomes in adolescents with type 1 diabetes and examine potential relationships between context, process, and outcome variables delineated in individual and family self-management theory. Participants were 103 adolescent-parent dyads (adolescents ages 12-17) with type 1 diabetes from a Midwestern outpatient diabetes clinic. The dyads completed a self-report survey including instruments intended to measure context, process, and outcome variables from individual and family self-management theory. Using hierarchical multiple regression, context (depressive symptoms) and process (communication) variables explained 37% of the variance in self-management behaviors. Regimen complexity - the only significant predictor - explained 11% of the variance in hemoglobin A1c; neither process variables nor self-management behaviors were significant. For the diabetes-specific health-related quality of life outcome, context (regimen complexity and depressive symptoms) explained 26% of the variance at Step 1; an additional 9% of the variance was explained when process (self-efficacy and communication) variables were added at Step 2; and 52% of the variance was explained when self-management behaviors were added at Step 3. In the final model, three variables were significant predictors: depressive symptoms, self-efficacy, and self-management behaviors.
The individual and family self-management theory can serve as a cogent theory for understanding key concepts, processes, and outcomes essential to self-management in adolescents and families dealing with Type 1 diabetes mellitus.
2015-01-01
Elastic and inelastic close-coupling (CC) calculations have been used to extract information about the corrugation amplitude and the surface vibrational atomic displacement by fitting to several experimental diffraction patterns. To model the three-dimensional interaction between the He atom and the Bi(111) surface under investigation, a corrugated Morse potential has been assumed. Two different types of calculations are used to obtain theoretical diffraction intensities at three surface temperatures along the two symmetry directions. The first consists of solving the elastic CC (eCC) equations and attenuating the corresponding diffraction intensities by a global Debye–Waller (DW) factor. The second, within a unitary theory, derives from solving the inelastic CC (iCC) equations, where no DW factor needs to be included. While both methods arrive at similar predictions for the peak-to-peak corrugation value, the variance of the value obtained by the iCC method is much smaller. Furthermore, the more extensive calculation is better suited to modeling the temperature-induced signal asymmetries and renders the inclusion of a second Debye temperature for the diffraction peaks unnecessary. PMID:26257838
Assessing ergonomic risks of software: Development of the SEAT.
Peres, S Camille; Mehta, Ranjana K; Ritchey, Paul
2017-03-01
Software utilizing interaction designs that require extensive dragging or clicking of icons may increase users' risks for upper extremity cumulative trauma disorders. The purpose of this research is to develop a Self-report Ergonomic Assessment Tool (SEAT) for assessing the risks of software interaction designs and facilitate mitigation of those risks. A 28-item self-report measure was developed by combining and modifying items from existing industrial ergonomic tools. Data were collected from 166 participants after they completed four different tasks that varied by method of input (touch or keyboard and mouse) and type of task (selecting or typing). Principal component analysis found distinct factors associated with stress (i.e., demands) and strain (i.e., response). Repeated measures analyses of variance showed that participants could discriminate the different strain induced by the input methods and tasks. However, participants' ability to discriminate between the stressors associated with that strain was mixed. Further validation of the SEAT is necessary but these results indicate that the SEAT may be a viable method of assessing ergonomics risks presented by software design. Copyright © 2016 Elsevier Ltd. All rights reserved.
High-Dimensional Heteroscedastic Regression with an Application to eQTL Data Analysis
Daye, Z. John; Chen, Jinbo; Li, Hongzhe
2011-01-01
Summary: We consider the problem of high-dimensional regression under non-constant error variances. Despite being a common phenomenon in biological applications, heteroscedasticity has, so far, been largely ignored in high-dimensional analysis of genomic data sets. We propose a new methodology that allows non-constant error variances for high-dimensional estimation and model selection. Our method incorporates heteroscedasticity by simultaneously modeling both the mean and variance components via a novel doubly regularized approach. Extensive Monte Carlo simulations indicate that our proposed procedure can result in better estimation and variable selection than existing methods when heteroscedasticity arises from the presence of predictors explaining error variances and outliers. Further, we demonstrate the presence of heteroscedasticity in and apply our method to an expression quantitative trait loci (eQTL) study of 112 yeast segregants. The new procedure can automatically account for heteroscedasticity in identifying the eQTLs that are associated with gene expression variations and lead to smaller prediction errors. These results demonstrate the importance of considering heteroscedasticity in eQTL data analysis. PMID:22547833
Analysis of Wind Tunnel Polar Replicates Using the Modern Design of Experiments
NASA Technical Reports Server (NTRS)
Deloach, Richard; Micol, John R.
2010-01-01
The role of variance in a Modern Design of Experiments analysis of wind tunnel data is reviewed, with distinctions made between explained and unexplained variance. The partitioning of unexplained variance into systematic and random components is illustrated, with examples of the elusive systematic component provided for various types of real-world tests. The importance of detecting and defending against systematic unexplained variance in wind tunnel testing is discussed, and the random and systematic components of unexplained variance are examined for a representative wind tunnel data set acquired in a test in which a missile is used as a test article. The adverse impact of correlated (non-independent) experimental errors is described, and recommendations are offered for replication strategies that facilitate the quantification of random and systematic unexplained variance.
Job Tasks as Determinants of Thoracic Aerosol Exposure in the Cement Production Industry.
Notø, Hilde; Nordby, Karl-Christian; Skare, Øivind; Eduard, Wijnand
2017-12-15
The aims of this study were to identify important determinants and investigate the variance components of thoracic aerosol exposure for workers in the production departments of European cement plants. Personal thoracic aerosol measurements and questionnaire information (Notø et al., 2015) were the basis for this study. Determinants categorized in three levels were selected to describe the exposure relationships separately for the job types production, cleaning, maintenance, foreman, administration, laboratory, and other jobs by linear mixed models. The influence of plant and job determinants on variance components was explored separately and also combined in full models (plant&job) against models with no determinants (null). The best mixed models (best) describing the exposure for each job type were selected by the lowest Akaike information criterion (AIC; Akaike, 1974) after running all possible combinations of the determinants. Tasks that significantly increased thoracic aerosol exposure above the mean level for production workers were: packing and shipping, raw meal, cement and filter cleaning, and de-clogging of the cyclones. For maintenance workers, time spent welding and dismantling before repair work increased the exposure, while time spent on electrical maintenance and oiling decreased the exposure. Administration work decreased the exposure among foremen. A subjective tidiness factor scored by the research team explained up to a 3-fold variation (for cleaners) in thoracic aerosol levels. Within-worker (WW) variance contained a major part of the total variance (35-58%) for all job types. Job determinants had little influence on the WW variance (0-4% reduction), some influence on the between-plant (BP) variance (5-39% reduction for production, maintenance, and other jobs, respectively, but a 79% increase for foremen), and a substantial influence on the between-worker within-plant variance (30-96% reduction for production, foremen, and other workers).
Plant determinants had little influence on the WW variance (0-2% reduction), some influence on the between-worker variance (0-1% reduction to 8% increase), and considerable influence on the BP variance (36-58% reduction) compared with the null models. Some job tasks contribute to low levels of thoracic aerosol exposure and others to higher exposure among cement plant workers; thus, job tasks may predict exposure in this industry. Dust control measures in the packing and shipping departments and in the areas of raw meal and cement handling could contribute substantially to reducing exposure levels. Rotation between lower- and higher-exposed tasks may help equalize exposure levels between high- and low-exposed workers as a temporary solution before more permanent dust reduction measures are implemented. A tidy plant may reduce the overall exposure for almost all workers, regardless of job type. © The Author 2017. Published by Oxford University Press on behalf of the British Occupational Hygiene Society.
Contribution of hamstring fatigue to quadriceps inhibition following lumbar extension exercise.
Hart, Joseph M; Kerrigan, D Casey; Fritz, Julie M; Saliba, Ethan N; Gansneder, Bruce; Ingersoll, Christopher D
2006-01-01
The purpose of this study was to determine the contribution of hamstrings and quadriceps fatigue to quadriceps inhibition (QI) following lumbar extension exercise. Regression models were calculated with the outcome variable quadriceps inhibition and the predictor variables change in EMG median frequency in the quadriceps and hamstrings during fatiguing lumbar exercise. Twenty-five subjects with a history of low back pain were matched by gender, height, and mass to 25 healthy controls. Subjects performed two sets of fatiguing isometric lumbar extension exercise until mild (set 1) and moderate (set 2) fatigue of the lumbar paraspinals. Quadriceps and hamstring EMG median frequency were measured while subjects performed the fatiguing exercise. A burst of electrical stimuli was superimposed while subjects performed an isometric maximal quadriceps contraction to estimate quadriceps inhibition after each exercise set. Results indicate that the change in hamstring median frequency explained variance in quadriceps inhibition following the exercise sets in the history-of-low-back-pain group only, whereas the change in quadriceps median frequency explained variance in quadriceps inhibition following the first exercise set in the control group only. In conclusion, persons with a history of low back pain whose quadriceps become inhibited following lumbar paraspinal exercise may be adapting to the fatigue by using their hamstring muscles more than controls. Key points: (1) a neuromuscular relationship between the lumbar paraspinals and quadriceps during lumbar extension exercise may be influenced by hamstring muscle fatigue; (2) QI following lumbar extension exercise in persons with a history of LBP may involve a significant contribution from the hamstring muscle group; (3) greater hamstring muscle contribution may be a necessary adaptation in the history-of-LBP group due to weaker and more fatigable lumbar extensors.
Swarm based mean-variance mapping optimization (MVMOS) for solving economic dispatch
NASA Astrophysics Data System (ADS)
Khoa, T. H.; Vasant, P. M.; Singh, M. S. Balbir; Dieu, V. N.
2014-10-01
The economic dispatch (ED) problem is an essential optimization task in power generation systems. It is defined as the process of allocating the real power output of generating units to meet the required load demand so that their total operating cost is minimized while satisfying all physical and operational constraints. This paper introduces a novel optimization technique named swarm-based mean-variance mapping optimization (MVMOS), an extension of the original single-particle mean-variance mapping optimization (MVMO). Its features make it a potentially attractive algorithm for solving optimization problems. The proposed method is implemented for three test power systems with 3, 13, and 20 thermal generating units with quadratic cost functions, and the obtained results are compared with many other methods available in the literature. Test results indicate that the proposed method can be efficiently applied to solving economic dispatch.
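For context, the ED objective that MVMOS searches over has a simple closed form when only the power-balance constraint is active: at the optimum, all units operate at equal incremental cost. The sketch below illustrates that special case (quadratic costs, no generator limits), not the MVMOS algorithm itself; `economic_dispatch` is a hypothetical helper.

```python
import numpy as np

def economic_dispatch(a, b, c, demand):
    """Closed-form dispatch for quadratic costs C_i(P) = a_i + b_i*P + c_i*P^2
    with only the power-balance constraint sum_i P_i = demand.
    At the optimum every unit runs at the same incremental cost lambda."""
    a, b, c = (np.asarray(x, float) for x in (a, b, c))
    # dC_i/dP = b_i + 2*c_i*P_i = lam  ->  P_i = (lam - b_i) / (2*c_i)
    # enforce sum_i P_i = demand and solve for lam
    lam = (demand + np.sum(b / (2 * c))) / np.sum(1 / (2 * c))
    P = (lam - b) / (2 * c)
    total_cost = np.sum(a + b * P + c * P ** 2)
    return P, total_cost
```

Metaheuristics such as MVMOS are needed precisely when the realistic constraints omitted here (generator limits, prohibited zones, valve-point effects) break this closed form.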
Budde, M.E.; Tappan, G.; Rowland, James; Lewis, J.; Tieszen, L.L.
2004-01-01
The researchers calculated seasonal integrated normalized difference vegetation index (NDVI) for each of 7 years using a time-series of 1-km data from the Advanced Very High Resolution Radiometer (AVHRR) (1992-93, 1995) and SPOT Vegetation (1998-2001) sensors. We used a local variance technique to identify each pixel as normal or either positively or negatively anomalous when compared to its surroundings. We then summarized the number of years that a given pixel was identified as an anomaly. The resulting anomaly maps were analysed using Landsat TM imagery and extensive ground knowledge to assess the results. This technique identified anomalies that can be linked to numerous anthropogenic impacts including agricultural and urban expansion, maintenance of protected areas and increased fallow. Local variance analysis is a reliable method for assessing vegetation degradation resulting from human pressures or increased land productivity from natural resource management practices. © 2004 Published by Elsevier Ltd.
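A minimal sketch of a local variance technique of this kind, assuming a simple moving-window z-score (the paper's exact procedure is not reproduced here): each pixel is flagged as a positive or negative anomaly relative to its neighbourhood mean and standard deviation.

```python
import numpy as np
from scipy import ndimage

def local_anomalies(img, size=5, k=2.0):
    """Flag pixels more than k local standard deviations from the mean of
    their size x size neighbourhood: +1 positive, -1 negative, 0 normal."""
    img = np.asarray(img, float)
    local_mean = ndimage.uniform_filter(img, size=size)
    local_sq = ndimage.uniform_filter(img ** 2, size=size)
    local_std = np.sqrt(np.maximum(local_sq - local_mean ** 2, 0.0))
    # z-score against the neighbourhood; flat regions (std = 0) score 0
    z = (img - local_mean) / np.where(local_std > 0, local_std, np.inf)
    return np.sign(z) * (np.abs(z) > k)
```

Applied to a yearly integrated-NDVI raster, repeating the flags across years and summing them would give an anomaly-frequency map analogous to the one described in the abstract.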
A Note on the Heterogeneous Choice Model
ERIC Educational Resources Information Center
Rohwer, Goetz
2015-01-01
The heterogeneous choice model (HCM) has been proposed as an extension of the standard logit and probit models, which allows taking into account different error variances of explanatory variables. In this note, I show that in an important special case, this model is just another way to specify an interaction effect.
Portfolio optimization using median-variance approach
NASA Astrophysics Data System (ADS)
Wan Mohd, Wan Rosanisah; Mohamad, Daud; Mohamed, Zulkifli
2013-04-01
Optimization models have been applied in many decision-making problems, particularly in portfolio selection. Since the introduction of Markowitz's theory of portfolio selection, various approaches based on mathematical programming have been introduced, such as mean-variance, mean-absolute deviation, mean-variance-skewness and conditional value-at-risk (CVaR), mainly to maximize return and minimize risk. However, most of these approaches assume that the data are normally distributed, which is not generally true. As an alternative, in this paper we employ the median-variance approach to improve portfolio optimization. This approach caters for both normal and non-normal distributions of data. With this representation, we analyze and compare the rate of return and risk between mean-variance and median-variance based portfolios consisting of 30 stocks from Bursa Malaysia. The results in this study show that the median-variance approach produces a lower risk for each level of return than the mean-variance approach.
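A toy illustration of why a median-based criterion can matter for non-normal returns, assuming a simple "central return per unit variance" score (an illustrative criterion, not the paper's exact formulation): a single outlier inflates the mean-based score but leaves the median-based one stable.

```python
import numpy as np

# Daily returns with one extreme outlier (non-normal, as the paper notes)
returns = np.array([0.01, 0.02, -0.01, 0.015, 0.9])

mean_est = returns.mean()        # pulled far upward by the outlier
median_est = np.median(returns)  # robust estimate of the typical return

# Score an asset by central return per unit of variance; the median-based
# score does not reward the asset for a one-off extreme observation.
var = returns.var(ddof=1)
mean_score = mean_est / var
median_score = median_est / var
```

Swapping the mean for the median in the reward term, while keeping a variance-type risk term, is the essence of the median-variance idea sketched here.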
VAIDYANATHAN, UMA; MALONE, STEPHEN M.; MILLER, MICHAEL B.; McGUE, MATT; IACONO, WILLIAM G.
2014-01-01
Acoustic startle responses have been studied extensively in relation to individual differences and psychopathology. We examined three indices of the blink response in a picture-viewing paradigm—overall startle magnitude across all picture types, and aversive and pleasant modulation scores—in 3,323 twins and parents. Biometric models and molecular genetic analyses showed that half the variance in overall startle was due to additive genetic effects. No single nucleotide polymorphism was genome-wide significant, but GRIK3 did produce a significant effect when examined as part of a candidate gene set. In contrast, emotion modulation scores showed little evidence of heritability in either biometric or molecular genetic analyses. However, in a genome-wide scan, PARP14 did produce a significant effect for aversive modulation. We conclude that, although overall startle retains potential as an endophenotype, emotion-modulated startle does not. PMID:25387708
Jackknife variance of the partial area under the empirical receiver operating characteristic curve.
Bandos, Andriy I; Guo, Ben; Gur, David
2017-04-01
Receiver operating characteristic analysis provides an important methodology for assessing traditional (e.g., imaging technologies and clinical practices) and new (e.g., genomic studies, biomarker development) diagnostic problems. The area under the clinically/practically relevant part of the receiver operating characteristic curve (partial area or partial area under the receiver operating characteristic curve) is an important performance index summarizing diagnostic accuracy at multiple operating points (decision thresholds) that are relevant to actual clinical practice. A robust estimate of the partial area under the receiver operating characteristic curve is provided by the area under the corresponding part of the empirical receiver operating characteristic curve. We derive a closed-form expression for the jackknife variance of the partial area under the empirical receiver operating characteristic curve. Using the derived analytical expression, we investigate the differences between the jackknife variance and a conventional variance estimator. The relative properties in finite samples are demonstrated in a simulation study. The developed formula enables an easy way to estimate the variance of the empirical partial area under the receiver operating characteristic curve, thereby substantially reducing the computation burden, and provides important insight into the structure of the variability. We demonstrate that when compared with the conventional approach, the jackknife variance has substantially smaller bias, and leads to a more appropriate type I error rate of the Wald-type test. The use of the jackknife variance is illustrated in the analysis of a data set from a diagnostic imaging study.
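A hedged sketch of the two ingredients, the empirical partial AUC and a pooled leave-one-subject-out jackknife variance. Note the paper derives a closed-form expression precisely to avoid recomputation; the brute-force resampling below is only for illustration, and both helper names are hypothetical.

```python
import numpy as np

def partial_auc(cases, controls, fpr_max=0.3):
    """Area under the empirical ROC curve restricted to FPR <= fpr_max."""
    cases = np.asarray(cases, float)
    controls = np.asarray(controls, float)
    thr = np.unique(np.concatenate([cases, controls]))[::-1]  # high -> low
    fpr = np.array([np.mean(controls >= t) for t in thr])
    tpr = np.array([np.mean(cases >= t) for t in thr])
    fpr = np.concatenate([[0.0], fpr])  # start the curve at the origin
    tpr = np.concatenate([[0.0], tpr])
    # close the region exactly at fpr_max by linear interpolation
    t_end = np.interp(fpr_max, fpr, tpr)
    keep = fpr <= fpr_max
    f = np.concatenate([fpr[keep], [fpr_max]])
    t = np.concatenate([tpr[keep], [t_end]])
    return float(np.sum(np.diff(f) * (t[1:] + t[:-1]) / 2))  # trapezoid rule

def jackknife_variance(cases, controls, stat):
    """Pooled leave-one-subject-out jackknife variance of stat(cases, controls)."""
    cases = np.asarray(cases, float)
    controls = np.asarray(controls, float)
    reps = [stat(np.delete(cases, i), controls) for i in range(len(cases))]
    reps += [stat(cases, np.delete(controls, j)) for j in range(len(controls))]
    reps = np.array(reps)
    n = len(reps)
    return (n - 1) / n * np.sum((reps - reps.mean()) ** 2)
```

For perfectly separated scores every leave-one-out replicate equals the full-sample value, so the jackknife variance is zero, a useful sanity check.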
American grandparents providing extensive child care to their grandchildren: prevalence and profile.
Fuller-Thomson, E; Minkler, M
2001-04-01
This study sought to determine the prevalence and profile of grandparents providing extensive care for a grandchild (grandparents who provide 30+ hours per week or 90+ nights per year of child care, yet are not the primary caregiver of the grandchild). Secondary analysis of the 3,260 grandparent respondents in the 1992-94 National Survey of Families and Households (NSFH). Extensively caregiving grandparents were compared with custodial grandparents (those with primary responsibility for raising a grandchild for 6+ months), noncaregivers, occasional caregivers (<10 hours per week), and intermediate caregivers using chi-square tests, one-way analysis of variance tests, and logistic regression analyses. Close to 7% of all grandparents provided extensive caregiving, as did 14.9% of those who had provided any grandchild care in the last month. Extensive caregivers most closely resembled custodial caregivers and had least in common with those grandparents who never provided child care. Areas for future research, policy, and practice are highlighted, including the potential impact of welfare reform legislation on extensively caregiving grandparents.
A powerful test for Balaam's design.
Mori, Joji; Kano, Yutaka
2015-01-01
The crossover trial design (AB/BA design) is often used to compare the effects of two treatments in medical science because it performs within-subject comparisons, which increase the precision of a treatment effect (i.e., a between-treatment difference). However, the AB/BA design cannot be applied in the presence of carryover effects and/or treatments-by-period interaction. In such cases, Balaam's design is a more suitable choice. Unlike the AB/BA design, Balaam's design inflates the variance of an estimate of the treatment effect, thereby reducing the statistical power of tests. This is a serious drawback of the design. Although the variance of parameter estimators in Balaam's design has been extensively studied, the estimators of the treatment effect to improve the inference have received little attention. If the estimate of the treatment effect is obtained by solving the mixed model equations, the AA and BB sequences are excluded from the estimation process. In this study, we develop a new estimator of the treatment effect and a new test statistic using the estimator. The aim is to improve the statistical inference in Balaam's design. Simulation studies indicate that the type I error of the proposed test is well controlled, and that the test is more powerful and has more suitable characteristics than other existing tests when interactions are substantial. The proposed test is also applied to analyze a real dataset. Copyright © 2015 John Wiley & Sons, Ltd.
Donti, Olyvia; Bogdanis, Gregory C; Kritikou, Maria; Donti, Anastasia; Theodorakou, Kalliopi
2016-06-01
This study examined the association between physical fitness and a technical execution score in rhythmic gymnasts varying in the performance level. Forty-six young rhythmic gymnasts (age: 9.9 ±1.3 years) were divided into two groups (qualifiers, n=24 and non-qualifiers, n=22) based on the results of the National Championships. Gymnasts underwent a series of physical fitness tests and technical execution was evaluated in a routine without apparatus. There were significant differences between qualifiers and non-qualifiers in the technical execution score (p=0.01, d=1.0), shoulder flexion (p=0.01, d=0.8), straight leg raise (p=0.004, d=0.9), sideways leg extension (p=0.002, d=0.9) and body fat (p=.021, d=0.7), but no differences were found in muscular endurance and jumping performance. The technical execution score for the non-qualifiers was significantly correlated with shoulder extension (r=0.423, p<0.05), sideways leg extension (r=0.687, p<0.01), push ups (r=0.437, p<0.05) and body fat (r=0.642, p<0.01), while there was only one significant correlation with sideways leg extension (r=0.467, p<0.05) for the qualifiers. Multiple regression analysis revealed that sideways leg extension, body fat, and push ups accounted for a large part (62.9%) of the variance in the technical execution score for the non-qualifiers, while for the qualifiers, only 37.3% of the variance in the technical execution score was accounted for by sideways leg extension and spine flexibility. In conclusion, flexibility and body composition can effectively discriminate between qualifiers and non-qualifiers in youth rhythmic gymnastics. At the lower level of performance (non-qualifiers), physical fitness seems to have a greater effect on the technical execution score.
Biostatistics Series Module 10: Brief Overview of Multivariate Methods.
Hazra, Avijit; Gogtay, Nithya
2017-01-01
Multivariate analysis refers to statistical techniques that simultaneously look at three or more variables in relation to the subjects under investigation with the aim of identifying or clarifying the relationships between them. These techniques have been broadly classified as dependence techniques, which explore the relationship between one or more dependent variables and their independent predictors, and interdependence techniques, that make no such distinction but treat all variables equally in a search for underlying relationships. Multiple linear regression models a situation where a single numerical dependent variable is to be predicted from multiple numerical independent variables. Logistic regression is used when the outcome variable is dichotomous in nature. The log-linear technique models count type of data and can be used to analyze cross-tabulations where more than two variables are included. Analysis of covariance is an extension of analysis of variance (ANOVA), in which an additional independent variable of interest, the covariate, is brought into the analysis. It tries to examine whether a difference persists after "controlling" for the effect of the covariate that can impact the numerical dependent variable of interest. Multivariate analysis of variance (MANOVA) is a multivariate extension of ANOVA used when multiple numerical dependent variables have to be incorporated in the analysis. Interdependence techniques are more commonly applied to psychometrics, social sciences and market research. Exploratory factor analysis and principal component analysis are related techniques that seek to extract from a larger number of metric variables, a smaller number of composite factors or components, which are linearly related to the original variables. Cluster analysis aims to identify, in a large number of cases, relatively homogeneous groups called clusters, without prior information about the groups. 
The calculation intensive nature of multivariate analysis has so far precluded most researchers from using these techniques routinely. The situation is now changing with wider availability, and increasing sophistication of statistical software and researchers should no longer shy away from exploring the applications of multivariate methods to real-life data sets.
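Principal component analysis, mentioned above as an interdependence technique, can be illustrated with a minimal sketch (not from the article; the data below are synthetic and the variable names are our own): components are the eigenvectors of the sample covariance matrix, ordered by the share of total variance each explains.

```python
import numpy as np

# Illustrative sketch only: extracting principal components from a small
# synthetic multivariate dataset via the sample covariance matrix.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))
X[:, 1] += 2.0 * X[:, 0]            # induce correlation between two variables

Xc = X - X.mean(axis=0)             # center each variable
cov = np.cov(Xc, rowvar=False)      # 4x4 sample covariance matrix
eigvals, eigvecs = np.linalg.eigh(cov)
order = np.argsort(eigvals)[::-1]   # sort components by explained variance
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

explained = eigvals / eigvals.sum() # fraction of total variance per component
scores = Xc @ eigvecs               # project cases onto the components
```

Because two of the four variables were made strongly correlated, the first component absorbs well over half of the total variance, which is exactly the dimension-reduction behavior the abstract describes.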
Markowitz portfolio optimization model employing fuzzy measure
NASA Astrophysics Data System (ADS)
Ramli, Suhailywati; Jaaman, Saiful Hafizah
2017-04-01
Markowitz in 1952 introduced the mean-variance methodology for portfolio selection problems. His pioneering research has shaped the portfolio risk-return model and become one of the most important research fields in modern finance. This paper extends the classical Markowitz mean-variance portfolio selection model by applying fuzzy measures to determine risk and return. We use the original mean-variance model as a benchmark and compare it with fuzzy mean-variance models in which returns are modeled by specific types of fuzzy numbers. The fuzzy approach gives better performance than the classical mean-variance approach. Numerical examples employing Malaysian share market data are included to illustrate these models.
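As background for the classical (non-fuzzy) benchmark model, a minimal sketch of the global minimum-variance portfolio may help; this is not the paper's fuzzy model, and the returns below are synthetic, not Malaysian market data. With fully invested weights summing to one, the closed form is w = S⁻¹1 / (1ᵀS⁻¹1), where S is the return covariance matrix.

```python
import numpy as np

# Sketch of the classical benchmark only: the global minimum-variance
# portfolio w = S^{-1} 1 / (1' S^{-1} 1). Synthetic daily returns.
rng = np.random.default_rng(1)
returns = rng.normal(loc=0.001, scale=0.02, size=(250, 5))  # 250 days, 5 assets

S = np.cov(returns, rowvar=False)   # asset return covariance matrix
ones = np.ones(S.shape[0])
w = np.linalg.solve(S, ones)
w /= w.sum()                        # normalize: weights sum to one

port_var = w @ S @ w                # variance of the optimal portfolio
```

By construction this portfolio has variance no larger than any other fully invested portfolio, e.g. the equal-weight portfolio, which is the property a fuzzy extension would then try to improve upon under uncertainty in the inputs.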
Variation of gene expression in Bacillus subtilis samples of fermentation replicates.
Zhou, Ying; Yu, Wen-Bang; Ye, Bang-Ce
2011-06-01
The application of comprehensive gene expression profiling technologies to compare wild and mutated microorganism samples or to assess molecular differences between various treatments has been widely used. However, little is known about the normal variation of gene expression in microorganisms. In this study, an Agilent customized microarray representing 4,106 genes was used to quantify transcript levels across five replicate flasks to assess normal variation in Bacillus subtilis gene expression. CV analysis and analysis of variance were employed to investigate the normal variance of genes and the components of variance, respectively. The results showed that over 80% of the total variation was caused by biological variance. For the 12 replicates, 451 of 4,106 genes exhibited variance with CV values over 10%. The functional category enrichment analysis demonstrated that these variable genes were mainly involved in cell type differentiation, cell type localization, cell cycle and DNA processing, and spore or cyst coat. Using power analysis, the minimal biological replicate number for a B. subtilis microarray experiment was determined to be six. The results contribute to the definition of the baseline level of variability in B. subtilis gene expression and emphasize the importance of replicate microarray experiments.
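The CV screen described above can be sketched in a few lines; this is an illustration with synthetic expression values, not the study's data or pipeline. The coefficient of variation is the replicate standard deviation divided by the replicate mean, computed gene by gene, with a 10% cutoff as in the abstract.

```python
import numpy as np

# Illustrative only: per-gene coefficient of variation (CV = sd / mean)
# across replicate expression profiles, with a 10% variability cutoff.
rng = np.random.default_rng(2)
n_genes, n_reps = 1000, 12
means = rng.uniform(100, 10000, size=n_genes)        # synthetic base levels
expr = means[:, None] * (1 + rng.normal(scale=0.06, size=(n_genes, n_reps)))

cv = expr.std(axis=1, ddof=1) / expr.mean(axis=1)    # one CV per gene
variable_genes = np.flatnonzero(cv > 0.10)           # genes with CV above 10%
```

Because CV is scale-free, it allows low- and high-expressed genes to be screened with a single threshold, which is why it is a common first pass before variance-component analysis.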
Radiographic Outcomes of Volar Locked Plating for Distal Radius Fractures
Mignemi, Megan E.; Byram, Ian R.; Wolfe, Carmen C.; Fan, Kang-Hsien; Koehler, Elizabeth A.; Block, John J.; Jordanov, Martin I.; Watson, Jeffry T.; Weikert, Douglas R.; Lee, Donald H.
2013-01-01
Purpose To assess the ability of volar locked plating to achieve and maintain normal radiographic parameters for articular stepoff, volar tilt, radial inclination, ulnar variance, and radial height in distal radius fractures. Methods We performed a retrospective review of 185 distal radius fractures that underwent volar locked plating with a single plate design over a 5-year period. We reviewed radiographs and recorded measurements for volar tilt, radial inclination, ulnar variance, radial height, and articular stepoff. We used logistic regression to determine the association between return to radiographic standard norms and fracture type. Results At the first and final postoperative follow-up visits, we observed articular congruence less than 2 mm in 92% of fractures at both times. Normal volar tilt (11°) was restored in 46% at the first follow-up and 48% at the final one. Radial inclination (22°) was achieved in 44% at the first follow-up and 43% at the final one, and ulnar variance (01 ± 2 mm) was achieved in 53% at the first follow-up and 53% at the final one. In addition, radial height (14 ± 1mm) was restored in 14% at the first follow-up and 12% at the final one. More complex, intra-articular fractures (AO class B and C and Frykman types 3, 4, 7, and 8) were less likely to be restored to normal radiographic parameters. However, because of the small sample size for some fracture types, it was difficult to discover significant associations between fracture type and radiographic outcome. Conclusions Volar locked plating for distal radius fractures achieved articular stepoff less than 2 mm in most fractures but only restored and maintained normal radiographic measurements for volar tilt, radial inclination, and ulnar variance in 50% of fractures. The ability of volar locked plating to restore and maintain ulnar variance and volar tilt decreased with more complex intra-articular fracture types. PMID:23218558
Accounting for Dark Current Accumulated during Readout of Hubble's ACS/WFC Detectors
NASA Astrophysics Data System (ADS)
Ryon, Jenna E.; Grogin, Norman A.; Coe, Dan A.; ACS Team
2018-06-01
We investigate the properties of excess dark current accumulated during the 100-second full-frame readout of the Advanced Camera for Surveys (ACS) Wide Field Channel (WFC) detectors. This excess dark current, called "readout dark", gives rise to ambient background gradients and hot columns in each ACS/WFC image. While readout dark signal is removed from science images during the bias correction step in CALACS, the additional noise from the readout dark is currently not taken into account. We develop a method to estimate the readout dark noise properties in ACS/WFC observations. We update the error (ERR) extensions of superbias images to include the appropriate noise from the ambient readout dark gradient and stable hot columns. In recent data, this amounts to about 5 e-/pixel added variance in the rows farthest from the WFC serial registers, and about 7 to 30 e-/pixel added variance along the stable hot columns. We also flag unstable hot columns in the superbias data quality (DQ) extensions. The new reference file pipeline for ACS/WFC implements these updates to our superbias creation process.
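The error-extension update described above amounts to adding the readout-dark variance to the existing per-pixel variance in quadrature. The sketch below is a schematic with hypothetical pixel values and variances (the `err` and `var_dark` arrays are invented for illustration), not the CALACS implementation.

```python
import numpy as np

# Hypothetical numbers: fold additional readout-dark variance into an
# existing error (ERR) array in quadrature: err_new = sqrt(err^2 + var_dark).
err = np.full((10, 10), 3.0)        # existing per-pixel error, e-
var_dark = np.full((10, 10), 5.0)   # ambient readout-dark variance, e-
var_dark[:, 4] = 20.0               # a stable hot column contributes more

err_new = np.sqrt(err ** 2 + var_dark)
```

Quadrature addition is appropriate here because the readout-dark noise is statistically independent of the other noise terms already folded into the ERR extension.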
Discrimination of isotrigon textures using the Rényi entropy of Allan variances.
Gabarda, Salvador; Cristóbal, Gabriel
2008-09-01
We present a computational algorithm for isotrigon texture discrimination. The aim of this method consists in discriminating isotrigon textures against a binary random background. The extension of the method to the problem of multitexture discrimination is considered as well. The method relies on the fact that the information content of time or space-frequency representations of signals, including images, can be readily analyzed by means of generalized entropy measures. In such a scenario, the Rényi entropy appears as an effective tool, given that Rényi measures can be used to provide information about a local neighborhood within an image. Localization is essential for comparing images on a pixel-by-pixel basis. Discrimination is performed through a local Rényi entropy measurement applied on a spatially oriented 1-D pseudo-Wigner distribution (PWD) of the test image. The PWD is normalized so that it may be interpreted as a probability distribution. Prior to the calculation of the texture's PWD, a preprocessing filtering step replaces the original texture with its localized spatially oriented Allan variances. The anisotropic structure of the textures, as revealed by the Allan variances, turns out to be crucial later to attain a high discrimination by the extraction of Rényi entropy measures. The method has been empirically evaluated with a family of isotrigon textures embedded in a binary random background. The extension to the case of multiple isotrigon mosaics has also been considered. Discrimination results are compared with other existing methods.
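The entropy measure at the heart of the method can be sketched independently of the full PWD pipeline. For a distribution normalized to sum to one (as the abstract describes for the PWD), the Rényi entropy of order α is R_α = (1/(1−α)) ln Σᵢ pᵢ^α; the helper below and its α value are our own illustrative choices.

```python
import numpy as np

def renyi_entropy(p, alpha=3.0):
    """Renyi entropy (natural log) of a discrete distribution p, alpha != 1."""
    p = np.asarray(p, dtype=float)
    p = p / p.sum()                 # normalize, as done for the PWD
    return np.log(np.sum(p ** alpha)) / (1.0 - alpha)

# Uniform distributions maximize the entropy; peaked ones minimize it,
# which is what makes the measure useful for local texture discrimination.
flat = renyi_entropy(np.ones(8))            # equals ln 8 for any alpha
peaked = renyi_entropy([0.9] + [0.1 / 7] * 7)
```

A uniform local spectrum (noise-like background) thus scores high entropy while an oriented texture concentrates energy and scores low, separating the two classes pixel by pixel.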
Kim, Minjung; Lamont, Andrea E.; Jaki, Thomas; Feaster, Daniel; Howe, George; Van Horn, M. Lee
2015-01-01
Regression mixture models are a novel approach for modeling heterogeneous effects of predictors on an outcome. In the model building process residual variances are often disregarded and simplifying assumptions made without thorough examination of the consequences. This simulation study investigated the impact of an equality constraint on the residual variances across latent classes. We examine the consequence of constraining the residual variances on class enumeration (finding the true number of latent classes) and parameter estimates under a number of different simulation conditions meant to reflect the type of heterogeneity likely to exist in applied analyses. Results showed that bias in class enumeration increased as the difference in residual variances between the classes increased. Also, an inappropriate equality constraint on the residual variances greatly impacted estimated class sizes and showed the potential to greatly impact parameter estimates in each class. Results suggest that it is important to make assumptions about residual variances with care and to carefully report what assumptions were made. PMID:26139512
Fowler, Kevin; Whitlock, Michael C
2002-01-01
Fifty-two lines of Drosophila melanogaster founded by single-pair population bottlenecks were used to study the effects of inbreeding and environmental stress on phenotypic variance, genetic variance and survivorship. Cold temperature and high density cause reduced survivorship, but these stresses do not cause repeatable changes in the phenotypic variance of most wing morphological traits. Wing area, however, does show increased phenotypic variance under both types of environmental stress. This increase is no greater in inbred than in outbred lines, showing that inbreeding does not increase the developmental effects of stress. Conversely, environmental stress does not increase the extent of inbreeding depression. Genetic variance is not correlated with environmental stress, although the amount of genetic variation varies significantly among environments and lines vary significantly in their response to environmental change. Drastic changes in the environment can cause changes in phenotypic and genetic variance, but not in a way reliably predicted by the notion of 'stress'. PMID:11934358
Janssen, I; Steele, J R; Munro, B J; Brown, N A T
2015-06-01
Patellar tendinopathy is the most common knee injury incurred in volleyball, with its prevalence in elite athletes more than three times that of their sub-elite counterparts. The purpose of this study was to determine whether patellar tendinopathy risk factors differed between elite and sub-elite male volleyball players. Nine elite and nine sub-elite male volleyball players performed a lateral stop-jump block movement. Maximum vertical jump, training history, muscle extensibility and strength, three-dimensional landing kinematics (250 Hz), lower limb neuromuscular activation patterns (1500 Hz), and patellar tendon loading were collected during each trial. Multivariate analyses of variance (P < 0.05) assessed between-group differences in risk factors and patellar tendon loading. Significant interaction effects were further evaluated using post-hoc univariate analysis of variance tests. Landing kinematics, neuromuscular activation patterns, patellar tendon loading, and most of the previously identified risk factors did not differ between the elite and sub-elite players. However, elite players participated in a higher training volume and had less quadriceps extensibility than sub-elite players. Therefore, high training volume is likely the primary contributor to the injury discrepancy between elite and sub-elite volleyball players. Interventions designed to reduce landing frequency and improve quadriceps extensibility are recommended to reduce patellar tendinopathy prevalence in volleyball players. © 2014 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
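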
Applications of non-parametric statistics and analysis of variance on sample variances
NASA Technical Reports Server (NTRS)
Myers, R. H.
1981-01-01
Nonparametric methods that are available for NASA-type applications are discussed. An attempt is made to survey what can be used, to recommend when each method is applicable, and to compare the methods, where possible, with the usual normal-theory procedures available for the Gaussian analog. It is important to point out the hypotheses being tested, the assumptions being made, and the limitations of the nonparametric procedures. The appropriateness of performing analysis of variance on sample variances is also discussed and studied. This procedure is followed in several NASA simulation projects. On the surface this would appear to be a reasonably sound procedure. However, the difficulties involved center around the normality problem and the basic homogeneous-variance assumption that is made in usual analysis of variance problems. These difficulties are discussed and guidelines are given for using the methods.
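The normal-theory test and its nonparametric analog can be run side by side in a few lines; the sketch below uses synthetic groups and scipy's implementations (Kruskal-Wallis as the rank-based counterpart of one-way ANOVA), not the NASA simulation data the abstract refers to.

```python
import numpy as np
from scipy import stats

# Sketch: classical one-way ANOVA F test next to its nonparametric analog
# (Kruskal-Wallis), which drops the normality assumption. Synthetic groups
# with a genuine mean shift.
rng = np.random.default_rng(3)
g1 = rng.normal(0.0, 1.0, size=30)
g2 = rng.normal(0.5, 1.0, size=30)
g3 = rng.normal(1.0, 1.0, size=30)

f_stat, p_anova = stats.f_oneway(g1, g2, g3)   # normal-theory test
h_stat, p_kw = stats.kruskal(g1, g2, g3)       # rank-based test
```

When the normality or homogeneous-variance assumptions are doubtful, agreement between the two p-values is reassuring, while disagreement is a warning sign of the kind this survey discusses.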
NASA Technical Reports Server (NTRS)
Jacobson, R. A.
1975-01-01
Difficulties arise in guiding a solar electric propulsion spacecraft due to nongravitational accelerations caused by random fluctuations in the magnitude and direction of the thrust vector. These difficulties may be handled by using a low thrust guidance law based on the linear-quadratic-Gaussian problem of stochastic control theory with a minimum terminal miss performance criterion. Explicit constraints are imposed on the variances of the control parameters, and an algorithm based on the Hilbert space extension of a parameter optimization method is presented for calculation of gains in the guidance law. The terminal navigation of a 1980 flyby mission to the comet Encke is used as an example.
Bartz, Daniel; Hatrick, Kerr; Hesse, Christian W.; Müller, Klaus-Robert; Lemm, Steven
2013-01-01
Robust and reliable covariance estimates play a decisive role in financial and many other applications. An important class of estimators is based on factor models. Here, we show by extensive Monte Carlo simulations that covariance matrices derived from the statistical Factor Analysis model exhibit a systematic error, which is similar to the well-known systematic error of the spectrum of the sample covariance matrix. Moreover, we introduce the Directional Variance Adjustment (DVA) algorithm, which diminishes the systematic error. In a thorough empirical study for the US, European, and Hong Kong stock market we show that our proposed method leads to improved portfolio allocation. PMID:23844016
R package MVR for Joint Adaptive Mean-Variance Regularization and Variance Stabilization
Dazard, Jean-Eudes; Xu, Hua; Rao, J. Sunil
2015-01-01
We present an implementation in the R language for statistical computing of our recent non-parametric joint adaptive mean-variance regularization and variance stabilization procedure. The method is specifically suited for handling difficult problems posed by high-dimensional multivariate datasets (p ≫ n paradigm), such as in ‘omics’-type data, among which are that the variance is often a function of the mean, variable-specific estimators of variances are not reliable, and test statistics have low power due to a lack of degrees of freedom. The implementation offers a complete set of features including: (i) normalization and/or variance stabilization function, (ii) computation of mean-variance-regularized t and F statistics, (iii) generation of diverse diagnostic plots, (iv) synthetic and real ‘omics’ test datasets, (v) computationally efficient implementation, using C interfacing, and an option for parallel computing, (vi) manual and documentation on how to set up a cluster. To make each feature as user-friendly as possible, only one subroutine per functionality is to be handled by the end-user. It is available as an R package, called MVR (‘Mean-Variance Regularization’), downloadable from the CRAN. PMID:26819572
Joint Adaptive Mean-Variance Regularization and Variance Stabilization of High Dimensional Data
Dazard, Jean-Eudes; Rao, J. Sunil
2012-01-01
The paper addresses a common problem in the analysis of high-dimensional high-throughput “omics” data, which is parameter estimation across multiple variables in a set of data where the number of variables is much larger than the sample size. Among the problems posed by this type of data are that variable-specific estimators of variances are not reliable and variable-wise test statistics have low power, both due to a lack of degrees of freedom. In addition, it has been observed in this type of data that the variance increases as a function of the mean. We introduce a non-parametric adaptive regularization procedure that is innovative in that: (i) it employs a novel “similarity statistic”-based clustering technique to generate local-pooled or regularized shrinkage estimators of population parameters, (ii) the regularization is done jointly on population moments, benefiting from C. Stein's result on inadmissibility, which implies that the usual sample variance estimator is improved by a shrinkage estimator using information contained in the sample mean. From these joint regularized shrinkage estimators, we derived regularized t-like statistics and show in simulation studies that they offer more statistical power in hypothesis testing than their standard sample counterparts, or regular common value-shrinkage estimators, or when the information contained in the sample mean is simply ignored. Finally, we show that these estimators feature interesting properties of variance stabilization and normalization that can be used for preprocessing high-dimensional multivariate data. The method is available as an R package, called ‘MVR’ (‘Mean-Variance Regularization’), downloadable from the CRAN website. PMID:22711950
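The intuition for why shrinkage helps in the p ≫ n setting can be shown with a much simpler estimator than MVR: plain common-value shrinkage of each variable's sample variance toward the pooled mean (one of the baselines the paper compares against). The sketch below is illustrative only, with a hand-picked shrinkage weight and synthetic data.

```python
import numpy as np

# Not the MVR procedure itself: simple common-value shrinkage of noisy
# per-variable sample variances toward their pooled mean, illustrating why
# borrowing strength across variables helps when n is small relative to p.
rng = np.random.default_rng(4)
p, n = 500, 5                       # many variables, very few samples
X = rng.normal(scale=2.0, size=(p, n))

s2 = X.var(axis=1, ddof=1)          # noisy per-variable variance estimates
pooled = s2.mean()                  # stable pooled estimate
lam = 0.5                           # shrinkage weight (chosen by hand here)
s2_shrunk = lam * pooled + (1 - lam) * s2

true_var = 4.0                      # known in this simulation only
mse_raw = np.mean((s2 - true_var) ** 2)
mse_shrunk = np.mean((s2_shrunk - true_var) ** 2)
```

Even this crude estimator cuts the mean squared error substantially; MVR's contribution is to do the pooling adaptively via clustering and jointly over means and variances rather than with a single global weight.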
Excoffier, L; Smouse, P E; Quattro, J M
1992-06-01
We present here a framework for the study of molecular variation within a single species. Information on DNA haplotype divergence is incorporated into an analysis of variance format, derived from a matrix of squared-distances among all pairs of haplotypes. This analysis of molecular variance (AMOVA) produces estimates of variance components and F-statistic analogs, designated here as phi-statistics, reflecting the correlation of haplotypic diversity at different levels of hierarchical subdivision. The method is flexible enough to accommodate several alternative input matrices, corresponding to different types of molecular data, as well as different types of evolutionary assumptions, without modifying the basic structure of the analysis. The significance of the variance components and phi-statistics is tested using a permutational approach, eliminating the normality assumption that is conventional for analysis of variance but inappropriate for molecular data. Application of AMOVA to human mitochondrial DNA haplotype data shows that population subdivisions are better resolved when some measure of molecular differences among haplotypes is introduced into the analysis. At the intraspecific level, however, the additional information provided by knowing the exact phylogenetic relations among haplotypes or by a nonlinear translation of restriction-site change into nucleotide diversity does not significantly modify the inferred population genetic structure. Monte Carlo studies show that site sampling does not fundamentally affect the significance of the molecular variance components. The AMOVA treatment is easily extended in several different directions and it constitutes a coherent and flexible framework for the statistical analysis of molecular data.
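The permutational significance test at the core of AMOVA can be sketched in simplified form; the code below is a schematic of the permutation idea only (comparing among-group and within-group mean squared distances under label shuffling), not the full variance-component and phi-statistic machinery, and the "haplotype" data are synthetic.

```python
import numpy as np

# Schematic version of AMOVA's permutation test (not the full variance-
# component decomposition): compare among-group vs within-group mean
# squared distances, and assess significance by permuting population labels.
rng = np.random.default_rng(5)
n = 40
labels = np.repeat([0, 1], n // 2)
# synthetic "haplotype" coordinates with a real between-group difference
coords = rng.normal(size=(n, 10)) + labels[:, None] * 1.5
d2 = ((coords[:, None, :] - coords[None, :, :]) ** 2).sum(-1)  # squared distances

def stat(lab):
    same = lab[:, None] == lab[None, :]
    off_diag = ~np.eye(n, dtype=bool)
    return d2[~same].mean() - d2[same & off_diag].mean()

observed = stat(labels)
perms = np.array([stat(rng.permutation(labels)) for _ in range(499)])
p_value = (1 + np.sum(perms >= observed)) / (1 + len(perms))
```

Because the null distribution is built by shuffling labels on the observed distance matrix, no normality assumption is needed, which is exactly the point the abstract makes about molecular data.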
Multi-Finger Interaction and Synergies in Finger Flexion and Extension Force Production
Park, Jaebum; Xu, Dayuan
2017-01-01
The aim of this study was to discover finger interaction indices during single-finger ramp tasks and multi-finger coordination during a steady state force production in two directions, flexion, and extension. Furthermore, the indices of anticipatory adjustment of elemental variables (i.e., finger forces) prior to a quick pulse force production were quantified. It is currently unknown whether the organization and anticipatory modulation of stability properties are affected by force directions and strengths in multi-finger actions. We expected to observe a smaller finger independency and larger indices of multi-finger coordination during extension than during flexion due to both neural and peripheral differences between the finger flexion and extension actions. We also examined the indices of the anticipatory adjustment between different force direction conditions. The anticipatory adjustment could be a neural process, which may be affected by the properties of the muscles and by the direction of the motions. The maximal voluntary contraction (MVC) force was larger for flexion than for extension, which confirmed the fact that the strength of finger flexor muscles (e.g., flexor digitorum profundus) was larger than that of finger extensors (e.g., extensor digitorum). The analysis within the uncontrolled manifold (UCM) hypothesis was used to quantify the motor synergy of elemental variables by decomposing two sources of variances across repetitive trials, identifying the variance within the uncontrolled manifold (VUCM) and the variance orthogonal to the UCM (VORT). The presence of motor synergy and its strength were quantified by the relative amount of VUCM and VORT. The strength of motor synergies at the steady state was larger in the extension condition, which suggests that the stability property (i.e., multi-finger synergies) may be a direction-specific quantity.
The results confirmed the existence of anticipatory adjustments; however, the absence of a difference between the directional conditions suggests that feed-forward synergy adjustment (changes in the stability property) may be largely independent of the magnitude and direction (e.g., flexion vs. extension forces) of the task-specific performance variables. PMID:28674489
Sampling design optimisation for rainfall prediction using a non-stationary geostatistical model
NASA Astrophysics Data System (ADS)
Wadoux, Alexandre M. J.-C.; Brus, Dick J.; Rico-Ramirez, Miguel A.; Heuvelink, Gerard B. M.
2017-09-01
The accuracy of spatial predictions of rainfall by merging rain-gauge and radar data is partly determined by the sampling design of the rain-gauge network. Optimising the locations of the rain-gauges may increase the accuracy of the predictions. Existing spatial sampling design optimisation methods are based on minimisation of the spatially averaged prediction error variance under the assumption of intrinsic stationarity. Over the past years, substantial progress has been made to deal with non-stationary spatial processes in kriging. Various well-documented geostatistical models relax the assumption of stationarity in the mean, while recent studies show the importance of considering non-stationarity in the variance for environmental processes occurring in complex landscapes. We optimised the sampling locations of rain-gauges using an extension of the Kriging with External Drift (KED) model for prediction of rainfall fields. The model incorporates both non-stationarity in the mean and in the variance, which are modelled as functions of external covariates such as radar imagery, distance to radar station and radar beam blockage. Spatial predictions are made repeatedly over time, each time recalibrating the model. The space-time averaged KED variance was minimised by Spatial Simulated Annealing (SSA). The methodology was tested using a case study predicting daily rainfall in the north of England for a one-year period. Results show that (i) the proposed non-stationary variance model outperforms the stationary variance model, and (ii) a small but significant decrease of the rainfall prediction error variance is obtained with the optimised rain-gauge network. In particular, it pays off to place rain-gauges at locations where the radar imagery is inaccurate, while keeping the distribution over the study area sufficiently uniform.
Weighted analysis of paired microarray experiments.
Kristiansson, Erik; Sjögren, Anders; Rudemo, Mats; Nerman, Olle
2005-01-01
In microarray experiments quality often varies, for example between samples and between arrays. The need for quality control is therefore strong. A statistical model and a corresponding analysis method are suggested for experiments with pairing, including designs with individuals observed before and after treatment and many experiments with two-colour spotted arrays. The model is of mixed type with some parameters estimated by an empirical Bayes method. Differences in quality are modelled by individual variances and correlations between repetitions. The method is applied to three real and several simulated datasets. Two of the real datasets are of Affymetrix type with patients profiled before and after treatment, and the third dataset is of two-colour spotted cDNA type. In all cases, the patients or arrays had different estimated variances, leading to distinctly unequal weights in the analysis. We also suggest plots which illustrate the variances and correlations that affect the weights computed by our analysis method. For simulated data the improvement relative to previously published methods without weighting is shown to be substantial.
Unbiased Estimates of Variance Components with Bootstrap Procedures
ERIC Educational Resources Information Center
Brennan, Robert L.
2007-01-01
This article provides general procedures for obtaining unbiased estimates of variance components for any random-model balanced design under any bootstrap sampling plan, with the focus on designs of the type typically used in generalizability theory. The results reported here are particularly helpful when the bootstrap is used to estimate standard…
Analysis of Variance with Summary Statistics in Microsoft® Excel®
ERIC Educational Resources Information Center
Larson, David A.; Hsu, Ko-Cheng
2010-01-01
Students regularly are asked to solve Single Factor Analysis of Variance problems given only the sample summary statistics (number of observations per category, category means, and corresponding category standard deviations). Most undergraduate students today use Excel for data analysis of this type. However, Excel, like all other statistical…
Control Variate Estimators of Survivor Growth from Point Samples
Francis A. Roesch; Paul C. van Deusen
1993-01-01
Two estimators of the control variate type for survivor growth from remeasured point samples are proposed and compared with more familiar estimators. The large reductions in variance, observed in many cases for estimators constructed with control variates, are also realized in this application. A simulation study yielded consistent reductions in variance which were often...
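The generic control-variate construction behind such estimators can be sketched as follows; this is a textbook illustration with synthetic data, not the authors' survivor-growth estimators. Given a quantity Y correlated with an auxiliary X whose mean is known, the estimator Y_cv = Y − c(X − μ_X) is unbiased for E[Y] and has smaller variance when corr(X, Y) is high.

```python
import numpy as np

# Generic control-variate sketch (not the forestry estimators themselves):
# estimate E[Y] using a correlated auxiliary X with known mean mu_X = 10.
rng = np.random.default_rng(6)
x = rng.normal(10.0, 2.0, size=10_000)            # auxiliary, known mean 10
y = 3.0 * x + rng.normal(0.0, 1.0, size=10_000)   # quantity of interest

c = np.cov(x, y)[0, 1] / x.var(ddof=1)  # estimated optimal coefficient
y_cv = y - c * (x - 10.0)               # control-variate-adjusted values

var_plain = y.var(ddof=1) / len(y)      # variance of the naive sample mean
var_cv = y_cv.var(ddof=1) / len(y)      # variance of the CV estimator
```

Because X explains almost all of Y's variability in this construction, the adjusted estimator's variance collapses toward the residual noise, mirroring the "large reductions in variance" the abstract reports.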
Balance Performance Is Task Specific in Older Adults.
Dunsky, Ayelet; Zeev, Aviva; Netz, Yael
2017-01-01
Balance ability among the elderly is a key component in the activities of daily living and is divided into two types: static and dynamic. For clinicians who wish to assess the risk of falling among their elderly patients, it is unclear if more than one type of balance test can be used to measure their balance impairment. In this study, we examined the association between static balance measures and two dynamic balance field tests. One hundred and twelve community-dwelling older adults (mean age 74.6) participated in the study. They underwent the Tetrax static postural assessment and then performed the Timed Up and Go (TUG) and the Functional Reach (FR) Test as dynamic balance tests. In general, low-moderate correlations were found between the two types of balance tests. For women, age and static balance parameters explained 28.1-40.4% of the variance of TUG scores and 14.6-24% of the variance of FR scores. For men, age and static balance parameters explained 9.5-31.2% of the variance of TUG scores and 23.9-41.7% of the variance of FR scores. Based on our findings, it is suggested that a combination of both static and dynamic tests be used for assessing postural balance ability.
Shi, Yun; Xu, Peiliang; Peng, Junhuan; Shi, Chuang; Liu, Jingnan
2014-01-01
Modern observation technology has verified that measurement errors can be proportional to the true values of measurements such as GPS, VLBI baselines and LiDAR. Observational models of this type are called multiplicative error models. This paper extends the work of Xu and Shimada, published in 2000, on multiplicative error models to the analytical error analysis of quantities of practical interest and estimates of the variance of unit weight. We analytically derive the variance-covariance matrices of the three least squares (LS) adjustments, the adjusted measurements and the corrections of measurements in multiplicative error models. For quality evaluation, we construct five estimators for the variance of unit weight in association with the three LS adjustment methods. Although LiDAR measurements are contaminated with multiplicative random errors, LiDAR-based digital elevation models (DEM) have been constructed as if they were of additive random errors. We will simulate a model landslide, which is assumed to be surveyed with LiDAR, and investigate the effect of LiDAR-type multiplicative error measurements on DEM construction and its effect on the estimate of landslide mass volume from the constructed DEM. PMID:24434880
Wavelets, ridgelets, and curvelets for Poisson noise removal.
Zhang, Bo; Fadili, Jalal M; Starck, Jean-Luc
2008-07-01
In order to denoise Poisson count data, we introduce a variance-stabilizing transform (VST) applied to a filtered discrete Poisson process, yielding a near-Gaussian process with asymptotically constant variance. This new transform, which can be viewed as an extension of the Anscombe transform to filtered data, is simple, fast, and efficient in (very) low-count situations. We combine this VST with the filter banks of wavelets, ridgelets and curvelets, leading to multiscale VSTs (MS-VSTs) and nonlinear decomposition schemes. By doing so, the noise-contaminated coefficients of these MS-VST-modified transforms are asymptotically normally distributed with known variances. A classical hypothesis-testing framework is adopted to detect the significant coefficients, and a sparsity-driven iterative scheme properly reconstructs the final estimate. A range of examples shows the power of this MS-VST approach for recovering important structures of various morphologies in (very) low-count images. These results also demonstrate that the MS-VST approach is competitive relative to many existing denoising methods.
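The classical Anscombe transform that the MS-VST extends can be sketched in a few lines: A(x) = 2·sqrt(x + 3/8) maps Poisson counts to approximately unit-variance Gaussian data. The intensities below are illustrative, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

def anscombe(x):
    """Anscombe variance-stabilizing transform for Poisson counts."""
    return 2.0 * np.sqrt(x + 3.0 / 8.0)

# For moderate intensities the transformed counts are approximately
# Gaussian with variance close to 1, regardless of the intensity lambda.
stabilized = {lam: np.var(anscombe(rng.poisson(lam, 200_000)))
              for lam in (5.0, 20.0, 100.0)}
print(stabilized)
```

The stabilized variance stays near 1 across a 20-fold range of intensities, which is what makes Gaussian-based thresholding applicable after the transform; the paper's contribution is making this work for filtered (e.g., wavelet-convolved) counts at very low intensities, which the plain Anscombe transform handles poorly.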
Hudson, Nathan W.; Lucas, Richard E.; Donnellan, M. Brent; Kushlev, Kostadin
2017-01-01
Kushlev, Dunn, and Lucas (2015) found that income predicts less daily sadness—but not greater happiness—among Americans. The present study used longitudinal data from an approximately representative German sample to replicate and extend these findings. Our results largely replicated Kushlev and colleagues' findings: income predicted less daily sadness (albeit with a smaller effect size) but was unrelated to happiness. Moreover, the association between income and sadness could not be explained by demographics, stress, or daily time use. Extending Kushlev and colleagues' findings, new analyses indicated that only between-persons variance in income (but not within-persons variance) predicted daily sadness—perhaps because there was relatively little within-persons variance in income. Finally, income predicted less daily sadness and worry, but not less anger or frustration—potentially suggesting that income predicts less "internalizing" but not less "externalizing" negative emotion. Together, our study and that of Kushlev and colleagues provide evidence that income robustly predicts select daily negative emotions—but not positive ones. PMID:29250303
Soave, David; Sun, Lei
2017-09-01
We generalize Levene's test for variance (scale) heterogeneity between k groups to more complex data with sample correlation and group membership uncertainty. Following a two-stage regression framework, we show that least absolute deviation regression must be used in the stage 1 analysis to ensure a correct asymptotic χ²_{k-1}/(k-1) distribution of the generalized scale (gS) test statistic. We then show that the proposed gS test is independent of the generalized location test under the joint null hypothesis of no mean and no variance heterogeneity. Consequently, we generalize the recently proposed joint location-scale (gJLS) test, valuable in settings where there is an interaction effect but one interacting variable is not available. We evaluate the proposed method via an extensive simulation study and two genetic association application studies. © 2017 The Authors Biometrics published by Wiley Periodicals, Inc. on behalf of International Biometric Society.
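The ungrouped special case that is being generalized here (independent samples, known group membership) is the Brown-Forsythe form of Levene's test: stage 1 computes absolute deviations from each group's median (the least-absolute-deviation fit), and stage 2 runs a one-way ANOVA on those deviations. A minimal sketch with simulated groups (not the paper's gS statistic, which additionally handles correlation and membership uncertainty):

```python
import numpy as np

def levene_bf(groups):
    """Brown-Forsythe form of Levene's test: one-way ANOVA F statistic
    computed on absolute deviations from each group's median."""
    z = [np.abs(g - np.median(g)) for g in groups]
    n = np.array([len(zi) for zi in z])
    means = np.array([zi.mean() for zi in z])
    grand = np.concatenate(z).mean()
    k, N = len(z), n.sum()
    between = np.sum(n * (means - grand) ** 2) / (k - 1)
    within = sum(np.sum((zi - mi) ** 2) for zi, mi in zip(z, means)) / (N - k)
    return between / within  # ~ F(k-1, N-k) under the null

rng = np.random.default_rng(2)
equal = [rng.normal(0, 1, 200) for _ in range(3)]
unequal = [rng.normal(0, s, 200) for s in (1.0, 2.0, 3.0)]
print(levene_bf(equal), levene_bf(unequal))
```

Under equal variances the statistic stays near 1; with variance heterogeneity it grows large, which is the scale signal the gS test extends to correlated data.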
Adaptive cyclic physiologic noise modeling and correction in functional MRI.
Beall, Erik B
2010-03-30
Physiologic noise in BOLD-weighted MRI data is known to be a significant source of variance, reducing the statistical power and specificity of fMRI and functional connectivity analyses. We show a dramatic improvement on current noise correction methods in both fMRI and fcMRI data that avoids overfitting. The traditional noise model is a Fourier series expansion superimposed on the periodicity of simultaneously measured breathing and cardiac cycles. Correction using this model removes variance matching the periodicity of the physiologic cycles. This framework allows easy modeling of noise; however, using a large number of regressors comes at the cost of removing variance unrelated to physiologic noise, such as variance due to the signal of functional interest (overfitting the data). Our hypothesis is that a small set of fitted waveforms describes all of the significantly coupled physiologic noise. If this is true, we can replace the large number of regressors used in the model with a smaller number of fitted regressors and thereby account for the noise sources with a smaller reduction in the variance of interest. We describe these extensions and demonstrate that we can preserve variance in the data unrelated to physiologic noise while removing physiologic noise equivalently, resulting in data with a higher effective SNR than with current correction techniques. Our results demonstrate a significant improvement in the sensitivity of fMRI (up to a 17% increase in activation volume compared with higher-order traditional noise correction) and functional connectivity analyses. Copyright (c) 2010 Elsevier B.V. All rights reserved.
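A hedged sketch of the "traditional" Fourier-series noise model referred to above (RETROICOR-style, per Glover et al. 2000): for each image acquisition, regressors are cosines and sines of harmonics of the measured physiologic phase. The phase values below are hypothetical, and real pipelines would build separate cardiac and respiratory sets.

```python
import numpy as np

def physio_regressors(phase, n_harmonics=2):
    """Fourier-series nuisance regressors evaluated at the measured
    physiologic phase (radians) of each image acquisition."""
    cols = []
    for m in range(1, n_harmonics + 1):
        cols.append(np.cos(m * phase))
        cols.append(np.sin(m * phase))
    return np.column_stack(cols)

# hypothetical cardiac phase at each of 240 fMRI volumes
rng = np.random.default_rng(3)
phase = np.cumsum(rng.uniform(0.5, 1.5, 240)) % (2 * np.pi)
X = physio_regressors(phase, n_harmonics=2)
print(X.shape)
```

The abstract's point is about the size of this design matrix: each added harmonic costs two regressors per physiologic cycle, and the proposed adaptive method replaces many such columns with a few fitted ones.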
Estimating acreage by double sampling using LANDSAT data
NASA Technical Reports Server (NTRS)
Pont, F.; Horwitz, H.; Kauth, R. (Principal Investigator)
1982-01-01
Double sampling techniques employing LANDSAT data for estimating the acreage of corn and soybeans were investigated and evaluated. The evaluation was based on estimated costs and correlations between two existing procedures having differing cost/variance characteristics, and included consideration of their individual merits when coupled with a fictional 'perfect' procedure of zero bias and variance. Two features of the analysis are: (1) the simultaneous estimation of two or more crops; and (2) the imposition of linear cost constraints among two or more types of resource. A reasonably realistic operational scenario was postulated. The costs were estimated from current experience with the measurement procedures involved, and the correlations were estimated from a set of 39 LACIE-type sample segments located in the U.S. Corn Belt. For a fixed variance of the estimate, double sampling with the two existing LANDSAT measurement procedures can result in a 25% or 50% cost reduction. Double sampling which included the fictional perfect procedure resulted in a more cost-effective combination when the perfect procedure was used with the lower-cost/higher-variance of the two existing procedures.
Simulation Study Using a New Type of Sample Variance
NASA Technical Reports Server (NTRS)
Howe, D. A.; Lainson, K. J.
1996-01-01
We evaluate, with simulated data, a new type of sample variance for the characterization of frequency stability. The new statistic (referred to as TOTALVAR, with square root TOTALDEV) is a better predictor of long-term frequency variations than the present sample Allan deviation. The statistical model uses the assumption that a time series of phase or frequency differences is wrapped (periodic), with the overall frequency difference removed. We find that the variability at long averaging times is reduced considerably for the five models of power-law noise commonly encountered with frequency standards and oscillators.
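For context, the baseline statistic being improved upon can be sketched directly: the (non-overlapped) Allan variance of fractional-frequency data at averaging factor m is half the mean squared difference of adjacent m-sample averages. TOTALVAR's wrapping of the series is not reproduced here; this is only the conventional estimator, shown on simulated white frequency noise, for which the Allan variance scales as 1/m.

```python
import numpy as np

def allan_var(y, m):
    """Non-overlapped Allan variance of fractional-frequency data y
    at averaging factor m (tau = m * tau0)."""
    k = len(y) // m
    ybar = y[:k * m].reshape(k, m).mean(axis=1)   # adjacent m-sample averages
    return 0.5 * np.mean(np.diff(ybar) ** 2)

rng = np.random.default_rng(4)
y = rng.standard_normal(100_000)  # white frequency noise, variance 1
for m in (1, 10, 100):
    print(m, allan_var(y, m))     # ~ 1/m for white FM noise
```

At long averaging times few difference pairs remain, which is exactly why the conventional estimator becomes noisy there and why the wrapped TOTALVAR statistic was proposed.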
Maximum step length: relationships to age and knee and hip extensor capacities.
Schulz, Brian W; Ashton-Miller, James A; Alexander, Neil B
2007-07-01
Maximum Step Length may be used to identify older adults at increased risk for falls. Since leg muscle weakness is a risk factor for falls, we tested the hypotheses that maximum knee and hip extension speed, strength, and power capacities would significantly correlate with Maximum Step Length and also that the "step out and back" Maximum Step Length [Medell, J.L., Alexander, N.B., 2000. A clinical measure of maximal and rapid stepping in older women. J. Gerontol. A Biol. Sci. Med. Sci. 55, M429-M433.] would also correlate with the Maximum Step Length of its two sub-tasks: stepping "out only" and stepping "back only". These sub-tasks will be referred to as versions of Maximum Step Length. Unimpaired younger (N=11, age=24[3]years) and older (N=10, age=73[5]years) women performed the above three versions of Maximum Step Length. Knee and hip extension speed, strength, and power capacities were determined on a separate day and regressed on Maximum Step Length and age group. Version and practice effects were quantified and subjective impressions of test difficulty recorded. Hypotheses were tested using linear regressions, analysis of variance, and Fisher's exact test. Maximum Step Length explained 6-22% additional variance in knee and hip extension speed, strength, and power capacities after controlling for age group. Within- and between-block and test-retest correlation values were high (>0.9) for all test versions. Shorter Maximum Step Lengths are associated with reduced knee and hip extension speed, strength, and power capacities after controlling for age. A single out-and-back step of maximal length is a feasible, rapid screening measure that may provide insight into underlying functional impairment, regardless of age.
Record length requirement of long-range dependent teletraffic
NASA Astrophysics Data System (ADS)
Li, Ming
2017-04-01
This article makes two main contributions. First, it presents a formula to compute the upper bound of the variance of the correlation periodogram measurement of teletraffic (traffic for short) with long-range dependence (LRD), for a given record length T and a given value of the Hurst parameter H (Theorems 1 and 2). Second, it proposes two formulas for the computation of the variance upper bound of the correlation periodogram measurement of traffic of the fractional Gaussian noise (fGn) type and the generalized Cauchy (GC) type, respectively (Corollaries 1 and 2). These may constitute a reference guideline for the record length requirement of traffic with LRD. In addition, the record length requirement for the correlation periodogram measurement of traffic with either the Schuster or the Bartlett type of periodogram is studied, and the results show that both types of periodograms may be used for the correlation measurement of traffic with a pre-desired variance bound of correlation estimation. Moreover, real traffic from the Internet Archive of the Special Interest Group on Data Communication of the Association for Computing Machinery (ACM SIGCOMM) is analyzed in a case study.
Kim, Minjung; Lamont, Andrea E; Jaki, Thomas; Feaster, Daniel; Howe, George; Van Horn, M Lee
2016-06-01
Regression mixture models are a novel approach to modeling the heterogeneous effects of predictors on an outcome. In the model-building process, residual variances are often disregarded, and simplifying assumptions are made without thorough examination of the consequences. In this simulation study, we investigated the impact of an equality constraint on the residual variances across latent classes. We examined the consequences of constraining the residual variances for class enumeration (finding the true number of latent classes) and for the parameter estimates, under a number of different simulation conditions meant to reflect the types of heterogeneity likely to exist in applied analyses. The results showed that bias in class enumeration increased as the difference in residual variances between the classes increased. An inappropriate equality constraint on the residual variances also greatly distorted the estimated class sizes and showed the potential to greatly affect the parameter estimates in each class. These results suggest that assumptions about residual variances should be made with care and reported explicitly.
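A simplified, hedged illustration of the core issue (with known class labels rather than a fitted mixture, and made-up slopes and variances): when two classes truly have different residual variances, a pooled (equality-constrained) variance estimate misstates both, which is the kind of distortion the simulation study traces through to class enumeration and class sizes.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 1000
x = rng.standard_normal(2 * n)
# two latent classes with different slopes and residual variances
y = np.concatenate([
    0.2 * x[:n] + rng.normal(0, 0.5, n),   # class 1: residual sd = 0.5
    1.0 * x[n:] + rng.normal(0, 2.0, n),   # class 2: residual sd = 2.0
])
resid1 = y[:n] - 0.2 * x[:n]
resid2 = y[n:] - 1.0 * x[n:]
pooled = np.var(np.concatenate([resid1, resid2]))  # equality-constrained
print(np.var(resid1), np.var(resid2), pooled)
```

The pooled value lands between the two class variances, overstating the noise in one class and understating it in the other; in an actual mixture fit this misallocation feeds back into the class assignment probabilities.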
Brooks, Matthew; Graham-Kevan, Nicola; Lowe, Michelle; Robinson, Sarita
2017-09-01
The Cognitive Growth and Stress (CGAS) model draws together cognitive processing factors previously untested in a single model. Intrusive rumination, deliberate rumination, present and future perceptions of control, and event centrality were assessed as predictors of post-traumatic growth (PTG) and post-traumatic stress (PTS). The CGAS model was tested on a sample of survivors (N = 250) of a diverse range of adverse events using structural equation modelling techniques. Overall, the best-fitting model supported the theorized relations between cognitive constructs and accounted for 30% of the variance in PTG and 68% of the variance in PTS across the sample. Rumination, centrality, and perceived control factors are significant determinants of positive and negative psychological change across the wide spectrum of adversarial events. In its first phase of development, the CGAS model also provides further evidence of the distinct processes of growth and distress following adversity.
Clinical implications: People can experience positive change after adversity, regardless of life background or types of events experienced. While growth and distress are possible outcomes after adversity, they occur through distinct processes. Support or intervention should consider rumination, event centrality, and perceived control factors to enhance psychological well-being.
Cautions/limitations: Longitudinal research would further clarify the findings of this study. Further extension of the model is recommended to include other viable cognitive processes implicated in the development of positive and negative changes after adversity. © 2017 The British Psychological Society.
ERIC Educational Resources Information Center
Jackson, Dan; Bowden, Jack; Baker, Rose
2015-01-01
Moment-based estimators of the between-study variance are very popular when performing random effects meta-analyses. This type of estimation has many advantages including computational and conceptual simplicity. Furthermore, by using these estimators in large samples, valid meta-analyses can be performed without the assumption that the treatment…
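The most widely used moment-based estimator of the between-study variance is the DerSimonian-Laird estimator, which illustrates the computational simplicity the abstract mentions. The study effects and within-study variances below are hypothetical, not from the article.

```python
import numpy as np

def dersimonian_laird(effects, variances):
    """Moment-based (DerSimonian-Laird) estimate of the
    between-study variance tau^2 in a random-effects meta-analysis."""
    w = 1.0 / np.asarray(variances)
    y = np.asarray(effects)
    mu_fixed = np.sum(w * y) / np.sum(w)         # fixed-effect mean
    q = np.sum(w * (y - mu_fixed) ** 2)          # Cochran's Q
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    return max(0.0, (q - (len(y) - 1)) / c)      # truncate at zero

# hypothetical meta-analysis: 6 study effects and within-study variances
effects = [0.10, 0.30, 0.35, 0.65, 0.45, 0.15]
variances = [0.03, 0.04, 0.05, 0.04, 0.02, 0.08]
print(dersimonian_laird(effects, variances))
```

No iteration or distributional assumption is needed: the estimator equates the observed Q statistic to its expectation, which is the large-sample validity property the abstract alludes to.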
An Investigation of the Raudenbush (1988) Test for Studying Variance Heterogeneity.
ERIC Educational Resources Information Center
Harwell, Michael
1997-01-01
The meta-analytic method proposed by S. W. Raudenbush (1988) for studying variance heterogeneity was studied. Results of a Monte Carlo study indicate that the Type I error rate of the test is sensitive to even modestly platykurtic score distributions and to the ratio of study sample size to the number of studies. (SLD)
Cannell, L; Taunton, J; Clement, D; Smith, C; Khan, K
2001-01-01
Objectives—To compare the therapeutic effect of two different exercise protocols in athletes with jumper's knee. Methods—Randomised clinical trial comparing a 12 week programme of either drop squat exercises or leg extension/leg curl exercises. Measurement was performed at baseline and after six and 12 weeks. Primary outcome measures were pain (visual analogue scale 1–10) and return to sport. Secondary outcome measures included quadriceps and hamstring moment of force using a Cybex II isokinetic dynamometer at 30°/second. Differences in pain response between the drop squat and leg extension/curl treatment groups were assessed by 2 (group) x 3 (time) analysis of variance. Two by two contingency tables were used to test differences in rates of return to sport. Analysis of variance (2 (injured versus non-injured leg) x 2 (group) x 3 (time)) was also used to determine differences for secondary outcome measures. Results—Over the 12 week intervention, pain diminished by 2.3 points (36%) in the leg extension/curl group and 3.2 points (57%) in the squat group. There was a significant main effect of both exercise protocols on pain (p<0.01) with no interaction effect. Nine of 10 subjects in the drop squat group returned to sporting activity by 12 weeks, but five of those subjects still had low level pain. Six of nine of the leg extension/curl group returned to sporting activity by 12 weeks and four patients had low level pain. There was no significant difference between groups in numbers returning to sporting activity. There were no differences in the change in quadriceps or hamstring muscle moment of force between groups. Conclusions—Progressive drop squats and leg extension/curl exercises can reduce the pain of jumper's knee in a 12 week period and permit a high proportion of patients to return to sport. Not all patients, however, return to sport by that time. 
Key Words: knee; patellar tendon; tendinopathy; tendinosis; eccentric strengthening; strength training PMID:11157465
Energy and variance budgets of a diffusive staircase with implications for heat flux scaling
NASA Astrophysics Data System (ADS)
Hieronymus, M.; Carpenter, J. R.
2016-02-01
Diffusive convection, the mode of double-diffusive convection that occurs when both temperature and salinity increase with depth, is commonplace throughout the high-latitude oceans, and diffusive staircases constitute an important heat transport process in the Arctic Ocean. Heat and buoyancy fluxes through these staircases are often estimated using flux laws deduced either from laboratory experiments or from simplified energy or variance budgets. We have performed direct numerical simulations of double-diffusive convection at a range of Rayleigh numbers and quantified the energy and variance budgets in detail. This allows us to compare the fluxes in our simulations to those derived using known flux laws and to quantify how well the simplified energy and variance budgets approximate the full budgets. The fluxes are found to agree well with earlier estimates at high Rayleigh numbers, but we find large deviations at low Rayleigh numbers. The close ties between the heat and buoyancy fluxes and the budgets of thermal variance and energy have been utilized to derive heat flux scaling laws in the field of thermal convection. The result is the so-called GL theory, which has been found to give accurate heat flux scaling laws over a very wide parameter range. Diffusive convection has many similarities to thermal convection, and an extension of the GL theory to diffusive convection is also presented and its predictions compared to the results from our numerical simulations.
2013-01-01
Background Demographic bottlenecks can severely reduce the genetic variation of a population or a species. Establishing whether low genetic variation is caused by a bottleneck or a constantly low effective number of individuals is important to understand a species’ ecology and evolution, and it has implications for conservation management. Recent studies have evaluated the power of several statistical methods developed to identify bottlenecks. However, the false positive rate, i.e. the rate with which a bottleneck signal is misidentified in demographically stable populations, has received little attention. We analyse this type of error (type I) in forward computer simulations of stable populations having greater than Poisson variance in reproductive success (i.e., variance in family sizes). The assumption of Poisson variance underlies bottleneck tests, yet it is commonly violated in species with high fecundity. Results With large variance in reproductive success (Vk ≥ 40, corresponding to a ratio between effective and census size smaller than 0.1), tests based on allele frequencies, allelic sizes, and DNA sequence polymorphisms (heterozygosity excess, M-ratio, and Tajima’s D test) tend to show erroneous signals of a bottleneck. Similarly, strong evidence of population decline is erroneously detected when ancestral and current population sizes are estimated with the model based method MSVAR. Conclusions Our results suggest caution when interpreting the results of bottleneck tests in species showing high variance in reproductive success. Particularly in species with high fecundity, computer simulations are recommended to confirm the occurrence of a population bottleneck. PMID:24131797
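The parenthetical link between Vk and the effective/census size ratio can be made concrete with the classical approximation Ne/N ≈ 4/(Vk + 2) for a stable population with discrete generations (Crow & Kimura); this is a textbook approximation used here for illustration, not a formula from the article.

```python
def ne_over_n(vk):
    """Classical approximation of the effective/census population size
    ratio for a stable population with variance vk in family size."""
    return 4.0 / (vk + 2.0)

for vk in (2, 10, 40, 100):   # vk = 2 is the Poisson case (ratio = 1)
    print(vk, ne_over_n(vk))
```

At Vk = 40 the ratio drops below 0.1, matching the regime in which the abstract reports inflated false-positive rates for bottleneck tests.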
Rice stubble as a new biopolymer source to produce carboxymethyl cellulose-blended films.
Rodsamran, Pattrathip; Sothornvit, Rungsinee
2017-09-01
Rice stubble is an agricultural waste consisting of cellulose, which can be converted to carboxymethyl cellulose from rice stubble (CMCr) as a potential biomaterial. Plasticizer types (glycerol and olive oil) and their contents were investigated to provide flexibility for use as a food packaging material. Glycerol content enhanced extensibility, while olive oil content improved the moisture barrier of films. Additionally, CMCr showed potential as a replacement for up to 50% of commercial CMC without any changes in mechanical and permeability properties. A mixture of plasticizers (10% glycerol and 10% olive oil) provided blended film with good water barrier and mechanical properties comparable with 20% of an individual plasticizer. Principal component (PC) analysis, in which 2 PCs explained approximately 81% of the total variance, was a useful tool to select a suitable plasticizer ratio for blended film production. Therefore, CMCr can be used to form edible films and coatings as a renewable, environmentally friendly packaging material. Copyright © 2017 Elsevier Ltd. All rights reserved.
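A hedged sketch of how a "2 PCs explain ~81% of the variance" figure is typically computed: standardize the formulation-by-property matrix, take eigenvalues of its covariance (correlation) matrix, and report the cumulative explained-variance ratio. The data below are random and hypothetical, with two properties made to correlate so that two PCs dominate.

```python
import numpy as np

rng = np.random.default_rng(6)
# hypothetical data: 12 film formulations x 5 measured properties
X = rng.standard_normal((12, 5))
X[:, 1] = X[:, 0] + 0.1 * rng.standard_normal(12)   # correlated property
X[:, 2] = X[:, 0] + 0.1 * rng.standard_normal(12)   # correlated property

Z = (X - X.mean(axis=0)) / X.std(axis=0)            # standardize columns
eigvals = np.sort(np.linalg.eigvalsh(np.cov(Z, rowvar=False)))[::-1]
explained = np.cumsum(eigvals) / eigvals.sum()
print(explained[:2])   # cumulative variance explained by the first 2 PCs
```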
The Matching Relation and Situation-Specific Bias Modulation in Professional Football Play Selection
Stilling, Stephanie T; Critchfield, Thomas S
2010-01-01
The utility of a quantitative model depends on the extent to which its fitted parameters vary systematically with environmental events of interest. Professional football statistics were analyzed to determine whether play selection (passing versus rushing plays) could be accounted for with the generalized matching equation, and in particular whether variations in play selection across game situations would manifest as changes in the equation's fitted parameters. Statistically significant changes in bias were found for each of five types of game situations; no systematic changes in sensitivity were observed. Further analyses suggested relationships between play selection bias and both turnover probability (which can be described in terms of punishment) and yards-gained variance (which can be described in terms of variable-magnitude reinforcement schedules). The present investigation provides a useful demonstration of association between face-valid, situation-specific effects in a domain of everyday interest, and a theoretically important term of a quantitative model of behavior. Such associations, we argue, are an essential focus in translational extensions of quantitative models. PMID:21119855
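The generalized matching equation referred to above is commonly fitted in logarithmic form, log(B1/B2) = a·log(R1/R2) + log(b), where the slope a is sensitivity and 10^intercept is bias, the two fitted parameters the study tracks across game situations. A minimal sketch with hypothetical pass/rush and reinforcer (yards) ratios, not the study's data:

```python
import numpy as np

def fit_matching(behavior_ratio, reinforcer_ratio):
    """Least-squares fit of the generalized matching equation
    log(B1/B2) = a*log(R1/R2) + log(b); returns (sensitivity a, bias b)."""
    x = np.log10(reinforcer_ratio)
    y = np.log10(behavior_ratio)
    a, logb = np.polyfit(x, y, 1)
    return a, 10.0 ** logb

# hypothetical play-selection data: behavior ratios vs reinforcer ratios
b_ratio = np.array([0.8, 1.1, 1.5, 2.2, 3.0])
r_ratio = np.array([0.5, 1.0, 1.6, 2.5, 4.0])
a, bias = fit_matching(b_ratio, r_ratio)
print(a, bias)
```

A slope below 1 is the "undermatching" commonly reported in such analyses; the study's finding is that game situations shift the bias term b while leaving the sensitivity a largely unchanged.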
DOE Office of Scientific and Technical Information (OSTI.GOV)
Andrews, Madison Theresa; Bates, Cameron Russell; Mckigney, Edward Allen
Here, this work presents the organic scintillation simulation capabilities of DRiFT, a post-processing Detector Response Function Toolkit for MCNP® output. DRiFT is used to create realistic scintillation detector response functions to incident neutron and gamma mixed-field radiation. As a post-processing tool, DRiFT leverages the extensively validated radiation transport capabilities of MCNP®6, which also provides the ability to simulate complex sources and geometries. DRiFT is designed to be flexible: it allows the user to specify the scintillator material, PMT type, applied PMT voltage, and quenching data used in simulations. The toolkit's capabilities, which include the generation of pulse shape discrimination plots and full-energy detector spectra, are demonstrated in a comparison of measured and simulated neutron contributions from 252Cf and PuBe, and photon spectra from 22Na and 228Th sources. DRiFT reproduced energy resolution effects observed in EJ-301 measurements through the inclusion of scintillation yield variances, photon transport noise, and PMT photocathode and multiplication noise.
Uncertainty relation based on unbiased parameter estimations
NASA Astrophysics Data System (ADS)
Sun, Liang-Liang; Song, Yong-Shun; Qiao, Cong-Feng; Yu, Sixia; Chen, Zeng-Bing
2017-02-01
Heisenberg's uncertainty relation has been extensively studied in the spirit of its well-known original form, in which the inaccuracy measures used exhibit some controversial properties and do not conform with quantum metrology, where measurement precision is well defined in terms of estimation theory. In this paper, we treat the joint measurement of incompatible observables as a parameter estimation problem, i.e., estimating the parameters characterizing the statistics of the incompatible observables. Our crucial observation is that, in a sequential measurement scenario, the bias induced by the first unbiased measurement in the subsequent measurement can be eradicated by the information acquired, allowing one to extract unbiased information of the second measurement of an incompatible observable. In terms of Fisher information we propose a kind of information comparison measure and explore various types of trade-offs between the information gains and measurement precisions, which interpret the uncertainty relation as a surplus-variance trade-off over individual perfect measurements instead of a constraint on extracting complete information of incompatible observables.
Organic Scintillator Detector Response Simulations with DRiFT
Andrews, Madison Theresa; Bates, Cameron Russell; Mckigney, Edward Allen; ...
2016-06-11
Here, this work presents the organic scintillation simulation capabilities of DRiFT, a post-processing Detector Response Function Toolkit for MCNP® output. DRiFT is used to create realistic scintillation detector response functions to incident neutron and gamma mixed-field radiation. As a post-processing tool, DRiFT leverages the extensively validated radiation transport capabilities of MCNP®6, which also provides the ability to simulate complex sources and geometries. DRiFT is designed to be flexible: it allows the user to specify the scintillator material, PMT type, applied PMT voltage, and quenching data used in simulations. The toolkit's capabilities, which include the generation of pulse shape discrimination plots and full-energy detector spectra, are demonstrated in a comparison of measured and simulated neutron contributions from 252Cf and PuBe, and photon spectra from 22Na and 228Th sources. DRiFT reproduced energy resolution effects observed in EJ-301 measurements through the inclusion of scintillation yield variances, photon transport noise, and PMT photocathode and multiplication noise.
Organic scintillator detector response simulations with DRiFT
NASA Astrophysics Data System (ADS)
Andrews, M. T.; Bates, C. R.; McKigney, E. A.; Solomon, C. J.; Sood, A.
2016-09-01
This work presents the organic scintillation simulation capabilities of DRiFT, a post-processing Detector Response Function Toolkit for MCNP® output. DRiFT is used to create realistic scintillation detector response functions to incident neutron and gamma mixed-field radiation. As a post-processing tool, DRiFT leverages the extensively validated radiation transport capabilities of MCNP®6, which also provides the ability to simulate complex sources and geometries. DRiFT is designed to be flexible: it allows the user to specify the scintillator material, PMT type, applied PMT voltage, and quenching data used in simulations. The toolkit's capabilities, which include the generation of pulse shape discrimination plots and full-energy detector spectra, are demonstrated in a comparison of measured and simulated neutron contributions from 252Cf and PuBe, and photon spectra from 22Na and 228Th sources. DRiFT reproduced energy resolution effects observed in EJ-301 measurements through the inclusion of scintillation yield variances, photon transport noise, and PMT photocathode and multiplication noise.
Wheelwright, Nathaniel T; Keller, Lukas F; Postma, Erik
2014-11-01
The heritability (h²) of fitness traits is often low. Although this has been attributed to directional selection having eroded genetic variation in direct proportion to the strength of selection, heritability does not necessarily reflect a trait's additive genetic variance and evolutionary potential ("evolvability"). Recent studies suggest that the low h² of fitness traits in wild populations is caused not by a paucity of additive genetic variance (V_A) but by greater environmental or nonadditive genetic variance (V_R). We examined the relationship between h² and variance-standardized selection intensities (i or β_σ), and between evolvability (I_A: V_A divided by the squared phenotypic trait mean) and mean-standardized selection gradients (β_μ). Using 24 years of data from an island population of Savannah sparrows, we show that, across diverse traits, h² declines with the strength of selection, whereas I_A and I_R (V_R divided by the squared trait mean) are independent of the strength of selection. Within trait types (morphological, reproductive, life-history), h², I_A, and I_R are all independent of the strength of selection. This indicates that certain traits have low heritability because of increased residual variance due to the age at which they are expressed or the multiple factors influencing their expression, rather than because of their association with fitness. © 2014 The Author(s). Evolution © 2014 The Society for the Study of Evolution.
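The distinction between h² and I_A can be made concrete with hypothetical numbers (purely illustrative, not from the sparrow data): a trait can have low heritability yet high mean-standardized evolvability, because the two statistics standardize V_A by different quantities.

```python
# hypothetical trait values: (additive variance VA, residual variance VR, mean)
traits = {
    "morphological": (2.0, 2.0, 50.0),
    "life-history":  (4.0, 36.0, 10.0),
}
summary = {}
for name, (va, vr, mean) in traits.items():
    h2 = va / (va + vr)      # heritability: VA / phenotypic variance
    ia = va / mean ** 2      # evolvability I_A: VA / squared trait mean
    summary[name] = (h2, ia)
print(summary)
```

Here the life-history trait has the lower h² (its large V_R dominates the denominator) but the far higher I_A, which is exactly why the abstract argues that low heritability need not mean low evolutionary potential.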
NASA Astrophysics Data System (ADS)
Zeng, R.; Cai, X.
2016-12-01
Irrigation has considerably interfered with hydrological processes in arid and semi-arid areas with heavy irrigated agriculture. With the increasing demand for food production and the rising evaporative demand due to climate change, irrigation water consumption is expected to increase, which would aggravate the interference with hydrologic processes. Current studies focus on the impact of irrigation on the mean value of evapotranspiration (ET) at either local or regional scale; however, how irrigation changes the variability of ET is not well understood. This study analyzes the impact of extensive irrigation on ET variability in the Northern High Plains. We apply an ET variance decomposition framework developed in our previous work to quantify the effects of both climate and irrigation on ET variance in the Northern High Plains watersheds. Based on climate and water table observations, we assess the monthly ET variance and its components for two periods: the 1930s-1960s, with less irrigation development, and the 1970s-2010s, with more development. It is found that irrigation not only caused the well-recognized groundwater drawdown and stream depletion problems in the region, but also buffered ET variance from climatic fluctuations. In addition to increasing food productivity, irrigation also stabilizes crop yield by mitigating the impact of hydroclimatic variability. With complementary water supply from irrigation, ET often approaches the potential ET, and thus the observed ET variance is more attributable to climatic variables, especially temperature; meanwhile, irrigation causes significant seasonal fluctuations in groundwater storage. For sustainable water resources management in the Northern High Plains, we argue that both the mean value and the variance of ET should be considered together in the regulation of irrigation in this region.
Validating Variance Similarity Functions in the Entrainment Zone
NASA Astrophysics Data System (ADS)
Osman, M.; Turner, D. D.; Heus, T.; Newsom, R. K.
2017-12-01
In previous work, the water vapor variance in the entrainment zone was proposed to be proportional to the convective velocity scale, the gradient of the water vapor mixing ratio, and the Brunt-Vaisala frequency in the interfacial layer, while the variance of the vertical wind in the entrainment zone was defined in terms of the convective velocity scale. The variances in the entrainment zone have been hypothesized to depend on two distinct functions, which also depend on the Richardson number. To the best of our knowledge, these hypotheses have never been tested observationally. Simultaneous measurements of eddy-correlation surface fluxes, wind shear profiles from wind profilers, and variance profiles of vertical motion and water vapor from Doppler and Raman lidars, respectively, provide a unique opportunity to thoroughly examine the functions used in defining the variances and to validate them. These observations were made over the Atmospheric Radiation Measurement (ARM) Southern Great Plains (SGP) site. We have identified about 30 cases from 2016 during which the convective boundary layer (CBL) was quasi-stationary and well mixed for at least 2 hours. The vertical profiles of turbulent fluctuations of the vertical wind and water vapor have been derived by applying an autocovariance technique, which separates out the instrument random error, to a set of 2-h time series. The error analysis of the lidar observations demonstrates that the lidars are capable of resolving the vertical structure of turbulence around the entrainment zone. Therefore, utilizing this unique combination of observations, this study focuses on extensively testing the hypotheses that the second-order moments are indeed proportional to the proposed functions of the Richardson number. The coefficients used in defining the functions will also be determined observationally and compared against the values suggested by large eddy simulation (LES) studies.
NASA Astrophysics Data System (ADS)
Asanuma, Jun
Variances of the velocity components and scalars are important as indicators of turbulence intensity. They can also be used to estimate surface fluxes in several types of "variance methods", and the estimated fluxes can be regional values if the variances from which they are calculated are regionally representative measurements. With these motivations, variances measured by aircraft in the unstable ABL over a flat pine forest during HAPEX-Mobilhy were analyzed within the context of similarity scaling arguments. The variances of temperature and vertical velocity within the atmospheric surface layer were found to follow Monin-Obukhov similarity (MOS) theory closely and to yield reasonable estimates of the surface sensible heat flux when used in variance methods. This validates variance methods based on aircraft measurements. On the other hand, the specific humidity variances were influenced by surface heterogeneity and clearly failed to obey MOS. A simple analysis based on the similarity law for free convection produced a comprehensible and quantitative picture of the effect of surface flux heterogeneity on the statistical moments, and revealed that variances of active and passive scalars become dissimilar because of their different roles in turbulence. The analysis also indicated that the mean quantities are affected by the heterogeneity as well, but to a lesser extent than the variances. The temperature variances in the mixed layer (ML) were examined using a generalized top-down bottom-up diffusion model with several combinations of velocity scales and inversion flux models. The results showed that the surface shear stress exerts considerable influence on the lower ML. ML variance methods were also tested with the temperature and vertical velocity variances, and their feasibility was investigated.
Finally, the variances in the ML were analyzed in terms of the local similarity concept; the results confirmed the original hypothesis by Panofsky and McCormick that the local scaling in terms of the local buoyancy flux defines the lower bound of the moments.
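The surface-layer variance methods validated above invert a similarity relation to recover a flux from a measured variance. As an illustration only, the following sketch uses a Tillman-type free-convection relation between the temperature standard deviation and the kinematic heat flux; the constant C1 ≈ 0.95 and the exact functional form are textbook assumptions, not taken from this study.

```python
import math

# Free-convection flux-variance estimate of the surface sensible heat flux
# from the temperature standard deviation sigma_T (a Tillman-type relation;
# the constant C1 and the exact form are assumptions for illustration).
def sensible_heat_flux(sigma_T, z, T_mean, rho=1.2, cp=1005.0,
                       C1=0.95, k=0.4, g=9.81):
    """Return H in W m^-2 given sigma_T (K), height z (m), and mean T (K)."""
    # kinematic heat flux w'T' in K m/s
    wT = (sigma_T / C1) ** 1.5 * math.sqrt(k * g * z / T_mean)
    return rho * cp * wT

H = sensible_heat_flux(sigma_T=0.5, z=10.0, T_mean=300.0)
```

With these hypothetical inputs the estimate lands in the range of a moderate daytime sensible heat flux (on the order of 150 W m⁻²), which is the kind of check such a method invites.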
Use of lodgepole pine cover types by Yellowstone grizzly bears
Mattson, D.J.
1997-01-01
Lodgepole pine (Pinus contorta) forests are a large and dynamic part of grizzly bear (Ursus arctos) habitat in the Yellowstone ecosystem. Research in other areas suggests that grizzly bears select for young open forest stands, especially for grazing and feeding on berries. Management guidelines accordingly recommend timber harvest as a technique for improving habitat in areas potentially dominated by lodgepole pine. In this paper I examine grizzly bear use of lodgepole pine forests in the Yellowstone area, and test several hypotheses with relevance to a new generation of management guidelines. Differences in grizzly bear selection of lodgepole pine cover types (defined on the basis of stand age and structure) were not pronounced. Selection furthermore varied among years, areas, and individuals. Positive selection for any lodgepole pine type was uncommon. Estimates of selection took 5-11 years or 4-12 adult females to stabilize, depending upon the cover type. The variances of selection estimates tended to stabilize after 3-5 sample years, and were more-or-less stable to slightly increasing with progressively increased sample area. There was no conclusive evidence that Yellowstone's grizzlies favored young (<40 yr) stands in general or for their infrequent use of berries. On the other hand, these results corroborated previous observations that grizzlies favored open and/or young stands on wet and fertile sites for grazing. These results also supported the proposition that temporally and spatially robust inferences require extensive, long-duration studies, especially for wide-ranging vertebrates like grizzly bears.
Early Separation Incentives: An Analysis of Survey Data and Reenlistment Decision-Making Models
1993-05-01
[Table fragments] Table 8, Analyses of Variance for COL by Race and Gender (Enlisted Personnel): SEX, DF 1, Type III SS 80219.5873 ... SEX, DF 1, Type III SS 82779.4582, mean square 82779.4582, F value 1.04, Pr > F 0.3074; RACE, DF 4, Type III SS 202214.2787, mean square 50553.5697, F value 0.64, Pr > F 0.6362; note: R-square = 0.004141 ... Table 10, Analyses of Variance for COL by Race and Gender (Commissioned Officers): SEX, DF 1, Type III SS 1843343.079
Meta-analysis with missing study-level sample variance data.
Chowdhry, Amit K; Dworkin, Robert H; McDermott, Michael P
2016-07-30
We consider a study-level meta-analysis with a normally distributed outcome variable and possibly unequal study-level variances, where the object of inference is the difference in means between a treatment and control group. A common complication in such an analysis is missing sample variances for some studies. A frequently used approach is to impute the weighted (by sample size) mean of the observed variances (mean imputation). Another approach is to include only those studies with variances reported (complete case analysis). Both mean imputation and complete case analysis are only valid under the missing-completely-at-random assumption, and even then the inverse variance weights produced are not necessarily optimal. We propose a multiple imputation method employing gamma meta-regression to impute the missing sample variances. Our method takes advantage of study-level covariates that may be used to provide information about the missing data. Through simulation studies, we show that multiple imputation, when the imputation model is correctly specified, is superior to competing methods in terms of confidence interval coverage probability and type I error probability when testing a specified group difference. Finally, we describe a similar approach to handling missing variances in cross-over studies. Copyright © 2016 John Wiley & Sons, Ltd.
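The simpler strategy the abstract contrasts with multiple imputation can be sketched directly: impute each missing sampling variance with the sample-size-weighted mean of the observed ones, then pool effects by inverse-variance weighting. The study values below are made up for illustration.

```python
# Mean imputation of missing study variances followed by inverse-variance
# pooling -- the baseline approach the proposed method improves upon.
def impute_and_pool(effects, ns, variances):
    """effects: study mean differences; ns: sample sizes; variances:
    sampling variances of the effects, with None where missing."""
    observed = [(n, v) for n, v in zip(ns, variances) if v is not None]
    # sample-size-weighted mean of the observed variances
    vbar = sum(n * v for n, v in observed) / sum(n for n, _ in observed)
    filled = [v if v is not None else vbar for v in variances]
    weights = [1.0 / v for v in filled]
    pooled = sum(w * y for w, y in zip(weights, effects)) / sum(weights)
    return vbar, filled, pooled

vbar, filled, pooled = impute_and_pool(
    effects=[1.0, 2.0, 3.0], ns=[10, 20, 30], variances=[4.0, None, 9.0])
```

As the abstract notes, the resulting weights are only justified under missing-completely-at-random, and even then need not be optimal.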
Noise and drift analysis of non-equally spaced timing data
NASA Technical Reports Server (NTRS)
Vernotte, F.; Zalamansky, G.; Lantz, E.
1994-01-01
Generally, it is possible to obtain equally spaced timing data from oscillators. The measurement of the drifts and noises affecting oscillators is then performed by using a variance (Allan variance, modified Allan variance, or time variance) or a system of several variances (multivariance method). However, in some cases, several samples, or even several sets of samples, are missing. In the case of millisecond pulsar timing data, for instance, observations are quite irregularly spaced in time. Nevertheless, since some observations are very close together (one minute) and since the timing data sequence is very long (more than ten years), information on both short-term and long-term stability is available. Unfortunately, a direct variance analysis is not possible without interpolating missing data. Different interpolation algorithms (linear interpolation, cubic spline) are used to calculate variances in order to verify that they neither lose information nor add erroneous information. A comparison of the results of the different algorithms is given. Finally, the multivariance method was adapted to the measurement sequence of the millisecond pulsar timing data: the responses of each variance of the system are calculated for each type of noise and drift, with the same missing samples as in the pulsar timing sequence. An estimation of precision, dynamics, and separability of this method is given.
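For equally spaced data, the (non-overlapping) Allan variance the abstract starts from is a short computation on fractional-frequency samples; this minimal sketch shows it at an averaging factor m, with a synthetic white-noise series standing in for real timing data.

```python
import random

# Non-overlapping Allan variance at averaging factor m for equally spaced
# fractional-frequency samples y (a minimal sketch; the paper's multivariance
# and interpolation machinery is not reproduced here).
def allan_variance(y, m=1):
    k = len(y) // m
    # average consecutive groups of m samples
    avgs = [sum(y[i * m:(i + 1) * m]) / m for i in range(k)]
    diffs = [avgs[i + 1] - avgs[i] for i in range(k - 1)]
    return 0.5 * sum(d * d for d in diffs) / len(diffs)

random.seed(0)
white_fm = [random.gauss(0.0, 1.0) for _ in range(4096)]
a1 = allan_variance(white_fm, m=1)  # ~1.0 for unit white frequency noise
a4 = allan_variance(white_fm, m=4)  # decreases roughly as 1/m
```

For white frequency noise the Allan variance falls off as 1/m, which is exactly the slope-reading that lets such analyses separate noise types; irregular sampling breaks the equal-spacing assumption this computation relies on, motivating the interpolation study.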
Patterns of Violence Exposure and Sexual Risk in Low-Income, Urban African American Girls
Wilson, Helen W.; Woods, Briana A.; Emerson, Erin; Donenberg, Geri R.
2013-01-01
Objective This study examined the relationship between violence exposure and sexual risk-taking among low-income, urban African American (AA) adolescent girls, considering overlap among different types and characteristics of violence. Methods AA adolescent girls were originally recruited from outpatient mental health clinics serving urban, mostly low-SES communities in Chicago, IL as part of a two-year longitudinal investigation of HIV-risk behavior. A subsequent follow-up was completed to assess lifetime history of trauma and violence exposure. The current study (N=177) included violence exposure and sexual risk behavior reported at the most recent interview (ages 14-22). Multiple regression was used to examine combined and unique contributions of different types, ages, settings, and perpetrators or victims of violence to variance in sexual risk. Results More extensive violence exposure and cumulative exposure to different kinds of violence were associated with overall unsafe sex, more partners, and inconsistent condom use. The most significant unique predictors, accounting for overlap among different forms of violence, were physical victimization, adolescent exposure, neighborhood violence, and violence involving dating partners. Conclusions These findings put sexual risk in the context of broad traumatic experiences but also suggest that the type and characteristics of violence exposure matter in terms of sexual health outcomes. Violence exposure should be addressed in efforts to reduce STIs among low-income, urban African American girls. PMID:24563808
Super-delta: a new differential gene expression analysis procedure with robust data normalization.
Liu, Yuhang; Zhang, Jinfeng; Qiu, Xing
2017-12-21
Normalization is an important data preparation step in gene expression analyses, designed to remove various systematic noises. Sample variance is greatly reduced after normalization, hence the power of subsequent statistical analyses is likely to increase. On the other hand, variance reduction is made possible by borrowing information across all genes, including differentially expressed genes (DEGs) and outliers, which will inevitably introduce some bias. This bias typically inflates type I error and can reduce statistical power in certain situations. In this study we propose a new differential expression analysis pipeline, dubbed super-delta, that consists of a multivariate extension of global normalization and a modified t-test. A robust procedure is designed to minimize the bias introduced by DEGs in the normalization step. The modified t-test is derived based on asymptotic theory for hypothesis testing that suitably pairs with the proposed robust normalization. We first compared super-delta with four commonly used normalization methods: global, median-IQR, quantile, and cyclic loess normalization in simulation studies. Super-delta was shown to have better statistical power with tighter control of the type I error rate than its competitors. In many cases, the performance of super-delta is close to that of an oracle test in which datasets without technical noise were used. We then applied all methods to a collection of gene expression datasets on breast cancer patients who received neoadjuvant chemotherapy. While there is a substantial overlap of the DEGs identified by all of them, super-delta was able to identify comparatively more DEGs than its competitors. Downstream gene set enrichment analysis confirmed that all these methods selected largely consistent pathways. Detailed investigations of the relatively small differences showed that pathways identified by super-delta have better connections to breast cancer than those of other methods.
As a new pipeline, super-delta provides new insights into the area of differential gene expression analysis. A solid theoretical foundation supports its asymptotic unbiasedness and technical noise-free properties. Implementation on real and simulated datasets demonstrates its decent performance compared with state-of-the-art procedures. It also has the potential to be expanded to other data types and/or more general between-group comparison problems.
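The key idea motivating the robust normalization step is that per-sample centering should not be dragged around by DEGs and outliers. A trimmed mean is one simple way to achieve that; note this is only an illustration of the robustness issue, not the authors' actual super-delta procedure.

```python
# Why robust per-sample normalization matters: a plain mean is pulled by a
# single outlying (e.g. strongly differentially expressed) gene, while a
# trimmed mean is not. Illustration only; super-delta's procedure differs.
def trimmed_mean(values, trim=0.25):
    """Mean after dropping the lowest and highest `trim` fraction of values."""
    s = sorted(values)
    k = int(len(s) * trim)
    kept = s[k:len(s) - k] if k > 0 else s
    return sum(kept) / len(kept)

sample = [1.0, 2.0, 3.0, 100.0]      # expression values; one outlying gene
naive = sum(sample) / len(sample)    # dragged upward by the outlier
robust = trimmed_mean(sample, 0.25)  # close to the bulk of the data
```

Subtracting the naive center would bias every other gene in the sample downward, which is precisely the normalization-induced bias the pipeline is designed to avoid.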
Yang, Yi; Tokita, Midori; Ishiguchi, Akira
2018-01-01
A number of studies revealed that our visual system can extract different types of summary statistics, such as the mean and variance, from sets of items. Although the extraction of such summary statistics has been studied well in isolation, the relationship between these statistics remains unclear. In this study, we explored this issue using an individual differences approach. Observers viewed illustrations of strawberries and lollypops varying in size or orientation and performed four tasks in a within-subject design, namely mean and variance discrimination tasks with size and orientation domains. We found that the performances in the mean and variance discrimination tasks were not correlated with each other and demonstrated that extractions of the mean and variance are mediated by different representation mechanisms. In addition, we tested the relationship between performances in size and orientation domains for each summary statistic (i.e. mean and variance) and examined whether each summary statistic has distinct processes across perceptual domains. The results illustrated that statistical summary representations of size and orientation may share a common mechanism for representing the mean and possibly for representing variance. Introspections for each observer performing the tasks were also examined and discussed.
Detection of gene-environment interaction in pedigree data using genome-wide genotypes.
Nivard, Michel G; Middeldorp, Christel M; Lubke, Gitta; Hottenga, Jouke-Jan; Abdellaoui, Abdel; Boomsma, Dorret I; Dolan, Conor V
2016-12-01
Heritability may be estimated using phenotypic data collected in relatives or in distantly related individuals using genome-wide single nucleotide polymorphism (SNP) data. We combined these approaches by re-parameterizing the model proposed by Zaitlen et al and extended this model to include moderation of (total and SNP-based) genetic and environmental variance components by a measured moderator. By means of data simulation, we demonstrated that the type 1 error rates of the proposed test are correct and parameter estimates are accurate. As an application, we considered the moderation by age or year of birth of variance components associated with body mass index (BMI), height, attention problems (AP), and symptoms of anxiety and depression. The genetic variance of BMI was found to increase with age, but the environmental variance displayed a greater increase with age, resulting in a proportional decrease of the heritability of BMI. Environmental variance of height increased with year of birth. The environmental variance of AP increased with age. These results illustrate the assessment of moderation of environmental and genetic effects, when estimating heritability from combined SNP and family data. The assessment of moderation of genetic and environmental variance will enhance our understanding of the genetic architecture of complex traits.
Association of Psoriasis With the Risk for Type 2 Diabetes Mellitus and Obesity.
Lønnberg, Ann Sophie; Skov, Lone; Skytthe, Axel; Kyvik, Kirsten Ohm; Pedersen, Ole Birger; Thomsen, Simon Francis
2016-07-01
Psoriasis has been shown to be associated with overweight and type 2 diabetes mellitus. The genetic association is unclear. To examine the association among psoriasis, type 2 diabetes mellitus, and body mass index (BMI) (calculated as weight in kilograms divided by height in meters squared) in twins. This cross-sectional, population-based twin study included 34 781 Danish twins, 20 to 71 years of age. Data from a questionnaire on psoriasis was validated against hospital discharge diagnoses of psoriasis and compared with hospital discharge diagnoses of type 2 diabetes mellitus and self-reported BMI. Data were collected in the spring of 2002. Data were analyzed from January 1 to October 31, 2014. Crude and adjusted odds ratios (ORs) were calculated for psoriasis in relation to type 2 diabetes mellitus, increasing BMI, and obesity in the whole population of twins and in 449 psoriasis-discordant twins. Variance component analysis was used to measure genetic and nongenetic effects on the associations. Among the 34 781 questionnaire respondents, 33 588 with complete data were included in the study (15 443 men [46.0%]; 18 145 women [54.0%]; mean [SD] age, 44.5 [7.6] years). After multivariable adjustment, a significant association was found between psoriasis and type 2 diabetes mellitus (odds ratio [OR], 1.53; 95% CI, 1.03-2.27; P = .04) and between psoriasis and increasing BMI (OR, 1.81; 95% CI, 1.28-2.55; P = .001 in individuals with a BMI>35.0). Among psoriasis-discordant twin pairs, the association between psoriasis and obesity was diluted in monozygotic twins (OR, 1.43; 95% CI, 0.50-4.07; P = .50) relative to dizygotic twins (OR, 2.13; 95% CI, 1.03-4.39; P = .04). Variance decomposition showed that additive genetic factors accounted for 68% (95% CI, 60%-75%) of the variance in the susceptibility to psoriasis, for 73% (95% CI, 58%-83%) of the variance in susceptibility to type 2 diabetes mellitus, and for 74% (95% CI, 72%-76%) of the variance in BMI. 
The genetic correlation between psoriasis and type 2 diabetes mellitus was 0.13 (-0.06 to 0.31; P = .17); between psoriasis and BMI, 0.12 (0.08 to 0.19; P < .001). The environmental correlation between psoriasis and type 2 diabetes mellitus was 0.10 (-0.71 to 0.17; P = .63); between psoriasis and BMI, -0.05 (-0.14 to 0.04; P = .44). This study determines the contribution of genetic and environmental factors to the interaction between obesity, type 2 diabetes mellitus, and psoriasis. Psoriasis, type 2 diabetes mellitus, and obesity are also strongly associated in adults after taking key confounding factors, such as sex, age, and smoking, into account. Results indicate a common genetic etiology for psoriasis and obesity.
2009-04-01
[Text fragments] ... triplicate and results were averaged. MS Detection: a master mix was created consisting of 9 parts matrix solution (alpha-cyano-4-hydroxycinnamic acid ... thus, do not inhibit the catalytic activity. Another feature of BoNT/A is that it exhibits genetic and amino acid variance within the toxin type, or ... less amino acid variance [23], and this variance has been reported to affect binding of the toxin to anti-BoNT/A mAbs [24]. For these reasons, it is
A flexible model for the mean and variance functions, with application to medical cost data.
Chen, Jinsong; Liu, Lei; Zhang, Daowen; Shih, Ya-Chen T
2013-10-30
Medical cost data are often skewed to the right and heteroscedastic, having a nonlinear relation with covariates. To tackle these issues, we consider an extension to generalized linear models by assuming nonlinear associations of covariates in the mean function and allowing the variance to be an unknown but smooth function of the mean. We make no further assumption on the distributional form. The unknown functions are described by penalized splines, and the estimation is carried out using nonparametric quasi-likelihood. Simulation studies show the flexibility and advantages of our approach. We apply the model to the annual medical costs of heart failure patients in the clinical data repository at the University of Virginia Hospital System. Copyright © 2013 John Wiley & Sons, Ltd.
Robust analysis of semiparametric renewal process models
Lin, Feng-Chang; Truong, Young K.; Fine, Jason P.
2013-01-01
A rate model is proposed for a modulated renewal process comprising a single long sequence, where the covariate process may not capture the dependencies in the sequence as in standard intensity models. We consider partial likelihood-based inferences under a semiparametric multiplicative rate model, which has been widely studied in the context of independent and identical data. Under an intensity model, gap times in a single long sequence may be used naively in the partial likelihood, with variance estimation utilizing the observed information matrix. Under a rate model, the gap times cannot be treated as independent, and studying the partial likelihood is much more challenging. We employ a mixing condition in the application of limit theory for stationary sequences to obtain consistency and asymptotic normality. The estimator's variance is quite complicated owing to the unknown dependence structure of the gap times. We adapt block bootstrapping and cluster variance estimators to the partial likelihood. Simulation studies and an analysis of a semiparametric extension of a popular model for neural spike train data demonstrate the practical utility of the rate approach in comparison with the intensity approach. PMID:24550568
Didarloo, A R; Shojaeizadeh, D; Gharaaghaji Asl, R; Habibzadeh, H; Niknami, Sh; Pourali, R
2012-02-01
Continuous performance of diabetes self-care behaviors has been shown to be an effective strategy to control diabetes and to prevent or reduce its related complications. This study aimed to investigate predictors of self-care behavior based on the theory of reasoned action extended by self-efficacy (ETRA) among women with type 2 diabetes in Iran. A sample of 352 women with type 2 diabetes, referring to a diabetes clinic in Khoy, Iran, was enrolled using nonprobability sampling. Appropriate instruments were designed to measure the variables of interest (diabetes knowledge, personal beliefs, subjective norm, self-efficacy, and behavioral intention, along with self-care behaviors). Reliability and validity of the instruments were established using Cronbach's alpha coefficients (all values greater than 0.70) and a panel of experts. Statistically significant correlations existed between the independent constructs of the proposed model and the model-related dependent constructs; the ETRA model, along with its related external factors, explained 41.5% of the variance in intentions and 25.3% of the variance in actual behavior. Among the model constructs, self-efficacy was the strongest predictor of intentions among women with type 2 diabetes; it alone explained 31.3% of the variance in intentions and 11.4% of the variance in self-care behavior. The high ability of the extended theory of reasoned action with self-efficacy to forecast and explain diabetes self-management can serve as a basis for educational intervention. Thus, to improve diabetes self-management behavior and to control the disease, educational interventions based on the proposed model are suggested.
Mota, L F M; Martins, P G M A; Littiere, T O; Abreu, L R A; Silva, M A; Bonafé, C M
2018-04-01
The objective was to estimate (co)variance functions using random regression models (RRM) with Legendre polynomials and B-spline functions, as well as multi-trait models, aimed at evaluating genetic parameters of growth traits in meat-type quail. A database containing the complete pedigree information of 7000 meat-type quail was utilized. The models included the fixed effects of contemporary group and generation. Direct additive genetic and permanent environmental effects, considered random, were modeled using B-spline functions with quadratic and cubic polynomials for each individual segment, and using Legendre polynomials for age. Residual variances were grouped into four age classes. Direct additive genetic and permanent environmental effects were modeled with 2 to 4 B-spline segments and with Legendre polynomials of orders of fit ranging from 2 to 4. The model with quadratic B-spline adjustment, using four segments for direct additive genetic and permanent environmental effects, was the most appropriate and parsimonious for describing the covariance structure of the data. The RRM using Legendre polynomials underestimated the residual variance. Lower heritability estimates were observed for multi-trait models than for RRM at the evaluated ages. In general, the genetic correlations between measures of BW from hatching to 35 days of age decreased as the interval between the evaluated ages increased. The genetic trend for BW was positive and significant across the selection generations. The genetic response to selection for BW at the evaluated ages was greater for RRM than for multi-trait models. In summary, RRM using B-spline functions with four residual variance classes and four segments provided the best fit for genetic evaluation of growth traits in meat-type quail. In conclusion, RRM should be considered in the genetic evaluation of breeding programs.
ERIC Educational Resources Information Center
Neel, John H.; Stallings, William M.
An influential statistics text recommends Levene's test for homogeneity of variance. A recent note suggests that Levene's test is upwardly biased for small samples. Another report shows inflated alpha estimates and low power. Neither study utilized more than two sample sizes. This Monte Carlo study involved sampling from a normal population for…
The microcomputer scientific software series 3: general linear model--analysis of variance.
Harold M. Rauscher
1985-01-01
A BASIC language set of programs, designed for use on microcomputers, is presented. This set of programs will perform the analysis of variance for any statistical model describing either balanced or unbalanced designs. The program computes and displays the degrees of freedom, Type I sum of squares, and the mean square for the overall model, the error, and each factor...
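The quantities such an analysis-of-variance program reports (degrees of freedom, sums of squares, mean squares) reduce to a short computation in the balanced one-way case. This sketch is a generic one-way ANOVA, not a port of the BASIC programs themselves; the data are illustrative.

```python
# Balanced one-way analysis of variance: between/within sums of squares,
# degrees of freedom, and the F ratio (generic textbook computation).
def one_way_anova(groups):
    n_total = sum(len(g) for g in groups)
    grand = sum(sum(g) for g in groups) / n_total
    ss_between = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g)
                    for g in groups)
    df_b, df_w = len(groups) - 1, n_total - len(groups)
    ms_b, ms_w = ss_between / df_b, ss_within / df_w
    return ss_between, ss_within, df_b, df_w, ms_b / ms_w

ssb, ssw, dfb, dfw, F = one_way_anova([[1, 2, 3], [2, 3, 4], [6, 7, 8]])
# ssb = 42.0, ssw = 6.0, dfb = 2, dfw = 6, F = 21.0
```

For unbalanced designs the general-linear-model formulation the abstract describes (Type I sequential sums of squares) replaces these closed-form group sums with successive model comparisons.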
Dimensions of patient satisfaction with comprehensive abortion care in Addis Ababa, Ethiopia.
Mossie Chekol, Bekalu; Abera Abdi, Dame; Andualem Adal, Tamirie
2016-12-07
Patient satisfaction is a measure of the extent to which a patient is content with the health care received from health care providers. It has been recognized as one of the most vital indicators of quality. Hence, it has been studied and measured extensively as part of service quality and as a standalone construct. In spite of this, there have been few or no studies in Ethiopia describing which factors of abortion care contribute to women's satisfaction. This study aimed at identifying the underlying factors that contribute to patient satisfaction with comprehensive abortion care and at exploring relationships between total satisfaction scores and socio-demographic and care-related variables in Addis Ababa, Ethiopia. At the beginning of the study, in-depth interviews with 16 participants and a focus group discussion with 8 participants were conducted consecutively at the time of discharge to generate questions used to evaluate women's satisfaction with abortion care. Following generation of the perceived indicators, expert review, a pilot study, and item analysis were performed to produce a reduced and improved set of 26 items used to measure satisfaction with abortion care. A total sample of 450 participants from eight health facilities completed the survey. Principal component exploratory factor analysis and confirmatory factor analysis were conducted, respectively, to identify and confirm the factors of abortion care contributing to women's satisfaction. Mean satisfaction scores were compared across socio-demographic and care-related variables such as age, educational level, gestational age (first and second trimester), and facility type using analysis of variance. Exploratory factor analysis of the 26 items indicated that satisfaction with abortion care consisted of five main components accounting for 60.48% of the variance in total satisfaction scores. Factor loadings of all items were found to be greater than 0.4.
These factors are named as follows: "art of care," meaning the interpersonal relationship with the care provider; "physical environment," meaning the perceived quality of the physical surroundings in which care is delivered, including cleanliness of facilities and equipment; "information," meaning the information received about abortion procedures; "privacy and confidentiality"; and "quality of care," referring to the technical quality of the care provider. Furthermore, analysis of variance showed that overall satisfaction is related to facility type, relationship status, gestational age, and procedure type. The findings provide support that women's satisfaction with comprehensive abortion care has five major factors. Therefore, to improve the overall quality of comprehensive abortion care, attention should be given to the advancement of these components, namely positive interpersonal communication with the care receiver, pleasantness of the physical environment, offering enough information related to the procedure, securing clients' privacy during counseling and treatment, and the technical quality of the providers.
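The "proportion of variance accounted for" in such factor analyses comes from the eigenvalues of the item correlation matrix. As a minimal illustration (the 3x3 correlation matrix below is made up, not the study's 26-item matrix):

```python
import numpy as np

# Proportion of variance explained by principal components of a correlation
# matrix -- the quantity behind "five components accounted for 60.48% of the
# variance". The matrix here is a small illustrative example.
R = np.array([[1.0, 0.8, 0.8],
              [0.8, 1.0, 0.8],
              [0.8, 0.8, 1.0]])

eigvals = np.sort(np.linalg.eigvalsh(R))[::-1]  # largest first
explained = eigvals / eigvals.sum()              # proportion per component
```

Here a single dominant component explains about 87% of the variance; in the study, five components together reached 60.48%, after which loadings above 0.4 were used to assign items to factors.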
Mcalister, Courtney; Schmitter-Edgecombe, Maureen; Lamb, Richard
2016-01-01
The objective of this meta-analysis was to improve understanding of the heterogeneity in the relationship between cognition and functional status in individuals with mild cognitive impairment (MCI). Demographic, clinical, and methodological moderators were examined. Cognition explained an average of 23% of the variance in functional outcomes. Executive function measures explained the largest amount of variance (37%), whereas global cognitive status and processing speed measures explained the least (20%). Short- and long-delayed memory measures accounted for more variance (35% and 31%) than immediate memory measures (18%), and the relationship between cognition and functional outcomes was stronger when assessed with informant-report (28%) compared with self-report (21%). Demographics, sample characteristics, and type of everyday functioning measures (i.e., questionnaire, performance-based) explained relatively little variance compared with cognition. Executive functioning, particularly measured by Trails B, was a strong predictor of everyday functioning in individuals with MCI. A large proportion of variance remained unexplained by cognition. PMID:26743326
Structural changes and out-of-sample prediction of realized range-based variance in the stock market
NASA Astrophysics Data System (ADS)
Gong, Xu; Lin, Boqiang
2018-03-01
This paper aims to examine the effects of structural changes on forecasting the realized range-based variance in the stock market. Considering structural changes in variance in the stock market, we develop the HAR-RRV-SC model on the basis of the HAR-RRV model. Subsequently, the HAR-RRV and HAR-RRV-SC models are used to forecast the realized range-based variance of the S&P 500 Index. We find that there are many structural changes in variance in the U.S. stock market, and the period after the financial crisis contains more structural change points than the period before it. The out-of-sample results show that the HAR-RRV-SC model significantly outperforms the HAR-RRV model when the two are employed to forecast the 1-day, 1-week, and 1-month realized range-based variances, which means that structural changes can improve out-of-sample prediction of realized range-based variance. The out-of-sample results remain robust across the alternative rolling fixed window, the alternative threshold value in the ICSS algorithm, and the alternative benchmark models. More importantly, we believe that considering structural changes can help improve the out-of-sample performance of most other existing HAR-RRV-type models beyond those used in this paper.
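The HAR backbone these models share is an OLS regression of the next period's realized variance on its daily lag and on weekly (5-day) and monthly (22-day) averages. The sketch below uses a synthetic series with known coefficients so the fit can be checked; the HAR-RRV-SC extension would add structural-change terms on top of this, which are not shown.

```python
import numpy as np

# Minimal HAR-type regression: build the [1, daily, weekly, monthly] design
# and fit by OLS. Series and coefficients are synthetic, for illustration.
def har_design(rv):
    """Design matrix of daily/weekly/monthly lags for t >= 22."""
    rows = [[1.0, rv[t - 1], rv[t - 5:t].mean(), rv[t - 22:t].mean()]
            for t in range(22, len(rv))]
    return np.array(rows)

rng = np.random.default_rng(1)
rv = rng.uniform(0.5, 1.5, 300)        # stand-in realized-variance series
X = har_design(rv)
true_beta = np.array([0.1, 0.4, 0.3, 0.2])
y = X @ true_beta                       # noise-free target for the check
beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
```

With a noise-free target the regression recovers the coefficients exactly, which makes the sketch easy to verify; real realized-variance series add noise, persistence, and the structural breaks the paper models.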
Yang, Yi; Tokita, Midori; Ishiguchi, Akira
2018-01-01
A number of studies revealed that our visual system can extract different types of summary statistics, such as the mean and variance, from sets of items. Although the extraction of such summary statistics has been studied well in isolation, the relationship between these statistics remains unclear. In this study, we explored this issue using an individual differences approach. Observers viewed illustrations of strawberries and lollypops varying in size or orientation and performed four tasks in a within-subject design, namely mean and variance discrimination tasks with size and orientation domains. We found that the performances in the mean and variance discrimination tasks were not correlated with each other and demonstrated that extractions of the mean and variance are mediated by different representation mechanisms. In addition, we tested the relationship between performances in size and orientation domains for each summary statistic (i.e. mean and variance) and examined whether each summary statistic has distinct processes across perceptual domains. The results illustrated that statistical summary representations of size and orientation may share a common mechanism for representing the mean and possibly for representing variance. Introspections for each observer performing the tasks were also examined and discussed. PMID:29399318
An internal pilot design for prospective cancer screening trials with unknown disease prevalence.
Brinton, John T; Ringham, Brandy M; Glueck, Deborah H
2015-10-13
For studies that compare the diagnostic accuracy of two screening tests, the sample size depends on the prevalence of disease in the study population, and on the variance of the outcome. Both parameters may be unknown during the design stage, which makes finding an accurate sample size difficult. To solve this problem, we propose adapting an internal pilot design. In this adapted design, researchers will accrue some percentage of the planned sample size, then estimate both the disease prevalence and the variances of the screening tests. The updated estimates of the disease prevalence and variance are used to conduct a more accurate power and sample size calculation. We demonstrate that in large samples, the adapted internal pilot design produces no Type I inflation. For small samples (N less than 50), we introduce a novel adjustment of the critical value to control the Type I error rate. We apply the method to two proposed prospective cancer screening studies: 1) a small oral cancer screening study in individuals with Fanconi anemia and 2) a large oral cancer screening trial. Conducting an internal pilot study without adjusting the critical value can cause Type I error rate inflation in small samples, but not in large samples. An internal pilot approach usually achieves goal power and, for most studies with sample size greater than 50, requires no Type I error correction. Further, we have provided a flexible and accurate approach to bound Type I error below a goal level for studies with small sample size.
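The internal pilot idea described here (accrue part of the planned sample, re-estimate the nuisance parameters, then recompute the sample size) can be illustrated with a standard two-proportion sample-size formula. This is a hedged sketch: the function and the example sensitivities are hypothetical and do not reproduce the authors' calculation or their small-sample critical-value adjustment.

```python
import math

def required_n(p1, p2):
    """Per-group n for comparing two proportions (normal approximation),
    two-sided alpha = 0.05, power = 0.80."""
    z_a = 1.959964   # z for two-sided 5% Type I error
    z_b = 0.841621   # z for 80% power
    num = (z_a + z_b) ** 2 * (p1 * (1 - p1) + p2 * (1 - p2))
    return math.ceil(num / (p1 - p2) ** 2)

# Internal pilot: the design-stage guess uses assumed test sensitivities;
# after partial accrual, the nuisance parameters are re-estimated and the
# target sample size is updated.
planned = required_n(0.70, 0.85)   # design-stage assumption
interim = required_n(0.68, 0.82)   # re-estimated from (hypothetical) pilot data
```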
Johnson, Jacqueline L; Kreidler, Sarah M; Catellier, Diane J; Murray, David M; Muller, Keith E; Glueck, Deborah H
2015-11-30
We used theoretical and simulation-based approaches to study Type I error rates for one-stage and two-stage analytic methods for cluster-randomized designs. The one-stage approach uses the observed data as outcomes and accounts for within-cluster correlation using a general linear mixed model. The two-stage model uses the cluster specific means as the outcomes in a general linear univariate model. We demonstrate analytically that both one-stage and two-stage models achieve exact Type I error rates when cluster sizes are equal. With unbalanced data, an exact size α test does not exist, and Type I error inflation may occur. Via simulation, we compare the Type I error rates for four one-stage and six two-stage hypothesis testing approaches for unbalanced data. With unbalanced data, the two-stage model, weighted by the inverse of the estimated theoretical variance of the cluster means, and with variance constrained to be positive, provided the best Type I error control for studies having at least six clusters per arm. The one-stage model with Kenward-Roger degrees of freedom and unconstrained variance performed well for studies having at least 14 clusters per arm. The popular analytic method of using a one-stage model with denominator degrees of freedom appropriate for balanced data performed poorly for small sample sizes and low intracluster correlation. Because small sample sizes and low intracluster correlation are common features of cluster-randomized trials, the Kenward-Roger method is the preferred one-stage approach. Copyright © 2015 John Wiley & Sons, Ltd.
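A minimal sketch of the two-stage idea (cluster means as outcomes, weighted by the inverse of the theoretical variance of each mean) might look as follows. Here the variance components are assumed known for brevity, whereas the paper estimates them and constrains the estimate to be positive; the cluster sizes and parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
sizes = [8, 12, 5, 20, 9, 15]        # unbalanced cluster sizes
sigma_b2, sigma_w2 = 0.5, 2.0        # between- and within-cluster variances (assumed known here)
clusters = [rng.normal(0, np.sqrt(sigma_b2)) + rng.normal(0, np.sqrt(sigma_w2), n)
            for n in sizes]

# Stage 1: collapse each cluster to its mean.
means = np.array([c.mean() for c in clusters])

# Stage 2: weight each mean by the inverse of its theoretical variance,
# Var(ybar_j) = sigma_b^2 + sigma_w^2 / n_j, so larger clusters get more weight.
weights = 1.0 / (sigma_b2 + sigma_w2 / np.array(sizes))
pooled = np.sum(weights * means) / np.sum(weights)   # weighted overall estimate
```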
Polarization in Raman spectroscopy helps explain bone brittleness in genetic mouse models
NASA Astrophysics Data System (ADS)
Makowski, Alexander J.; Pence, Isaac J.; Uppuganti, Sasidhar; Zein-Sabatto, Ahbid; Huszagh, Meredith C.; Mahadevan-Jansen, Anita; Nyman, Jeffry S.
2014-11-01
Raman spectroscopy (RS) has been extensively used to characterize bone composition. However, the link between bone biomechanics and RS measures is not well established. Here, we leveraged the sensitivity of RS polarization to organization, thereby assessing whether RS can explain differences in bone toughness in genetic mouse models for which traditional RS peak ratios are not informative. In the selected mutant mice, activating transcription factor 4 (ATF4) or matrix metalloproteinase 9 (MMP9) knock-outs, toughness is reduced but differences in bone strength do not exist between knock-out and corresponding wild-type controls. To incorporate differences in the RS of bone occurring at peak shoulders, a multivariate approach was used. Full spectrum principal components analysis of two paired, orthogonal bone orientations (relative to laser polarization) improved genotype classification and correlation to bone toughness when compared to traditional peak ratios. When applied to femurs from wild-type mice at 8 and 20 weeks of age, the principal components of orthogonal bone orientations improved age classification but not the explanation of the maturation-related increase in strength. Overall, increasing polarization information by collecting spectra from two bone orientations improves the ability of multivariate RS to explain variance in bone toughness, likely due to polarization sensitivity to organizational changes in both mineral and collagen.
Operational Consequences of Literacy Gap.
1980-05-01
Table-of-contents excerpt: Comprehension Scores on the Safety and Sanitation Content; Statistics on Experimental Groups' Performance by Sex and Content; Analysis of Variance of Experimental Groups by Sex and Content; Mean Comprehension Scores Broken Down by Content, Subject RGL and Reading Time. ...ratings along a scale of difficulty which parallels the school grade scale. Burkett (1975) and Klare (1963; 1974-1975) provide summaries of the extensive...
ERIC Educational Resources Information Center
Nyroos, Mikaela; Korhonen, Johan; Peng, Aihui; Linnanmäki, Karin; Svens-Liavåg, Camilla; Bagger, Anette; Sjöberg, Gunnar
2015-01-01
While test anxiety has been studied extensively, little consideration has been given to the cultural impacts of children's experiences and expressions of test anxiety. The aim of this work was to examine whether variance in test anxiety scores can be predicted based on gender and cultural setting. Three hundred and ninety-eight pupils in Grade 3…
Extensions of output variance constrained controllers to hard constraints
NASA Technical Reports Server (NTRS)
Skelton, R.; Zhu, G.
1989-01-01
Covariance Controllers assign specified matrix values to the state covariance. A number of robustness results are directly related to the covariance matrix. The conservatism in known upperbounds on the H infinity, L infinity, and L (sub 2) norms for stability and disturbance robustness of linear uncertain systems using covariance controllers is illustrated with examples. These results are illustrated for continuous and discrete time systems.
Interactive effects of cumulative stress and impulsivity on alcohol consumption.
Fox, Helen C; Bergquist, Keri L; Peihua, Gu; Rajita, Sinha
2010-08-01
Alcohol addiction may reflect adaptations to stress, reward, and regulatory brain systems. While extensive research has identified both stress and impulsivity as independent risk factors for drinking, few studies have assessed the interactive relationship between stress and impulsivity in terms of hazardous drinking within a community sample of regular drinkers. One hundred and thirty regular drinkers (56M/74F) from the local community were assessed for hazardous and harmful patterns of alcohol consumption using the Alcohol Use Disorders Identification Test (AUDIT). All participants were also administered the Barratt Impulsiveness Scale (BIS-11) as a measure of trait impulsivity and the Cumulative Stress/Adversity Checklist (CSC) as a comprehensive measure of cumulative adverse life events. Standard multiple regression models were used to ascertain the independent and interactive nature of both overall stress and impulsivity as well as specific types of stress and impulsivity on hazardous and harmful drinking. Recent life stress, cumulative traumatic stress, overall impulsivity, and nonplanning-related impulsivity as well as cognitive and motor-related impulsivity were all independently predictive of AUDIT scores. However, the interaction between cumulative stress and total impulsivity scores accounted for a significant amount of the variance, indicating that a high to moderate number of adverse events and a high trait impulsivity rating interacted to predict higher AUDIT scores. The subscale of cumulative life trauma accounted for the most variance in AUDIT scores among the stress and impulsivity subscales. Findings highlight the interactive relationship between stress and impulsivity with regard to hazardous drinking. The specific importance of cumulative traumatic stress as a marker for problem drinking is also discussed.
Herrera, Carlos M; Alonso, Conchita; Medrano, Mónica; Pérez, Ricardo; Bazaga, Pilar
2018-04-01
The ecological and evolutionary significance of natural epigenetic variation (i.e., variation not based on DNA sequence variants) will depend critically on whether epigenetic states are transmitted from parents to offspring, but little is known about epigenetic inheritance in nonmodel plants. We present a quantitative analysis of transgenerational transmission of global DNA cytosine methylation (= proportion of all genomic cytosines that are methylated) and individual epigenetic markers (= methylation status of anonymous MSAP markers) in the shrub Lavandula latifolia. Methods based on parent-offspring correlations and parental variance component estimation were applied to epigenetic features of field-growing plants ('maternal parents') and greenhouse-grown progenies. Transmission of genetic markers (AFLP) was also assessed for reference. Maternal parents differed significantly in global DNA cytosine methylation (range = 21.7-36.7%). Greenhouse-grown maternal families differed significantly in global methylation, and their differences were significantly related to maternal origin. Methylation-sensitive amplified polymorphism (MSAP) markers exhibited significant transgenerational transmission, as denoted by significant maternal variance component of marker scores in greenhouse families and significant mother-offspring correlations of marker scores. Although transmission-related measurements for global methylation and MSAP markers were quantitatively lower than those for AFLP markers taken as reference, this study has revealed extensive transgenerational transmission of genome-wide global cytosine methylation and anonymous epigenetic markers in L. latifolia. Similarity of results for global cytosine methylation and epigenetic markers lends robustness to this conclusion, and stresses the value of considering both types of information in epigenetic studies of nonmodel plants. © 2018 Botanical Society of America.
A powerful and flexible approach to the analysis of RNA sequence count data.
Zhou, Yi-Hui; Xia, Kai; Wright, Fred A
2011-10-01
A number of penalization and shrinkage approaches have been proposed for the analysis of microarray gene expression data. Similar techniques are now routinely applied to RNA sequence transcriptional count data, although the value of such shrinkage has not been conclusively established. If penalization is desired, the explicit modeling of mean-variance relationships provides a flexible testing regimen that 'borrows' information across genes, while easily incorporating design effects and additional covariates. We describe BBSeq, which incorporates two approaches: (i) a simple beta-binomial generalized linear model, which has not been extensively tested for RNA-Seq data and (ii) an extension of an expression mean-variance modeling approach to RNA-Seq data, involving modeling of the overdispersion as a function of the mean. Our approaches are flexible, allowing for general handling of discrete experimental factors and continuous covariates. We report comparisons with other alternate methods to handle RNA-Seq data. Although penalized methods have advantages for very small sample sizes, the beta-binomial generalized linear model, combined with simple outlier detection and testing approaches, appears to have favorable characteristics in power and flexibility. An R package containing examples and sample datasets is available at http://www.bios.unc.edu/research/genomic_software/BBSeq. Contact: yzhou@bios.unc.edu; fwright@bios.unc.edu. Supplementary data are available at Bioinformatics online.
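The mean-variance modelling idea, estimating a per-gene overdispersion and then smoothing it as a function of the mean so that information is "borrowed" across genes, can be sketched as follows. This is not the BBSeq implementation: the negative-binomial simulation and the simple log-log fit are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
n_genes, n_samples = 200, 6
mu = rng.lognormal(3, 1, n_genes)            # per-gene mean counts
phi_true = 0.1 + 5.0 / mu                    # overdispersion shrinking with the mean
# Negative binomial with mean mu and variance mu + phi*mu^2:
# n = 1/phi, p = 1/(1 + phi*mu).
counts = rng.negative_binomial(1 / phi_true[:, None],
                               1 / (1 + phi_true[:, None] * mu[:, None]),
                               (n_genes, n_samples))

# Per-gene moment estimates of mean and dispersion, then a smooth fit of
# dispersion as a function of the mean across all genes.
m = counts.mean(axis=1)
v = counts.var(axis=1, ddof=1)
phi_hat = np.maximum(v - m, 0) / np.maximum(m, 1e-9) ** 2
keep = m > 0
coef = np.polyfit(np.log(m[keep] + 1), np.log(phi_hat[keep] + 1e-3), deg=1)
```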
Gallina, Alessio; Garland, S Jayne; Wakeling, James M
2018-05-22
In this study, we investigated whether principal component analysis (PCA) and non-negative matrix factorization (NMF) perform similarly for the identification of regional activation within the human vastus medialis. EMG signals from 64 locations over the VM were collected from twelve participants while performing a low-force isometric knee extension. The envelope of the EMG signal of each channel was calculated by low-pass filtering (8 Hz) the monopolar EMG signal after rectification. The data matrix was factorized using PCA and NMF, and up to 5 factors were considered for each algorithm. Association between explained variance, spatial weights and temporal scores between the two algorithms were compared using Pearson correlation. For both PCA and NMF, a single factor explained approximately 70% of the variance of the signal, while two and three factors explained just over 85% or 90%. The variance explained by PCA and NMF was highly comparable (R > 0.99). Spatial weights and temporal scores extracted with non-negative reconstruction of PCA and NMF were highly associated (all p < 0.001, mean R > 0.97). Regional VM activation can be identified using high-density surface EMG and factorization algorithms. Regional activation explains up to 30% of the variance of the signal, as identified through both PCA and NMF. Copyright © 2018 Elsevier Ltd. All rights reserved.
Hühn, M; Lotito, S; Piepho, H P
1993-09-01
Multilocation trials in plant breeding lead to cross-classified data sets with rows = genotypes and columns = environments, where the breeder is particularly interested in the rank orders of the genotypes in the different environments. Non-identical rank orders are the result of genotype x environment interactions. Not every interaction, however, causes rank changes among the genotypes (rank-interaction). From a breeder's point of view, interaction is tolerable only as long as it does not affect the rank orders. Therefore, the question arises of under which circumstances interaction becomes rank-interaction. This paper contributes to our understanding of this topic. In our study we emphasized the detection of relationships between the similarity of the rank orders (measured by Kendall's coefficient of concordance W) and functions of the diverse variance components (genotypes, environments, interaction, error). On the basis of extensive data sets on different agricultural crops (faba bean, fodder beet, sugar beet, oats, winter rape) obtained from registration trials (1985-1989) carried out in the Federal Republic of Germany, we obtained the following main result: W ≅ σ²(g)/(σ²(g) + σ²(v)), where σ²(g) = genotypic variance and σ²(v) = σ²(ge) + σ²(o)/L, with σ²(ge) = interaction variance, σ²(o) = error variance, and L = number of replications.
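The main result can be checked numerically by simulating a genotype-by-environment table from stated variance components and computing Kendall's W directly; the simulation settings below are illustrative, not the registration-trial data.

```python
import numpy as np

rng = np.random.default_rng(4)
G, E, L = 30, 12, 3                      # genotypes, environments, replications
sg2, sge2, so2 = 1.0, 0.3, 0.6           # genotypic, interaction, error variances
g = rng.normal(0, np.sqrt(sg2), (G, 1))
ge = rng.normal(0, np.sqrt(sge2), (G, E))
err = rng.normal(0, np.sqrt(so2), (G, E, L)).mean(axis=2)  # means over L reps
y = g + ge + err                          # genotype-by-environment means table

# Kendall's coefficient of concordance W over the E rank orders of G genotypes.
ranks = y.argsort(axis=0).argsort(axis=0) + 1
R = ranks.sum(axis=1)                     # rank sums per genotype
W = 12 * np.sum((R - R.mean()) ** 2) / (E ** 2 * (G ** 3 - G))

# The paper's approximation: W ≅ σ²(g) / (σ²(g) + σ²(ge) + σ²(o)/L).
approx = sg2 / (sg2 + sge2 + so2 / L)
```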
Branscum, Paul; Sharma, Manoj
This study examined the extent to which constructs of the theory of planned behavior (TPB) can predict the consumption of two types of snack foods among elementary school children. A 15-item instrument tested for validity and reliability measuring TPB constructs was developed and administered to 167 children. Snack foods were evaluated using a modified 24-hour recall method. On average, children consumed 302 calories from snack foods per day. Stepwise multiple regression found that attitudes, subjective norms, and perceived control accounted for 44.7% of the variance for intentions. Concurrently, intentions accounted for 11.3% of the variance for calorically-dense snack food consumption and 8.9% of the variance for fruit and vegetable snack consumption. Results suggest that the theory of planned behavior is an efficacious theory for these two behaviors. Future interventions should consider using this theoretical framework and aim to enhance children's attitudes, perceived control, and subjective norms towards snack food consumption.
Damen, Nikki L; Versteeg, Henneke; van Helmondt, Sanne J; de Jaegere, Peter P; van Geuns, Robert-Jan M; Meine, Mathias M; van Domburg, Ron T; Pedersen, Susanne S
2014-01-01
Both the distressed (Type D) personality (i.e. the combination of negative affectivity and social inhibition traits) and dysfunctional parenting styles are associated with anxiety and depression. As parenting styles have been related to personality development, dysfunctional parenting styles may also be associated with Type D personality. We examined whether remembered parenting was associated with anxiety and depression in cardiac patients and whether Type D personality mediated this relationship. Our sample comprised 435 patients treated with percutaneous coronary intervention (PCI) and 123 patients with congestive heart failure (CHF). Patients completed the Hospital Anxiety and Depression Scale, Type D Scale (DS14), and Remembered Relationship with Parents (RRP(10)) scale. Remembered parenting was significantly associated with higher anxiety and depression levels and Type D personality. In multivariable linear regression analyses, Type D personality accounted for 25-29% of the variance in anxiety and 23-46% of the variance in depression, while remembered parenting was no longer significantly associated with these domains. Sobel tests and bootstrapping indicated that Type D personality mediated the relationship between remembered parenting and anxiety and depression. Type D personality mediated the relationship between remembered parenting and anxiety and depression in both PCI and CHF patients.
Tinker, M. Tim; Estes, James A.; Staedler, Michelle; Bodkin, James L.; Tinker, M. Tim; Estes, James A.; Ralls, Katherine; Williams, Terrie M.; Jessup, David A.; Costa, Daniel P.
2006-01-01
Longitudinal foraging data collected from 60 sea otters implanted with VHF radio transmitters at two study sites in Central California over a three-year period demonstrated even greater individual dietary specialization than in previous studies, with only 54% dietary overlap between individuals and the population. Multivariate statistical analyses indicated that individual diets could be grouped into three general "diet types" representing distinct foraging specializations. Type 1 specialists consumed large size prey but had low dive efficiency, Type 2 specialists consumed small to medium size prey with high dive efficiency, and Type 3 specialists consumed very small prey (mainly snails) with very high dive efficiency. The mean rate of energy gain for the population as a whole was low when compared to other sea otter populations in Alaska but showed a high degree of within- and between-individual variation, much of which was accounted for by the three foraging strategies. Type 1 specialists had the highest mean energy gain but also the highest within-individual variance in energy gain. Type 2 specialists had the lowest mean energy gain but also the lowest variance. Type 3 specialists had an intermediate mean and variance. All three strategies resulted in very similar probabilities of exceeding a critical rate of energy gain on any given day. Correlational selection may help maintain multiple foraging strategies in the population: a fitness surface (using mean rate of energy gain as a proxy for fitness) fit to the first two principal components of foraging behavior suggested that the three foraging strategies occupy separate fitness peaks. Food limitation is likely an important ultimate factor restricting population growth in the center of the population's range in California, although the existence of alternative foraging strategies results in different impacts of food limitation on individuals and thus may obscure expected patterns of density dependence.
77 FR 56708 - Submission for OMB Review; Comment Request
Federal Register 2010, 2011, 2012, 2013, 2014
2012-09-13
...) OMB Number: 1535-0004. Type of Review: Extension without change of a currently approved collection.... OMB Number: 1535-0014. Type of Review: Extension without change of a currently approved collection... Households. Estimated Total Burden Hours: 460. OMB Number: 1535-0015. Type of Review: Extension without...
Food addiction associations with psychological distress among people with type 2 diabetes.
Raymond, Karren-Lee; Lovell, Geoff P
2016-01-01
To assess the relationship between a food addiction (FA) model and psychological distress among a type 2 diabetes (t2d) sample, we conducted a cross-sectional study in which 334 participants with t2d diagnoses were invited to complete a web-based questionnaire. We measured variables of psychological distress using the Depression Anxiety and Stress Scale (DASS-21), the Yale Food Addiction Scale (YFAS), and other factors associated with t2d. A novel finding was that people with t2d meeting the FA criterion had significantly higher depression, anxiety, and stress scores compared to participants who did not meet the FA criterion. Moreover, FA symptomology explained 35% of the unique variance in depression scores, 34% of the unique variance in anxiety scores, and 34% of the unique variance in stress scores, while surprisingly, BMI explained less than 1% of the unique variance in scores. We identified that psychological distress among people with t2d was associated with the FA model, apparently more so than with BMI, indicating that further research in this area is warranted. Moreover, the FA model may be beneficial when addressing treatment approaches for psychological distress among people with t2d. Copyright © 2016 Elsevier Inc. All rights reserved.
Quantum Theory of Three-Dimensional Superresolution Using Rotating-PSF Imagery
NASA Astrophysics Data System (ADS)
Prasad, S.; Yu, Z.
The inverse of the quantum Fisher information (QFI) matrix (and extensions thereof) provides the ultimate lower bound on the variance of any unbiased estimation of a parameter from statistical data, whether of intrinsically quantum mechanical or classical character. We calculate the QFI for Poisson-shot-noise-limited imagery using the rotating PSF that can localize and resolve point sources fully in all three dimensions. We also propose an experimental approach based on the use of a computer-generated hologram and projective measurements to realize the QFI-limited variance for the problem of super-resolving a closely spaced pair of point sources at a highly reduced photon cost. The paper presents a preliminary analysis of the quantum-limited three-dimensional (3D) pair optical super-resolution (OSR) problem with potential applications to astronomical imaging and 3D space-debris localization.
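The bound referred to in the opening sentence is the quantum Cramér-Rao bound, which in its standard textbook form (not the paper's specific calculation) reads:

```latex
\mathrm{Cov}_{\theta}\!\left(\hat{\theta}\right) \;\succeq\; \frac{1}{N}\, F_Q(\theta)^{-1},
\qquad
\left[F_Q(\theta)\right]_{jk} = \operatorname{Re}\,\operatorname{Tr}\!\left[\rho_\theta\, L_j L_k\right],
\qquad
\partial_j \rho_\theta = \tfrac{1}{2}\left(L_j \rho_\theta + \rho_\theta L_j\right),
```

where $N$ is the number of independent copies (e.g., detected photons), $\rho_\theta$ is the parameter-dependent state, and $L_j$ is the symmetric logarithmic derivative for parameter $\theta_j$.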
Dexter, Franklin; Bayman, Emine O; Dexter, Elisabeth U
2017-12-01
We examined type I and II error rates for analysis of (1) mean hospital length of stay (LOS) versus (2) percentage of hospital LOS that are overnight. These 2 end points are suitable for when LOS is treated as a secondary economic end point. We repeatedly resampled LOS for 5052 discharges of thoracoscopic wedge resections and lung lobectomy at 26 hospitals. Unequal variances t test (Welch method) and Fisher exact test both were conservative (ie, type I error rate less than nominal level). The Wilcoxon rank sum test was included as a comparator; the type I error rates did not differ from the nominal level of 0.05 or 0.01. Fisher exact test was more powerful than the unequal variances t test at detecting differences among hospitals; estimated odds ratio for obtaining P < .05 with Fisher exact test versus unequal variances t test = 1.94, with 95% confidence interval, 1.31-3.01. Fisher exact test and Wilcoxon-Mann-Whitney had comparable statistical power in terms of differentiating LOS between hospitals. For studies with LOS to be used as a secondary end point of economic interest, there is currently considerable interest in the planned analysis being for the percentage of patients suitable for ambulatory surgery (ie, hospital LOS equals 0 or 1 midnight). Our results show that there need not be a loss of statistical power when groups are compared using this binary end point, as compared with either Welch method or Wilcoxon rank sum test.
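The two end points compared in this abstract can be sketched side by side: an unequal-variances (Welch) t statistic with Satterthwaite degrees of freedom on mean LOS, and the binary "more than one midnight" indicator. The simulated Poisson LOS data are an assumption for illustration only, not the 5052 thoracic-surgery discharges.

```python
import numpy as np

rng = np.random.default_rng(5)
# Simulated hospital LOS (days) at two hospitals; counts, as LOS data are.
los_a = rng.poisson(3.0, 120)
los_b = rng.poisson(4.0, 150)

# (1) Welch t statistic on mean LOS, with Satterthwaite degrees of freedom.
va = los_a.var(ddof=1) / len(los_a)
vb = los_b.var(ddof=1) / len(los_b)
t_welch = (los_a.mean() - los_b.mean()) / np.sqrt(va + vb)
df = (va + vb) ** 2 / (va ** 2 / (len(los_a) - 1) + vb ** 2 / (len(los_b) - 1))

# (2) Binary end point: proportion of stays longer than one midnight,
# suitable for a Fisher exact test on the resulting 2x2 table.
over_a = np.mean(los_a > 1)
over_b = np.mean(los_b > 1)
```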
Variance analysis of forecasted streamflow maxima in a wet temperate climate
NASA Astrophysics Data System (ADS)
Al Aamery, Nabil; Fox, James F.; Snyder, Mark; Chandramouli, Chandra V.
2018-05-01
Coupling global climate models, hydrologic models and extreme value analysis provides a method to forecast streamflow maxima; however, the elusive variance structure of the results hinders confidence in application. Directly correcting the bias of forecasts using the relative change between forecast and control simulations has been shown to marginalize hydrologic uncertainty, reduce model bias, and remove systematic variance when predicting mean monthly and mean annual streamflow, prompting our investigation for streamflow maxima. We assess the variance structure of streamflow maxima using realizations of emission scenario, global climate model type and project phase, downscaling methods, bias correction, extreme value methods, and hydrologic model inputs and parameterization. Results show that the relative change of streamflow maxima was not dependent on systematic variance from the annual maxima versus peak over threshold method applied, although we stress that researchers must strictly adhere to rules from extreme value theory when applying the peak over threshold method. Regardless of which method is applied, extreme value model fitting does add variance to the projection, and the variance is an increasing function of the return period. Unlike the relative change of mean streamflow, results show that the variance of the maxima's relative change was dependent on all climate model factors tested as well as hydrologic model inputs and calibration. Ensemble projections forecast an increase of streamflow maxima for 2050 with pronounced forecast standard error, including an increase of +30(±21), +38(±34) and +51(±85)% for 2, 20 and 100 year streamflow events for the wet temperate region studied. The variance of maxima projections was dominated by climate model factors and extreme value analyses.
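The two extraction schemes contrasted here, annual maxima versus peak over threshold (POT), can be sketched on synthetic daily flows. Note that a real POT analysis must decluster exceedances and check threshold adequacy (for instance via the mean excess), which this toy example does not do; all values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(6)
years, days = 30, 365
flow = rng.gamma(2.0, 50.0, (years, days))   # synthetic daily streamflow

# Annual-maxima series: one block maximum per year, the input to a GEV fit.
am = flow.max(axis=1)

# Peak-over-threshold series: exceedances of a high threshold, the input to
# a generalized Pareto fit (declustering omitted in this sketch).
u = np.quantile(flow, 0.995)
pot = flow[flow > u]

mean_excess = (pot - u).mean()               # diagnostic for GPD suitability
```

The POT sample here is larger than the annual-maxima sample, which is one reason the two methods carry different fitting variance.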
The Influence of Hemispheric Dominance on Scores of the Myers-Briggs Type Indicator.
ERIC Educational Resources Information Center
Hartman, Steve E.; And Others
1997-01-01
Results for 75 medical students and 248 undergraduates suggest that the Myers-Briggs Type Indicator appears to sample only 3 bipolar personality dimensions rather than the 4 that the use of "type tables" implies. One of these dimensions shares substantial variance with the cognitive model of hemispheric dominance. (SLD)
Effects of trunk stability on isometric knee extension muscle strength measurement while sitting.
Hirano, Masahiro; Gomi, Masahiro; Katoh, Munenori
2016-09-01
[Purpose] This study aimed to investigate the effect of trunk stability on isometric knee extension muscle strength measurement while sitting by performing simultaneous measurements with a handheld dynamometer (HHD) and an isokinetic dynamometer (IKD) in the same seated condition. [Subjects and Methods] The subjects were 30 healthy volunteers. Isometric knee extension muscle strength was simultaneously measured with a HHD and an IKD by using an IKD-specific chair. The measurement was performed twice. Measurement instrument variables and the number of measurements were examined by using the analysis of variance and correlation tests. [Results] The measurement instrument variables and the number of measurements were not significantly different. The correlation coefficients between the HHD and IKD measurements were ≥0.96. [Conclusion] Isometric knee extension muscle strength measurement using the HHD in the sitting position resulted in a lower value than that using the IKD, presumably because of the effect of trunk stability on the measurement. In the same seated posture with trunk stability, no significant difference in measurement values was observed between the HHD and IKD. The present findings suggest that trunk stability while seated during isometric knee extension muscle strength measurement influenced the HHD measurement.
Modeling rainfall-runoff relationship using multivariate GARCH model
NASA Astrophysics Data System (ADS)
Modarres, R.; Ouarda, T. B. M. J.
2013-08-01
The traditional hydrologic time series approaches are used for modeling, simulating and forecasting the conditional mean of hydrologic variables but neglect their time-varying variance, or second-order moment. This paper introduces the multivariate Generalized Autoregressive Conditional Heteroscedasticity (MGARCH) modeling approach to show how the variance-covariance relationship between hydrologic variables varies in time. These approaches are also useful to estimate the dynamic conditional correlation between hydrologic variables. To illustrate the novelty and usefulness of MGARCH models in hydrology, two major types of MGARCH models, the bivariate diagonal VECH and constant conditional correlation (CCC) models, are applied to show the variance-covariance structure and dynamic correlation in a rainfall-runoff process. The bivariate diagonal VECH-GARCH(1,1) and CCC-GARCH(1,1) models indicated both short-run and long-run persistency in the conditional variance-covariance matrix of the rainfall-runoff process. The conditional variance of rainfall appears to have a stronger persistency, especially long-run persistency, than the conditional variance of streamflow, which shows a short-lived drastic increasing pattern and a stronger short-run persistency. The conditional covariance and conditional correlation coefficients have different features for each bivariate rainfall-runoff process with different degrees of stationarity and dynamic nonlinearity. The spatial and temporal pattern of variance-covariance features may reflect the signature of different physical and hydrological variables such as drainage area, topography, soil moisture and ground water fluctuations on the strength, stationarity and nonlinearity of the conditional variance-covariance for a rainfall-runoff process.
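The univariate GARCH(1,1) recursion that MGARCH models generalize to a full variance-covariance matrix can be sketched as follows; the parameter values are illustrative, not estimates from any rainfall-runoff series.

```python
import numpy as np

rng = np.random.default_rng(7)
T = 2000
omega, alpha, beta = 0.1, 0.1, 0.8     # GARCH(1,1); persistence alpha + beta = 0.9
h = np.empty(T)                        # conditional variance h_t
e = np.empty(T)                        # innovations (e.g., rainfall anomalies)
h[0] = omega / (1 - alpha - beta)      # start at the unconditional variance
e[0] = np.sqrt(h[0]) * rng.standard_normal()
for t in range(1, T):
    # Today's variance responds to yesterday's shock (alpha, short-run
    # persistence) and yesterday's variance (beta, long-run persistence).
    h[t] = omega + alpha * e[t - 1] ** 2 + beta * h[t - 1]
    e[t] = np.sqrt(h[t]) * rng.standard_normal()
```

The diagonal VECH and CCC models of the abstract apply recursions of this kind to each conditional variance and covariance (or correlation) element jointly.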
Effect of non-normality on test statistics for one-way independent groups designs.
Cribbie, Robert A; Fiksenbaum, Lisa; Keselman, H J; Wilcox, Rand R
2012-02-01
The data obtained from one-way independent groups designs are typically non-normal in form and rarely equally variable across treatment populations (i.e., population variances are heterogeneous). Consequently, the classical test statistic that is used to assess statistical significance (i.e., the analysis of variance F test) typically provides invalid results (e.g., too many Type I errors, reduced power). For this reason, there has been considerable interest in finding a test statistic that is appropriate under conditions of non-normality and variance heterogeneity. Previously recommended procedures for analysing such data include the James test, the Welch test applied to the usual least squares estimators of central tendency and variability, and the Welch test with robust estimators (i.e., trimmed means and Winsorized variances). A new statistic proposed by Krishnamoorthy, Lu, and Mathew, intended to deal with heterogeneous variances, though not non-normality, uses a parametric bootstrap procedure. In their investigation of the parametric bootstrap test, the authors examined its operating characteristics under limited conditions and did not compare it to the Welch test based on robust estimators. Thus, we investigated how the parametric bootstrap procedure and a modified parametric bootstrap procedure based on trimmed means perform relative to previously recommended procedures when data are non-normal and heterogeneous. The results indicated that the tests based on trimmed means offer the best Type I error control and power when variances are unequal and at least some of the distribution shapes are non-normal. © 2011 The British Psychological Society.
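The robust procedure recommended above can be sketched directly: replace each group mean by a trimmed mean and each variance by a Winsorized variance, then apply Welch's heteroscedastic one-way test. A minimal sketch (the small-sample constants follow the common Yuen-Welch formulation; with trim=0 it reduces to the ordinary Welch ANOVA, which the assertions below exploit by checking the two-group case against Welch's t-test):

```python
import numpy as np
from scipy import stats

def trimmed_welch_anova(groups, trim=0.2):
    """Welch-type heteroscedastic one-way test on trimmed means and
    Winsorized variances (Yuen-Welch style).  trim=0 gives the ordinary
    Welch ANOVA; trim=0.2 is the robust variant recommended for
    non-normal data with unequal variances."""
    k = len(groups)
    tmean, se2, heff = [], [], []
    for x in groups:
        x = np.sort(np.asarray(x, float))
        n = len(x)
        g = int(trim * n)
        h = n - 2 * g                         # observations left after trimming
        tmean.append(x[g:n - g].mean())       # trimmed mean
        xw = np.clip(x, x[g], x[n - g - 1])   # Winsorized sample
        se2.append((n - 1) * xw.var(ddof=1) / (h * (h - 1)))
        heff.append(h)
    tmean, se2, heff = map(np.array, (tmean, se2, heff))
    w = 1.0 / se2
    W = w.sum()
    xbar = (w * tmean).sum() / W
    lam = ((1 - w / W) ** 2 / (heff - 1)).sum()
    F = (w * (tmean - xbar) ** 2).sum() / (k - 1)
    F /= 1 + 2 * (k - 2) / (k ** 2 - 1) * lam
    df2 = (k ** 2 - 1) / (3 * lam)
    return F, stats.f.sf(F, k - 1, df2)

rng = np.random.default_rng(1)
a, b = rng.normal(0, 1, 30), rng.normal(0.8, 2, 40)
F0, p0 = trimmed_welch_anova([a, b], trim=0.0)   # = squared Welch t-test
Fr, pr = trimmed_welch_anova([a, b, rng.normal(0, 1, 25)], trim=0.2)
```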
Extension of four-dimensional atmospheric models. [and cloud cover data bank]
NASA Technical Reports Server (NTRS)
Fowler, M. G.; Lisa, A. S.; Tung, S. L.
1975-01-01
The cloud data bank, the 4-D atmospheric model, and a set of computer programs designed to simulate meteorological conditions for any location above the earth are described in terms of space vehicle design and simulation of vehicle reentry trajectories. Topics discussed include: the relationship between satellite and surface observed cloud cover using LANDSAT 1 photographs, including the effects of cloud shadows; extension of the 4-D model to an altitude of 52 km; and addition of the u and v wind components to the 4-D model of means and variances at 1 km levels from the surface to 25 km. Results of the cloud cover analysis are presented along with the stratospheric model and the tropospheric wind profiles.
Ishwaran, Hemant; Lu, Min
2018-06-04
Random forests are a popular nonparametric tree ensemble procedure with broad applications to data analysis. While its widespread popularity stems from its prediction performance, an equally important feature is that it provides a fully nonparametric measure of variable importance (VIMP). A current limitation of VIMP, however, is that no systematic method exists for estimating its variance. As a solution, we propose a subsampling approach that can be used to estimate the variance of VIMP and for constructing confidence intervals. The method is general enough that it can be applied to many useful settings, including regression, classification, and survival problems. Using extensive simulations, we demonstrate the effectiveness of the subsampling estimator and in particular find that the delete-d jackknife variance estimator, a close cousin, is especially effective under low subsampling rates due to its bias correction properties. These 2 estimators are highly competitive when compared with the .164 bootstrap estimator, a modified bootstrap procedure designed to deal with ties in out-of-sample data. Most importantly, subsampling is computationally fast, thus making it especially attractive for big data settings. Copyright © 2018 John Wiley & Sons, Ltd.
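The subsampling idea described above is generic: compute the statistic on many small subsamples drawn without replacement and rescale the spread by b/n. A minimal sketch on a toy statistic whose true variance is known (the sample mean; a random-forest VIMP would slot in as `stat`, though fitting forests is avoided here to keep the example self-contained):

```python
import numpy as np

def subsample_variance(stat, data, b, n_rep=1000, seed=0):
    """Subsampling estimator of Var[stat(data)]: compute the statistic on
    n_rep subsamples of size b drawn without replacement and rescale the
    spread by b/n (valid for root-n-consistent statistics, as the paper
    assumes for VIMP)."""
    rng = np.random.default_rng(seed)
    n = len(data)
    vals = np.array([stat(data[rng.choice(n, b, replace=False)])
                     for _ in range(n_rep)])
    return (b / n) * vals.var(ddof=1)

# toy check on a statistic with known variance: the mean of N(0, 4) data
# has Var = 4/2000 = 0.002 (the estimator is consistent up to a 1 - b/n
# finite-sample factor)
rng = np.random.default_rng(42)
x = rng.normal(0, 2, 2000)
v_hat = subsample_variance(np.mean, x, b=200)
```

The delete-d jackknife the paper favors at low subsampling rates adds a bias correction on top of this basic estimator.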
The Negative Impact of Organizational Cynicism on Physicians and Nurses
Volpe, Rebecca L.; Mohammed, Susan; Hopkins, Margaret; Shapiro, Daniel; Dellasega, Cheryl
2015-01-01
Despite the potentially severe consequences that could result, there is a paucity of research on organizational cynicism within US healthcare providers. In response, this study investigated the effect of cynicism on organizational commitment, job satisfaction, and interest in leaving the hospital for another job in a sample of 205 physicians and 842 nurses. Three types of cynicism were investigated: trait (dispositional), global (directed toward the hospital), and local (directed toward a specific unit or department). Findings indicate that all three types of cynicism were negatively related to affective organizational commitment and job satisfaction, but positively related to interest in leaving. In both nurse and physician samples, cynicism explained about half of the variance in job satisfaction and affective commitment, which is the type of commitment managers are most eager to promote. Cynicism accounted for about a quarter and a third of the variance in interest in leaving the hospital for nurses and physicians, respectively. Trait, global and local cynicism each accounted for unique variance in affective commitment, satisfaction, and interest in leaving, with global cynicism exerting the largest influence on each outcome. The implications for managers are that activities aimed at decreasing organizational cynicism are likely to increase affective organizational commitment, job satisfaction, and organizational tenure. PMID:25350015
Gaintantzopoulou, M D; El-Damanhoury, H M
The aim of the study was to evaluate the effect of preparation depth and intraradicular extension on the marginal and internal adaptation of computer-aided design/computer-assisted manufacture (CAD/CAM) endocrown restorations. Standardized preparations were made in resin endodontic tooth models (Nissin Dental), with an intracoronal preparation depth of 2 mm (group H2), or with an extra 1-mm (group H3) or 2-mm (group H4) intraradicular extension in the root canals (n=12). Vita Enamic polymer-infiltrated ceramic-network material endocrowns were fabricated using the CEREC AC CAD/CAM system and were seated on the prepared teeth. Specimens were evaluated by microtomography. Horizontal and vertical tomographic sections were recorded and reconstructed using the CTSkan software (TView v1.1, Skyscan). The surface/void volume (S/V) in the region of interest was calculated. Marginal gap (MG), absolute marginal discrepancy (MD), and internal marginal gap were measured at various locations and expressed in micrometers (μm). Marginal and internal discrepancy data (μm) were analyzed with nonparametric Kruskal-Wallis analysis of variance by ranks with Dunn's post hoc test, whereas S/V data were analyzed by one-way analysis of variance and Bonferroni multiple comparisons (α=0.05). Significant differences were found in MG, MD, and internal gap width values between the groups, with H2 showing the lowest values of all groups. S/V calculations presented significant differences between H2 and the other two groups (H3 and H4), with H2 again showing the lowest values. Increasing the intraradicular extension of endocrown restorations increased their marginal and internal gaps.
Mcalister, Courtney; Schmitter-Edgecombe, Maureen; Lamb, Richard
2016-03-01
The objective of this meta-analysis was to improve understanding of the heterogeneity in the relationship between cognition and functional status in individuals with mild cognitive impairment (MCI). Demographic, clinical, and methodological moderators were examined. Cognition explained an average of 23% of the variance in functional outcomes. Executive function measures explained the largest amount of variance (37%), whereas global cognitive status and processing speed measures explained the least (20%). Short- and long-delayed memory measures accounted for more variance (35% and 31%) than immediate memory measures (18%), and the relationship between cognition and functional outcomes was stronger when assessed with informant-report (28%) compared with self-report (21%). Demographics, sample characteristics, and type of everyday functioning measures (i.e., questionnaire, performance-based) explained relatively little variance compared with cognition. Executive functioning, particularly measured by Trails B, was a strong predictor of everyday functioning in individuals with MCI. A large proportion of variance remained unexplained by cognition. © The Author 2016. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
Impact of Land Use on PM2.5 Pollution in a Representative City of Middle China.
Yang, Haiou; Chen, Wenbo; Liang, Zhaofeng
2017-04-26
Fine particulate matter (PM2.5) pollution has become one of the greatest urban issues in China. Studies have shown that PM2.5 pollution is strongly related to the land use pattern at the micro-scale, and optimizing the land use pattern has been suggested as an approach to mitigate PM2.5 pollution. However, few studies have analyzed the effect of land use on PM2.5 pollution. This paper employed land use regression (LUR) models and statistical analysis to explore the effect of land use on PM2.5 pollution in urban areas. Nanchang city, China, was taken as the study area. The LUR models were used to simulate the spatial variations of PM2.5 concentrations. Analysis of variance and multiple comparisons were employed to study the PM2.5 concentration variances among five different types of urban functional zones. Multiple linear regression was applied to explore the PM2.5 concentration variances within the same type of urban functional zone. The results indicate that the dominant factor affecting PM2.5 pollution in the Nanchang urban area was traffic conditions. Significant variances of PM2.5 concentrations among different urban functional zones throughout the year suggest that land use types generated a significant impact on PM2.5 concentrations and that the impact did not change as the seasons changed. Land use intensity indexes, including the building volume rate, building density, and green coverage rate, presented an insignificant or counter-intuitive impact on PM2.5 concentrations when studied at the spatial scale of urban functional zones. Our study demonstrates that land use can greatly affect PM2.5 levels. Additionally, the urban functional zone was an appropriate spatial scale at which to investigate the impact of land use type on PM2.5 pollution in urban areas.
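A land use regression model of the kind used in such studies is, at its core, an ordinary least squares fit of measured concentrations on land-use covariates. A minimal sketch on synthetic site data (the covariate names and coefficients are invented for illustration, not taken from the Nanchang study):

```python
import numpy as np

def fit_lur(X, y):
    """Ordinary least squares land use regression: concentrations against
    land-use covariates; returns coefficients (intercept first) and R^2."""
    Xd = np.c_[np.ones(len(y)), X]
    beta, *_ = np.linalg.lstsq(Xd, y, rcond=None)
    resid = y - Xd @ beta
    return beta, 1 - resid @ resid / ((y - y.mean()) ** 2).sum()

# synthetic monitoring sites; covariate names and effects invented
rng = np.random.default_rng(11)
road = rng.random(80)                    # road density near each site
green = rng.random(80)                   # green coverage rate
pm25 = 35 + 20 * road - 8 * green + rng.normal(0, 2, 80)
beta, r2 = fit_lur(np.c_[road, green], pm25)
```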
López-Mosquera, Natalia; García, Teresa; Barrena, Ramo
2014-03-15
This paper relates the concept of moral obligation and the components of the Theory of Planned Behavior to determine their influence on visitors' willingness to pay for park conservation. The sample consists of 190 visitors to an urban Spanish park. The mean willingness to pay estimated was 12.67€ per year. The results also indicated that moral norm was the major factor in predicting behavioral intention, followed by attitudes. The new relations established between the components of the Theory of Planned Behavior show that social norms significantly determine the attitudes, moral norms and perceived behavioral control of individuals. The proportion of explained variance shows that the inclusion of moral norms improves the explanatory power of the original Theory of Planned Behavior model (from 32% to 40%). Community-based social marketing and local campaigns are the main strategies that should be followed by land managers with the objective of promoting responsible, pro-environmental attitudes as well as a greater willingness to pay for this type of good. Copyright © 2014 Elsevier Ltd. All rights reserved.
Brown, Halley J; Andreason, Hope; Melling, Amy K; Imel, Zac E; Simon, Gregory E
2015-08-01
Retention, or its opposite, dropout, is a common metric of psychotherapy quality, but using it to assess provider performance can be problematic. Differences among providers in numbers of general dropouts, "good" dropouts (patients report positive treatment experiences and outcome), and "bad" dropouts (patients report negative treatment experiences and outcome) were evaluated. Patient records were paired with satisfaction surveys (N=3,054). Binomial mixed-effects models were used to examine differences among providers by dropout type. Thirty-four percent of treatment episodes resulted in dropout. Of these, 14% were bad dropouts and 27% were good dropouts. Providers accounted for approximately 17% of the variance in general dropout and 10% of the variance in both bad dropout and good dropout. The ranking of providers fluctuated by type of dropout. Provider assessments based on patient retention should offer a way to isolate dropout type, given that nonspecific metrics may lead to biased estimates of performance.
Type Safe Extensible Programming
NASA Astrophysics Data System (ADS)
Chae, Wonseok
2009-10-01
Software products evolve over time. Sometimes they evolve by adding new features, and sometimes by either fixing bugs or replacing outdated implementations with new ones. When software engineers fail to anticipate such evolution during development, they will eventually be forced to re-architect or re-build from scratch. Therefore, it has been common practice to prepare for changes so that software products are extensible over their lifetimes. However, making software extensible is challenging because it is difficult to anticipate successive changes and to provide adequate abstraction mechanisms over potential changes. Such extensibility mechanisms, furthermore, should not compromise any existing functionality during extension. Software engineers would benefit from a tool that provides a way to add extensions in a reliable way. It is natural to expect programming languages to serve this role. Extensible programming is one effort to address these issues. In this thesis, we present type safe extensible programming using the MLPolyR language. MLPolyR is an ML-like functional language whose type system provides type-safe extensibility mechanisms at several levels. After presenting the language, we will show how these extensibility mechanisms can be put to good use in the context of product line engineering. Product line engineering is an emerging software engineering paradigm that aims to manage variations, which originate from successive changes in software.
Murray, Amanda M; Thomas, Abbey C; Armstrong, Charles W; Pietrosimone, Brian G; Tevald, Michael A
2015-12-01
Abnormal knee joint mechanics have been implicated in the pathogenesis and progression of knee osteoarthritis. Deficits in muscle function (i.e., strength and power) may contribute to abnormal knee joint loading. The associations between quadriceps strength, power and knee joint mechanics remain unclear in knee osteoarthritis. Three-dimensional motion analysis was used to collect peak knee joint angles and moments during the first 50% of stance phase of gait in 33 participants with knee osteoarthritis. Quadriceps strength and power were assessed using a knee extension machine. Strength was quantified as the one repetition maximum. Power was quantified as the peak power produced at 40-90% of the one repetition maximum. Quadriceps strength accounted for 15% of the variance in peak knee flexion angle (P=0.016). Quadriceps power accounted for 20-29% of the variance in peak knee flexion angle (P<0.05). Quadriceps power at 90% of one repetition maximum accounted for 9% of the variance in peak knee adduction moment (P=0.05). These data suggest that quadriceps power explains more variance in knee flexion angle and knee adduction moment during gait in knee osteoarthritis than quadriceps strength. Additionally, quadriceps power at multiple loads is associated with knee joint mechanics and therefore should be assessed at a variety of loads. Taken together, these results indicate that quadriceps power may be a potential target for interventions aimed at changing knee joint mechanics in knee osteoarthritis. Copyright © 2015 Elsevier Ltd. All rights reserved.
A note on the kappa statistic for clustered dichotomous data.
Zhou, Ming; Yang, Zhao
2014-06-30
The kappa statistic is widely used to assess the agreement between two raters. Motivated by a simulation-based cluster bootstrap method to calculate the variance of the kappa statistic for clustered physician-patients dichotomous data, we investigate its special correlation structure and develop a new simple and efficient data generation algorithm. For the clustered physician-patients dichotomous data, based on the delta method and its special covariance structure, we propose a semi-parametric variance estimator for the kappa statistic. An extensive Monte Carlo simulation study is performed to evaluate the performance of the new proposal and five existing methods with respect to the empirical coverage probability, root-mean-square error, and average width of the 95% confidence interval for the kappa statistic. The variance estimator ignoring the dependence within a cluster is generally inappropriate, and the variance estimators from the new proposal, bootstrap-based methods, and the sampling-based delta method perform reasonably well for at least a moderately large number of clusters (e.g., the number of clusters K ⩾ 50). The new proposal and sampling-based delta method provide convenient tools for efficient computations and non-simulation-based alternatives to the existing bootstrap-based methods. Moreover, the new proposal has acceptable performance even when the number of clusters is as small as K = 25. To illustrate the practical application of all the methods, one psychiatric research data set and two simulated clustered physician-patients dichotomous data sets are analyzed. Copyright © 2014 John Wiley & Sons, Ltd.
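The cluster bootstrap that motivates this work can be sketched compactly: resample whole clusters (physicians) with replacement rather than individual observations, so the within-cluster correlation survives resampling. A minimal sketch for binary ratings (the simulated data and accuracy level are illustrative, not the paper's design):

```python
import numpy as np

def cohen_kappa(r1, r2):
    """Cohen's kappa for two raters with binary ratings."""
    r1, r2 = np.asarray(r1), np.asarray(r2)
    po = (r1 == r2).mean()                       # observed agreement
    p1, p2 = r1.mean(), r2.mean()
    pe = p1 * p2 + (1 - p1) * (1 - p2)           # chance agreement
    return (po - pe) / (1 - pe)

def cluster_bootstrap_var(r1, r2, clusters, n_boot=500, seed=0):
    """Variance of kappa by resampling whole clusters (e.g. physicians),
    preserving the within-cluster correlation that an observation-level
    bootstrap would destroy."""
    rng = np.random.default_rng(seed)
    ids = np.unique(clusters)
    parts = [(r1[clusters == c], r2[clusters == c]) for c in ids]
    reps = []
    for _ in range(n_boot):
        pick = rng.integers(0, len(parts), len(parts))
        a = np.concatenate([parts[i][0] for i in pick])
        b = np.concatenate([parts[i][1] for i in pick])
        reps.append(cohen_kappa(a, b))
    return np.var(reps, ddof=1)

# demo: 30 physicians with 10 patients each, two raters of 85% accuracy
rng = np.random.default_rng(3)
clusters = np.repeat(np.arange(30), 10)
truth = rng.integers(0, 2, 300)
r1 = np.where(rng.random(300) < 0.85, truth, 1 - truth)
r2 = np.where(rng.random(300) < 0.85, truth, 1 - truth)
kappa = cohen_kappa(r1, r2)
var_k = cluster_bootstrap_var(r1, r2, clusters)
```

The paper's semi-parametric delta-method estimator replaces this simulation loop with a closed-form expression, which is what makes it computationally attractive.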
A Constrained Least Squares Approach to Mobile Positioning: Algorithms and Optimality
NASA Astrophysics Data System (ADS)
Cheung, KW; So, HC; Ma, W.-K.; Chan, YT
2006-12-01
The problem of locating a mobile terminal has received significant attention in the field of wireless communications. Time-of-arrival (TOA), received signal strength (RSS), time-difference-of-arrival (TDOA), and angle-of-arrival (AOA) are commonly used measurements for estimating the position of the mobile station. In this paper, we present a constrained weighted least squares (CWLS) mobile positioning approach that encompasses all of the above measurement cases. The advantages of CWLS include performance optimality and the capability of extension to hybrid measurement cases (e.g., mobile positioning using TDOA and AOA measurements jointly). Assuming zero-mean uncorrelated measurement errors, we show by mean and variance analysis that all the developed CWLS location estimators achieve approximately zero bias and attain the Cramér-Rao lower bound when the measurement error variances are small. The asymptotic optimum performance is also confirmed by simulation results.
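The TOA case can be linearized into an ordinary least squares problem, which the CWLS estimator then refines with error-dependent weights and a constraint tying the position to its squared norm. A minimal unweighted sketch of the linearization step (anchor coordinates and the noise level are illustrative):

```python
import numpy as np

def toa_position(anchors, ranges):
    """Estimate a 2-D position from TOA range measurements by linearizing
    ||x - a_i||^2 = d_i^2 against a reference anchor and solving the
    resulting overdetermined system by ordinary least squares (an
    unweighted cousin of the CWLS estimator)."""
    a = np.asarray(anchors, float)
    d = np.asarray(ranges, float)
    # For i >= 1:  2 (a_i - a_0) . x = |a_i|^2 - |a_0|^2 + d_0^2 - d_i^2
    A = 2.0 * (a[1:] - a[0])
    b = (a[1:] ** 2).sum(1) - (a[0] ** 2).sum() + d[0] ** 2 - d[1:] ** 2
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x

anchors = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0), (10.0, 10.0)]
truth = np.array([3.0, 4.0])
ranges = [np.hypot(*(truth - np.asarray(p))) for p in anchors]
est = toa_position(anchors, ranges)     # noise-free ranges: recovers (3, 4)
```

Replacing the plain `lstsq` call with a weighted solve, where the weights come from the measurement error variances, is what upgrades this sketch toward the CWLS estimator analyzed in the paper.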
Studies in Software Cost Model Behavior: Do We Really Understand Cost Model Performance?
NASA Technical Reports Server (NTRS)
Lum, Karen; Hihn, Jairus; Menzies, Tim
2006-01-01
While there exists extensive literature on software cost estimation techniques, industry practice continues to rely upon standard regression-based algorithms. These software effort models are typically calibrated or tuned to local conditions using local data. This paper cautions that current approaches to model calibration often produce sub-optimal models, both because of the large variance problem inherent in cost data and because they include far more effort multipliers than the data support. Building optimal models requires that a wider range of models be considered, while correctly calibrating these models requires rejection rules that prune variables and records and multiple criteria for evaluating model performance. The main contribution of this paper is to document a standard method that integrates formal model identification, estimation, and validation. It also documents what we call the large variance problem, a leading cause of cost model brittleness and instability.
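The validation discipline argued for above can be illustrated with standard cost-estimation criteria: leave-one-out cross-validation of a log-linear effort model, scored by mean magnitude of relative error (MMRE) and PRED(25). A minimal sketch on synthetic data (the effort model form, coefficients, and noise level are invented for illustration, not the paper's datasets):

```python
import numpy as np

def loocv_scores(size, effort):
    """Leave-one-out validation of a log-linear effort = a * size^b model,
    scored by MMRE and PRED(25) -- the kind of multi-criteria check the
    paper argues must accompany calibration."""
    mre = []
    idx = np.arange(len(size))
    for i in idx:
        m = idx != i
        slope, intercept = np.polyfit(np.log(size[m]), np.log(effort[m]), 1)
        pred = np.exp(intercept) * size[i] ** slope
        mre.append(abs(pred - effort[i]) / effort[i])
    mre = np.array(mre)
    return mre.mean(), (mre < 0.25).mean()      # MMRE, PRED(25)

# synthetic project data (invented for illustration)
rng = np.random.default_rng(7)
size = rng.uniform(5, 100, 40)                           # KSLOC
effort = 2.5 * size ** 1.05 * rng.lognormal(0, 0.3, 40)  # person-months
mmre, pred25 = loocv_scores(size, effort)
```

The large residual spread even under this idealized setup is a small-scale view of the large variance problem the paper documents.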
Perspective: Structural fluctuation of protein and Anfinsen's thermodynamic hypothesis
NASA Astrophysics Data System (ADS)
Hirata, Fumio; Sugita, Masatake; Yoshida, Masasuke; Akasaka, Kazuyuki
2018-01-01
The thermodynamic hypothesis, casually referred to as "Anfinsen's dogma," is described theoretically in terms of the structural fluctuation of a protein, namely the first moment (average structure) and the second moment (variance and covariance) of the structural distribution. The new theoretical concept views the unfolding and refolding processes of a protein as a shift of the structural distribution induced by a thermodynamic perturbation, with the variance-covariance matrix varying. Based on this concept, a method to characterize the mechanism of folding (or unfolding) is proposed. The transition state, if any, between two stable states is interpreted as a gap in the distribution, which is created due to an extensive reorganization of hydrogen bonds among back-bone atoms of the protein and with water molecules in the course of conformational change. A further perspective on applying the theory to computer-aided drug design and to materials science is briefly discussed.
A comparison of portfolio selection models via application on ISE 100 index data
NASA Astrophysics Data System (ADS)
Altun, Emrah; Tatlidil, Hüseyin
2013-10-01
The Markowitz Model, a classical approach to the portfolio optimization problem, relies on two important assumptions: the expected returns are multivariate normally distributed, and the investor is risk averse. However, this model has not been used extensively in practice: empirical results show that it is very hard to solve large scale portfolio optimization problems with the Mean-Variance (M-V) model. An alternative, the Mean Absolute Deviation (MAD) model proposed by Konno and Yamazaki [7], has been used to remove most of the difficulties of the Markowitz Mean-Variance model. The MAD model does not need to assume that the rates of return are normally distributed, and it is based on linear programming. Another alternative portfolio model is the Mean-Lower Semi Absolute Deviation (M-LSAD) model proposed by Speranza [3]. We compare these models to determine which gives the most appropriate solution to investors.
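The practical advantage of the MAD model is precisely that, unlike the Mean-Variance quadratic program, it is a linear program: absolute deviations are bounded by auxiliary variables, one per time period. A minimal sketch of the Konno-Yamazaki formulation with no short sales (the return data are synthetic, invented for illustration):

```python
import numpy as np
from scipy.optimize import linprog

def mad_portfolio(returns, target):
    """Konno-Yamazaki MAD portfolio as a linear program: minimize the mean
    absolute deviation of portfolio return subject to an expected-return
    floor, full investment, and no short sales."""
    T, n = returns.shape
    mu = returns.mean(0)
    D = returns - mu                            # deviations from the mean
    c = np.r_[np.zeros(n), np.ones(T) / T]      # objective: average |deviation|
    A_ub = np.vstack([
        np.hstack([D, -np.eye(T)]),             #  D x - y <= 0
        np.hstack([-D, -np.eye(T)]),            # -D x - y <= 0
        np.r_[-mu, np.zeros(T)][None, :],       # mu . x >= target
    ])
    b_ub = np.r_[np.zeros(2 * T), -target]
    A_eq = np.r_[np.ones(n), np.zeros(T)][None, :]   # weights sum to one
    res = linprog(c, A_ub, b_ub, A_eq, [1.0], bounds=(0, None))
    return res.x[:n], res.fun                   # weights, minimized MAD

rng = np.random.default_rng(0)
R = rng.normal([0.001, 0.002, 0.003], [0.01, 0.02, 0.03], (250, 3))
target = R.mean(0).mean()                       # feasible by construction
w, mad = mad_portfolio(R, target)
```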
Evaluating causes of error in landmark-based data collection using scanners
Shearer, Brian M.; Cooke, Siobhán B.; Halenar, Lauren B.; Reber, Samantha L.; Plummer, Jeannette E.; Delson, Eric
2017-01-01
In this study, we assess the precision, accuracy, and repeatability of craniodental landmarks (Types I, II, and III, plus curves of semilandmarks) on a single macaque cranium digitally reconstructed with three different surface scanners and a microCT scanner. Nine researchers with varying degrees of osteological and geometric morphometric knowledge landmarked ten iterations of each scan (40 total) to test the effects of scan quality, researcher experience, and landmark type on levels of intra- and interobserver error. Two researchers additionally landmarked ten specimens from seven different macaque species using the same landmark protocol to test the effects of the previously listed variables relative to species-level morphological differences (i.e., observer variance versus real biological variance). Error rates within and among researchers by scan type were calculated to determine whether or not data collected by different individuals or on different digitally rendered crania are consistent enough to be used in a single dataset. Results indicate that scan type does not impact rate of intra- or interobserver error. Interobserver error is far greater than intraobserver error among all individuals, and is similar in variance to that found among different macaque species. Additionally, experience with osteology and morphometrics both positively contribute to precision in multiple landmarking sessions, even where less experienced researchers have been trained in point acquisition. Individual training increases precision (although not necessarily accuracy), and is highly recommended in any situation where multiple researchers will be collecting data for a single project. PMID:29099867
On the mean and variance of the writhe of random polygons.
Portillo, J; Diao, Y; Scharein, R; Arsuaga, J; Vazquez, M
We here address two problems concerning the writhe of random polygons. First, we study the behavior of the mean writhe as a function of length. Second, we study the variance of the writhe. Suppose that we are dealing with a set of random polygons with the same length and knot type, which could be the model of some circular DNA with the same topological property. In general, a simple way of detecting chirality of this knot type is to compute the mean writhe of the polygons; if the mean writhe is non-zero then the knot is chiral. How accurate is this method? For example, if for a specific knot type K the mean writhe decreased to zero as the length of the polygons increased, then this method would be limited in the case of long polygons. Furthermore, we conjecture that the sign of the mean writhe is a topological invariant of chiral knots. This sign appears to be the same as that of an "ideal" conformation of the knot. We provide numerical evidence to support these claims, and we propose a new nomenclature of knots based on the sign of their expected writhes. This nomenclature can be of particular interest to applied scientists. The second part of our study focuses on the variance of the writhe, a problem that has not received much attention in the past. In this case, we focused on equilateral random polygons. We give numerical as well as analytical evidence to show that the variance of the writhe of equilateral random polygons (of length n) behaves as a linear function of the length of the equilateral random polygon.
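The writhe studied here is the Gauss double integral over the closed curve. For a finely sampled smooth curve it can be approximated by a double sum over pairs of edges; a minimal sketch (for coarse polygons, exact closed-form segment-pair formulas such as Klenin-Langowski's are more accurate; the trefoil parametrization below is a standard illustrative choice):

```python
import numpy as np

def writhe(points):
    """Writhe of a closed polygonal curve, approximating the Gauss double
    integral by a sum over pairs of edge midpoints and edge vectors
    (adequate for finely sampled smooth curves)."""
    p = np.asarray(points, float)
    e = np.roll(p, -1, axis=0) - p                 # edge vectors (carry ds)
    m = p + 0.5 * e                                # edge midpoints
    r = m[:, None, :] - m[None, :, :]              # pairwise displacements
    d3 = np.einsum('ijk,ijk->ij', r, r) ** 1.5     # |r|^3
    np.fill_diagonal(d3, np.inf)                   # exclude self-pairs
    cross = np.cross(e[:, None, :], e[None, :, :])
    num = np.einsum('ijk,ijk->ij', cross, r / d3[:, :, None])
    return num.sum() / (4 * np.pi)

t = np.linspace(0, 2 * np.pi, 400, endpoint=False)
trefoil = np.c_[np.sin(t) + 2 * np.sin(2 * t),
                np.cos(t) - 2 * np.cos(2 * t),
                -np.sin(3 * t)]
wr = writhe(trefoil)        # non-zero: the trefoil is chiral
```

The sign convention the abstract proposes for nomenclature is visible directly: mirror-reflecting the curve flips the sign of the writhe exactly, while a planar curve such as a circle has writhe zero.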
French, David P; Wade, Alisha N; Farmer, Andrew J
2013-04-01
There is evidence that perceptions of treatment may be more predictive than illness perceptions; e.g., medication adherence is often better predicted by beliefs about medication than by beliefs about illness. The present study aims to assess the generality of this finding by comparing the extent to which self-care behaviours of patients with type 2 diabetes are predicted by patients' beliefs about those behaviours, compared with their illness perceptions. This study is a one-year prospective cohort analysis of 453 patients recruited to a randomised trial of blood glucose self-monitoring. Behaviour was assessed by the medication adherence report scale (MARS) and diabetes self-care activities (DSCA) scales; illness perceptions by the IPQ-R; study-specific scales of beliefs about diet and physical activity were constructed by factor analysing items based on beliefs elicited in an earlier interview study involving patients with type 2 diabetes. Past behaviour, trial group allocation, and clinical and demographic factors predicted between 16% and 35% of the variance in medication adherence, exercise, and diet scales. Illness perceptions added between 0.9% and 4.5% additional variance; beliefs about behaviour added a further 1.1% to 6.4%. Beliefs regarding, respectively, the importance of exercise in controlling diabetes, the need to eat less, and enjoyment from eating sweet or fatty food added unique variance. Beliefs about behaviour are at least as important as beliefs about illness in predicting several health-related behaviours. This suggests that behaviour change interventions with patient groups would be more effective by targeting beliefs about behaviour rather than beliefs about illness. Copyright © 2012 Elsevier Inc. All rights reserved.
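The incremental-variance logic used in this design (past behaviour and covariates first, then illness perceptions, then beliefs about behaviour) is a hierarchical regression: fit nested models and compare R² block by block. A minimal sketch on synthetic data (the variable names and effect sizes are invented; by construction R² cannot decrease as blocks are added):

```python
import numpy as np

def r_squared(X, y):
    """R^2 of an OLS fit with intercept."""
    Xd = np.c_[np.ones(len(y)), X]
    beta, *_ = np.linalg.lstsq(Xd, y, rcond=None)
    resid = y - Xd @ beta
    return 1 - resid @ resid / ((y - y.mean()) ** 2).sum()

# synthetic data; block order mirrors the study: past behaviour,
# then illness perceptions, then beliefs about behaviour
rng = np.random.default_rng(2)
n = 450
past = rng.normal(size=n)
illness = rng.normal(size=n)
belief = 0.3 * illness + rng.normal(size=n)     # correlated with illness views
adherence = 0.5 * past + 0.15 * illness + 0.25 * belief + rng.normal(size=n)
r2_1 = r_squared(past[:, None], adherence)
r2_2 = r_squared(np.c_[past, illness], adherence)
r2_3 = r_squared(np.c_[past, illness, belief], adherence)
delta_beliefs = r2_3 - r2_2                     # unique variance from beliefs
```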
Performance of Language-Coordinated Collective Systems: A Study of Wine Recognition and Description
Zubek, Julian; Denkiewicz, Michał; Dębska, Agnieszka; Radkowska, Alicja; Komorowska-Mach, Joanna; Litwin, Piotr; Stępień, Magdalena; Kucińska, Adrianna; Sitarska, Ewa; Komorowska, Krystyna; Fusaroli, Riccardo; Tylén, Kristian; Rączaszek-Leonardi, Joanna
2016-01-01
Most of our perceptions of and engagements with the world are shaped by our immersion in social interactions, cultural traditions, tools and linguistic categories. In this study we experimentally investigate the impact of two types of language-based coordination on the recognition and description of a complex sensory stimulus: red wine. Participants were asked to taste, remember and successively recognize samples of wines within a larger set in a two-by-two experimental design: (1) either individually or in pairs, and (2) with or without the support of a sommelier card—a cultural linguistic tool designed for wine description. Both the effectiveness of recognition and the kinds of errors in the four conditions were analyzed. While our experimental manipulations did not impact recognition accuracy, bias-variance decomposition of the error revealed non-trivial differences in how participants solved the task. Pairs generally displayed reduced bias and increased variance compared with individuals; however, the variance dropped significantly when they used the sommelier card. This variance-reducing effect of the sommelier card was observed only in pairs; individuals did not seem to benefit from the cultural linguistic tool. Analysis of the descriptions generated with the aid of sommelier cards shows that pairs were more coherent and discriminative than individuals. The findings are discussed in terms of the global properties and dynamics of collective systems when constrained by different types of cultural practices. PMID:27729875
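The bias-variance decomposition used to analyze recognition errors can be illustrated on simulated estimates. The "individual" and "pair" parameters below are hypothetical, chosen only to mimic the reported pattern (pairs: lower bias, higher spread); they are not the study's data:

```python
import numpy as np

def bias_variance_decompose(estimates, truth):
    """Decompose the mean squared error of repeated estimates of one true
    value into squared bias and variance (MSE = bias^2 + variance)."""
    estimates = np.asarray(estimates, dtype=float)
    bias = estimates.mean() - truth
    variance = estimates.var()                 # population variance
    mse = float(np.mean((estimates - truth) ** 2))
    return mse, float(bias ** 2), float(variance)

rng = np.random.default_rng(0)
truth = 10.0
# Hypothetical 'individuals': higher bias, lower spread.
individuals = rng.normal(loc=11.0, scale=1.0, size=5000)
# Hypothetical 'pairs': lower bias, higher spread, as the study reports.
pairs = rng.normal(loc=10.2, scale=2.0, size=5000)

for label, est in [("individuals", individuals), ("pairs", pairs)]:
    mse, bias2, var = bias_variance_decompose(est, truth)
    print(f"{label}: MSE = {mse:.2f} = bias^2 {bias2:.2f} + variance {var:.2f}")
```

The identity MSE = bias^2 + variance is exact (not asymptotic) when the population variance of the estimates is used, which is what makes the decomposition a clean diagnostic of how two conditions with equal accuracy can still differ.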
Smoothing of the bivariate LOD score for non-normal quantitative traits.
Buil, Alfonso; Dyer, Thomas D; Almasy, Laura; Blangero, John
2005-12-30
Variance component analysis provides an efficient method for performing linkage analysis of quantitative traits. However, the type I error of variance component-based likelihood ratio testing may be affected when phenotypic data are non-normally distributed (especially with high values of kurtosis). This results in inflated LOD scores when the normality assumption does not hold. Even though different solutions have been proposed to deal with this problem for univariate phenotypes, little work has been done in the multivariate case. We present an empirical approach to adjust the inflated LOD scores obtained from a bivariate phenotype that violates the assumption of normality. Using the Collaborative Study on the Genetics of Alcoholism data available for Genetic Analysis Workshop 14, we show how bivariate linkage analysis with leptokurtic traits gives an inflated type I error. We perform a novel correction that achieves acceptable levels of type I error.
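A minimal pre-analysis diagnostic in this spirit is to estimate the trait's excess kurtosis before variance-component linkage and, if it is leptokurtic, rescale an inflated LOD by a multiplicative constant. The constant `c` below is purely a placeholder; in practice it must come from theory (e.g., the Blangero et al. formulae relating inflation to kurtosis and heritability) or from an empirical null simulation, not from this sketch:

```python
import numpy as np

def excess_kurtosis(x):
    """Sample excess kurtosis: ~0 for a normal trait, positive for
    leptokurtic (heavy-tailed) traits that inflate naive LOD scores."""
    x = np.asarray(x, dtype=float)
    z = (x - x.mean()) / x.std()
    return float(np.mean(z ** 4) - 3.0)

def deflate_lod(lod, c):
    """Illustrative multiplicative correction: rescale an inflated LOD by a
    constant c >= 1. The value of c must come from theory or from an
    empirical null simulation; the 1.5 used below is a placeholder."""
    return lod / c

rng = np.random.default_rng(1)
print(f"normal trait: g2 = {excess_kurtosis(rng.normal(size=100_000)):.2f}")
print(f"t(6) trait:   g2 = {excess_kurtosis(rng.standard_t(6, size=100_000)):.2f}")
print(f"LOD 3.6 deflated by c = 1.5 -> {deflate_lod(3.6, 1.5):.2f}")
```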
Ghamari Kivi, Hossein; Mohammadipour Rik, Ne'mat; Sadeghi Movahhed, Fariba
2013-01-01
Thought-action fusion (TAF) refers to the tendency to assume an incorrect causal relationship between one's own thoughts and external reality, in which thoughts and actions are treated as equivalents. This construct contributes to the development and maintenance of many psychological disorders. The aim of the present study was to predict obsessive-compulsive disorder (OCD) and its types, and major depressive disorder (MDD), from TAF and its levels. Two groups of 50 persons each, with OCD and MDD respectively, were selected by convenience sampling in private and governmental psychiatric centers in Ardabil, Iran. They then responded to the Beck Depression Inventory, the Padua Inventory and the TAF scale. Data were analysed using stepwise multiple regression analysis. TAF or its subtypes (moral TAF, likelihood-self TAF and likelihood-others TAF) explained 14% of MDD variance (p < 0.01), 15% of OCD variance (p < 0.01), and 8-21% of the variance in OCD types (p < 0.05). Moral TAF was elevated in both OCD and MDD. The construct of TAF is not a specific factor for OCD; it is present in MDD, too.
Inverse Optimization: A New Perspective on the Black-Litterman Model.
Bertsimas, Dimitris; Gupta, Vishal; Paschalidis, Ioannis Ch
2012-12-11
The Black-Litterman (BL) model is a widely used asset allocation model in the financial industry. In this paper, we provide a new perspective. The key insight is to replace the statistical framework in the original approach with ideas from inverse optimization. This insight allows us to significantly expand the scope and applicability of the BL model. We provide a richer formulation that, unlike the original model, is flexible enough to incorporate investor information on volatility and market dynamics. Equally importantly, our approach allows us to move beyond the traditional mean-variance paradigm of the original model and construct "BL"-type estimators for more general notions of risk such as coherent risk measures. Computationally, we introduce and study two new "BL"-type estimators and their corresponding portfolios: a Mean Variance Inverse Optimization (MV-IO) portfolio and a Robust Mean Variance Inverse Optimization (RMV-IO) portfolio. These two approaches are motivated by ideas from arbitrage pricing theory and volatility uncertainty. Using numerical simulation and historical backtesting, we show that both methods often demonstrate a better risk-reward tradeoff than their BL counterparts and are more robust to incorrect investor views.
Statistical aspects of quantitative real-time PCR experiment design.
Kitchen, Robert R; Kubista, Mikael; Tichopad, Ales
2010-04-01
Experiments using quantitative real-time PCR to test hypotheses are limited by technical and biological variability; we seek to minimise sources of confounding variability through optimum use of biological and technical replicates. The quality of an experiment design is commonly assessed by calculating its prospective power. Such calculations rely on knowledge of the expected variances of the measurements of each group of samples and the magnitude of the treatment effect; the estimation of which is often uninformed and unreliable. Here we introduce a method that exploits a small pilot study to estimate the biological and technical variances in order to improve the design of a subsequent large experiment. We measure the variance contributions at several 'levels' of the experiment design and provide a means of using this information to predict both the total variance and the prospective power of the assay. A validation of the method is provided through a variance analysis of representative genes in several bovine tissue-types. We also discuss the effect of normalisation to a reference gene in terms of the measured variance components of the gene of interest. Finally, we describe a software implementation of these methods, powerNest, that gives the user the opportunity to input data from a pilot study and interactively modify the design of the assay. The software automatically calculates expected variances, statistical power, and optimal design of the larger experiment. powerNest enables the researcher to minimise the total confounding variance and maximise prospective power for a specified maximum cost for the large study. Copyright 2010 Elsevier Inc. All rights reserved.
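The pilot-based idea (estimate biological and technical variance components from a small balanced pilot, then predict prospective power for a larger design) can be sketched as follows. This is a method-of-moments toy version, not the powerNest implementation, and the normal-approximation power formula is a rough stand-in for an exact noncentral-t calculation:

```python
import numpy as np
from math import erf, sqrt

def nested_components(data):
    """Method-of-moments variance components for a balanced pilot with b
    biological replicates, each measured with t technical replicates.
    data: array of shape (b, t)."""
    b, t = data.shape
    ms_within = data.var(axis=1, ddof=1).mean()      # technical mean square
    ms_between = t * data.mean(axis=1).var(ddof=1)   # biological mean square
    var_tech = ms_within
    var_bio = max((ms_between - ms_within) / t, 0.0)
    return var_bio, var_tech

def prospective_power(var_bio, var_tech, effect, n_bio, n_tech, z_crit=1.96):
    """Normal-approximation power of a two-group comparison of group means
    (a rough sketch, not an exact noncentral-t calculation)."""
    se = sqrt(2.0 * (var_bio / n_bio + var_tech / (n_bio * n_tech)))
    z = effect / se - z_crit
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))          # Phi(z)

rng = np.random.default_rng(0)
# Simulated pilot: 6 biological x 3 technical reps, sd_bio=1.0, sd_tech=0.5.
pilot = rng.normal(size=(6, 1)) * 1.0 + rng.normal(size=(6, 3)) * 0.5
vb, vt = nested_components(pilot)
print(f"var_bio = {vb:.2f}, var_tech = {vt:.2f}")
print(f"power at effect = 1.0 with a 10 x 2 design: "
      f"{prospective_power(vb, vt, 1.0, 10, 2):.2f}")
```

The standard-error term makes the design trade-off explicit: extra technical replicates only shrink the var_tech/(n_bio * n_tech) term, so once biological variance dominates, adding biological replicates is the better use of budget.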
Gallegos-Cabriales, Esther C; Rivera-Castillo, Alicia; González-Cantú, Arnulfo; Gómez-Meza, Marco Vinicio; Villarreal-Pérez, Jesús Zacarías
2018-01-01
Objectives: Type 2 diabetes mellitus studies focus on metabolic indicators and different self-reported lifestyle or care behaviors. Self-reported instruments involve conscious processes; therefore, responses might not reflect reality. Implicit responses, by contrast, involve automatic, unconscious processes underlying social judgments and behavior. No studies have explored the combined influence of both metabolic indicators and implicit responses on lifestyle practices in type 2 diabetes mellitus patients. The purpose was to investigate the explained variance of socio-demographic, metabolic, anthropometric, clinical, psychosocial, cognitive, and lifestyle variables on glycemic status and on the ability to adapt to changing demands in people with and without type 2 diabetes mellitus in Monterrey, Mexico. Methods: Adults with (n = 30, mean age 46.90 years old, 33.33% male) and without (n = 32, mean age 41.69 years old, 21.87% male) type 2 diabetes mellitus were studied. Glycemic status was assessed using the Bio-Rad D-10 Hemoglobin A1c Program, which uses ion-exchange high-performance chromatography. The Stroop 2 test was used to assess the ability to adapt to changing demands. Results: In participants with type 2 diabetes mellitus, fewer years of education, negative self-actualization, and higher levels of cholesterol and triglycerides explained more than 50% of the variance in glycemic status. In participants without type 2 diabetes mellitus, the variance (38.7%) was explained by total cholesterol, metabolic syndrome, high-density lipoprotein, and self-actualization scores, the latter in the opposite direction. The ability to adapt to changing demands was explained by total cholesterol, malondialdehyde, insulin resistance, and triglycerides. In participants without type 2 diabetes mellitus, the contributing variables were metabolic syndrome and nutrition scores.
Conclusion: Results showed a significant effect of at least one of the variable types considered (socio-demographic, metabolic, or lifestyle subscale) on glycemic status in people with and without type 2 diabetes mellitus. The ability to adapt to changing demands was explained by metabolic variables, but only in participants without type 2 diabetes mellitus. Preference for unhealthy behaviors (implicit or automatic responses) outweighs healthy lifestyle practices in people with and without type 2 diabetes mellitus. PMID:29760917
Metamodeling Techniques to Aid in the Aggregation Process of Large Hierarchical Simulation Models
2008-08-01
[Figure residue: hierarchical model levels through campaign-level model and outputs, linked by aggregation and metamodeling, with complexity (spatial, temporal, etc.).] ...techniques that reduce variance are called variance reduction techniques (VRT) [Law, 2006]. The implementation of some type of VRT can prove to be a very valuable tool
NASA Astrophysics Data System (ADS)
Cao, Xiangyu; Fyodorov, Yan V.; Le Doussal, Pierre
2018-02-01
We address systematically an apparent nonphysical behavior of the free-energy moment generating function for several instances of the logarithmically correlated models: the fractional Brownian motion with Hurst index H =0 (fBm0) (and its bridge version), a one-dimensional model appearing in decaying Burgers turbulence with log-correlated initial conditions and, finally, the two-dimensional log-correlated random-energy model (logREM) introduced in Cao et al. [Phys. Rev. Lett. 118, 090601 (2017), 10.1103/PhysRevLett.118.090601] based on the two-dimensional Gaussian free field with background charges and directly related to the Liouville field theory. All these models share anomalously large fluctuations of the associated free energy, with a variance proportional to the log of the system size. We argue that a seemingly nonphysical vanishing of the moment generating function for some values of parameters is related to the termination point transition (i.e., prefreezing). We study the associated universal log corrections in the frozen phase, both for logREMs and for the standard REM, filling a gap in the literature. For the above mentioned integrable instances of logREMs, we predict the nontrivial free-energy cumulants describing non-Gaussian fluctuations on the top of the Gaussian with extensive variance. Some of the predictions are tested numerically.
A powerful and flexible approach to the analysis of RNA sequence count data
Zhou, Yi-Hui; Xia, Kai; Wright, Fred A.
2011-01-01
Motivation: A number of penalization and shrinkage approaches have been proposed for the analysis of microarray gene expression data. Similar techniques are now routinely applied to RNA sequence transcriptional count data, although the value of such shrinkage has not been conclusively established. If penalization is desired, the explicit modeling of mean–variance relationships provides a flexible testing regimen that ‘borrows’ information across genes, while easily incorporating design effects and additional covariates. Results: We describe BBSeq, which incorporates two approaches: (i) a simple beta-binomial generalized linear model, which has not been extensively tested for RNA-Seq data and (ii) an extension of an expression mean–variance modeling approach to RNA-Seq data, involving modeling of the overdispersion as a function of the mean. Our approaches are flexible, allowing for general handling of discrete experimental factors and continuous covariates. We report comparisons with other alternate methods to handle RNA-Seq data. Although penalized methods have advantages for very small sample sizes, the beta-binomial generalized linear model, combined with simple outlier detection and testing approaches, appears to have favorable characteristics in power and flexibility. Availability: An R package containing examples and sample datasets is available at http://www.bios.unc.edu/research/genomic_software/BBSeq Contact: yzhou@bios.unc.edu; fwright@bios.unc.edu Supplementary information: Supplementary data are available at Bioinformatics online. PMID:21810900
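The "overdispersion as a function of the mean" idea can be sketched with a negative-binomial stand-in for BBSeq's beta-binomial parameterisation: estimate a per-gene dispersion by the method of moments, then fit a log-log trend across genes. All data, the dispersion law, and its slope below are simulated assumptions, not BBSeq's actual model fit:

```python
import numpy as np

def mom_overdispersion(counts):
    """Per-gene method-of-moments overdispersion under Var = mu + phi*mu^2
    (a negative-binomial-style stand-in for BBSeq's beta-binomial law).
    counts: genes x samples array."""
    mu = counts.mean(axis=1)
    var = counts.var(axis=1, ddof=1)
    phi = (var - mu) / mu ** 2
    return mu, phi

def fit_mean_dispersion_trend(mu, phi):
    """Fit log(phi) = a + b*log(mu): modelling the overdispersion as a
    function of the mean, pooling information across genes."""
    b, a = np.polyfit(np.log(mu), np.log(phi), 1)
    return a, b

rng = np.random.default_rng(0)
genes, samples = 200, 8
mu_true = rng.uniform(5, 500, size=genes)
phi_true = 2.0 / np.sqrt(mu_true)              # dispersion shrinks with the mean
# Gamma-Poisson mixture draws give NB counts with Var = mu + phi*mu^2.
lam = rng.gamma(shape=1.0 / phi_true[:, None],
                scale=(mu_true * phi_true)[:, None],
                size=(genes, samples))
counts = rng.poisson(lam).astype(float)

mu_hat, phi_hat = mom_overdispersion(counts)
keep = phi_hat > 0                             # drop genes with no estimated excess
a, b = fit_mean_dispersion_trend(mu_hat[keep], phi_hat[keep])
print(f"fitted trend: log(phi) = {a:.2f} + {b:.2f} * log(mu)  (true slope -0.5)")
```

Fitting the trend rather than using each noisy per-gene estimate directly is the "borrowing information across genes" step the abstract refers to.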
Reecht, Y; Rochet, M-J; Trenkel, V M; Jennings, S; Pinnegar, J K
2013-08-01
An ecomorphological method was developed, with a focus on predation functions, to define functional groups in the Celtic Sea fish community. Eleven functional traits, measured for 930 individuals from 33 species, led to 11 functional groups. Membership of functional groups was linked to body size and taxonomy. For seven species, there were ontogenetic changes in group membership. When diet composition, expressed as the proportions of different prey types recorded in stomachs, was compared among functional groups, morphology-based predictions accounted for 28-56% of the interindividual variance in prey type. This was larger than the 12-24% of variance that could be explained solely on the basis of body size. © 2013 The Fisheries Society of the British Isles.
Mauer, Michael; Caramori, Maria Luiza; Fioretto, Paola; Najafian, Behzad
2015-06-01
Studies of structural-functional relationships have improved understanding of the natural history of diabetic nephropathy (DN). However, in order to consider structural end points for clinical trials, the robustness of the resultant models needs to be verified. This study examined whether structural-functional relationship models derived from a large cohort of type 1 diabetic (T1D) patients with a wide range of renal function are robust. The predictability of models derived from multiple regression analysis and piecewise linear regression analysis was also compared. T1D patients (n = 161) with research renal biopsies were divided into two equal groups matched for albumin excretion rate (AER). Models to explain AER and glomerular filtration rate (GFR) by classical DN lesions in one group (T1D-model, or T1D-M) were applied to the other group (T1D-test, or T1D-T) and regression analyses were performed. T1D-M-derived models explained 70 and 63% of AER variance and 32 and 21% of GFR variance in T1D-M and T1D-T, respectively, supporting the substantial robustness of the models. Piecewise linear regression analyses substantially improved predictability of the models with 83% of AER variance and 66% of GFR variance explained by classical DN glomerular lesions alone. These studies demonstrate that DN structural-functional relationship models are robust, and if appropriate models are used, glomerular lesions alone explain a major proportion of AER and GFR variance in T1D patients. © The Author 2014. Published by Oxford University Press on behalf of ERA-EDTA. All rights reserved.
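Piecewise (broken-stick) linear regression of the kind compared here can be sketched with a grid search over the breakpoint, carrying the slope change in a hinge term. The data, breakpoint, and slopes below are simulated, not the study's AER/GFR measurements:

```python
import numpy as np

def fit_linear(x, y):
    """Ordinary least squares: y = b0 + b1*x; returns (coefficients, SSE)."""
    X = np.column_stack([np.ones_like(x), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta, float(np.sum((y - X @ beta) ** 2))

def fit_piecewise(x, y, breakpoints):
    """Continuous two-segment regression: grid-search the breakpoint c,
    with the slope change carried by a hinge term max(0, x - c)."""
    best = None
    for c in breakpoints:
        X = np.column_stack([np.ones_like(x), x, np.maximum(0.0, x - c)])
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        sse = float(np.sum((y - X @ beta) ** 2))
        if best is None or sse < best[2]:
            best = (c, beta, sse)
    return best

rng = np.random.default_rng(0)
x = np.sort(rng.uniform(0, 10, 200))
# True relationship: slope 1 below x = 4, slope 3 above, plus noise.
y = np.where(x < 4, x, 4.0 + 3.0 * (x - 4)) + rng.normal(0, 0.3, 200)

_, sse_lin = fit_linear(x, y)
c, _, sse_pw = fit_piecewise(x, y, np.linspace(1, 9, 81))
sst = float(np.sum((y - y.mean()) ** 2))
print(f"linear R^2 = {1 - sse_lin / sst:.3f}")
print(f"piecewise R^2 = {1 - sse_pw / sst:.3f} (breakpoint ~ {c:.1f})")
```

Because the piecewise model nests the straight line (set the hinge coefficient to zero), its fit can only improve; the question the study asks is whether the improvement in explained variance is substantial.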
Anger Expression Types and Interpersonal Problems in Nurses.
Han, Aekyung; Won, Jongsoon; Kim, Oksoo; Lee, Sang E
2015-06-01
The purpose of this study was to investigate anger expression types in nurses and to analyze the differences between the anger expression types in terms of interpersonal problems. The data were collected from 149 nurses working in general hospitals with 300 beds or more in Seoul or Gyeonggi province, Korea. For anger expression type, the anger expression scale from the Korean State-Trait Anger Expression Inventory was used. For interpersonal problems, the short form of the Korean Inventory of Interpersonal Problems Circumplex Scales was used. Data were analyzed using descriptive statistics, cluster analysis, multivariate analysis of variance, and Duncan's multiple comparison test. Three anger expression types were found in the nurses: low anger expression, anger-in, and anger-in/control. From the results of the multivariate analysis of variance, there were significant differences between anger expression types on interpersonal problems (Wilks' lambda F = 3.52, p < .001). Additionally, the anger-in/control type was found to have the most difficulty with interpersonal problems by Duncan's post hoc test (p < .050). Based on this research, the development of an anger expression intervention program for nurses is recommended to establish means of expressing suppressed emotions, which would help nurses experience fewer interpersonal problems. Copyright © 2015. Published by Elsevier B.V.
Social capital and health-purely a question of context?
Giordano, Giuseppe Nicola; Ohlsson, Henrik; Lindström, Martin
2011-07-01
Debate still surrounds which level of analysis (individual vs. contextual) is most appropriate to investigate the effects of social capital on health. Applying multilevel ecometric analyses to British Household Panel Survey data, we estimated fixed and random effects between five individual-, household- and small area-level social capital indicators and general health. We further compared the variance in health attributable to each level using intraclass correlations. Our results demonstrate that association between social capital and health depends on indicator type and level investigated, with one quarter of total individual-level health variance found at the household level. However, individual-level social capital variables and other health determinants appear to influence contextual-level variance the most. Copyright © 2011 Elsevier Ltd. All rights reserved.
Monogamy has a fixation advantage based on fitness variance in an ideal promiscuity group.
Garay, József; Móri, Tamás F
2012-11-01
We consider an ideal promiscuity group of females, which implies that all males have the same average mating success. If females have concealed ovulation, then the males' paternity chances are equal. We find that male-based monogamy will be fixed in the females' promiscuity group when the stochastic Darwinian selection is described by a Markov chain. We point out that in huge populations the relative advantage (the difference between the average fitness of different strategies) primarily determines the end of evolution; in the case of neutrality (equal means), the smallest variance guarantees a fixation (absorption) advantage; and when the means and variances are the same, the higher third moment determines which type will be fixed in the Markov chain.
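The neutral-case claim (equal means, smaller variance wins) can be illustrated with a toy Wright-Fisher-style simulation. The offspring distributions below are hypothetical choices with equal mean and unequal variance, standing in for the paper's Markov chain rather than reproducing it:

```python
import numpy as np

def fixation_probability(n_pop, n_a, trials, rng):
    """Wright-Fisher-style toy: type A males sire exactly 2 offspring each;
    type B males sire 0 or 4 with probability 1/2 each (equal mean 2,
    variance 0 vs 4). Returns the fraction of runs in which A fixes."""
    fixed_a = 0
    for _ in range(trials):
        na = n_a
        while 0 < na < n_pop:
            nb = n_pop - na
            w_a = 2 * na                         # total A offspring (no variance)
            w_b = 4 * rng.binomial(nb, 0.5)      # total B offspring (high variance)
            na = rng.binomial(n_pop, w_a / (w_a + w_b))  # w_a > 0 while na > 0
        fixed_a += (na == n_pop)
    return fixed_a / trials

rng = np.random.default_rng(0)
p = fixation_probability(n_pop=10, n_a=5, trials=4000, rng=rng)
print(f"fixation probability of the low-variance (monogamy-like) type: {p:.3f}")
```

By Jensen's inequality, the expected next-generation frequency of the zero-variance type exceeds its current frequency whenever the rival's total offspring count varies, so from a 50:50 start it should fix more than half the time.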
Interactive Effects of Cumulative Stress and Impulsivity on Alcohol Consumption
Fox, Helen C.; Bergquist, Keri L.; Gu, Peihua; Sinha, Rajita
2013-01-01
Background Alcohol addiction may reflect adaptations to stress, reward, and regulatory brain systems. While extensive research has identified both stress and impulsivity as independent risk factors for drinking, few studies have assessed the interactive relationship between stress and impulsivity in terms of hazardous drinking within a community sample of regular drinkers. Methods One hundred and thirty regular drinkers (56M/74F) from the local community were assessed for hazardous and harmful patterns of alcohol consumption using the Alcohol Use Disorders Identification Test (AUDIT). All participants were also administered the Barratt Impulsiveness Scale (BIS-11) as a measure of trait impulsivity and the Cumulative Stress/Adversity Checklist (CSC) as a comprehensive measure of cumulative adverse life events. Standard multiple regression models were used to ascertain the independent and interactive nature of both overall stress and impulsivity as well as specific types of stress and impulsivity on hazardous and harmful drinking. Results Recent life stress, cumulative traumatic stress, overall impulsivity, and nonplanning-related impulsivity as well as cognitive and motor-related impulsivity were all independently predictive of AUDIT scores. However, the interaction between cumulative stress and total impulsivity scores accounted for a significant amount of the variance, indicating that a high to moderate number of adverse events and a high trait impulsivity rating interacted to affect greater AUDIT scores. The subscale of cumulative life trauma accounted for the most variance in AUDIT scores among the stress and impulsivity subscales. Conclusions Findings highlight the interactive relationship between stress and impulsivity with regard to hazardous drinking. The specific importance of cumulative traumatic stress as a marker for problem drinking is also discussed. PMID:20491738
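Testing a stress x impulsivity interaction amounts to asking whether a product term adds variance beyond the main effects. The sketch below uses simulated, standardized predictors and an assumed interaction coefficient, not the study's data:

```python
import numpy as np

def r_squared(X, y):
    """R^2 of an OLS fit with intercept."""
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1.0 - (resid @ resid) / np.sum((y - y.mean()) ** 2)

rng = np.random.default_rng(0)
n = 130                                    # mirrors the study's sample size
stress = rng.normal(size=n)                # standardized cumulative stress
impulsivity = rng.normal(size=n)           # standardized trait impulsivity
# Simulated outcome with an assumed stress x impulsivity interaction (0.4).
audit = (0.3 * stress + 0.3 * impulsivity
         + 0.4 * stress * impulsivity + rng.normal(size=n))

r2_main = r_squared(np.column_stack([stress, impulsivity]), audit)
r2_full = r_squared(np.column_stack([stress, impulsivity,
                                     stress * impulsivity]), audit)
print(f"main effects R^2 = {r2_main:.3f}")
print(f"with interaction R^2 = {r2_full:.3f} (delta = {r2_full - r2_main:.3f})")
```

Standardizing (centering) the predictors before forming the product term is standard practice here; it keeps the main-effect coefficients interpretable and reduces collinearity with the interaction term.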
Burton, Carmen; Brown, Larry R.; Belitz, Kenneth
2005-01-01
The Santa Ana River basin is the largest stream system in Southern California and includes a densely populated coastal area. Extensive urbanization has altered the geomorphology and hydrology of the streams, adversely affecting aquatic communities. We studied macroinvertebrate and periphyton assemblages in relation to two categorical features of the highly engineered hydrologic system: water source and channel type. Four water sources were identified: natural, urban-impacted groundwater, urban runoff, and treated wastewater. Three channel types were identified: natural, channelized with natural bottom, and concrete-lined. Nineteen sites, covering the range of these two categorical features, were sampled in summer 2000. To minimize the effects of different substrate types among sites, artificial substrates were used for assessing macroinvertebrate and periphyton assemblages. Physical and chemical variables and metrics calculated from macroinvertebrate and periphyton assemblage data were compared among water sources and channel types using analysis of variance and multiple comparison tests. Macroinvertebrate metrics exhibiting significant (P < 0.05) differences between water sources included taxa and Ephemeroptera-Plecoptera-Trichoptera richness, relative richness and abundance of nonchironomid dipterans, orthoclads, oligochaetes, and some functional-feeding groups such as parasites and shredders. Periphyton metrics showing significant differences between water sources included blue-green algae biovolume and relative abundance of nitrogen heterotrophic, eutrophic, motile, and pollution-sensitive diatoms. The relative abundance of trichopterans, tanytarsini chironomids, noninsects, and filter feeders, as well as the relative richness and abundance of diatoms, were significantly different between channel types. 
Most physical variables were related to channel type, whereas chemical variables and some physical variables (e.g., discharge, velocity, and channel width) were related to water source. These associations were reflected in correlations between metrics, chemical variables, and physical variables. Significant improvements in the aquatic ecosystem of the Santa Ana River basin are possible with management actions such as conversion of concrete-lined channels to channelized streams with natural bottoms that can still maintain flood control to protect life and property.
ERIC Educational Resources Information Center
Hartley, S. L.; MacLean, W. E., Jr.
2006-01-01
Background: Likert-type scales are increasingly being used among people with intellectual disability (ID). These scales offer an efficient method for capturing a wide range of variance in self-reported attitudes and behaviours. This review is an attempt to evaluate the reliability and validity of Likert-type scales in people with ID. Methods:…
Repeatability and reproducibility of ribotyping and its computer interpretation.
Lefresne, Gwénola; Latrille, Eric; Irlinger, Françoise; Grimont, Patrick A D
2004-04-01
Many molecular typing methods are difficult to interpret because their repeatability (within-laboratory variance) and reproducibility (between-laboratory variance) have not been thoroughly studied. In the present work, ribotyping of coryneform bacteria was the basis of a study involving within-gel and between-gel repeatability and between-laboratory reproducibility (two laboratories involved). The effect of different technical protocols, different algorithms, and different software for fragment size determination was studied. Analysis of variance (ANOVA) showed, within a laboratory, that there was no significant added variance between gels. However, between-laboratory variance was significantly higher than within-laboratory variance. This may be due to the use of different protocols. An experimental function was calculated to transform the data and make them compatible (i.e., erase the between-laboratory variance). The use of different interpolation algorithms (spline, Schaffer and Sederoff) was a significant source of variation in one laboratory only. The use of either Taxotron (Institut Pasteur) or GelCompar (Applied Maths) was not a significant source of added variation when the same algorithm (spline) was used. However, the use of Bio-Gene (Vilber Lourmat) dramatically increased the error (within laboratory, within gel) in one laboratory, while decreasing the error in the other laboratory; this might be due to automatic normalization attempts. These results were taken into account for building a database and performing automatic pattern identification using Taxotron. Conversion of the data considerably improved the identification of patterns irrespective of the laboratory in which the data were obtained.
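The repeatability/reproducibility split the study performs is a one-way random-effects decomposition: within-laboratory variance (repeatability) versus an added between-laboratory component (reproducibility), with an F-ratio to judge whether the latter is significant. A sketch with hypothetical fragment-size measurements:

```python
import numpy as np

def lab_variance_components(lab_a, lab_b):
    """One-way random-effects decomposition for two laboratories with k
    replicate measurements each: repeatability = within-lab variance,
    added reproducibility = between-lab component, plus the ANOVA F-ratio."""
    data = np.vstack([lab_a, lab_b])
    g, k = data.shape
    ms_within = data.var(axis=1, ddof=1).mean()
    ms_between = k * data.mean(axis=1).var(ddof=1)
    var_within = ms_within
    var_between = max((ms_between - ms_within) / k, 0.0)
    return var_within, var_between, ms_between / ms_within

# Hypothetical fragment-size readings (kb) of one band, 5 gels per lab.
lab_a = np.array([4.10, 4.12, 4.09, 4.11, 4.10])
lab_b = np.array([4.30, 4.28, 4.31, 4.29, 4.30])
vw, vb, f = lab_variance_components(lab_a, lab_b)
print(f"within-lab var = {vw:.5f}, between-lab var = {vb:.5f}, F = {f:.1f}")
```

A large F-ratio, as in this contrived example, is the signature the authors report: between-laboratory variance significantly exceeding within-laboratory variance, motivating a transformation to make the two laboratories' data compatible.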
NASA Astrophysics Data System (ADS)
Kuffner, Ilsa B.; Roberts, Kelsey E.; Flannery, Jennifer A.; Morrison, Jennifer M.; Richey, Julie N.
2017-01-01
Massive corals provide a useful archive of environmental variability, but careful testing of geochemical proxies in corals is necessary to validate the relationship between each proxy and environmental parameter throughout the full range of conditions experienced by the recording organisms. Here we use samples from a coral-growth study to test the hypothesis that Sr/Ca in the coral Siderastrea siderea accurately records sea-surface temperature (SST) in the subtropics (Florida, USA) along 350 km of reef tract. We test calcification rate, measured via buoyant weight, and linear extension (LE) rate, estimated with Alizarin Red-S staining, as predictors of variance in the Sr/Ca records of 39 individual S. siderea corals grown at four outer-reef locations next to in-situ temperature loggers during two, year-long periods. We found that corals with calcification rates < 1.7 mg cm-2 d-1 or < 1.7 mm yr-1 LE returned spuriously high Sr/Ca values, leading to a cold-bias in Sr/Ca-based SST estimates. The threshold-type response curves suggest that extension rate can be used as a quality-control indicator during sample and drill-path selection when using long cores for SST paleoreconstruction. For our corals that passed this quality control step, the Sr/Ca-SST proxy performed well in estimating mean annual temperature across three sites spanning 350 km of the Florida reef tract. However, there was some evidence that extreme temperature stress in 2010 (cold snap) and 2011 (SST above coral-bleaching threshold) may have caused the corals not to record the temperature extremes. Known stress events could be avoided during modern calibrations of paleoproxies.
Poisson denoising on the sphere: application to the Fermi gamma ray space telescope
NASA Astrophysics Data System (ADS)
Schmitt, J.; Starck, J. L.; Casandjian, J. M.; Fadili, J.; Grenier, I.
2010-07-01
The Large Area Telescope (LAT), the main instrument of the Fermi Gamma-ray Space Telescope, detects high-energy gamma rays with energies from 20 MeV to more than 300 GeV. The two main scientific objectives, the study of the Milky Way diffuse background and the detection of point sources, are complicated by the lack of photons, which calls for a powerful Poisson noise removal method on the sphere that is efficient on low-count Poisson data. This paper presents a new multiscale decomposition on the sphere for data with Poisson noise, called the multi-scale variance stabilizing transform on the sphere (MS-VSTS). This method is based on a variance stabilizing transform (VST), a transform which aims to stabilize a Poisson data set such that each stabilized sample has a quasi-constant variance. In addition, for the VST used in the method, the transformed data are asymptotically Gaussian. MS-VSTS consists of decomposing the data into a sparse multi-scale dictionary like wavelets or curvelets, and then applying a VST on the coefficients in order to get almost Gaussian stabilized coefficients. In this work, we use the isotropic undecimated wavelet transform (IUWT) and the curvelet transform as spherical multi-scale transforms. Then, binary hypothesis testing is carried out to detect significant coefficients, and the denoised image is reconstructed with an iterative algorithm based on hybrid steepest descent (HSD). To detect point sources, we have to extract the Galactic diffuse background: an extension of the method to background separation is therefore proposed. Conversely, to study the Milky Way diffuse background, we remove point sources with a binary mask. The gaps then have to be interpolated: an extension to inpainting is also proposed. The method, applied to simulated Fermi LAT data, proves to be adaptive, fast and easy to implement.
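The variance-stabilizing step can be illustrated with the classical Anscombe transform for Poisson data, one standard choice of VST; the paper's MS-VSTS applies stabilization within a spherical multiscale decomposition, which this sketch does not reproduce:

```python
import numpy as np

def anscombe(x):
    """Anscombe variance-stabilizing transform for Poisson counts:
    A(x) = 2*sqrt(x + 3/8) has approximately unit variance whatever the
    (sufficiently large) Poisson mean."""
    return 2.0 * np.sqrt(np.asarray(x, dtype=float) + 3.0 / 8.0)

rng = np.random.default_rng(0)
for lam in (5, 20, 100):
    counts = rng.poisson(lam, size=200_000)
    print(f"lambda = {lam:>3}: raw var = {counts.var():7.2f}, "
          f"stabilized var = {anscombe(counts).var():5.2f}")
```

After stabilization the raw variance, which grows with the mean, collapses to roughly 1 at every intensity, so Gaussian-style thresholding of coefficients becomes applicable; the paper's contribution is precisely a VST that remains accurate in the very low-count regime where this simple transform degrades.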
Lindsey, Derek P; Perez-Orribo, Luis; Rodriguez-Martinez, Nestor; Reyes, Phillip M; Newcomb, Anna; Cable, Alexandria; Hickam, Grace; Yerby, Scott A; Crawford, Neil R
2014-01-01
Introduction Sacroiliac (SI) joint pain has become a recognized factor in low back pain. The purpose of this study was to investigate the effect of a minimally invasive surgical SI joint fusion procedure on the in vitro biomechanics of the SI joint before and after cyclic loading. Methods Seven cadaveric specimens were tested under the following conditions: intact, posterior ligaments (PL) and pubic symphysis (PS) cut, treated (three implants placed), and after 5,000 cycles of flexion–extension. The range of motion (ROM) in flexion–extension, lateral bending, and axial rotation was determined with an applied 7.5 N · m moment using an optoelectronic system. Results for each ROM were compared using a repeated measures analysis of variance (ANOVA) with a Holm–Šidák post-hoc test. Results Placement of three fusion devices decreased the flexion–extension ROM. Lateral bending and axial rotation were not significantly altered. All PL/PS cut and post-cyclic ROMs were larger than in the intact condition. The 5,000 cycles of flexion–extension did not lead to a significant increase in any ROMs. Discussion In the current model, placement of three 7.0 mm iFuse Implants significantly decreased the flexion–extension ROM. Joint ROM was not increased by 5,000 flexion–extension cycles. PMID:24868175
2010-02-01
Findings also highlight the impact of homefront and post-deployment life events in addition to war-zone stress exposures, and emphasize the importance of...additional 20% of the variance; war-zone stressors and perceived war-zone threat together contributed an additional 19% of the variance; and homefront ...in the types of noncombat (i.e., post-battle) war-zone events experienced by the two groups. Homefront concerns experienced during deployment were
NASA Astrophysics Data System (ADS)
Mallon, Gerald L.; Bruce, Matthew H.
Of the 1100 planetariums in the U.S., approximately 96% are smaller facilities. The majority of these use a program type called the Star Show, whereas some have advocated a different type called the Participatory Oriented Planetarium. The purpose of this study was to investigate the following question: in a smaller educational planetarium, with a capacity of 15-75 people, which is the more effective method of instruction and attitude change, a traditional Star Show program or a Participatory Oriented Planetarium program? A large-scale investigation was conducted in Pennsylvania, with four smaller replications in Texas, Minnesota, California, and Nevada. In each planetarium, a group of 8-10 year old students was identified and randomly assigned to groups; 556 students were tested. The testing instruments included a paper-and-pencil content test and a Likert-style science opinionnaire. The instructional programs were chosen from existing scripts to avoid bias in their construction; both programs dealt with constellation study. Correlated t tests were used to compare pretest to posttest scores, and two-way factorial analyses of variance were used to compare the groups' posttest scores. It was concluded that the Participatory Oriented Planetarium program, utilizing an activity-based format and extensive verbal interaction, is clearly the more effective use of a small planetarium facility for teaching constellation study, and possibly for improving students' attitudes towards astronomy and the planetarium.
Relationships between aerodynamic roughness and land use and land cover in Baltimore, Maryland
Nicholas, F.W.; Lewis, J.E.
1980-01-01
Urbanization changes the radiative, thermal, hydrologic, and aerodynamic properties of the Earth's surface. Knowledge of these surface characteristics, therefore, is essential to urban climate analysis. Aerodynamic or surface roughness of urban areas is not well documented, however, because of practical constraints in measuring the wind profile in the presence of large buildings. Using an empirical method designed by Lettau, and an analysis of variance of surface roughness values calculated for 324 land use and land cover samples averaging 0.8 hectare (ha) in Baltimore, Md., a strong statistical relation was found between aerodynamic roughness and urban land use and land cover types. Assessment of three land use and land cover systems indicates that some of these types have significantly different surface roughness characteristics. The tests further indicate that statistically significant differences exist in estimated surface roughness values when categories (classes) from different land use and land cover classification systems are used as surrogates. A Level III extension of the U.S. Geological Survey Level II land use and land cover classification system provided the most reliable results. An evaluation of the physical association between the aerodynamic properties of land use and land cover and the surface climate by numerical simulation of the surface energy balance indicates that changes in surface roughness within the range of values typical of the Level III categories induce important changes in the surface climate.
Behennah, Jessica; Conway, Rebecca; Fisher, James; Osborne, Neil; Steele, James
2018-03-01
Chronic low back pain is associated with lumbar extensor deconditioning. This may contribute to decreased neuromuscular control and balance. However, balance is also influenced by the hip musculature. Thus, the purpose of this study was to examine balance in both asymptomatic participants and those with chronic low back pain, and to examine the relationships among balance, lumbar extension strength, trunk extension endurance, and pain. Forty-three asymptomatic participants and 21 participants with non-specific chronic low back pain underwent testing of balance using the Star Excursion Balance Test, lumbar extension strength, trunk extension endurance, and pain using a visual analogue scale. Significant correlations were found between lumbar extension strength and Star Excursion Balance Test scores in the chronic low back pain group (r = 0.439-0.615) and in the asymptomatic group (r = 0.309-0.411). Correlations in the chronic low back pain group were consistently found in posterior directions. Lumbar extension strength explained ~19.3% to ~37.8% of the variance in Star Excursion Balance Test scores for the chronic low back pain group and ~9.5% to ~16.9% for the asymptomatic group. These results suggest that the lumbar extensors may be an important factor in determining the motor control dysfunctions, such as limited balance, that arise in chronic low back pain. As such, specific strengthening of this musculature may be an approach to aid in reversing these dysfunctions. Copyright © 2018 Elsevier Ltd. All rights reserved.
2014-01-01
Background Meta-regression is becoming increasingly used to model study level covariate effects. However this type of statistical analysis presents many difficulties and challenges. Here two methods for calculating confidence intervals for the magnitude of the residual between-study variance in random effects meta-regression models are developed. A further suggestion for calculating credible intervals using informative prior distributions for the residual between-study variance is presented. Methods Two recently proposed and, under the assumptions of the random effects model, exact methods for constructing confidence intervals for the between-study variance in random effects meta-analyses are extended to the meta-regression setting. The use of Generalised Cochran heterogeneity statistics is extended to the meta-regression setting and a Newton-Raphson procedure is developed to implement the Q profile method for meta-analysis and meta-regression. WinBUGS is used to implement informative priors for the residual between-study variance in the context of Bayesian meta-regressions. Results Results are obtained for two contrasting examples, where the first example involves a binary covariate and the second involves a continuous covariate. Intervals for the residual between-study variance are wide for both examples. Conclusions Statistical methods, and R computer software, are available to compute exact confidence intervals for the residual between-study variance under the random effects model for meta-regression. These frequentist methods are almost as easily implemented as their established counterparts for meta-analysis. Bayesian meta-regressions are also easily performed by analysts who are comfortable using WinBUGS. Estimates of the residual between-study variance in random effects meta-regressions should be routinely reported and accompanied by some measure of their uncertainty. Confidence and/or credible intervals are well-suited to this purpose. PMID:25196829
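For orientation alongside the exact interval methods discussed, the familiar DerSimonian-Laird moment estimator of the between-study variance can be sketched as follows (a point estimator only, not the Q-profile or Generalised Cochran interval methods of the paper; the study effects and variances below are invented):

```python
def dersimonian_laird_tau2(estimates, variances):
    """Moment estimator of the between-study variance tau^2 from per-study
    effect estimates and their within-study variances."""
    w = [1.0 / v for v in variances]            # fixed-effect weights
    sw = sum(w)
    ybar = sum(wi * yi for wi, yi in zip(w, estimates)) / sw
    q = sum(wi * (yi - ybar) ** 2 for wi, yi in zip(w, estimates))  # Cochran's Q
    c = sw - sum(wi * wi for wi in w) / sw
    k = len(estimates)
    return max(0.0, (q - (k - 1)) / c)          # truncated at zero

# Four hypothetical studies: effect estimates with equal within-study variance.
effects = [0.1, 0.3, 0.5, 0.9]
within_var = [0.04, 0.04, 0.04, 0.04]
tau2 = dersimonian_laird_tau2(effects, within_var)
print(f"tau^2 = {tau2:.4f}")
```

The exact methods in the paper invert the distribution of a generalized Q statistic to obtain an interval around such a point estimate, rather than reporting the estimate alone.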
2010-07-01
cluster input can look like a Fractional Brownian motion even in the slow growth regime''. Advances in Applied Probability, 41(2), 393-427. Yeghiazarian, L... Brownian motion? Ann. Appl. Probab., 12(1):23-68, 2002. [10] A. Mitra and S.I. Resnick. Hidden domain of attraction: extension of hidden regular variation...variance? A paradox and an explanation''. Quantitative Finance, 1, 11 pages. Hult, H. and Samorodnitsky, G. (2010) ``Large deviations for point
A multiple-objective optimal exploration strategy
Christakos, G.; Olea, R.A.
1988-01-01
Exploration for natural resources is accomplished through partial sampling of extensive domains. Such imperfect knowledge is subject to sampling error. Complex systems of equations resulting from modelling based on the theory of correlated random fields are reduced to simple analytical expressions providing global indices of estimation variance. The indices are utilized by multiple-objective decision criteria to find the best sampling strategies. The approach is not limited by the geometric nature of the sampling, covers a wide range of spatial continuity, and leads to a step-by-step procedure. © 1988.
Branscum, Paul; Sharma, Manoj
2014-01-01
The purpose of this study was to use the theory of planned behavior to explain two types of snack food consumption among boys and girls (girls n = 98; boys n = 69), which may have implications for future theory-based health promotion interventions. Between genders, there was a significant difference for calorie-dense/nutrient-poor snacks (p = .002), but no difference for fruit and vegetable snacks. Using stepwise multiple regression, attitudes, perceived behavioral control, and subjective norms accounted for a large amount of the variance of intentions (girls = 43.3%; boys = 55.9%); however, for girls, subjective norms accounted for the most variance, whereas for boys, attitudes accounted for the most variance. Calories from calorie-dense/nutrient-poor snacks and fruit and vegetable snacks were also predicted by intentions. For boys, intentions predicted 6.4% of the variance for fruit and vegetable snacks (p = .03) but was not significant for calorie-dense/nutrient-poor snacks, whereas for girls, intentions predicted 6.0% of the variance for fruit and vegetable snacks (p = .007), and 7.2% of the variance for calorie-dense/nutrient-poor snacks (p = .004). Results suggest that the theory of planned behavior is a useful framework for predicting snack foods among children; however, there are important differences between genders that should be considered in future health promotion interventions.
Location tests for biomarker studies: a comparison using simulations for the two-sample case.
Scheinhardt, M O; Ziegler, A
2013-01-01
Gene, protein, or metabolite expression levels are often non-normally distributed and heavy-tailed, and contain outliers. Standard statistical approaches may fail as location tests in this situation. In three Monte-Carlo simulation studies, we aimed at comparing the type I error levels and empirical power of standard location tests and three adaptive tests [O'Gorman, Can J Stat 1997; 25: 269-279; Keselman et al., Brit J Math Stat Psychol 2007; 60: 267-293; Szymczak et al., Stat Med 2013; 32: 524-537] for a wide range of distributions. We simulated two-sample scenarios using the g-and-k-distribution family to systematically vary tail length and skewness with identical and varying variability between groups. All tests kept the type I error level when groups did not vary in their variability. The standard non-parametric U-test performed well in all simulated scenarios. It was outperformed by the two non-parametric adaptive methods in the case of heavy tails or large skewness. Most tests did not keep the type I error level for skewed data in the case of heterogeneous variances. The standard U-test was a powerful and robust location test for most of the simulated scenarios except for very heavy-tailed or heavily skewed data, and it is thus to be recommended except in these cases. The non-parametric adaptive tests were powerful for both normal and non-normal distributions under sample variance homogeneity. But when sample variances differed, they did not keep the type I error level. The parametric adaptive test lacks power for skewed and heavy-tailed distributions.
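The type I error behaviour described can be checked in miniature with a Monte-Carlo sketch of the U-test under a skewed null (this uses the large-sample normal approximation to the U statistic rather than the paper's g-and-k design; the sample sizes and the exponential null are invented for illustration):

```python
import math
import random

def u_test_z(x, y):
    """Mann-Whitney U statistic and its large-sample z score (no tie
    correction, which is fine for continuous data)."""
    u = sum(1.0 for xi in x for yi in y if xi > yi)
    n1, n2 = len(x), len(y)
    mu = n1 * n2 / 2.0
    sd = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12.0)
    return (u - mu) / sd

rng = random.Random(7)
n_sim, n_per_group, rejections = 2000, 30, 0
for _ in range(n_sim):
    # Skewed null: both groups drawn from the same exponential distribution,
    # so any rejection is a type I error.
    x = [rng.expovariate(1.0) for _ in range(n_per_group)]
    y = [rng.expovariate(1.0) for _ in range(n_per_group)]
    if abs(u_test_z(x, y)) > 1.96:   # nominal two-sided 5% test
        rejections += 1
type1 = rejections / n_sim
print(f"empirical type I error = {type1:.3f}")
```

Under equal variability the empirical rate hovers near the nominal 5%, matching the paper's finding that the U-test keeps its level for skewed data when the groups share a common distribution.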
Locus equations and coarticulation in three Australian languages.
Graetzer, Simone; Fletcher, Janet; Hajek, John
2015-02-01
Locus equations were applied to F2 data for bilabial, alveolar, retroflex, palatal, and velar plosives in three Australian languages. In addition, F2 variance at the vowel-consonant boundary, and, by extension, consonantal coarticulatory sensitivity, was measured. The locus equation slopes revealed that there were place-dependent differences in the magnitude of vowel-to-consonant coarticulation. As in previous studies, the non-coronal (bilabial and velar) consonants tended to be associated with the highest slopes, palatal consonants tended to be associated with the lowest slopes, and alveolar and retroflex slopes tended to be low to intermediate. Similarly, F2 variance measurements indicated that non-coronals displayed greater coarticulatory sensitivity to adjacent vowels than did coronals. Thus, both the magnitude of vowel-to-consonant coarticulation and the magnitude of consonantal coarticulatory sensitivity were seen to vary inversely with the magnitude of consonantal articulatory constraint. The findings indicated that, unlike results reported previously for European languages such as English, anticipatory vowel-to-consonant coarticulation tends to exceed carryover coarticulation in these Australian languages. Accordingly, on the F2 variance measure, consonants tended to be more sensitive to the coarticulatory effects of the following vowel. Prosodic prominence of vowels was a less significant factor in general, although certain language-specific patterns were observed.
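A locus equation is an ordinary least-squares line relating F2 at the consonant-vowel boundary to F2 at the vowel target, with the slope indexing the magnitude of vowel-to-consonant coarticulation; a minimal sketch (the formant values are invented and lie on an exact line for clarity):

```python
def locus_equation(f2_vowel, f2_onset):
    """Fit F2_onset = slope * F2_vowel + intercept by ordinary least squares.
    A slope near 1 indicates heavy vowel-to-consonant coarticulation (e.g.
    bilabials/velars); a slope near 0 indicates a resistant articulation
    (e.g. palatals)."""
    n = len(f2_vowel)
    mx = sum(f2_vowel) / n
    my = sum(f2_onset) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(f2_vowel, f2_onset))
    sxx = sum((x - mx) ** 2 for x in f2_vowel)
    slope = sxy / sxx
    return slope, my - slope * mx

# Hypothetical bilabial tokens: onset F2 tracks the vowel closely (high slope).
f2_vowel = [900.0, 1300.0, 1700.0, 2100.0, 2500.0]
f2_onset = [0.8 * v + 300.0 for v in f2_vowel]   # exact line for the sketch
slope, intercept = locus_equation(f2_vowel, f2_onset)
print(f"slope={slope:.2f}, intercept={intercept:.0f} Hz")
```

Real tokens scatter around the line, and it is that scatter (the F2 variance at the boundary) that the paper uses as its second, complementary measure of coarticulatory sensitivity.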
CR extension from hypersurfaces of higher type
NASA Astrophysics Data System (ADS)
Baracco, Luca
2007-07-01
We prove extension of CR functions from a hypersurface M of in the presence of the so-called sector property. If M has finite type in the Bloom-Graham sense, then our result is already contained in [C. Rea, Prolongement holomorphe des fonctions CR, conditions suffisantes, C. R. Acad. Sci. Paris 297 (1983) 163-166] by Rea. We think, however, that the argument of our proof carries an expressive geometric meaning and deserves interest in its own right. Also, our method applies in some cases to hypersurfaces of infinite type; note that for these, the classical methods fail. CR extension is treated by many authors mainly in two frames: extension in directions of iterated commutators of CR vector fields (cf., for instance, [A. Boggess, J. Pitts, CR extension near a point of higher type, Duke Math. J. 52 (1) (1985) 67-102; A. Boggess, J.C. Polking, Holomorphic extension of CR functions, Duke Math. J. 49 (1982) 757-784; M.S. Baouendi, L. Rothschild, Normal forms for generic manifolds and holomorphic extension of CR functions, J. Differential Geom. 25 (1987) 431-467]); and extension through minimality towards unspecified directions [A.E. Tumanov, Extension of CR-functions into a wedge, Mat. Sb. 181 (7) (1990) 951-964; A.E. Tumanov, Analytic discs and the extendibility of CR functions, in: Integral Geometry, Radon Transforms and Complex Analysis, Venice, 1996, in: Lecture Notes in Math., vol. 1684, Springer, Berlin, 1998, pp. 123-141].
A Model Based Approach to Sample Size Estimation in Recent Onset Type 1 Diabetes
Bundy, Brian; Krischer, Jeffrey P.
2016-01-01
The area under the curve of C-peptide following a 2-hour mixed-meal tolerance test, measured from baseline to 12 months after enrollment in 481 individuals enrolled in 5 prior TrialNet studies of recent-onset type 1 diabetes, was modelled to produce estimates of its rate of loss and variance. Age at diagnosis and baseline C-peptide were found to be significant predictors, and adjusting for these in an ANCOVA resulted in estimates with lower variance. Using these results as planning parameters for new studies yields a nearly 50% reduction in the target sample size. The modelling also produces an expected C-peptide that can be used in Observed vs. Expected calculations to estimate the presumption of benefit in ongoing trials. PMID:26991448
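The reported ~50% reduction in target sample size follows from the standard two-sample formula, in which the required n scales linearly with the outcome variance; a sketch with invented planning values, not the study's actual C-peptide parameters (statistics.NormalDist supplies the normal quantiles):

```python
import statistics

def two_sample_n(sigma2, delta, alpha=0.05, power=0.80):
    """Per-group sample size for a two-sample comparison of means:
    n = 2 * sigma^2 * (z_{1-alpha/2} + z_{power})^2 / delta^2."""
    nd = statistics.NormalDist()
    z = nd.inv_cdf(1.0 - alpha / 2.0) + nd.inv_cdf(power)
    return 2.0 * sigma2 * z * z / (delta * delta)

# Hypothetical planning values: covariate adjustment (as in the ANCOVA above)
# halves the residual outcome variance, so the required n halves too.
n_raw = two_sample_n(sigma2=1.00, delta=0.5)
n_adj = two_sample_n(sigma2=0.50, delta=0.5)
print(f"unadjusted n/group = {n_raw:.0f}, adjusted = {n_adj:.0f}")
```

Because n is directly proportional to sigma^2, any fractional variance reduction from adjusting for age at diagnosis and baseline C-peptide translates one-for-one into a sample size reduction.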
Wilson, Christina K; Padrón, Elena; Samuelson, Kristin W
2017-02-01
Trauma exposure is associated with various parenting difficulties, but few studies have examined relationships between trauma, posttraumatic stress disorder (PTSD), and parenting stress. Parenting stress is an important facet of parenting and mediates the relationship between parental trauma exposure and negative child outcomes (Owen, Thompson, & Kaslow, 2006). We examined trauma type (child maltreatment, intimate partner violence, community violence, and non-interpersonal traumas) and PTSD symptoms as predictors of parenting stress in a sample of 52 trauma-exposed mothers. Community violence exposure and PTSD symptom severity accounted for significant variance in parenting stress. Further analyses revealed that emotional numbing was the only PTSD symptom cluster accounting for variance in parenting stress scores. Results highlight the importance of addressing community violence exposure and emotion regulation difficulties with trauma-exposed mothers.
Measuring relative work values for home care nursing services in Japan.
Ogata, Yasuko; Kobayashi, Yasuki; Fukuda, Takashi; Mori, Katsumi; Hashimoto, Michio; Otosaka, Kayo
2004-01-01
Japan's system of Home Visit Nursing Care Stations (Station) began in 1991. To maintain the quality of services in home health nursing provided by Stations, reimbursement needs to account not only for the number of home visits, but also for the time and intensity of nursing services. This study aimed primarily to investigate the total work value and the three dimensions (time, mental effort, and physical effort) of actual visiting nursing services for the aged, and to quantify the contribution made by the three dimensions of nursing services to total work. The secondary purpose was to determine whether patient characteristics, nurse characteristics, and types of nursing services contributed to the variance in total work. Total work is defined as comprehensive work input of nursing services, with careful consideration given to both the intensity and duration of work. Self-report questionnaires about actual visiting nursing services, based on the Resource-Based Relative Value Scale, were answered by 32 nurses from three Stations in urban Yokohama, Japan. Regression analysis showed that time and intensity (physical effort and mental effort) explained 96% of the variance in total work. Time alone accounted for only 39% of the variance in total work. Patient characteristics, nurse characteristics, and service type accounted for less variance in total work than did time and intensity. The study findings indicate that reimbursement of nursing services should reflect not only the time required for each visit, but also the intensity of nursing services provided, including mental effort and physical effort.
Lee, Dae-Hee; Shin, Young-Soo; Jeon, Jin-Ho; Suh, Dong-Won; Han, Seung-Beom
2014-08-01
The aim of this study was to investigate the mechanism underlying the development of gap differences in total knee arthroplasty using the navigation-assisted gap technique and to assess whether these gap differences have statistical significance. Ninety-two patients (105 knees) implanted with cruciate-retaining prostheses using the navigation-assisted gap balancing technique were prospectively analysed. Medial extension and flexion gaps and lateral extension and flexion gaps were measured at full extension and at 90° of flexion. Repeated measures analysis of variance was used to compare the mean values of these four gaps. The correlation coefficient between each pair of gaps was assessed using Pearson's correlation analysis. Mean intra-operative medial and lateral extension gaps were 20.6 ± 2.1 and 21.7 ± 2.2 mm, respectively, and mean intra-operative medial and lateral flexion gaps were 21.6 ± 2.7 and 22.1 ± 2.5 mm, respectively. The pairs of gaps differed significantly (P < 0.05 each), except for the difference between the medial flexion and lateral extension gaps (n.s.). All four gaps were significantly correlated with each other, with the highest correlation between the medial and lateral flexion gaps (r = 0.890, P < 0.001) and the lowest between the medial flexion and lateral extension gaps (r = 0.701, P < 0.001). Medial and lateral flexion and extension gaps created using the navigation-assisted gap technique differed significantly, although the differences between them were <2 mm, and the gaps were closely correlated. These narrow ranges of statistically acceptable gap differences and the strong correlations between gaps should be considered by surgeons, as should the risks of soft tissue over-release or unintentional increases in extension or flexion gap after preparation of the other gap.
Revealing Hidden Einstein-Podolsky-Rosen Nonlocality
NASA Astrophysics Data System (ADS)
Walborn, S. P.; Salles, A.; Gomes, R. M.; Toscano, F.; Souto Ribeiro, P. H.
2011-04-01
Steering is a form of quantum nonlocality that is intimately related to the famous Einstein-Podolsky-Rosen (EPR) paradox that ignited the ongoing discussion of quantum correlations. Within the hierarchy of nonlocal correlations appearing in nature, EPR steering occupies an intermediate position between Bell nonlocality and entanglement. In continuous variable systems, EPR steering correlations have been observed by violation of Reid’s EPR inequality, which is based on inferred variances of complementary observables. Here we propose and experimentally test a new criterion based on entropy functions, and show that it is more powerful than the variance inequality for identifying EPR steering. Using the entropic criterion our experimental results show EPR steering, while the variance criterion does not. Our results open up the possibility of observing this type of nonlocality in a wider variety of quantum states.
Multi-reader ROC studies with split-plot designs: a comparison of statistical methods.
Obuchowski, Nancy A; Gallas, Brandon D; Hillis, Stephen L
2012-12-01
Multireader imaging trials often use a factorial design, in which study patients undergo testing with all imaging modalities and readers interpret the results of all tests for all patients. A drawback of this design is the large number of interpretations required of each reader. Split-plot designs have been proposed as an alternative, in which one or a subset of readers interprets all images of a sample of patients, while other readers interpret the images of other samples of patients. In this paper, the authors compare three methods of analysis for the split-plot design. Three statistical methods are presented: the Obuchowski-Rockette method modified for the split-plot design, a newly proposed marginal-mean analysis-of-variance approach, and an extension of the three-sample U-statistic method. A simulation study using the Roe-Metz model was performed to compare the type I error rate, power, and confidence interval coverage of the three test statistics. The type I error rates for all three methods are close to the nominal level but tend to be slightly conservative. The statistical power is nearly identical for the three methods. The coverage of 95% confidence intervals falls close to the nominal coverage for small and large sample sizes. The split-plot multireader, multicase study design can be statistically efficient compared to the factorial design, reducing the number of interpretations required per reader. Three methods of analysis, shown to have nominal type I error rates, similar power, and nominal confidence interval coverage, are available for this study design. Copyright © 2012 AUR. All rights reserved.
Patient Characteristics and Outcomes in Institutional and Community Long-Term Care.
ERIC Educational Resources Information Center
Braun, Kathryn L.; And Others
1991-01-01
Examined three-way relationships among patient characteristics, type of care (admission to nursing home or community setting), and 6-month outcomes of 352 long-term care patients. Found that patient characteristics influenced type of care received and that substantial portions of variance in outcomes were attributable to initial differences among…
Therapeutic Relationship of A-B Therapists as Perceived by Client and Therapist
ERIC Educational Resources Information Center
Bednar, Richard L.
1970-01-01
Analysis of variance was employed to evaluate the therapeutic relationship offered to schizophrenic and psychoneurotic patients by A-B type therapists. Results are discussed in the context of the original Whitehorn-Betz claim that the differential therapeutic success of A-B type therapists with schizophrenic and psychoneurotic clients is a function…
40 CFR 125.72 - Early screening of applications for section 316(a) variances.
Code of Federal Regulations, 2010 CFR
2010-07-01
... necessary; (3) A general description of the type of data, studies, experiments and other information which... filed, the discharger shall submit for the Director's approval a detailed plan of study which the... nature and extent of the following type of information to be included in the plan of study: Biological...
40 CFR 125.72 - Early screening of applications for section 316(a) variances.
Code of Federal Regulations, 2011 CFR
2011-07-01
... necessary; (3) A general description of the type of data, studies, experiments and other information which... filed, the discharger shall submit for the Director's approval a detailed plan of study which the... nature and extent of the following type of information to be included in the plan of study: Biological...
Risk and the evolution of human exchange.
Kaplan, Hillard S; Schniter, Eric; Smith, Vernon L; Wilson, Bart J
2012-08-07
Compared with other species, exchange among non-kin is a hallmark of human sociality in both the breadth of individuals and total resources involved. One hypothesis is that extensive exchange evolved to buffer the risks associated with hominid dietary specialization on calorie-dense, large packages, especially from hunting. 'Lucky' individuals share food with 'unlucky' individuals with the expectation of reciprocity when roles are reversed. Cross-cultural data provide prima facie evidence of pair-wise reciprocity and an almost universal association of high-variance (HV) resources with greater exchange. However, such evidence is not definitive; an alternative hypothesis is that food sharing is really 'tolerated theft', in which individuals possessing more food allow others to steal from them, owing to the threat of violence from hungry individuals. Pair-wise correlations may reflect proximity providing greater opportunities for mutual theft of food. We report a laboratory experiment of foraging and food consumption in a virtual world, designed to test the risk-reduction hypothesis by determining whether people form reciprocal relationships in response to variance of resource acquisition, even when there is no external enforcement of any transfer agreements that might emerge. Individuals can forage in a high-mean, HV patch or a low-mean, low-variance (LV) patch. The key feature of the experimental design is that individuals can transfer resources to others. We find that sharing hardly occurs after LV foraging, but among HV foragers sharing increases dramatically over time. The results provide strong support for the hypothesis that people are pre-disposed to evaluate gains from exchange and respond to unsynchronized variance in resource availability through endogenous reciprocal trading relationships.
Analytical approximations for effective relative permeability in the capillary limit
NASA Astrophysics Data System (ADS)
Rabinovich, Avinoam; Li, Boxiao; Durlofsky, Louis J.
2016-10-01
We present an analytical method for calculating two-phase effective relative permeability, k_rj^eff, where j designates phase (here CO2 and water), under steady-state and capillary-limit assumptions. These effective relative permeabilities may be applied in experimental settings and for upscaling in the context of numerical flow simulations, e.g., for CO2 storage. An exact solution for effective absolute permeability, k_eff, in two-dimensional log-normally distributed isotropic permeability (k) fields is the geometric mean. We show that this does not hold for k_rj^eff, since log normality is not maintained in the capillary-limit phase permeability field (K_j = k·k_rj) when capillary pressure, and thus the saturation field, is varied. Nevertheless, the geometric mean is still shown to be suitable for approximating k_rj^eff when the variance of ln k is low. For high-variance cases, we apply a correction to the geometric-average gas effective relative permeability using a Winsorized mean, which neglects large and small K_j values symmetrically. The analytical method is extended to anisotropically correlated log-normal permeability fields using power-law averaging. In these cases, the Winsorized mean treatment is applied to the gas curves for cases described by negative power-law exponents (flow across incomplete layers). The accuracy of our analytical expressions for k_rj^eff is demonstrated through extensive numerical tests, using low-variance and high-variance permeability realizations with a range of correlation structures. We also present integral expressions for geometric-mean and power-law average k_rj^eff for the systems considered, which enable derivation of closed-form series solutions for k_rj^eff without generating permeability realizations.
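The two averaging operators mentioned, the geometric mean and the Winsorized mean, can be sketched on a synthetic log-normal sample (an illustration of the estimators only; in the paper the Winsorized correction is applied to the capillary-limit gas relative permeabilities, not to k itself, and the 10% trim fraction here is an invented choice):

```python
import math
import random

def geometric_mean(vals):
    """exp of the arithmetic mean of the logs."""
    return math.exp(sum(math.log(v) for v in vals) / len(vals))

def winsorized_mean(vals, frac=0.1):
    """Clip the lowest and highest frac of the sample to the nearest kept
    value, then average: large and small values are neglected symmetrically."""
    s = sorted(vals)
    n, k = len(s), int(frac * len(s))
    clipped = [s[k]] * k + s[k:n - k] + [s[n - k - 1]] * k
    return sum(clipped) / n

rng = random.Random(1)
# Log-normal field, ln k ~ N(0, 1): the geometric mean estimates exp(mu) = 1,
# while the arithmetic mean is pulled up toward exp(mu + var/2) ~ 1.65 by the
# heavy right tail, which the Winsorized mean partially suppresses.
k_vals = [rng.lognormvariate(0.0, 1.0) for _ in range(50000)]
gm = geometric_mean(k_vals)
wm = winsorized_mean(k_vals)
print(f"geometric mean = {gm:.3f}, winsorized mean = {wm:.3f}")
```

The Winsorized mean thus sits between the geometric and arithmetic means for right-skewed fields, which is the behaviour exploited to correct the gas-phase curves in high-variance cases.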
The distribution of genetic variance across phenotypic space and the response to selection.
Blows, Mark W; McGuigan, Katrina
2015-05-01
The role of adaptation in biological invasions will depend on the availability of genetic variation for traits under selection in the new environment. Although genetic variation is present for most traits in most populations, selection is expected to act on combinations of traits, not individual traits in isolation. The distribution of genetic variance across trait combinations can be characterized by the empirical spectral distribution of the genetic variance-covariance (G) matrix. Empirical spectral distributions of G from a range of trait types and taxa all exhibit a characteristic shape; some trait combinations have large levels of genetic variance, while others have very little genetic variance. In this study, we review what is known about the empirical spectral distribution of G and show how it predicts the response to selection across phenotypic space. In particular, trait combinations that form a nearly null genetic subspace with little genetic variance respond only inconsistently to selection. We go on to set out a framework for understanding how the empirical spectral distribution of G may differ from the random expectations that have been developed under random matrix theory (RMT). Using a data set containing a large number of gene expression traits, we illustrate how hypotheses concerning the distribution of multivariate genetic variance can be tested using RMT methods. We suggest that the relative alignment between novel selection pressures during invasion and the nearly null genetic subspace is likely to be an important component of the success or failure of invasion, and for the likelihood of rapid adaptation in small populations in general. © 2014 John Wiley & Sons Ltd.
Fritts, Andrea; Knights, Brent C.; Lafrancois, Toben D.; Bartsch, Lynn; Vallazza, Jon; Bartsch, Michelle; Richardson, William B.; Karns, Byron N.; Bailey, Sean; Kreiling, Rebecca
2018-01-01
Fatty acid and stable isotope signatures allow researchers to better understand food webs, food sources, and trophic relationships. Research in marine and lentic systems has indicated that the variance of these biomarkers can exhibit substantial differences across spatial and temporal scales, but this type of analysis has not been completed for large river systems. Our objectives were to evaluate variance structures for fatty acids and stable isotopes (i.e. δ13C and δ15N) of seston, threeridge mussels, hydropsychid caddisflies, gizzard shad, and bluegill across spatial scales (10s-100s km) in large rivers of the Upper Mississippi River Basin, USA that were sampled annually for two years, and to evaluate the implications of this variance on the design and interpretation of trophic studies. The highest variance for both isotopes was present at the largest spatial scale for all taxa (except seston δ15N) indicating that these isotopic signatures are responding to factors at a larger geographic level rather than being influenced by local-scale alterations. Conversely, the highest variance for fatty acids was present at the smallest spatial scale (i.e. among individuals) for all taxa except caddisflies, indicating that the physiological and metabolic processes that influence fatty acid profiles can differ substantially between individuals at a given site. Our results highlight the need to consider the spatial partitioning of variance during sample design and analysis, as some taxa may not be suitable to assess ecological questions at larger spatial scales.
Ghamari Kivi, Hossein; Mohammadipour Rik, Ne’mat; Sadeghi Movahhed, Fariba
2013-01-01
Objective: Thought-action fusion (TAF) refers to the tendency to assume an incorrect causal relationship between one’s own thoughts and external reality, in which thoughts and actions are treated as equivalents. This construct contributes to the development and maintenance of many psychological disorders. The aim of the present study was to predict obsessive-compulsive disorder (OCD) and its types, and major depressive disorder (MDD), from TAF and its levels. Methods: Two groups of 50 persons each, with OCD and MDD respectively, were selected by convenience sampling in private and governmental psychiatric centers in Ardabil, Iran. They then responded to the Beck Depression Inventory, the Padua Inventory and the TAF scale. Data were analysed using stepwise multiple regression analysis. Results: TAF or its subtypes (moral TAF, likelihood-self TAF and likelihood-others TAF) explained 14% of MDD variance (p < 0.01), 15% of OCD variance (p < 0.01), and 8-21% of the variance of OCD types (p < 0.05). Moral TAF was elevated in both OCD and MDD. Conclusion: The construct of TAF is not a specific factor for OCD; it is present in MDD, too. Declaration of interest: None. PMID:24644509
Oregon ground-water quality and its relation to hydrogeological factors; a statistical approach
Miller, T.L.; Gonthier, J.B.
1984-01-01
An appraisal of Oregon ground-water quality was made using existing data accessible through the U.S. Geological Survey computer system. The data available for about 1,000 sites were separated by aquifer units and hydrologic units. Selected statistical moments were described for 19 constituents including major ions. About 96 percent of all sites in the data base were sampled only once. The sample data were classified by aquifer unit and hydrologic unit and analysis of variance was run to determine if significant differences exist between the units within each of these two classifications for the same 19 constituents on which statistical moments were determined. Results of the analysis of variance indicated both classification variables performed about the same, but aquifer unit did provide more separation for some constituents. Samples from the Rogue River basin were classified by location within the flow system and type of flow system. The samples were then analyzed using analysis of variance on 14 constituents to determine if there were significant differences between subsets classified by flow path. Results of this analysis were not definitive, but classification as to the type of flow system did indicate potential for segregating water-quality data into distinct subsets. (USGS)
Savalei, Victoria
2018-01-01
A new type of nonnormality correction to the RMSEA has recently been developed, which has several advantages over existing corrections. In particular, the new correction adjusts the sample estimate of the RMSEA for the inflation due to nonnormality, while leaving its population value unchanged, so that established cutoff criteria can still be used to judge the degree of approximate fit. A confidence interval (CI) for the new robust RMSEA based on the mean-corrected ("Satorra-Bentler") test statistic has also been proposed. Follow up work has provided the same type of nonnormality correction for the CFI (Brosseau-Liard & Savalei, 2014). These developments have recently been implemented in lavaan. This note has three goals: a) to show how to compute the new robust RMSEA and CFI from the mean-and-variance corrected test statistic; b) to offer a new CI for the robust RMSEA based on the mean-and-variance corrected test statistic; and c) to caution that the logic of the new nonnormality corrections to RMSEA and CFI is most appropriate for the maximum likelihood (ML) estimator, and cannot easily be generalized to the most commonly used categorical data estimators.
Inverse Optimization: A New Perspective on the Black-Litterman Model
Bertsimas, Dimitris; Gupta, Vishal; Paschalidis, Ioannis Ch.
2014-01-01
The Black-Litterman (BL) model is a widely used asset allocation model in the financial industry. In this paper, we provide a new perspective. The key insight is to replace the statistical framework in the original approach with ideas from inverse optimization. This insight allows us to significantly expand the scope and applicability of the BL model. We provide a richer formulation that, unlike the original model, is flexible enough to incorporate investor information on volatility and market dynamics. Equally importantly, our approach allows us to move beyond the traditional mean-variance paradigm of the original model and construct “BL”-type estimators for more general notions of risk such as coherent risk measures. Computationally, we introduce and study two new “BL”-type estimators and their corresponding portfolios: a Mean Variance Inverse Optimization (MV-IO) portfolio and a Robust Mean Variance Inverse Optimization (RMV-IO) portfolio. These two approaches are motivated by ideas from arbitrage pricing theory and volatility uncertainty. Using numerical simulation and historical backtesting, we show that both methods often demonstrate a better risk-reward tradeoff than their BL counterparts and are more robust to incorrect investor views. PMID:25382873
Best surgical option for arch extension of type B aortic dissection: the open approach
Kim, Joon Bum
2014-01-01
Arch extension of aortic dissection (AD) is reported to occur in 4-25% of patients presenting with acute type B AD. The DeBakey and Stanford classifications do not specifically account for this subset, however, recent studies have demonstrated that the prognosis of patients with arch extension in acute type B AD is virtually identical to that of others with type B AD. In this sense, it seems reasonable to extend the general management principles that are applied to classic acute type B AD even to patients with arch extension. This may be because even in patients with arch extension, most complications occur at locations distal to the arch, and therefore treatment of these patients is similar to that of complicated type B AD, namely thoracic endovascular aortic repair (TEVAR). Conversely, 10% of patients with acute type B AD and arch extension develop complications that are directly related to the arch pathology. This clinical scenario generally necessitates surgical arch repair through a sternotomy approach. The frozen elephant trunk technique combined with arch repair is a very reasonable option to treat this unique clinical entity that involves relatively distal locations of the aortic diseases. Combined arch and descending aorta replacement through thoracotomy is an alternative option particularly when the anatomical features of the target lesions are not suitable for a sternotomy approach or TEVAR. Nonetheless, the reported mortality associated with this approach has been exceedingly high. Hybrid arch repair is another consideration in treating these patients to reduce the treatment-related mortality and morbidity, especially when the arch pathology is limited to the distal part. Nevertheless, the safety and efficacy of this procedure in cases with more extensive arch involvement needs to be assessed in further studies in comparison with other treatment modalities. PMID:25133105
Garg, Ravi K; Afifi, Ahmed M; Gassner, Jennifer; Hartman, Michael J; Leverson, Glen; King, Timothy W; Bentz, Michael L; Gentry, Lindell R
2015-05-01
The broad spectrum of frontal bone fractures, including those with orbital and skull base extension, is poorly understood. We propose a novel classification scheme for frontal bone fractures. Maxillofacial CT scans of trauma patients were reviewed over a five year period, and frontal bone fractures were classified: Type 1: Frontal sinus fracture without vertical extension. Type 2: Vertical fracture through the orbit without frontal sinus involvement. Type 3: Vertical fracture through the frontal sinus without orbit involvement. Type 4: Vertical fracture through the frontal sinus and ipsilateral orbit. Type 5: Vertical fracture through the frontal sinus and contralateral or bilateral orbits. We also identified the depth of skull base extension, and performed a chart review to identify associated complications. 149 frontal bone fractures, including 51 non-vertical frontal sinus (Type 1, 34.2%) and 98 vertical (Types 2-5, 65.8%) fractures were identified. Vertical fractures penetrated the middle or posterior cranial fossa significantly more often than non-vertical fractures (62.2 v. 15.7%, p = 0.0001) and had a significantly higher mortality rate (18.4 v. 0%, p < 0.05). Vertical fractures with frontal sinus and orbital extension, and fractures that penetrated the middle or posterior cranial fossa had the strongest association with intracranial injuries, optic neuropathy, disability, and death (p < 0.05). Vertical frontal bone fractures carry a worse prognosis than frontal bone fractures without a vertical pattern. In addition, vertical fractures with extension into the frontal sinus and orbit, or with extension into the middle or posterior cranial fossa have the highest complication rate and mortality. Copyright © 2015 British Association of Plastic, Reconstructive and Aesthetic Surgeons. Published by Elsevier Ltd. All rights reserved.
Types and Role Performance of the Extension Field Staff in a Midwestern University.
ERIC Educational Resources Information Center
Lionberger, Herbert F.; Pope, LaVern A.
To identify and describe extension role types, all educational assistants in the Small Farm Program, agricultural specialists, and community development and local government specialists in Missouri were asked to fill out questionnaires asking how frequently they performed 56 activities broadly representing what extension field staff might do.…
Francis, Saritha; Chandran, Sindhu Padinjareveedu; Nesheera, K K; Jacob, Jose
2017-05-01
Hyperinsulinemia is contributed to by insulin resistance, hepatic insulin uptake, insulin secretion and the rate of insulin degradation. A family history of type 2 diabetes mellitus has been reported to cause hyperinsulinemia. The aim was to correlate fasting insulin with post-glucose-load Oral Glucose Tolerance Test (OGTT) insulin in young adults and to partition the values according to family history of type 2 diabetes. In this observational cross-sectional study, clinical evaluation and biochemical assays of insulin and diabetes-related parameters, and of secondary clinical influences on type 2 diabetes, were done in volunteers for their inclusion as participants (n=90) or exclusion. Cut-off levels of quantitative biochemical variables were fixed such that they included the effects of insulin resistance but excluded other secondary clinical influences. Distribution was analysed by the Shapiro-Wilk test and equality of variances by Levene's test; log10 transformations were used to convert groups to a Gaussian distribution and to equalize variances in the groups compared. When the groups compared had a Gaussian distribution and equal variances, parametric methods were used; otherwise, non-parametric methods were used. Fasting insulin correlated significantly with 30, 60 and 120 minute OGTT insulin, showing that hyperinsulinemia in the fasting state was related to hyperinsulinemia in the post-glucose-load states. When fasting and post-glucose-load OGTT insulin were partitioned into those without and with a family history of type 2 diabetes, the maximum difference was seen in fasting insulin (p<0.001), followed by 120 (p=0.001) and 60 (p=0.002) minute OGTT insulin. The 30 minute insulin could not be partitioned (p=0.574). Fasting, 60 and 120 minute OGTT insulin can be partitioned according to family history of type 2 diabetes, demonstrating stratification and heterogeneity in the insulin sample.
Of these, fasting insulin was better partitioned and could be used for baseline reference interval calculations.
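The testing logic described in the abstract (check distribution, check equality of variances, then choose a parametric or non-parametric comparison) can be sketched as follows. The data here are synthetic stand-ins, not the study's measurements, and SciPy is assumed to be available:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Hypothetical fasting insulin values without / with a family history
# of type 2 diabetes (synthetic data for illustration only).
g1 = rng.normal(8.0, 2.0, size=45)
g2 = rng.normal(11.0, 2.5, size=45)

# Shapiro-Wilk: does each group look Gaussian?
normal = all(stats.shapiro(g).pvalue > 0.05 for g in (g1, g2))
# Levene: are the group variances equal?
equal_var = stats.levene(g1, g2).pvalue > 0.05

# Parametric comparison only when both assumptions hold; otherwise a
# non-parametric test (the study also tried log10 transforms first).
if normal and equal_var:
    p = stats.ttest_ind(g1, g2).pvalue
else:
    p = stats.mannwhitneyu(g1, g2).pvalue
```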
Pasture-feeding of Charolais steers influences skeletal muscle metabolism and gene expression.
Cassar-Malek, I; Jurie, C; Bernard, C; Barnola, I; Micol, D; Hocquette, J-F
2009-10-01
Extensive beef production systems on pasture are promoted to improve animal welfare and beef quality. This study aimed to compare the influence on muscle characteristics of two management approaches representative of intensive and extensive production systems. One group of 6 Charolais steers was fed maize-silage indoors and another group of 6 Charolais steers grazed on pasture. Activities of enzymes representative of glycolytic and oxidative (Isocitrate dehydrogenase [ICDH], citrate synthase [CS], hydroxyacyl-CoA dehydrogenase [HAD]) muscle metabolism were assessed in Rectus abdominis (RA) and Semitendinosus (ST) muscles. Activities of oxidative enzymes ICDH, CS and HAD were higher in muscles from grazing animals demonstrating a plasticity of muscle metabolism according to the production and feeding system. Gene expression profiling in RA and ST muscles was performed on both production groups using a multi-tissue bovine cDNA repertoire. Variance analysis showed an effect of the muscle type and of the production system on gene expression (P<0.001). A list of the 212 most variable genes according to the production system was established, of which 149 genes corresponded to identified genes. They were classified according to their gene function annotation mainly in the "protein metabolism and modification", "signal transduction", "cell cycle", "developmental processes" and "muscle contraction" biological processes. Selenoprotein W was found to be underexpressed in pasture-fed animals and could be proposed as a putative gene marker of the grass-based system. In conclusion, enzyme-specific adaptations and gene expression modifications were observed in response to the production system and some of them could be candidates for grazing or grass-feeding traceability.
Basin analysis of South Mozambique graben
DOE Office of Scientific and Technical Information (OSTI.GOV)
Iliffe, J.; Lerche, I.; De Buyl, M.
1987-05-01
Basin analysis of the South Mozambique graben between latitudes 25° and 26° and longitudes 34° and 35° demonstrates how modeling techniques may help to assess the oil potential of a speculative basin with only minimal seismic data. Two-dimensional restoration of the seismic profiles, using a backstripping and decompaction program on pseudowells linked with structural reconstruction, assesses the rift's two-phase extensional history. Since no well or thermal indicator data exist within the basin, the thermal history had to be derived from extensional models. The best fit of observed subsidence curves and those predicted by the models results in values of lithospheric extension (gamma). The disagreement in observed and theoretical basement subsidence curves was minimized by taking a range of gamma for each model for each well. These extension factors were then used in each model's equations for paleoheat flux to derive the heat-flow histories. (It is noted that a systematic basinwide variance of gamma occurs.) The heat-flux histories were then used with a one-dimensional fluid flow/compaction model to calculate TTI values and oil windows. A Tissot generation model was applied to each formation in every well for kerogen Types I, II, and III. The results were contoured across the basin to assess possible oil- and gas-prone formations. The extensional, burial, and thermal histories are integrated into an overall basin development picture and provide an oil and gas provenance model. Thus they estimate the basinwide hydrocarbon potential and also gain insight into the additional data necessary to significantly decrease the uncertainty.
Turner, Rebecca M; Davey, Jonathan; Clarke, Mike J; Thompson, Simon G; Higgins, Julian PT
2012-01-01
Background Many meta-analyses contain only a small number of studies, which makes it difficult to estimate the extent of between-study heterogeneity. Bayesian meta-analysis allows incorporation of external evidence on heterogeneity, and offers advantages over conventional random-effects meta-analysis. To assist in this, we provide empirical evidence on the likely extent of heterogeneity in particular areas of health care. Methods Our analyses included 14 886 meta-analyses from the Cochrane Database of Systematic Reviews. We classified each meta-analysis according to the type of outcome, type of intervention comparison and medical specialty. By modelling the study data from all meta-analyses simultaneously, using the log odds ratio scale, we investigated the impact of meta-analysis characteristics on the underlying between-study heterogeneity variance. Predictive distributions were obtained for the heterogeneity expected in future meta-analyses. Results Between-study heterogeneity variances for meta-analyses in which the outcome was all-cause mortality were found to be on average 17% (95% CI 10–26) of variances for other outcomes. In meta-analyses comparing two active pharmacological interventions, heterogeneity was on average 75% (95% CI 58–95) of variances for non-pharmacological interventions. Meta-analysis size was found to have only a small effect on heterogeneity. Predictive distributions are presented for nine different settings, defined by type of outcome and type of intervention comparison. For example, for a planned meta-analysis comparing a pharmacological intervention against placebo or control with a subjectively measured outcome, the predictive distribution for heterogeneity is a log-normal (−2.13, 1.582) distribution, which has a median value of 0.12. In an example of meta-analysis of six studies, incorporating external evidence led to a smaller heterogeneity estimate and a narrower confidence interval for the combined intervention effect. 
Conclusions Meta-analysis characteristics were strongly associated with the degree of between-study heterogeneity, and predictive distributions for heterogeneity differed substantially across settings. The informative priors provided will be very beneficial in future meta-analyses including few studies. PMID:22461129
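The quoted median is a quick one-line check: for a log-normal distribution the median is exp(μ), independent of σ, so the log-normal(−2.13, 1.58²) predictive distribution for heterogeneity has median exp(−2.13) ≈ 0.12:

```python
import math

# Predictive distribution for between-study heterogeneity (tau^2) in the
# pharmacological-vs-placebo, subjective-outcome setting: log-normal with
# mu = -2.13 and sigma = 1.58 on the log scale.
mu, sigma = -2.13, 1.58

median_tau2 = math.exp(mu)      # log-normal median = exp(mu)
print(round(median_tau2, 2))    # 0.12, matching the abstract

# A central 95% interval for tau^2 under this predictive distribution:
lo = math.exp(mu - 1.96 * sigma)
hi = math.exp(mu + 1.96 * sigma)
```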
Coping and experiential avoidance: unique or overlapping constructs?
Karekla, Maria; Panayiotou, Georgia
2011-06-01
The present study examined associations between coping as measured by the Brief COPE and experiential avoidance as measured by the AAQ-II, and the role of both constructs in predicting psychological distress and well-being. Specifically, associations between experiential avoidance and other types of coping were examined, and factor analysis addressed the question of whether experiential avoidance is part of coping or a related but independent construct. Results showed that experiential avoidance loads on the same factor as other emotion-focused and avoidant types of coping. The higher people are in experiential avoidance, the more they tend to utilize these types of coping strategies. Both experiential avoidance and coping predicted psychological distress and well-being, with most variance explained by coping but some additional variance explained by experiential avoidance. ANOVAs also showed gender differences in experiential avoidance and coping approaches. Results are discussed in light of previous relevant findings and future treatment-relevant implications. Copyright © 2010 Elsevier Ltd. All rights reserved.
Jacobson, Bailey; Grant, James W A; Peres-Neto, Pedro R
2015-07-01
How individuals within a population distribute themselves across resource patches of varying quality has been an important focus of ecological theory. The ideal free distribution predicts equal fitness amongst individuals in a 1:1 ratio with resources, whereas resource defence theory predicts different degrees of monopolization (fitness variance) as a function of temporal and spatial resource clumping and population density. One overlooked landscape characteristic is the spatial distribution of resource patches, which alters the equitability of resource accessibility and thereby the effective number of competitors. While much work has investigated the influence of morphology on competitive ability for different resource types, less is known regarding the phenotypic characteristics conferring relative ability for a single resource type, particularly when exploitative competition predominates. Here we used young-of-the-year rainbow trout (Oncorhynchus mykiss) to test whether and how the spatial distribution of resource patches and population density interact to influence the level and variance of individual growth, and whether functional morphology relates to competitive ability. Feeding trials were conducted within stream channels under three spatial distributions of nine resource patches (distributed, semi-clumped and clumped) at two density levels (9 and 27 individuals). Average trial growth was greater in high-density treatments, with no effect of resource distribution. Within-trial growth variance showed opposite patterns across resource distributions: as patches became increasingly clumped, it decreased at low population density but increased at high population density, as a result of changes in the levels of interference vs. exploitative competition.
Within-trial growth was related to both pre- and post-trial morphology where competitive individuals were those with traits associated with swimming capacity and efficiency: larger heads/bodies/caudal fins and less angled pectoral fins. The different degrees of within-population growth variance at the same density level found here, as a function of spatial resource distribution, provide an explanation for the inconsistencies in within-site growth variance and population regulation often noted with regard to density dependence in natural landscapes. © 2015 The Authors. Journal of Animal Ecology © 2015 British Ecological Society.
1951-05-01
procedures to be of high accuracy. Ambiguity of subject responses due to overlap of entries on the record sheets was negligible. Handwriting ...experimental variables on reading errors was carried out by analysis of variance methods. For this purpose it was convenient to consider different classes...on any scale - an error of one numbered division. For this reason, the results of the analysis of variance of the /10's errors by dial types may
Entropy as a measure of diffusion
NASA Astrophysics Data System (ADS)
Aghamohammadi, Amir; Fatollahi, Amir H.; Khorrami, Mohammad; Shariati, Ahmad
2013-10-01
The time variation of entropy, as an alternative to the variance, is proposed as a measure of the diffusion rate. It is shown that for linear and time-translationally invariant systems having a large-time limit for the density, at large times the entropy tends exponentially to a constant. For systems with no stationary density, at large times the entropy is logarithmic with a coefficient specifying the speed of the diffusion. As an example, the large-time behaviors of the entropy and the variance are compared for various types of fractional-derivative diffusions.
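For ordinary (Gaussian) diffusion the contrast the abstract draws is easy to verify directly: the variance grows linearly, 2Dt, while the differential entropy, ½ ln(2πe·var), grows only logarithmically, with a coefficient of ½ per dimension. A minimal sketch:

```python
import math

def gaussian_entropy(variance):
    """Differential entropy of a 1D Gaussian with the given variance."""
    return 0.5 * math.log(2.0 * math.pi * math.e * variance)

D = 1.0
for t in (1.0, 10.0, 100.0):
    var = 2.0 * D * t            # variance grows linearly in t
    print(t, var, gaussian_entropy(var))

# Each tenfold increase in t adds exactly 0.5 * ln(10) to the entropy;
# the coefficient of the logarithm is what measures the diffusion speed.
```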
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bufoni, André Luiz, E-mail: bufoni@facc.ufrj.br; Oliveira, Luciano Basto; Rosa, Luiz Pinguelli
Highlights: • Projects are not financially attractive without registration as CDMs. • WM benchmarks and indicators are converging and reducing in variance. • A sensitivity analysis reveals that revenue has more of an effect on the financial results. • Results indicate that an extensive database would reduce WM project risk and capital costs. • Disclosure standards would make information more comparable worldwide. - Abstract: This study illustrates the financial analyses for demonstration and assessment of additionality presented in the project design documents (PDD) and enclosed documents of the 431 large Clean Development Mechanism (CDM) projects classified in the ‘waste handling and disposal’ sector (13) over the past ten years (2004–2014). The expected certified emissions reductions (CER) of these projects total 63.54 million metric tons of CO2eq, where eight countries account for 311 projects and 43.36 million metric tons. All of the projects declare themselves ‘not financially attractive’ without CER, with an estimated sum of negative results of approximately half a billion US$. The results indicate that WM benchmarks and indicators are converging and reducing in variance, and the sensitivity analysis reveals that revenues have a greater effect on the financial results. This work concludes that an extensive financial database with simple standards for disclosure would greatly diminish statement problems and make information more comparable, reducing the risk and capital costs of WM projects.
Minimum variance rooting of phylogenetic trees and implications for species tree reconstruction.
Mai, Uyen; Sayyari, Erfan; Mirarab, Siavash
2017-01-01
Phylogenetic trees inferred using commonly-used models of sequence evolution are unrooted, but the root position matters both for interpretation and downstream applications. This issue has been long recognized; however, whether the potential for discordance between the species tree and gene trees impacts methods of rooting a phylogenetic tree has not been extensively studied. In this paper, we introduce a new method of rooting a tree based on its branch length distribution; our method, which minimizes the variance of root to tip distances, is inspired by the traditional midpoint rerooting and is justified when deviations from the strict molecular clock are random. Like midpoint rerooting, the method can be implemented in a linear time algorithm. In extensive simulations that consider discordance between gene trees and the species tree, we show that the new method is more accurate than midpoint rerooting, but its relative accuracy compared to using outgroups to root gene trees depends on the size of the dataset and levels of deviations from the strict clock. We show high levels of error for all methods of rooting estimated gene trees due to factors that include effects of gene tree discordance, deviations from the clock, and gene tree estimation error. Our simulations, however, did not reveal significant differences between two equivalent methods for species tree estimation that use rooted and unrooted input, namely, STAR and NJst. Nevertheless, our results point to limitations of existing scalable rooting methods.
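The criterion itself is simple to state: place the root so that the variance of root-to-tip path lengths is minimized. A brute-force toy sketch on a hypothetical weighted tree follows; the paper's method considers root positions along edges and runs in linear time, whereas this illustration restricts candidates to internal nodes:

```python
from collections import defaultdict
import statistics

# Hypothetical 4-taxon tree given as weighted undirected edges.
edges = [("x", "a", 1.0), ("x", "b", 1.0),
         ("a", "t1", 2.0), ("a", "t2", 2.5),
         ("b", "t3", 3.0), ("b", "t4", 3.1)]

adj = defaultdict(dict)
for u, v, w in edges:
    adj[u][v] = w
    adj[v][u] = w

tips = [n for n in adj if len(adj[n]) == 1]

def root_to_tip_variance(root):
    """Variance of path lengths from `root` to every tip (iterative DFS)."""
    dist, stack = {root: 0.0}, [root]
    while stack:
        n = stack.pop()
        for m, w in adj[n].items():
            if m not in dist:
                dist[m] = dist[n] + w
                stack.append(m)
    return statistics.pvariance([dist[t] for t in tips])

internal = [n for n in adj if len(adj[n]) > 1]
best = min(internal, key=root_to_tip_variance)  # minimum-variance root
```

Under a strict clock all tips are equidistant from the true root, so the variance there is zero; random deviations from the clock motivate choosing the minimizer.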
Franz, D; Franz, K; Roeder, N; Hörmann, K; Fischer, R-J; Alberty, Jürgen
2007-07-01
When the German DRG system was implemented there was some doubt about whether patients with extensive head and neck surgery would be properly accounted for. Significant efforts have therefore been invested in analysis and case allocation of those in this group. The object of this study was to investigate whether the changes within the German DRG system have led to improved case allocation. Cost data received from 25 ENT departments on 518 prospective documented cases of extensive head and neck surgery were compared with data from the German institute dealing with remuneration in hospitals (InEK). Statistical measures used by InEK were used to analyse the quality of the overall system and the homogeneity of the individual case groups. The reduction of variance of inlier costs improved by about 107.3% from the 2004 version to the 2007 version of the German DRG system. The average coefficient of cost homogeneity rose by about 9.7% in the same period. Case mix index and DRG revenues were redistributed from less extensive to the more complex operations. Hospitals with large numbers of extensive operations and university hospitals will gain most benefit from this development. Appropriate case allocation of extensive operations on the head and neck has been improved by the continued development of the German DRG system culminating in the 2007 version. Further adjustments will be needed in the future.
What Does Charter School Mean to You? A Look at Louisiana's Charter Enrollment by Charter Type
ERIC Educational Resources Information Center
Crutchfield, Jandel
2015-01-01
This article examines the intersection of race, socioeconomic status (SES), and charter type/admission practices in Louisiana charter schools. This study used publicly available Department of Education data to compile the sample of charter school demographic information. A one-way Multiple Analysis of Variance (MANOVA) was conducted using race and…
2014-01-01
Background This study examined whether passive hamstring tissue stiffness and/or stretch tolerance explain the relationship between sex and hamstring extensibility. Methods Ninety healthy participants, 45 men and 45 women (mean ± SD; age 24.6 ± 5.9 years, height 1.72 ± 0.09 m, weight 74.6 ± 14.1 kg) volunteered for this study. The instrumented straight leg raise was used to determine hamstring extensibility and allow measurement of stiffness and stretch tolerance (visual analog pain score, VAS). Results Hamstring extensibility was 9.9° greater in women compared to men (p = 0.003). VAS scores were 16 mm lower in women (p = 0.001). Maximal stiffness (maximal applied torque) was not different between men and women (p = 0.42). Passive stiffness (slope from 20-50° hip flexion) was 0.09 N·m·°⁻¹ lower in women (p = 0.025). For women, linear and stepwise regression showed that no predictor variables were associated with hamstring extensibility (adjusted r2 = -0.03, p = 0.61). For men, 44% of the variance in hamstring extensibility was explained by VAS and maximal applied torque (adjusted r2 = 0.44, p < 0.001), with 41% of the model accounted for by the relationship between higher VAS scores and lower extensibility (standardized β coefficient = -0.64, p < 0.001). Conclusions The results of this study suggest that stretch tolerance and not passive stiffness explains hamstring extensibility, but this relationship is only manifest in men. PMID:25000977
Powers, Christopher M; Beneck, George J; Kulig, Kornelia; Landel, Robert F; Fredericson, Michael
2008-04-01
Posterior-to-anterior (PA) mobilization and press-up exercises are common physical therapy interventions used to treat low back pain. The purpose of this study was to examine the immediate effects of PA mobilization and a press-up exercise on pain with standing extension and lumbar extension in people with nonspecific low back pain. The study participants were 30 adults (19 women and 11 men) who were 18 to 45 years of age and had a diagnosis of nonspecific low back pain. Lumbar segmental extension during a press-up maneuver was measured by dynamic magnetic resonance imaging prior to and immediately following a single session of either PA spinal mobilization or a press-up exercise. Pain scores before and after intervention were recorded with a visual analog scale. Differences between the treatment groups in pain and total lumbar extension were compared over time by use of a 2-way analysis of variance. Following both interventions, there was a significant reduction in the average pain scores for both groups (significant main effect for time, no interaction). Similarly, total lumbar extension significantly increased in both the PA mobilization group and the press-up group (significant main effect for time, no interaction). No significant differences between the 2 interventions in pain or lumbar extension were found. The findings of this study support the use of PA mobilization and a press-up exercise for improving lumbar extension in people with nonspecific low back pain. Although statistically significant within-group changes in pain were detected, the clinical meaningfulness of these changes is questionable.
German, Alina; Livshits, Gregory; Peter, Inga; Malkin, Ida; Dubnov, Jonathan; Akons, Hannah; Shmoish, Michael; Hochberg, Ze'ev
2015-03-01
Using a twin study, we sought to assess the contribution of genetic versus environmental factors to the age at transition from infancy to childhood (ICT). The subjects were 56 pairs of monozygotic twins, 106 pairs of dizygotic twins, and 106 pairs of regular siblings (SBs), for a total of 536 children. Their ICT was determined, and a variance component analysis was implemented to estimate the components of the familial variance, with simultaneous adjustment for potential covariates. We found a substantial contribution of the common environment shared by all types of SBs, which explained 27.7% of the total variance in ICT, whereas the common twin environment explained 9.2% of the variance, gestational age 3.5%, and birth weight 1.8%. In addition, 8.7% was attributable to sex differences, but we found no detectable contribution of genetic factors to inter-individual variation in ICT age. Developmental plasticity impacts much of human growth. Here we show that of the ∼50% of the variance in adult height contributed by the ICT, 42.2% is attributable to adaptive cues represented by the shared twin and SB environment, with no detectable genetic involvement. Copyright © 2015 Elsevier Inc. All rights reserved.
Hierarchical multivariate covariance analysis of metabolic connectivity.
Carbonell, Felix; Charil, Arnaud; Zijdenbos, Alex P; Evans, Alan C; Bedell, Barry J
2014-12-01
Conventional brain connectivity analysis is typically based on the assessment of interregional correlations. Given that correlation coefficients are derived from both covariance and variance, group differences in covariance may be obscured by differences in the variance terms. To facilitate a comprehensive assessment of connectivity, we propose a unified statistical framework that interrogates the individual terms of the correlation coefficient. We have evaluated the utility of this method for metabolic connectivity analysis using [18F]2-fluoro-2-deoxyglucose (FDG) positron emission tomography (PET) data from the Alzheimer's Disease Neuroimaging Initiative (ADNI) study. As an illustrative example of the utility of this approach, we examined metabolic connectivity in angular gyrus and precuneus seed regions of mild cognitive impairment (MCI) subjects with low and high β-amyloid burdens. This new multivariate method allowed us to identify alterations in the metabolic connectome, which would not have been detected using classic seed-based correlation analysis. Ultimately, this novel approach should be extensible to brain network analysis and broadly applicable to other imaging modalities, such as functional magnetic resonance imaging (fMRI).
On the reliability of Shewhart-type control charts for multivariate process variability
NASA Astrophysics Data System (ADS)
Djauhari, Maman A.; Salleh, Rohayu Mohd; Zolkeply, Zunnaaim; Li, Lee Siaw
2017-05-01
We show that in the current practice of multivariate process variability monitoring, the reliability of Shewhart-type control charts cannot be measured except when the sub-group size n tends to infinity. However, the requirement of large n is unrealistic not only in manufacturing industry, where n is small, but also in service industry, where n is moderate. In this paper, we introduce a new definition of control limits for two of the most widely used control charts in the literature, namely the improved generalized variance chart (IGV-chart) and the vector variance chart (VV-chart). With the new definition of control limits, the reliability of the control charts can be determined. Some important properties of the new control limits are derived, and a computational technique for the probability of a false alarm is presented.
NASA Technical Reports Server (NTRS)
Tomaine, R. L.
1976-01-01
Flight test data from a large 'crane' type helicopter were collected and processed for the purpose of identifying vehicle rigid body stability and control derivatives. The process consisted of using digital and Kalman filtering techniques for state estimation and Extended Kalman filtering for parameter identification, utilizing a least squares algorithm for initial derivative and variance estimates. Data were processed for indicated airspeeds from 0 m/sec to 152 m/sec. Pulse, doublet and step control inputs were investigated. Digital filter frequency did not have a major effect on the identification process, while the initial derivative estimates and the estimated variances had an appreciable effect on many derivative estimates. The major derivatives identified agreed fairly well with analytical predictions and engineering experience. Doublet control inputs provided better results than pulse or step inputs.
A model-based approach to sample size estimation in recent onset type 1 diabetes.
Bundy, Brian N; Krischer, Jeffrey P
2016-11-01
The area under the curve (AUC) of C-peptide following a 2-h mixed-meal tolerance test, measured from baseline to 12 months after enrolment in 498 individuals enrolled in five prior TrialNet studies of recent-onset type 1 diabetes, was modelled to produce estimates of its rate of loss and variance. Age at diagnosis and baseline C-peptide were found to be significant predictors, and adjusting for these in an ANCOVA resulted in estimates with lower variance. Using these results as planning parameters for new studies results in a nearly 50% reduction in the target sample size. The modelling also produces an expected C-peptide trajectory that can be used in observed-versus-expected calculations to estimate the presumption of benefit in ongoing trials. Copyright © 2016 John Wiley & Sons, Ltd.
Revealing hidden Einstein-Podolsky-Rosen nonlocality.
Walborn, S P; Salles, A; Gomes, R M; Toscano, F; Souto Ribeiro, P H
2011-04-01
Steering is a form of quantum nonlocality that is intimately related to the famous Einstein-Podolsky-Rosen (EPR) paradox that ignited the ongoing discussion of quantum correlations. Within the hierarchy of nonlocal correlations appearing in nature, EPR steering occupies an intermediate position between Bell nonlocality and entanglement. In continuous variable systems, EPR steering correlations have been observed by violation of Reid's EPR inequality, which is based on inferred variances of complementary observables. Here we propose and experimentally test a new criterion based on entropy functions, and show that it is more powerful than the variance inequality for identifying EPR steering. Using the entropic criterion our experimental results show EPR steering, while the variance criterion does not. Our results open up the possibility of observing this type of nonlocality in a wider variety of quantum states. © 2011 American Physical Society
Nasal airway and septal variation in unilateral and bilateral cleft lip and palate.
Starbuck, John M; Friel, Michael T; Ghoneima, Ahmed; Flores, Roberto L; Tholpady, Sunil; Kula, Katherine
2014-10-01
Cleft lip and palate (CLP) affects the dentoalveolar and nasolabial facial regions. Internal and external nasal dysmorphology may persist in individuals born with CLP despite surgical interventions. Individuals aged 7-18 years born with unilateral or bilateral CLP (n = 50) were retrospectively assessed using cone beam computed tomography. Anterior, middle, and posterior nasal airway volumes were measured on each facial side. Septal deviation was measured at the anterior and posterior nasal spine, and at the midpoint between these two locations. Data were evaluated using principal components analysis (PCA), multivariate analysis of variance (MANOVA), and post-hoc ANOVA tests. PCA results show partial separation in high-dimensional space along PC1 (48.5% variance) based on age groups and partial separation along PC2 (29.8% variance) based on CLP type and septal deviation patterns. MANOVA results indicate that age (P = 0.007) and CLP type (P ≤ 0.001) significantly affect nasal airway volume and septal deviation. ANOVA results indicate that anterior nasal volume is significantly affected by age (P ≤ 0.001), whereas septal deviation patterns are significantly affected by CLP type (P ≤ 0.001). Age and CLP type affect nasal airway volume and septal deviation patterns. Nasal airway volumes tend to be reduced on the cleft side of the face relative to the non-cleft side. Nasal airway volumes tend to increase strongly with age, whereas septal deviation values tend to increase only slightly with age. These results suggest that functional nasal breathing may be impaired in individuals born with the unilateral or bilateral CLP deformity. © 2014 Wiley Periodicals, Inc.
NASA Astrophysics Data System (ADS)
Wheeler, David C.; Waller, Lance A.
2009-03-01
In this paper, we compare and contrast a Bayesian spatially varying coefficient process (SVCP) model with a geographically weighted regression (GWR) model for the estimation of the potentially spatially varying regression effects of alcohol outlets and illegal drug activity on violent crime in Houston, Texas. In addition, we focus on the inherent coefficient shrinkage properties of the Bayesian SVCP model as a way to address increased coefficient variance that follows from collinearity in GWR models. We outline the advantages of the Bayesian model in terms of reducing inflated coefficient variance, enhanced model flexibility, and more formal measuring of model uncertainty for prediction. We find spatially varying effects for alcohol outlets and drug violations, but the amount of variation depends on the type of model used. For the Bayesian model, this variation is controllable through the amount of prior influence placed on the variance of the coefficients. For example, the spatial pattern of coefficients is similar for the GWR and Bayesian models when a relatively large prior variance is used in the Bayesian model.
NASA Astrophysics Data System (ADS)
Loganathan, K.; Ahamed, A. Jafar
2017-12-01
The study of groundwater in the Amaravathi River basin of Karur District produced a large geochemical data set. A total of 24 water samples were collected and analyzed for physico-chemical parameters; the abundances of cations and anions followed the orders Na+ > Ca2+ > Mg2+ > K+ and Cl- > HCO3- > SO42-, respectively. The correlation matrix shows that the basic ionic chemistry is influenced by Na+, Ca2+, Mg2+, and Cl-, and also suggests that the samples contain Na+-Cl-, Ca2+-Cl-, and mixed Ca2+-Mg2+-Cl- water types. The associations of HCO3-, SO42-, and F- are weaker than those of the other parameters, owing to the low availability of the corresponding minerals. PCA extracted six components, which account for 81% of the total variance of the data set and allowed the selected parameters to be grouped by common features and the contribution of each group to the overall variation in water quality to be evaluated. Cluster analysis results show that groundwater quality does not vary extensively as a function of season, but does reveal two main clusters.
Heteroskedasticity as a leading indicator of desertification in spatially explicit data.
Seekell, David A; Dakos, Vasilis
2015-06-01
Regime shifts are abrupt transitions between alternate ecosystem states, including desertification in arid regions due to drought or overgrazing. Regime shifts may be preceded by statistical anomalies such as increased autocorrelation, indicating declining resilience and warning of an impending shift. Tests for conditional heteroskedasticity, a type of clustered variance, have proven to be powerful leading indicators for regime shifts in time series data, but an analogous indicator for spatial data has not been evaluated. A spatial analog for conditional heteroskedasticity might be especially useful in arid environments, where spatial interactions are critical in structuring ecosystem pattern and process. We tested the efficacy of a test for spatial heteroskedasticity as a leading indicator of regime shifts with simulated data from spatially extended vegetation models with regular and scale-free patterning. These models simulate shifts from extensive vegetative cover to bare, desert-like conditions. The magnitude of spatial heteroskedasticity increased consistently as the modeled systems approached a regime shift from a vegetated to a desert state. Relative to spatial autocorrelation, spatial heteroskedasticity increased earlier and more consistently. We conclude that tests for spatial heteroskedasticity can contribute to the growing toolbox of early warning indicators for regime shifts analyzed with spatially explicit data.
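The abstract does not specify the test's exact form, but the general flavor of a conditional-heteroskedasticity indicator adapted to gridded data can be sketched as a Lagrange-multiplier-style regression of squared deviations on their neighborhood averages. Everything below (the function name, the rook neighborhood, and the n·R² statistic) is an illustrative assumption, not the authors' published procedure:

```python
import numpy as np

def spatial_heteroskedasticity_lm(grid):
    """LM-style check for spatial clustering of variance on a 2-D grid (a sketch).

    Regresses squared deviations on the average squared deviation of the
    four rook neighbours and returns an n*R^2 statistic; large values
    suggest spatially clustered variance (spatial heteroskedasticity).
    """
    z = grid - grid.mean()
    e2 = z ** 2
    # average squared deviation of the four rook neighbours (interior cells only)
    nbr = (e2[:-2, 1:-1] + e2[2:, 1:-1] + e2[1:-1, :-2] + e2[1:-1, 2:]) / 4.0
    y = e2[1:-1, 1:-1].ravel()
    x = nbr.ravel()
    # OLS of own squared deviation on neighbourhood average, with intercept
    X = np.column_stack([np.ones_like(x), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    r2 = 1.0 - float(resid @ resid) / float(((y - y.mean()) ** 2).sum())
    return len(y) * r2
```

On a homogeneous random field the statistic stays near zero; on a field whose variance is spatially clustered (e.g. one half much noisier than the other) it grows sharply, which is the qualitative behavior the abstract describes for systems approaching a shift.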
Functional Parallel Factor Analysis for Functions of One- and Two-dimensional Arguments.
Choi, Ji Yeh; Hwang, Heungsun; Timmerman, Marieke E
2018-03-01
Parallel factor analysis (PARAFAC) is a useful multivariate method for decomposing three-way data that consist of three different types of entities simultaneously. This method estimates trilinear components, each of which is a low-dimensional representation of a set of entities, often called a mode, to explain the maximum variance of the data. Functional PARAFAC permits the entities in different modes to be smooth functions or curves, varying over a continuum, rather than a collection of unconnected responses. The existing functional PARAFAC methods handle functions of a one-dimensional argument (e.g., time) only. In this paper, we propose a new extension of functional PARAFAC for handling three-way data whose responses are sequenced along both a two-dimensional domain (e.g., a plane with x- and y-axis coordinates) and a one-dimensional argument. Technically, the proposed method combines PARAFAC with basis function expansion approximations, using a set of piecewise quadratic finite element basis functions for estimating two-dimensional smooth functions and a set of one-dimensional basis functions for estimating one-dimensional smooth functions. In a simulation study, the proposed method appeared to outperform the conventional PARAFAC. We apply the method to EEG data to demonstrate its empirical usefulness.
Quality of life and independent living and working levels of farmers and ranchers with disabilities.
Jackman, Danielle M; Fetsch, Robert J; Collins, Christina L
2016-04-01
The status of farmers and ranchers with disabilities has been understudied. Understanding this population's quality of life (QOL) and independent living and working (ILW) levels has the potential to inform changes in public policy and service provision. The objectives were to assess QOL levels among farmers and ranchers with disabilities and to explore a conceptual model of ILW accounting for variance in QOL levels. Participants (N = 398) included farmers and ranchers with varying disabilities. Descriptive information was gathered using the McGill Quality of Life (MQOL) and ILW measures. The MQOL measure produces an objective and comprehensive profile of one's QOL across several domains. ILW was used to account for variance in QOL scores. We also examined whether there were any differences in QOL or ILW based on type of disability. There were no differences in QOL levels by type of disability. The mean QOL level was 5.50 (SD = 1.67; N = 398). The sample rated support and existential well-being highest among the QOL subscales, which runs counter to previous research. Further, age group and ILW accounted for 16.2% of the variance in QOL levels, P < .001. In this sample of farmers and ranchers with disabilities, age group and ILW account for significant variance in QOL. Health professionals can use these findings to support and assess improvements in clients' ILW, self-determination, and QOL. Future research is needed to explore further the effects of QOL and ILW in this population. Copyright © 2016 Elsevier Inc. All rights reserved.
Blinded sample size re-estimation in three-arm trials with 'gold standard' design.
Mütze, Tobias; Friede, Tim
2017-10-15
In this article, we study blinded sample size re-estimation in the 'gold standard' design with an internal pilot study for normally distributed outcomes. The 'gold standard' design is a three-arm clinical trial design that includes an active control and a placebo in addition to an experimental treatment. We focus on the absolute margin approach to hypothesis testing in three-arm trials, in which the non-inferiority of the experimental treatment and the assay sensitivity are assessed by pairwise comparisons. We compare several blinded sample size re-estimation procedures in a simulation study assessing operating characteristics, including power and type I error. We find that sample size re-estimation based on the popular one-sample variance estimator results in overpowered trials. Moreover, sample size re-estimation based on unbiased variance estimators, such as the Xing-Ganju variance estimator, results in underpowered trials; this is expected, because an overestimation of the variance, and thus of the sample size, is in general required for the re-estimation procedure to eventually meet the target power. To overcome this problem, we propose an inflation factor for the sample size re-estimation with the Xing-Ganju variance estimator and show that this approach results in adequately powered trials. Because of favorable features of the Xing-Ganju variance estimator, such as unbiasedness and a distribution independent of the group means, the inflation factor does not depend on the nuisance parameter and can therefore be calculated prior to a trial. Moreover, we prove that sample size re-estimation based on the Xing-Ganju variance estimator does not bias the effect estimate. Copyright © 2017 John Wiley & Sons, Ltd.
Tabassum, Rubina; Sivadas, Ambily; Agrawal, Vartika; Tian, Haozheng; Arafat, Dalia; Gibson, Greg
2015-08-13
Personalized medicine is predicated on the notion that individual biochemical and genomic profiles are relatively constant in times of good health and to some extent predictive of disease or therapeutic response. We report a pilot study quantifying gene expression and methylation profile consistency over time, addressing the reasons for individual uniqueness, and its relation to N = 1 phenotypes. Whole blood samples from four African American women, four Caucasian women, and four Caucasian men drawn from the Atlanta Center for Health Discovery and Well Being study at three successive 6-month intervals were profiled by RNA-Seq, miRNA-Seq, and Illumina Methylation 450 K arrays. Standard regression approaches were used to evaluate the proportion of variance for each type of omic measure among individuals, and to quantify correlations among measures and with clinical attributes related to wellness. Longitudinal omic profiles were in general highly consistent over time, with an average of 67 % variance in transcript abundance, 42 % in CpG methylation level (but 88 % for the most differentiated CpG per gene), and 50 % in miRNA abundance among individuals, which are all comparable to 74 % variance among individuals for 74 clinical traits. One third of the variance could be attributed to differential blood cell type abundance, which was also fairly stable over time, and a lesser amount to expression quantitative trait loci (eQTL) effects. Seven conserved axes of covariance that capture diverse aspects of immune function explained over half of the variance. These axes also explained a considerable proportion of individually extreme transcript abundance, namely approximately 100 genes that were significantly up-regulated or down-regulated in each person and were in some cases enriched for relevant gene activities that plausibly associate with clinical attributes. 
A similar fraction of genes had individually divergent methylation levels, but these did not overlap with the transcripts, and fewer than 20 % of genes had significantly correlated methylation and gene expression. People express an "omic personality" consisting of peripheral blood transcriptional and epigenetic profiles that are constant over the course of a year and reflect various types of immune activity. Baseline genomic profiles can provide a window into the molecular basis of traits that might be useful for explaining medical conditions or guiding personalized health decisions.
Indirect estimation of signal-dependent noise with nonadaptive heterogeneous samples.
Azzari, Lucio; Foi, Alessandro
2014-08-01
We consider the estimation of signal-dependent noise from a single image. Unlike conventional algorithms that build a scatterplot of local mean-variance pairs from either small or adaptively selected homogeneous data samples, our proposed approach relies on arbitrarily large patches of heterogeneous data extracted at random from the image. We demonstrate the feasibility of our approach through an extensive theoretical analysis based on mixture of Gaussian distributions. A prototype algorithm is also developed in order to validate the approach on simulated data as well as on real camera raw images.
NASA Technical Reports Server (NTRS)
Aliev, N.; Alimov, T.; Kakhkharov, M.; Makhmudov, B. M.; Rakhimova, N.; Tashpulatov, R.; Kalmykov, N. N.; Khristiansen, G. B.; Prosin, V. V.
1985-01-01
The Samarkand extensive air shower (EAS) array was used to measure the mean and individual lateral distribution functions (LDF) of EAS Cerenkov light. Analysis of the individual parameters b showed that the mean depth of EAS maximum and the variance of the depth distribution of maxima for EAS with energies of approximately 2×10^15 eV can properly be described in terms of the Kaidalov-Martirosyan quark-gluon string model (QGSM).
Use of high-order spectral moments in Doppler weather radar
NASA Astrophysics Data System (ADS)
di Vito, A.; Galati, G.; Veredice, A.
Three techniques to estimate the skewness and kurtosis of measured precipitation spectra are evaluated: (1) an extension of the pulse-pair technique, (2) fitting the autocorrelation function (ACF) with a least-squares polynomial and differentiating it, and (3) autoregressive spectral estimation. The third technique provides the best results but carries an exceedingly large computational burden. The first technique does not supply any useful results, owing to the crude approximation of the derivatives of the ACF. The second technique requires further study to reduce its variance.
Zhong, Daibin; Menge, David M; Temu, Emmanuel A; Chen, Hong; Yan, Guiyun
2006-07-01
The yellow fever mosquito Aedes aegypti has been the subject of extensive genetic research due to its medical importance and the ease with which it can be manipulated in the laboratory. A molecular genetic linkage map was constructed using 148 amplified fragment length polymorphism (AFLP) and six single-strand conformation polymorphism (SSCP) markers. Eighteen AFLP primer combinations were used to genotype two reciprocal F2 segregating populations. Each primer combination generated an average of 8.2 AFLP markers eligible for linkage mapping. The length of the integrated map was 180.9 cM, giving an average marker resolution of 1.2 cM. Composite interval mapping revealed a total of six QTL significantly affecting Plasmodium susceptibility in the two reciprocal crosses of Ae. aegypti. Two common QTL on linkage group 2 were identified in both crosses that had similar effects on the phenotype, and four QTL were unique to each cross. In one cross, the four main QTL accounted for 64% of the total phenotypic variance, and digenic epistasis explained 11.8% of the variance. In the second cross, the four main QTL explained 66% of the variance, and digenic epistasis accounted for 16% of the variance. The actions of these QTL were either dominance or underdominance. Our results indicated that at least three new QTL were mapped on chromosomes 1 and 3. The polygenic nature of susceptibility to P. gallinaceum and epistasis are important factors for significant variation within or among mosquito strains. The new map provides additional information useful for further genetic investigation, such as identification of new genes and positional cloning.
Memory is Not Enough: The Neurobiological Substrates of Dynamic Cognitive Reserve.
Serra, Laura; Bruschini, Michela; Di Domenico, Carlotta; Gabrielli, Giulia Bechi; Marra, Camillo; Caltagirone, Carlo; Cercignani, Mara; Bozzali, Marco
2017-01-01
Changes in the residual memory variance are considered a dynamic aspect of cognitive reserve (d-CR). We aimed to investigate for the first time the neural substrate associated with changes in the residual memory variance over time in patients with amnestic mild cognitive impairment (aMCI). Thirty-four aMCI patients followed up for 36 months and 48 healthy elderly individuals (HE) were recruited. All participants underwent 3T MRI, with T1-weighted images collected for voxel-based morphometry (VBM), and an extensive neuropsychological battery, including six episodic memory tests. In patients and controls, factor analyses were used on the episodic memory scores to obtain a composite memory score (C-MS). Partial least squares analyses were used to decompose the variance of the C-MS into latent variables (LT scores), accounting for demographic variables and for the general level of cognitive efficiency; linear regressions were applied to the LT scores, stripping off any contribution of general cognitive abilities, to obtain the residual memory variance, considered an index of d-CR. LT scores and d-CR were used in a discriminant analysis, in patients only. Finally, LT scores and d-CR were used as variables of interest in the VBM analysis. The d-CR score was not able to correctly classify patients. In both aMCI patients and HE, the LT1st and d-CR scores correlated with grey matter volumes in common and in specific brain areas. CR measures limited to memory function are likely less sensitive for detecting cognitive decline and predicting the evolution of Alzheimer's disease. In conclusion, d-CR needs a measure of general cognition to identify conversion to Alzheimer's disease efficiently.
Risk and the evolution of human exchange
Kaplan, Hillard S.; Schniter, Eric; Smith, Vernon L.; Wilson, Bart J.
2012-01-01
Compared with other species, exchange among non-kin is a hallmark of human sociality in both the breadth of individuals and total resources involved. One hypothesis is that extensive exchange evolved to buffer the risks associated with hominid dietary specialization on calorie dense, large packages, especially from hunting. ‘Lucky’ individuals share food with ‘unlucky’ individuals with the expectation of reciprocity when roles are reversed. Cross-cultural data provide prima facie evidence of pair-wise reciprocity and an almost universal association of high-variance (HV) resources with greater exchange. However, such evidence is not definitive; an alternative hypothesis is that food sharing is really ‘tolerated theft’, in which individuals possessing more food allow others to steal from them, owing to the threat of violence from hungry individuals. Pair-wise correlations may reflect proximity providing greater opportunities for mutual theft of food. We report a laboratory experiment of foraging and food consumption in a virtual world, designed to test the risk-reduction hypothesis by determining whether people form reciprocal relationships in response to variance of resource acquisition, even when there is no external enforcement of any transfer agreements that might emerge. Individuals can forage in a high-mean, HV patch or a low-mean, low-variance (LV) patch. The key feature of the experimental design is that individuals can transfer resources to others. We find that sharing hardly occurs after LV foraging, but among HV foragers sharing increases dramatically over time. The results provide strong support for the hypothesis that people are pre-disposed to evaluate gains from exchange and respond to unsynchronized variance in resource availability through endogenous reciprocal trading relationships. PMID:22513855
Doi, Suhail A R; Barendregt, Jan J; Khan, Shahjahan; Thalib, Lukman; Williams, Gail M
2015-11-01
This article examines an improved alternative to the random effects (RE) model for meta-analysis of heterogeneous studies. It is shown that the known issues of underestimation of the statistical error and spuriously overconfident estimates with the RE model can be resolved by the use of an estimator under the fixed effect model assumption with a quasi-likelihood based variance structure - the IVhet model. Extensive simulations confirm that this estimator retains a correct coverage probability and a lower observed variance than the RE model estimator, regardless of heterogeneity. When the proposed IVhet method is applied to the controversial meta-analysis of intravenous magnesium for the prevention of mortality after myocardial infarction, the pooled OR is 1.01 (95% CI 0.71-1.46) which not only favors the larger studies but also indicates more uncertainty around the point estimate. In comparison, under the RE model the pooled OR is 0.71 (95% CI 0.57-0.89) which, given the simulation results, reflects underestimation of the statistical error. Given the compelling evidence generated, we recommend that the IVhet model replace both the FE and RE models. To facilitate this, it has been implemented into free meta-analysis software called MetaXL which can be downloaded from www.epigear.com. Copyright © 2015 Elsevier Inc. All rights reserved.
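The IVhet estimator described above has a simple published form: the point estimate uses inverse-variance (fixed-effect) weights, while its variance uses a quasi-likelihood structure that adds the DerSimonian-Laird between-study variance inside the squared weights. A minimal numpy sketch of that construction (the function name and inputs are illustrative, not taken from the MetaXL software):

```python
import numpy as np

def ivhet(theta, v):
    """IVhet pooled estimate and its variance (a sketch of Doi et al.'s model).

    theta : per-study effect estimates (e.g. log odds ratios)
    v     : per-study sampling variances
    """
    theta = np.asarray(theta, float)
    v = np.asarray(v, float)
    w = 1.0 / v
    # fixed-effect (inverse-variance) point estimate
    theta_fe = np.sum(w * theta) / np.sum(w)
    # DerSimonian-Laird estimate of between-study variance tau^2
    q = np.sum(w * (theta - theta_fe) ** 2)
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - (len(theta) - 1)) / c)
    # quasi-likelihood variance: squared normalized weights times (v_i + tau^2)
    w_norm = w / np.sum(w)
    var = np.sum(w_norm ** 2 * (v + tau2))
    return theta_fe, var
```

With homogeneous studies (tau^2 = 0) the variance collapses to the usual fixed-effect variance 1/Σ(1/v_i); under heterogeneity the point estimate is unchanged but the variance inflates, which is exactly the wider confidence interval the abstract reports for the magnesium example.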
Turgeon, Maxime; Oualkacha, Karim; Ciampi, Antonio; Miftah, Hanane; Dehghan, Golsa; Zanke, Brent W; Benedet, Andréa L; Rosa-Neto, Pedro; Greenwood, Celia Mt; Labbe, Aurélie
2018-05-01
The genomics era has led to an increase in the dimensionality of data collected in the investigation of biological questions. In this context, dimension-reduction techniques can be used to summarise high-dimensional signals into low-dimensional ones, to further test for association with one or more covariates of interest. This paper revisits one such approach, previously known as principal component of heritability and renamed here as principal component of explained variance (PCEV). As its name suggests, the PCEV seeks an optimal linear combination of outcomes by maximising the proportion of variance explained by one or several covariates of interest. By construction, this method optimises power; however, due to its computational complexity, it has unfortunately received little attention in the past. Here, we propose a general analytical PCEV framework that builds on the assets of the original method, i.e. it is conceptually simple and free of tuning parameters. Moreover, our framework extends the range of applications of the original procedure by providing a computationally simple strategy for high-dimensional outcomes, along with exact and asymptotic testing procedures that drastically reduce its computational cost. We investigate the merits of the PCEV using an extensive set of simulations. Furthermore, the use of the PCEV approach is illustrated using three examples taken from the fields of epigenetics and brain imaging.
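The PCEV criterion (maximise the proportion of outcome variance explained by the covariates) reduces to a generalised eigenproblem between the model and residual covariance matrices of the outcomes. A minimal numerical sketch, assuming ordinary least squares for the outcome-on-covariate fits; the exact estimation and testing procedures of the paper are more elaborate:

```python
import numpy as np

def pcev(Y, X):
    """Principal component of explained variance (sketch).
    Finds the outcome weighting w maximising the ratio of
    covariate-explained variance to residual variance of Y @ w,
    via the leading eigenvector of Vr^{-1} Vm."""
    n = Y.shape[0]
    Yc = Y - Y.mean(axis=0)
    Xc = X - X.mean(axis=0)
    beta, *_ = np.linalg.lstsq(Xc, Yc, rcond=None)  # OLS, all outcomes at once
    fitted = Xc @ beta
    resid = Yc - fitted
    Vm = fitted.T @ fitted / n  # model (explained) covariance of the outcomes
    Vr = resid.T @ resid / n    # residual covariance of the outcomes
    vals, vecs = np.linalg.eig(np.linalg.solve(Vr, Vm))
    w = np.real(vecs[:, np.argmax(np.real(vals))])
    return w / np.linalg.norm(w)
```

If only the first outcome depends on the covariate, the recovered weight vector loads almost entirely on that outcome, which is the behaviour the criterion is designed to produce.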
Effect of toxicity of Ag nanoparticles on SERS spectral variance of bacteria
NASA Astrophysics Data System (ADS)
Cui, Li; Chen, Shaode; Zhang, Kaisong
2015-02-01
Ag nanoparticles (NPs) have been extensively utilized in surface-enhanced Raman scattering (SERS) spectroscopy for bacterial identification. However, Ag NPs are toxic to bacteria. Whether such toxicity can affect the SERS features of bacteria and interfere with bacterial identification remains unknown and needs to be explored. Here, by carrying out a comparative study of non-toxic Au NPs and toxic Ag NPs, we investigated the influence of nanoparticle concentration and incubation time on bacterial SERS spectral variance, both of which were demonstrated to be closely related to the toxicity of Ag NPs. Sensitive spectral alterations were observed on Ag NPs with increasing NP concentration or incubation time, accompanied by an obvious decrease in the number of viable bacteria. In contrast, the SERS spectra and the number of viable bacteria on Au NPs were rather constant under the same conditions. A further analysis of the spectral changes demonstrated that the cellular response (i.e. metabolic activity or death) to the toxicity of Ag NPs caused the spectral variance. However, the biochemical responses to the toxicity of Ag were very different in different bacteria, indicating the complex toxic mechanism of Ag NPs. Ag NPs are toxic to a great variety of organisms, including bacteria, fungi, algae, and protozoa; therefore, this work will be helpful in guiding the future application of the SERS technique in various complex biological systems.
Land Use and Environmental Variability Impacts on the Phenology of Arid Agro-Ecosystems.
Romo-Leon, Jose Raul; van Leeuwen, Willem J D; Castellanos-Villegas, Alejandro
2016-02-01
The overexploitation of water resources in arid environments often results in the abandonment of large extensions of agricultural land, which may (1) modify phenological trends, and (2) alter the sensitivity of specific phenophases to environmental triggers. In Mexico, current governmental policies subsidize restoration efforts to address the ecological degradation caused by abandonment; however, there is a need for new approaches to assess their effectiveness. Addressing this, we explore a method to monitor and assess (1) land surface phenology trends in arid agro-ecosystems, and (2) the effect of climatic factors and restoration treatments on the phenology of abandoned agricultural fields. We used 16-day normalized difference vegetation index composites from the moderate resolution imaging spectroradiometer from 2000 to 2009 to derive seasonal phenometrics. We then derived phenoclimatic variables and land cover thematic maps, to serve as a set of independent factors that influence vegetation phenology. We conducted a multivariate analysis of variance to analyze phenological trends among land cover types, and developed multiple linear regression models to assess influential climatic factors driving phenology per land cover type analyzed. Our results suggest that the start and length of the growing season had different responses to environmental factors depending on land cover type. Our analysis also suggests possible establishment of arid-adapted species (from surrounding ecosystems) in abandoned fields with longer times since abandonment. Using this approach, we were able to increase our understanding of how climatic factors influence phenology in degraded arid agro-ecosystems, and how these systems evolve after disturbance.
ERIC Educational Resources Information Center
Ling, Guangming
2017-01-01
To investigate whether the type of keyboard used in exams introduces any construct-irrelevant variance to the TOEFL iBT Writing scores, we surveyed 17,040 TOEFL iBT examinees from 24 countries on their keyboard-related perceptions and preferences and analyzed the survey responses together with their test scores. Results suggest that controlling…
NASA Technical Reports Server (NTRS)
Amling, G. E.; Holms, A. G.
1973-01-01
A computer program is described that performs a statistical multiple-decision procedure called chain pooling. It uses a number of mean squares assigned to error variance that is conditioned on the relative magnitudes of the mean squares. The model selection is done according to user-specified levels of type 1 or type 2 error probabilities.
An alternative approach to confidence interval estimation for the win ratio statistic.
Luo, Xiaodong; Tian, Hong; Mohanty, Surya; Tsai, Wei Yann
2015-03-01
Pocock et al. (2012, European Heart Journal 33, 176-182) proposed a win ratio approach to analyzing composite endpoints comprised of outcomes with different clinical priorities. In this article, we establish a statistical framework for this approach. We derive the null hypothesis and propose a closed-form variance estimator for the win ratio statistic in the all-pairwise-matching situation. Our simulation study shows that the proposed variance estimator performs well regardless of the magnitude of the treatment effect size and the type of the joint distribution of the outcomes. © 2014, The International Biometric Society.
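The all-pairwise win ratio itself can be sketched in a few lines; the tuple ordering, the "higher is better" convention, and the function name are illustrative assumptions, and the closed-form variance estimator of the article is not reproduced here:

```python
def win_ratio(treated, control):
    """All-pairwise win ratio for a hierarchical composite endpoint.
    Each subject is a tuple of outcomes ordered by clinical priority,
    with the (illustrative) convention that higher values are better.
    Assumes at least one pairwise loss."""
    wins = losses = 0
    for t in treated:
        for c in control:
            for t_out, c_out in zip(t, c):
                if t_out > c_out:
                    wins += 1
                    break
                if t_out < c_out:
                    losses += 1
                    break
            # a pair tied on every component counts as neither win nor loss
    return wins / losses
```

Each treatment-control pair is compared on the highest-priority outcome first, falling through to lower-priority outcomes only on ties, and the statistic is the ratio of total wins to total losses.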
Extended Poisson process modelling and analysis of grouped binary data.
Faddy, Malcolm J; Smith, David M
2012-05-01
A simple extension of the Poisson process results in binomially distributed counts of events in a time interval. A further extension generalises this to probability distributions under- or over-dispersed relative to the binomial distribution. Substantial levels of under-dispersion are possible with this modelling, but only modest levels of over-dispersion - up to Poisson-like variation. Although simple analytical expressions for the moments of these probability distributions are not available, approximate expressions for the mean and variance are derived, and used to re-parameterise the models. The modelling is applied in the analysis of two published data sets, one showing under-dispersion and the other over-dispersion. More appropriate assessment of the precision of estimated parameters and reliable model checking diagnostics follow from this more general modelling of these data sets. © 2012 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.
Unitary evolution of the quantum Universe with a Brown-Kuchař dust
NASA Astrophysics Data System (ADS)
Maeda, Hideki
2015-12-01
We study the time evolution of a wave function for the spatially flat Friedmann-Lemaître-Robertson-Walker Universe governed by the Wheeler-DeWitt equation, using both analytical and numerical methods. We consider a Brown-Kuchař dust as a matter field in order to introduce a ‘clock’ in quantum cosmology and adopt the Laplace-Beltrami operator-ordering. The Hamiltonian operator admits an infinite number of self-adjoint extensions corresponding to a one-parameter family of boundary conditions at the origin in the minisuperspace. For any value of the extension parameter in the boundary condition, the evolution of a wave function is unitary, and the classical initial singularity is avoided and replaced by a big bounce in the quantum system. Exact wave functions show that the expectation value of the spatial volume of the Universe obeys the classical time evolution at late times, but its variance diverges.
Variance change point detection for fractional Brownian motion based on the likelihood ratio test
NASA Astrophysics Data System (ADS)
Kucharczyk, Daniel; Wyłomańska, Agnieszka; Sikora, Grzegorz
2018-01-01
Fractional Brownian motion is one of the main stochastic processes used for describing the long-range dependence phenomenon for self-similar processes. It appears that for many real time series, characteristics of the data change significantly over time. Such behaviour can be observed in many applications, including physical and biological experiments. In this paper, we present a new technique for critical change point detection for cases where the data under consideration are driven by fractional Brownian motion with a time-changed diffusion coefficient. The proposed methodology is based on the likelihood ratio approach and represents an extension of a similar methodology used for Brownian motion, a process with independent increments. Here, we also propose a statistical test for the significance of the estimated critical point. In addition, an extensive simulation study is provided to test the performance of the proposed method.
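The likelihood-ratio scan for a variance change can be illustrated in the simplest setting of independent zero-mean Gaussian observations. Fractional Brownian motion increments are dependent, so this is only the i.i.d. analogue of the authors' method, not their exact statistic; the function and argument names are illustrative:

```python
import math

def variance_change_lrt(x, min_seg=5):
    """Scan for a single variance change point in a zero-mean Gaussian
    sequence via the likelihood ratio: for each candidate split k,
    compare the one-variance fit against separate variances before and
    after k, and return the split maximising the statistic."""
    n = len(x)
    s2_all = sum(v * v for v in x) / n
    best_k, best_stat = None, float("-inf")
    for k in range(min_seg, n - min_seg + 1):
        s2_left = sum(v * v for v in x[:k]) / k
        s2_right = sum(v * v for v in x[k:]) / (n - k)
        if s2_left <= 0.0 or s2_right <= 0.0:
            continue  # degenerate segment: skip
        # -2 log likelihood ratio for "one variance" vs "two variances"
        stat = (n * math.log(s2_all)
                - k * math.log(s2_left)
                - (n - k) * math.log(s2_right))
        if stat > best_stat:
            best_k, best_stat = k, stat
    return best_k, best_stat
```

The significance of the maximised statistic would then be assessed against its null distribution, which is the role of the test proposed in the paper.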
Efficient prediction designs for random fields.
Müller, Werner G; Pronzato, Luc; Rendas, Joao; Waldl, Helmut
2015-03-01
For estimation and predictions of random fields, it is increasingly acknowledged that the kriging variance may be a poor representative of true uncertainty. Experimental designs based on more elaborate criteria that are appropriate for empirical kriging (EK) are then often non-space-filling and very costly to determine. In this paper, we investigate the possibility of using a compound criterion inspired by an equivalence theorem type relation to build designs quasi-optimal for the EK variance when space-filling designs become unsuitable. Two algorithms are proposed, one relying on stochastic optimization to explicitly identify the Pareto front, whereas the second uses the surrogate criteria as local heuristic to choose the points at which the (costly) true EK variance is effectively computed. We illustrate the performance of the algorithms presented on both a simple simulated example and a real oceanographic dataset. © 2014 The Authors. Applied Stochastic Models in Business and Industry published by John Wiley & Sons, Ltd.
NASA Astrophysics Data System (ADS)
Rusakov, Oleg; Laskin, Michael
2017-06-01
We consider a stochastic model of price changes in real estate markets. We suppose that the changes recorded in a book of prices occur at the jump points of a Poisson process with random intensity; that is, the moments of change follow a random process of the Cox (doubly stochastic Poisson) type. We calculate cumulative mathematical expectations and variances for the random intensity of this point process. In the case where the random intensity process is a martingale, the cumulative variance grows linearly. We statistically process a number of observations of real estate prices and accept the hypothesis of linear growth for the estimates of both the cumulative average and the cumulative variance, for both input and output prices recorded in the book of prices.
Body Composition and Somatotype of Male and Female Nordic Skiers
ERIC Educational Resources Information Center
Sinning, Wayne E.; And Others
1977-01-01
Anthropometric measurements (body composition and somatotype characteristics) for male and female Nordic skiers showed small values for measures of variance, suggesting that the subjects represented a select body type for the sport. (Author/MJB)
24 CFR 990.245 - Types of appeals.
Code of Federal Regulations, 2011 CFR
2011-04-01
... variance of ten percent or greater in its PEL. (d) Appeal for changing market conditions. A PHA may appeal... may appeal its PEL if it can produce actual project cost data derived from actual asset management, as...
Do Different Facets of Impulsivity Predict Different Types of Aggression?
Derefinko, Karen; DeWall, C. Nathan; Metze, Amanda V.; Walsh, Erin C.; Lynam, Donald R.
2011-01-01
The current study examined the relations between impulsivity-related traits (as assessed by the UPPS-P Impulsive Behavior Scale) and aggressive behaviors. Results indicated that UPPS-P Lack of Premeditation and Sensation Seeking were important in predicting general violence. In contrast, UPPS-P Urgency was most useful in predicting intimate partner violence. To further explore relations between intimate partner violence and Urgency, a measure of autonomic response to pleasant and aversive stimuli and facets of Neuroticism from the NEO PI-R were used as control variables. Autonomic responsivity was correlated with intimate partner violence at the zero-order level, and predicted significant variance in intimate partner violence in regression equations. However, UPPS-P Urgency was able to account for unique variance in intimate partner violence above and beyond measures of Neuroticism and arousal. Implications regarding the use of a multifaceted conceptualization of impulsivity in the prediction of different types of violent behavior are discussed. PMID:21259270
Evaluation of SNS Beamline Shielding Configurations using MCNPX Accelerated by ADVANTG
DOE Office of Scientific and Technical Information (OSTI.GOV)
Risner, Joel M; Johnson, Seth R.; Remec, Igor
2015-01-01
Shielding analyses for the Spallation Neutron Source (SNS) at Oak Ridge National Laboratory pose significant computational challenges, including highly anisotropic high-energy sources, a combination of deep penetration shielding and an unshielded beamline, and a desire to obtain well-converged nearly global solutions for mapping of predicted radiation fields. The majority of these analyses have been performed using MCNPX with manually generated variance reduction parameters (source biasing and cell-based splitting and Russian roulette) that were largely based on the analyst's insight into the problem specifics. Development of the variance reduction parameters required extensive analyst time, and was often tailored to specific portions of the model phase space. We previously applied a developmental version of the ADVANTG code to an SNS beamline study to perform a hybrid deterministic/Monte Carlo analysis and showed that we could obtain nearly global Monte Carlo solutions with essentially uniform relative errors for mesh tallies that cover extensive portions of the model with typical voxel spacing of a few centimeters. The use of weight window maps and consistent biased sources produced using the FW-CADIS methodology in ADVANTG allowed us to obtain these solutions using substantially less computer time than the previous cell-based splitting approach. While those results were promising, the process of using the developmental version of ADVANTG was somewhat laborious, requiring user-developed Python scripts to drive much of the analysis sequence. In addition, limitations imposed by the size of weight-window files in MCNPX necessitated the use of relatively coarse spatial and energy discretization for the deterministic Denovo calculations that we used to generate the variance reduction parameters. We recently applied the production version of ADVANTG to this beamline analysis, which substantially streamlined the analysis process. We also tested importance function collapsing (in space and energy) capabilities in ADVANTG. These changes, along with the support for parallel Denovo calculations using the current version of ADVANTG, give us the capability to improve the fidelity of the deterministic portion of the hybrid analysis sequence, obtain improved weight-window maps, and reduce both the analyst and computational time required for the analysis process.
Barko, V.A.; Herzog, D.P.; O'Connell, M. T.
2006-01-01
We examined data collected on fish assemblage structure among three differing floodplain types (broad, moderate, and narrow) during the 1993 flood in the unimpounded reach of the upper Mississippi River. This 500-year flood event provided a unique opportunity to investigate fish-floodplain function because the main river channel is otherwise typically disjunct from approximately 82% of its floodplain by an extensive levee system. Fishes were sampled during three separate periods, and 42 species of adult and young-of-the-year (YOY) fishes were captured. Analysis of similarity (ANOSIM) revealed a significant and distinguishable difference in both adult and YOY assemblage structure among the three floodplain types. Analysis of variance revealed that Secchi transparency, turbidity, water velocity, and dissolved oxygen were significantly different among the floodplain types. However, only depth of gear deployment and Secchi transparency were significantly correlated with adult assemblage structure. None of these variables were significantly correlated with YOY assemblage structure. The numerically abundant families (adult and YOY catches combined) on the floodplain included Centrarchidae, Ictaluridae, and Cyprinidae. Both native and non-native fishes were captured on the floodplain, and several of the numerically abundant species that were captured on the floodplain peaked in catch-per-unit-effort 1-3 years after the 1993 flood event. This suggests that some species may have used flooded terrestrial habitat for spawning, feeding, or both. The findings from our study provide much needed insight into fish-floodplain function in a temperate, channelized river system and suggest that lateral connectivity of the main river channel to less degraded reaches of its floodplain should become a management priority, not only to maintain faunal biodiversity but also to potentially reduce the impacts of non-native species in large river systems.
Schmitt, Thomas; Haubrich, Karola
2008-05-01
The distribution of the mountain coniferous forest biome in Europe throughout time is not sufficiently understood. One character species of this habitat type is the large ringlet, Erebia euryale, which well reflects the extent of this biome today, and the genetic differentiation of this species among and within mountain systems may unravel the late Pleistocene history of this habitat type. We therefore analysed the allozyme pattern of 381 E. euryale individuals from 11 populations in four different European mountain systems (Pyrenees, Alps, Carpathians, Rila). All loci analysed were polymorphic. The mean F(ST) over all samples was high (20%). Furthermore, the mean genetic distance among samples was quite high (0.049). We found four different groups well supported by cluster analyses, bootstraps and hierarchical variance analyses: Pyrenees, western Alps, eastern Alps and southeastern Europe (Carpathians and Rila). The genetic diversity of the populations was highest in the southeastern European group and decreased stepwise westwards. Interestingly, the populations from Bulgaria and Romania were almost identical; therefore, we assume that they were not separated by the Danube Valley, at least during the last ice age. In contrast, the differentiation among the three western Alps populations was considerable. For all these reasons, we assume that (i) the most important refugial area for the coniferous mountain forest biome in Europe has been located in southeastern Europe, including at least parts of the Carpathians and the Bulgarian mountains; (ii) important refugial areas for this biome existed at the southeastern edge of the Alps; (iii) fragments of this habitat type survived along the southwestern Alps, but in a more scattered distribution; and (iv) relatively small relicts have persisted somewhere at the foothills of the Pyrenees.
Cox, Simon R.; MacPherson, Sarah E.; Ferguson, Karen J.; Nissan, Jack; Royle, Natalie A.; MacLullich, Alasdair M.J.; Wardlaw, Joanna M.; Deary, Ian J.
2014-01-01
Both general fluid intelligence (gf) and performance on some ‘frontal tests’ of cognition decline with age. Both types of ability are at least partially dependent on the integrity of the frontal lobes, which also deteriorate with age. Overlap between these two methods of assessing complex cognition in older age remains unclear. Such overlap could be investigated using inter-test correlations alone, as in previous studies, but this would be enhanced by ascertaining whether frontal test performance and gf share neurobiological variance. To this end, we examined relationships between gf and 6 frontal tests (Tower, Self-Ordered Pointing, Simon, Moral Dilemmas, Reversal Learning and Faux Pas tests) in 90 healthy males, aged ~ 73 years. We interpreted their correlational structure using principal component analysis, and in relation to MRI-derived regional frontal lobe volumes (relative to maximal healthy brain size). gf correlated significantly and positively (.24 ≤ r ≤ .53) with the majority of frontal test scores. Some frontal test scores also exhibited shared variance after controlling for gf. Principal component analysis of test scores identified units of gf-common and gf-independent variance. The former was associated with variance in the left dorsolateral (DL) and anterior cingulate (AC) regions, and the latter with variance in the right DL and AC regions. Thus, we identify two biologically-meaningful components of variance in complex cognitive performance in older age and suggest that age-related changes to DL and AC have the greatest cognitive impact. PMID:25278641
Quantitative evaluation of variance in secondary dentition eruption among ethnic groups in Hawai'i.
Greer, Mark H K; Loo, Kevin J
2003-03-01
Little scientific evidence has existed to support the belief, common among dentists who treat Pacific Islander populations, that many children of the region erupt secondary teeth earlier and at an eruption rate that exceeds that of Caucasian children. A data set of 26,097 public school children, created in Hawai'i during the 1998-1999 school year, provided the opportunity to examine variance in eruption timing and sequence. Hawai'i is an ethnically diverse community, with a majority population comprised of Asians and Pacific Islanders. Children 5 through 9 years of age were examined for gender and ethnic variance. In the aggregate, at all ages, girls erupted teeth earlier than boys; however, while this was generally true among individual tooth types, the variance was not always statistically significant. By ethnic group, African Americans exhibited earlier eruption than Caucasians; however, Caucasian children caught up by nine years of age. Native Hawaiian, Samoan and Tongan children exhibited earlier eruption and higher rates of secondary dentition eruption than Caucasian or African American children. Children of the various Asian cohorts did not exhibit significant variance by contrast with Caucasians. Based upon these findings, the authors recommend that dietary fluoride supplementation of Native Hawaiian, Samoan and Tongan children begin at birth rather than at 6 months of age, and that these children be targeted for pit & fissure sealants as early as five years of age.
Fragomeni, Breno de Oliveira; Misztal, Ignacy; Lourenco, Daniela Lino; Aguilar, Ignacio; Okimoto, Ronald; Muir, William M
2014-01-01
The purpose of this study was to determine whether the set of genomic regions inferred as accounting for the majority of genetic variation in quantitative traits remains stable over multiple generations of selection. The data set contained phenotypes for five generations of broiler chickens for body weight, breast meat, and leg score. The population consisted of 294,632 animals over five generations and also included genotypes of 41,036 single nucleotide polymorphisms (SNP) for 4,866 animals, after quality control. The SNP effects were calculated by a GWAS-type analysis using the single-step genomic BLUP approach for generations 1-3, 2-4, 3-5, and 1-5. Variances were calculated for windows of 20 SNP. The top ten windows for each trait that explained the largest fraction of the genetic variance were examined across generations. Across generations, the top 10 windows explained more than 0.5% but less than 1% of the total variance. Also, the pattern of the windows was not consistent across generations. The windows that explained the greatest variance changed greatly among the combinations of generations, with a few exceptions. In many cases, a window identified as top for one combination explained less than 0.1% for the other combinations. We conclude that identification of top SNP windows for a population may have little predictive power for genetic selection in the following generations for the traits evaluated here.
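The per-window variance calculation can be sketched as follows. This is a simplified share computation, not the authors' pipeline: the SNP effects are assumed to come from a separate analysis (e.g. single-step genomic BLUP), and linkage disequilibrium between windows is ignored, so shares need not sum to one in real data:

```python
from statistics import pvariance

def window_variance_shares(effects, genotypes, window=20):
    """Share of total additive variance attributed to consecutive SNP
    windows: the variance across animals of each windowed slice of the
    breeding value, divided by the variance of the full breeding value."""
    n_snp = len(effects)
    # full additive breeding value per animal
    bv = [sum(g[j] * effects[j] for j in range(n_snp)) for g in genotypes]
    total = pvariance(bv)
    shares = []
    for start in range(0, n_snp, window):
        idx = range(start, min(start + window, n_snp))
        part = [sum(g[j] * effects[j] for j in idx) for g in genotypes]
        shares.append(pvariance(part) / total)
    return shares
```

Ranking these shares and keeping the ten largest windows per trait reproduces the "top ten windows" bookkeeping described in the abstract.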
Evaluation of marginal fit of two all-ceramic copings with two finish lines
Subasi, Gulce; Ozturk, Nilgun; Inan, Ozgur; Bozogullari, Nalan
2012-01-01
Objectives: This in-vitro study investigated the marginal fit of two all-ceramic copings with 2 finish line designs. Methods: Forty machined stainless steel molar die models with two different margin designs (chamfer and rounded shoulder) were prepared. A total of 40 standardized copings were fabricated and divided into 4 groups (n=10 for each finish line-coping material). Coping materials tested were IPS e.max Press and Zirkonzahn; luting agent was Variolink II. Marginal fit was evaluated after cementation with a stereomicroscope (Leica MZ16). Two-way analysis of variance and Tukey-HSD test were performed to assess the influence of each finish line design and ceramic type on the marginal fit of 2 all-ceramic copings (α =.05). Results: Two-way analysis of variance revealed no statistically significant differences for marginal fit relative to finish lines (P=.362) and ceramic types (P=.065). Conclusion: Within the limitations of this study, both types of all-ceramic copings demonstrated that the mean marginal fit was considered acceptable for clinical application (⩽120 μm). PMID:22509119
Type D personality and the development of PTSD symptoms: a prospective study.
Rademaker, Arthur R; van Zuiden, Mirjam; Vermetten, Eric; Geuze, Elbert
2011-05-01
Psychological trauma and prolonged stress may cause mental disorders such as posttraumatic stress disorder (PTSD). Pretrauma personality is an important determinant of posttraumatic adjustment. Specifically, trait neuroticism has been identified as a risk factor for PTSD. Additionally, the combination of high negative affectivity or neuroticism with marked social inhibition or introversion, also called Type D personality (Denollet, 2000), may compose a risk factor for PTSD. There is no research available that examined pretrauma Type D personality in relation to PTSD. The present study examined the predictive validity of the Type D personality construct in a sample of Dutch soldiers. Data were collected prior to and 6 months after military deployment to Afghanistan. Separate multiple regression analyses were performed to examine the predictive validity of Type D personality. First, Type D personality was defined as the interaction between negative affect and social inhibition (Na × Si). In a second analysis, Type D was defined following cutoff criteria recommended by Denollet (2000). Results showed that negative affectivity was a significant predictor of PTSD symptoms. Social inhibition and the interaction Na × Si did not add to the amount of explained variance in postdeployment PTSD scores over the effects of childhood abuse, negative affectivity, and prior psychological symptoms. A second analysis showed that Type D personality (dichotomous) did not add to the amount of explained variance in postdeployment PTSD scores over the effects of childhood abuse, and prior psychological symptoms. Therefore, Type D personality appears to be of limited value to explain development of combat-related PTSD symptoms.
Using CNN Features to Better Understand What Makes Visual Artworks Special.
Brachmann, Anselm; Barth, Erhardt; Redies, Christoph
2017-01-01
One of the goals of computational aesthetics is to understand what is special about visual artworks. By analyzing image statistics, contemporary methods in computer vision enable researchers to identify properties that distinguish artworks from other (non-art) types of images. Such knowledge will eventually allow inferences with regard to the possible neural mechanisms that underlie aesthetic perception in the human visual system. In the present study, we define measures that capture variances of features of a well-established Convolutional Neural Network (CNN), which was trained on millions of images to recognize objects. Using an image dataset that represents traditional Western, Islamic and Chinese art, as well as various types of non-art images, we show that we need only two variance measures to distinguish between the artworks and non-art images with a high classification accuracy of 93.0%. Results for the first variance measure imply that, in the artworks, the subregions of an image tend to be filled with pictorial elements to which many diverse CNN features respond (richness of feature responses). Results for the second measure imply that this diversity is tied to a relatively large variability of the responses of individual CNN features across the subregions of an image. We hypothesize that this combination of richness and variability of CNN feature responses is one of the properties that makes traditional visual artworks special. We discuss the possible neural underpinnings of this perceptual quality of artworks and propose to study the same quality in other types of aesthetic stimuli, such as music and literature.
Unpacking cultural factors in adaptation to type 2 diabetes mellitus.
Walsh, Michele E; Katz, Murray A; Sechrest, Lee
2002-01-01
Race and ethnicity are used as predictors of outcome in health services research. Often, however, race and ethnicity serve merely as proxies for the resources, values, beliefs, and behaviors (ie, ecology and culture) that are assumed to correlate with them. "Unpacking" proxy variables, that is, directly measuring the variables believed to underlie them, would provide a more reliable and more interpretable way of looking at group differences. To assess the use of a measure of ecocultural domains that is correlated with ethnicity in accounting for variance in adherence, quality of life, clinical outcomes, and service utilization. A cross-sectional observational study. Twenty-six Hispanic and 29 non-Hispanic white VA primary care patients with type 2 diabetes mellitus. The independent variables were patient ethnicity and a summed score of ecocultural domains representing patient adaptation to illness. The outcomes were adherence to treatment, health-related quality of life, clinical indicators of disease management, and utilization of urgent health care services. Patient adaptation was correlated with ethnicity and accounted for more variance in all outcomes than did ethnicity. The unique variance accounted for by adaptation was small to moderate, whereas that accounted for by ethnicity was negligible. It is possible to identify and measure ecocultural domains that better account for variation in important health services outcomes for patients with type 2 diabetes than does ethnicity. Going beyond the study of ethnic differences alone and measuring the correlated factors that play a role in disease management can advance understanding of the phenomena involved in this variation and provide better direction for service design and delivery.
Träff, Ulf
2013-10-01
This study examined the relative contributions of general cognitive abilities and number abilities to word problem solving, calculation, and arithmetic fact retrieval in a sample of 134 children aged 10 to 13 years. The following tasks were administered: listening span, visual matrix span, verbal fluency, color naming, Raven's Progressive Matrices, enumeration, number line estimation, and digit comparison. Hierarchical multiple regressions demonstrated that number abilities provided an independent contribution to fact retrieval and word problem solving. General cognitive abilities contributed to problem solving and calculation. All three number tasks accounted for a similar amount of variance in fact retrieval, whereas only the number line estimation task contributed unique variance in word problem solving. Verbal fluency and Raven's matrices accounted for an equal amount of variance in problem solving and calculation. The current findings demonstrate, in accordance with Fuchs and colleagues' developmental model of mathematical learning (Developmental Psychology, 2010, Vol. 46, pp. 1731-1746), that both number abilities and general cognitive abilities underlie 10- to 13-year-olds' proficiency in problem solving, whereas only number abilities underlie arithmetic fact retrieval. Thus, the amount and type of cognitive contribution to arithmetic proficiency varies between the different aspects of arithmetic. Furthermore, how closely linked a specific aspect of arithmetic is to the whole number representation systems is not the only factor determining the amount and type of cognitive contribution in 10- to 13-year-olds. In addition, the mathematical complexity of the task appears to influence the amount and type of cognitive support. Copyright © 2013 Elsevier Inc. All rights reserved.
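The hierarchical-regression logic of the study (the extra variance, delta R^2, explained when a number-ability predictor is added to a general-ability baseline) can be sketched as follows. The data and variable names are invented for illustration; this is not the study's analysis code.

```python
# Hedged sketch of a hierarchical regression step: delta R^2 from adding a
# predictor block to a baseline OLS model (all data hypothetical).

def solve(a, b):
    """Solve the linear system a x = b by Gauss-Jordan elimination with pivoting."""
    n = len(b)
    m = [row[:] + [b[i]] for i, row in enumerate(a)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(m[r][c]))
        m[c], m[p] = m[p], m[c]
        for r in range(n):
            if r != c and m[c][c] != 0:
                f = m[r][c] / m[c][c]
                m[r] = [x - f * y for x, y in zip(m[r], m[c])]
    return [m[i][n] / m[i][i] for i in range(n)]

def r_squared(xs, y):
    """R^2 of the OLS regression of y on columns xs (intercept added)."""
    rows = [[1.0] + [col[i] for col in xs] for i in range(len(y))]
    k = len(rows[0])
    xtx = [[sum(r[i] * r[j] for r in rows) for j in range(k)] for i in range(k)]
    xty = [sum(r[i] * yi for r, yi in zip(rows, y)) for i in range(k)]
    beta = solve(xtx, xty)
    yhat = [sum(b * v for b, v in zip(beta, r)) for r in rows]
    ybar = sum(y) / len(y)
    ss_res = sum((yi - yh) ** 2 for yi, yh in zip(y, yhat))
    ss_tot = sum((yi - ybar) ** 2 for yi in y)
    return 1 - ss_res / ss_tot

# Hypothetical scores: general ability g, a number-task score, and an outcome.
g      = [3, 1, 4, 1, 5, 9, 2, 6, 5, 3]
number = [2, 7, 1, 8, 2, 8, 1, 8, 2, 8]
y      = [5, 8, 6, 9, 7, 16, 3, 14, 8, 11]

r2_base = r_squared([g], y)
r2_full = r_squared([g, number], y)
print(f"delta R^2 from number ability: {r2_full - r2_base:.3f}")
```

Because the models are nested, R^2 for the full model can never fall below the baseline; the interesting question is how large the increment is.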
Rasmussen, Victoria; Turnell, Adrienne; Butow, Phyllis; Juraskova, Ilona; Kirsten, Laura; Wiener, Lori; Patenaude, Andrea; Hoekstra-Weebers, Josette; Grassi, Luigi
2016-01-01
Objectives Burnout is a significant problem among healthcare professionals working within the oncology setting. This study aimed to investigate predictors of emotional exhaustion (EE) and depersonalisation (DP) in psychosocial oncologists, through the application of the effort–reward imbalance (ERI) model with an additional focus on the role of meaningful work in the burnout process. Methods Psychosocial oncology clinicians (n = 417) in direct patient contact who were proficient in English were recruited from 10 international psychosocial oncology societies. Participants completed an online questionnaire, which included measures of demographic and work characteristics, EE and DP subscales of the Maslach Burnout Inventory-Human Services Survey, the Short Version ERI Questionnaire and the Work and Meaning Inventory. Results Higher effort and lower reward were both significantly associated with greater EE, although not DP. The interaction of higher effort and lower reward did not predict greater EE or DP. Overcommitment predicted both EE and DP but did not moderate the impact of effort and reward on burnout. Overall, the ERI model accounted for 33% of the variance in EE. Meaningful work significantly predicted both EE and DP but accounted for only 2% more of the variance in EE above and beyond the ERI model. Conclusions The ERI was only partially supported as a useful framework for investigating burnout in psychosocial oncology professionals. Meaningful work may be a viable extension of the ERI model. Burnout among health professionals may be reduced by interventions aimed at increasing self-efficacy and changes to the supportive work environment. PMID:26239424
Constraining Particle Variation in Lunar Regolith for Simulant Design
NASA Technical Reports Server (NTRS)
Schrader, Christian M.; Rickman, Doug; Stoeser, Douglas; Hoelzer, Hans
2008-01-01
Simulants are used by the lunar engineering community to develop and test technologies for In Situ Resource Utilization (ISRU), excavation and drilling, and for mitigation of hazards to machinery and human health. Working with the United States Geological Survey (USGS), other NASA centers, private industry and academia, Marshall Space Flight Center (MSFC) is leading NASA's lunar regolith simulant program. There are two main efforts: simulant production and simulant evaluation. This work requires a highly detailed understanding of regolith particle type, size, and shape distribution, and of bulk density. The project has developed Figure of Merit (FoM) algorithms to quantitatively compare these characteristics between two materials. The FoM can be used to compare two lunar regolith samples, regolith to simulant, or two parcels of simulant. In work presented here, we use the FoM algorithm to examine the variance of particle type in Apollo 16 highlands regolith core and surface samples. For this analysis we have used internally consistent particle type data for the 90-150 μm fraction of Apollo core 64001/64002 from station 4, core 60009/60010 from station 10, and surface samples from various Apollo 16 stations. We calculate mean modal compositions for each core and for the group of surface samples and quantitatively compare samples of each group to its mean as a measurement of within-group variance; we also calculate an FoM for every sample against the mean composition of 64001/64002. This gives variation with depth at two locations and between Apollo 16 stations. Of the tested groups, core 60009/60010 has the highest internal variance with an average FoM score of 0.76 and core 64001/64002 has the lowest with an average FoM of 0.92. The surface samples have a low but intermediate internal variance with an average FoM of 0.79.
FoMs calculated against the 64001/64002 mean reference composition range from 0.79-0.97 for 64001/64002, from 0.41-0.91 for 60009/60010, and from 0.54-0.93 for the surface samples. Six samples fall below 0.70, and they are also the least mature (i.e., have the lowest I_s/FeO). Because agglutinates are the dominant particle type and the agglutinate population increases with sample maturity (I_s/FeO), the maturity of the sample relative to the reference is a prime determinant of the particle type FoM score within these highland samples.
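The comparison step can be illustrated with a toy similarity score between two modal (particle-type) compositions, where 1.0 means identical and 0.0 means fully disjoint. This is only a stand-in for the project's actual FoM algorithm, which is considerably more involved; the compositions below are invented round numbers.

```python
# Toy stand-in for a composition-comparison Figure of Merit (not the NASA
# algorithm): 1 minus half the L1 distance between modal-fraction vectors.

def composition_similarity(a, b):
    """1.0 for identical compositions, 0.0 for fully disjoint ones.
    a, b: dicts mapping particle type -> modal fraction (fractions sum to 1)."""
    types = set(a) | set(b)
    return 1.0 - 0.5 * sum(abs(a.get(t, 0.0) - b.get(t, 0.0)) for t in types)

reference = {"agglutinate": 0.55, "plagioclase": 0.30, "glass": 0.15}
immature  = {"agglutinate": 0.25, "plagioclase": 0.55, "glass": 0.20}

print(composition_similarity(reference, reference))  # 1.0
print(composition_similarity(reference, immature))
```

As in the abstract, a sample poorer in agglutinates than the reference (i.e., less mature) scores lower against the reference mean.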
CMB distortion from circumgalactic gas
NASA Astrophysics Data System (ADS)
Singh, Priyanka; Nath, Biman B.; Majumdar, Subhabrata; Silk, Joseph
2015-04-01
We study the Sunyaev-Zel'dovich (SZ) distortion of the cosmic microwave background radiation from extensive circumgalactic gas (CGM) in massive galactic haloes. Recent observations have shown that galactic haloes contain a large amount of X-ray emitting gas at the virial temperature, as well as a significant amount of warm O VI absorbing gas. We consider the SZ distortion from the hot gas in those galactic haloes in which the gas cooling time is longer than the halo destruction time-scale. We show that the SZ distortion signal from the hot gas in these galactic haloes at redshifts z ≈ 1-8 can be significant at small angular scales (ℓ ~ 10^4), and dominate over the signal from galaxy clusters. The estimated SZ signal for most massive galaxies (halo mass ≥10^12.5 M⊙) is consistent with the marginal detection by Planck at these mass scales. We also consider the SZ effect from warm circumgalactic gas. The integrated Compton distortion from the warm O VI absorbing gas is estimated to be y ~ 10^-8, which could potentially be detected by experiments planned for the near future. Finally, we study the detectability of the SZ signal from circumgalactic gas in two types of surveys, a simple extension of the South Pole Telescope survey and a more futuristic cosmic-variance-limited survey. We find that these surveys can easily detect the kinetic Sunyaev-Zel'dovich signal from CGM. With the help of a Fisher matrix analysis, we find that it will be possible for these surveys to constrain the gas fraction in CGM, after marginalizing over cosmological parameters, to ≤33 per cent, in case of no redshift evolution of the gas fraction.
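The order of magnitude of the Compton y parameter quoted above can be checked with a back-of-the-envelope calculation for a uniform slab, y = (σ_T / m_e c^2) ∫ n_e k_B T_e dl. The halo numbers below are illustrative round values, not parameters taken from the paper.

```python
# Back-of-the-envelope Compton y for a uniform warm-gas slab:
# y = (sigma_T / m_e c^2) * n_e * k_B * T_e * L  (SI units throughout).
SIGMA_T = 6.652e-29      # Thomson cross-section, m^2
ME_C2   = 8.187e-14      # electron rest energy, J
K_B     = 1.381e-23      # Boltzmann constant, J/K
MPC     = 3.086e22       # one megaparsec in metres

def compton_y(n_e, t_e, length_m):
    """y for electron density n_e [m^-3], temperature t_e [K], path length [m]."""
    return (SIGMA_T / ME_C2) * n_e * K_B * t_e * length_m

# Illustrative warm-CGM values: n_e ~ 1e2 m^-3 (1e-4 cm^-3), T_e ~ 1e6 K, L ~ 0.3 Mpc
y = compton_y(1e2, 1e6, 0.3 * MPC)
print(f"y ~ {y:.1e}")
```

With these round inputs the estimate lands at the ~10^-8 order quoted for the warm absorbing gas.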
Kuffner, Ilsa B.; Roberts, Kelsey E.; Flannery, Jennifer A.; Morrison, Jennifer M.; Richey, Julie
2017-01-01
Massive corals provide a useful archive of environmental variability, but careful testing of geochemical proxies in corals is necessary to validate the relationship between each proxy and environmental parameter throughout the full range of conditions experienced by the recording organisms. Here we use samples from a coral-growth study to test the hypothesis that Sr/Ca in the coral Siderastrea siderea accurately records sea-surface temperature (SST) in the subtropics (Florida, USA) along 350 km of reef tract. We test calcification rate, measured via buoyant weight, and linear extension (LE) rate, estimated with Alizarin Red-S staining, as predictors of variance in the Sr/Ca records of 39 individual S. siderea corals grown at four outer-reef locations next to in-situ temperature loggers during two, year-long periods. We found that corals with calcification rates < 1.7 mg cm−2 d−1 or < 1.7 mm yr−1 LE returned spuriously high Sr/Ca values, leading to a cold-bias in Sr/Ca-based SST estimates. The threshold-type response curves suggest that extension rate can be used as a quality-control indicator during sample and drill-path selection when using long cores for SST paleoreconstruction. For our corals that passed this quality control step, the Sr/Ca-SST proxy performed well in estimating mean annual temperature across three sites spanning 350 km of the Florida reef tract. However, there was some evidence that extreme temperature stress in 2010 (cold snap) and 2011 (SST above coral-bleaching threshold) may have caused the corals not to record the temperature extremes. Known stress events could be avoided during modern calibrations of paleoproxies.
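The quality-control step implied by the abstract (exclude coral records whose growth falls below the threshold rates before trusting their Sr/Ca for SST) can be sketched directly. The two thresholds come from the abstract; the records themselves are invented.

```python
# Sketch of Sr/Ca quality control: drop slow-growing records that would
# return spuriously high (cold-biased) Sr/Ca values.

CALC_MIN = 1.7  # calcification threshold, mg cm^-2 d^-1 (from the abstract)
EXT_MIN  = 1.7  # linear extension threshold, mm yr^-1 (from the abstract)

def passes_qc(record):
    return record["calcification"] >= CALC_MIN and record["extension"] >= EXT_MIN

corals = [  # hypothetical records
    {"id": "A", "calcification": 2.1, "extension": 3.0},
    {"id": "B", "calcification": 1.2, "extension": 2.5},  # slow calcifier: cold-biased
    {"id": "C", "calcification": 2.4, "extension": 1.1},  # slow extender: cold-biased
]
usable = [c["id"] for c in corals if passes_qc(c)]
print(usable)  # ['A']
```

In core-based paleoreconstruction the same idea would be applied per drill path, using extension rate as the screening variable.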
NASA Astrophysics Data System (ADS)
Ling, Yuye; Hendon, Christine P.
2016-02-01
Functional extensions to optical coherence tomography (OCT) provide useful imaging contrasts that are complementary to conventional OCT. Our goal is to characterize tissue types within the myocardium that arise from remodeling and therapy. High-speed imaging is necessary to extract mechanical properties and dynamics of fiber orientation changes in a beating heart. Functional extensions of OCT such as polarization-sensitive OCT and optical coherence elastography (OCE) require high phase stability of the system, which is a drawback of current mechanically tuned swept source OCT systems. Here we present a high-speed functional imaging platform, which includes an ultrahigh-phase-stable swept source equipped with a KTN deflector from NTT-AT. The swept source does not require mechanical movements during the wavelength sweeping; it is electrically tuned. The inter-sweep phase variance of the system was measured to be less than 300 ps at a path length difference of ~2 mm. The axial resolution of the system is 20 µm and the -10 dB fall-off depth is about 3.2 mm. The sample arm has an 8 mm × 8 mm field of view with a lateral resolution of approximately 18 µm. The sample arm uses a two-axis MEMS mirror, which is programmable and capable of scanning arbitrary patterns at a sampling rate of 50 kHz. Preliminary imaging results showed differences in polarization properties and image penetration in ablated and normal myocardium. In the future, we will conduct dynamic stretching experiments with strips of human myocardial tissue to characterize mechanical properties using OCE. With high speed imaging of 200 kHz and an all-fiber design, we will work towards catheter-based functional imaging.
Sexual network drivers of HIV and herpes simplex virus type 2 transmission
Omori, Ryosuke; Abu-Raddad, Laith J.
2017-01-01
Objectives: HIV and herpes simplex virus type 2 (HSV-2) infections are sexually transmitted and propagate in sexual networks. Using mathematical modeling, we aimed to quantify the effects of key network statistics on infection transmission, and the extent to which HSV-2 prevalence can be a proxy of HIV prevalence. Design/methods: An individual-based simulation model was constructed to describe sex partnering and infection transmission, and was parameterized with representative natural history, transmission, and sexual behavior data. Correlations were assessed among model outcomes (HIV/HSV-2 prevalences) and multiple linear regressions were conducted to estimate adjusted associations and effect sizes. Results: HIV prevalence was one-third or less of HSV-2 prevalence. HIV and HSV-2 prevalences were associated with a Spearman's rank correlation coefficient of 0.64 (95% confidence interval: 0.58–0.69). Collinearities among network statistics were detected, most notably between concurrency versus mean and variance of number of partners. Controlling for confounding, unmarried mean/variance of number of partners (or alternatively concurrency) were the strongest predictors of HIV prevalence. Meanwhile, unmarried/married mean/variance of number of partners (or alternatively concurrency), and clustering coefficient were the strongest predictors of HSV-2 prevalence. HSV-2 prevalence was a strong predictor of HIV prevalence by proxying effects of network statistics. Conclusion: Network statistics produced similar and differential effects on HIV/HSV-2 transmission, and explained most of the variation in HIV and HSV-2 prevalences. HIV prevalence reflected primarily mean and variance of number of partners, but HSV-2 prevalence was affected by a range of network statistics. HSV-2 prevalence (as a proxy) can forecast a population's HIV epidemic potential, thereby informing interventions. PMID:28514276
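The headline statistic, Spearman's rank correlation between the two prevalences across simulated networks, is simple to compute: rank both series, then take the Pearson correlation of the ranks. The prevalence values below are invented, not outputs of the study's model.

```python
# Minimal Spearman rank correlation (no tie handling; inputs here are tie-free).
from math import sqrt

def ranks(xs):
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    for rank, i in enumerate(order, start=1):
        r[i] = float(rank)
    return r

def spearman(x, y):
    rx, ry = ranks(x), ranks(y)
    mx, my = sum(rx) / len(rx), sum(ry) / len(ry)
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sqrt(sum((a - mx) ** 2 for a in rx))
    sy = sqrt(sum((b - my) ** 2 for b in ry))
    return cov / (sx * sy)

hsv2 = [0.12, 0.35, 0.50, 0.22, 0.61, 0.28]  # hypothetical prevalences
hiv  = [0.03, 0.10, 0.15, 0.08, 0.17, 0.07]
print(round(spearman(hsv2, hiv), 2))  # 0.94
```

For tied data a production implementation (e.g. a statistics library routine) assigns average ranks; the schematic above omits that.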
ERIC Educational Resources Information Center
Van Deun, K.; Groenen, P. J. F.; Heiser, W. J.; Busing, F. M. T. A.; Delbeke, L.
2005-01-01
In this paper, we reconsider the merits of unfolding solutions based on loss functions involving a normalization on the variance per subject. In the literature, solutions based on Stress-2 are often diagnosed to be degenerate in the majority of cases. Here, the focus lies on two frequently occurring types of degeneracies. The first type typically…
Mumford, Jeanette A.
2017-01-01
Even after thorough preprocessing and a careful time series analysis of functional magnetic resonance imaging (fMRI) data, artifact and other issues can lead to violations of the assumption that the variance is constant across subjects in the group level model. This is especially concerning when modeling a continuous covariate at the group level, as the slope is easily biased by outliers. Various models have been proposed to deal with outliers, including models that use the first level variance or that use the group level residual magnitude to differentially weight subjects. The most commonly used robust regression, implementing a robust estimator of the regression slope, has been previously studied in the context of fMRI studies and was found to perform well in some scenarios, but a loss of Type I error control can occur for some outlier settings. A second type of robust regression using a heteroskedasticity and autocorrelation consistent (HAC) estimator, which produces robust slope and variance estimates, has been shown to perform well, with better Type I error control, but with large sample sizes (500–1000 subjects). The Type I error control with smaller sample sizes has not been studied in this model and has not been compared to other modeling approaches that handle outliers such as FSL's Flame 1 and FSL's outlier de-weighting. Focusing on group level inference with a continuous covariate over a range of sample sizes and degree of heteroscedasticity, which can be driven either by the within- or between-subject variability, both styles of robust regression are compared to ordinary least squares (OLS), FSL's Flame 1, Flame 1 with outlier de-weighting algorithm and Kendall's Tau. Additionally, subject omission using the Cook's Distance measure with OLS and nonparametric inference with the OLS statistic are studied.
Pros and cons of these models as well as general strategies for detecting outliers in data and taking precaution to avoid inflated Type I error rates are discussed. PMID:28030782
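The "robust estimator of the regression slope" idea can be sketched with a generic Huber-weighted iteratively reweighted least squares (IRLS) fit. This is a textbook schematic, not FSL's or the paper's implementation; the data and tuning constant are illustrative.

```python
# Generic Huber-weighted IRLS for a simple group-level slope: large residuals
# are progressively downweighted so a single outlying subject cannot dominate.

def huber_slope(x, y, c=1.345, iters=50):
    """Weighted least-squares slope/intercept with Huber downweighting."""
    w = [1.0] * len(x)
    b0 = b1 = 0.0
    for _ in range(iters):
        sw = sum(w)
        mx = sum(wi * xi for wi, xi in zip(w, x)) / sw
        my = sum(wi * yi for wi, yi in zip(w, y)) / sw
        sxx = sum(wi * (xi - mx) ** 2 for wi, xi in zip(w, x))
        sxy = sum(wi * (xi - mx) * (yi - my) for wi, xi, yi in zip(w, x, y))
        b1 = sxy / sxx
        b0 = my - b1 * mx
        resid = [yi - (b0 + b1 * xi) for xi, yi in zip(x, y)]
        # Robust scale: median absolute residual / 0.6745 (MAD-based).
        s = sorted(abs(r) for r in resid)[len(resid) // 2] / 0.6745 or 1.0
        w = [1.0 if abs(r) <= c * s else c * s / abs(r) for r in resid]
    return b0, b1

x = [1, 2, 3, 4, 5, 6, 7, 8]                            # covariate values
y = [2.1, 3.9, 6.2, 8.0, 9.8, 12.1, 14.0, 40.0]         # last subject is an outlier
print(huber_slope(x, y))  # slope near 2, despite the outlier
```

Plain OLS on the same data yields a slope near 4; the downweighting pulls the estimate back toward the trend of the well-behaved subjects.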
Solution Methods for Certain Evolution Equations
NASA Astrophysics Data System (ADS)
Vega-Guzman, Jose Manuel
Solution methods for certain linear and nonlinear evolution equations are presented in this dissertation. Emphasis is placed mainly on the analytical treatment of nonautonomous differential equations, which are challenging to solve despite the existing numerical and symbolic computational software programs available. Ideas from transformation theory are adopted, allowing one to solve the problems under consideration from a non-traditional perspective. First, the Cauchy initial value problem is considered for a class of nonautonomous and inhomogeneous linear diffusion-type equations on the entire real line. Explicit transformations are used to reduce the equations under study to their corresponding standard forms, emphasizing natural relations with certain Riccati- (and/or Ermakov-)type systems. These relations give solvability results for the Cauchy problem of the parabolic equation considered. The superposition principle allows one to solve this problem formally from an unconventional point of view. An eigenfunction expansion approach is also considered for this general evolution equation. Examples considered to corroborate the efficacy of the proposed solution methods include the Fokker-Planck equation, the Black-Scholes model and the one-factor Gaussian Hull-White model. The results obtained in the first part are used to solve the Cauchy initial value problem for a certain inhomogeneous Burgers-type equation. The connection between linear (diffusion-type) and nonlinear (Burgers-type) parabolic equations is stressed in order to establish a strong commutative relation. Traveling wave solutions of a nonautonomous Burgers equation are also investigated. Finally, the minimum-uncertainty squeezed states for quantum harmonic oscillators are constructed explicitly. They are derived by the action of the corresponding maximal kinematical invariance group on the standard ground state solution.
It is shown that the product of the variances attains the required minimum value only at the instances when one variance is a minimum and the other is a maximum, that is, when squeezing of one of the variances occurs. Such an explicit construction is possible due to the relation between the diffusion-type equation studied in the first part and the time-dependent Schrödinger equation. A modification of the radiation field operators for squeezed photons in a perfect cavity is also suggested with the help of a nonstandard solution of Heisenberg's equation of motion.
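For reference, the standard form to which the diffusion-type equations above are reduced is the heat equation on the real line, whose Cauchy problem has the classical kernel solution (a textbook fact, not a result specific to this dissertation):

```latex
u_t = u_{xx}, \qquad u(x,0) = f(x), \qquad x \in \mathbb{R},\ t > 0,
\qquad
u(x,t) = \frac{1}{\sqrt{4\pi t}} \int_{-\infty}^{\infty} e^{-(x-y)^2/(4t)}\, f(y)\, dy.
```

The explicit transformations mentioned in the abstract map the nonautonomous, inhomogeneous equations onto this standard form, so that the heat kernel carries the solvability results back to the original problem.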
Sueyoshi, Ted; Nakahata, Akihiro; Emoto, Gen; Yuasa, Tomoki
2017-01-01
Background: Isokinetic strength and hop tests are commonly used to assess athletes’ readiness to return to sport after knee surgery. Purpose/Hypothesis: The purpose of this study was to investigate the results of single-leg hop and isokinetic knee strength testing in athletes who underwent anterior cruciate ligament reconstruction (ACLR) upon returning to sport participation as well as to study the correlation between these 2 test batteries. The secondary purpose was to compare the test results by graft type (patellar tendon or hamstring). It was hypothesized that there would be no statistically significant limb difference in either isokinetic knee strength or single-leg hop tests, that there would be a moderate to strong correlation between the 2 test batteries, and that there would be no significant difference between graft types. Study Design: Cross-sectional study; Level of evidence, 3. Methods: Twenty-nine high school and collegiate athletes who underwent ACLR participated in this study. At the time of return to full sport participation, a series of hop tests and knee extension/flexion isokinetic strength measurements were conducted. The results were analyzed using analysis of variance and Pearson correlation (r). Results: The timed 6-m hop test was the only hop test that showed a significant difference between the involved and uninvolved limbs (2.3 and 2.2 seconds, respectively; P = .02). A significant difference between limbs in knee strength was found for flexion peak torque/body weight at 180 deg/s (P = .03), flexion total work/body weight at 180 deg/s (P = .04), and flexion peak torque/body weight at 300 deg/s (P = .03). The strongest correlation between the hop tests and knee strength was found between the total distance of the hop tests and flexion total work/body weight at 300 deg/s (r = 0.69) and between the timed 6-m hop test and flexion peak torque/body weight at 300 deg/s (r = –0.54). 
There was no statistically significant difference in hop test performance or isokinetic knee strength between graft types. Conclusion: The single-leg hop tests and isokinetic strength measurements were both useful for a bilateral comparison of knee functional performance and strength. Knee flexion strength deficits and flexion-to-extension ratios seemed to be correlated with single-leg hop test performance. There was no difference in postoperative hop test performance or knee strength according to graft type. PMID:29164167
Unstable bodyweight and incident type 2 diabetes mellitus: A meta-analysis.
Kodama, Satoru; Fujihara, Kazuya; Ishiguro, Hajime; Horikawa, Chika; Ohara, Nobumasa; Yachi, Yoko; Tanaka, Shiro; Shimano, Hitoshi; Kato, Kiminori; Hanyu, Osamu; Sone, Hirohito
2017-07-01
The present meta-analysis aimed to clarify the association of unstable bodyweight with the risk of type 2 diabetes mellitus, an association that has been controversial among longitudinal studies. An electronic literature search of EMBASE and MEDLINE covered publications through 31 August 2016. The relative risks (RRs) of type 2 diabetes mellitus in individuals with unstable bodyweight were pooled using the inverse variance method. Eight studies were eligible for the meta-analysis. The median duration of measurements of weight change and follow-up years for ascertaining type 2 diabetes mellitus were 13.5 and 9.4 years, respectively. The pooled RR for the least vs most stable category was 1.33 (95% confidence interval 1.12-1.57). Between-study heterogeneity was statistically significant (P = 0.048). Whether type 2 diabetes mellitus was ascertained by blood testing explained 66.0% of the variance in the logarithm of RR (P = 0.02). In three studies in which blood testing was carried out, type 2 diabetes mellitus risk was not significant (RR 1.06, 95% confidence interval 0.91-1.25). Furthermore, publication bias that inflated type 2 diabetes mellitus risk was statistically detected by Egger's test (P = 0.09). Unstable bodyweight might be modestly associated with an elevated risk of type 2 diabetes mellitus, although serious biases, such as diagnostic suspicion bias and publication bias, made it difficult to assess this association. © 2017 The Authors. Journal of Diabetes Investigation published by Asian Association for the Study of Diabetes (AASD) and John Wiley & Sons Australia, Ltd.
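The inverse variance method named above works on the log scale: each study's log RR is weighted by the reciprocal of its variance, with the standard error backed out of the reported 95% confidence interval. The three (RR, CI) inputs below are invented, not the eight studies actually meta-analysed.

```python
# Fixed-effect inverse-variance pooling of relative risks (illustrative data).
from math import exp, log, sqrt

def pool_rr(studies):
    """studies: list of (rr, ci_low, ci_high); returns pooled RR and 95% CI."""
    z = 1.96
    wsum = wx = 0.0
    for rr, lo, hi in studies:
        se = (log(hi) - log(lo)) / (2 * z)   # back out SE of log(RR) from the CI
        w = 1.0 / se ** 2                    # inverse-variance weight
        wsum += w
        wx += w * log(rr)
    m = wx / wsum
    se_pooled = sqrt(1.0 / wsum)
    return exp(m), exp(m - z * se_pooled), exp(m + z * se_pooled)

studies = [(1.40, 1.05, 1.87), (1.10, 0.80, 1.51), (1.45, 1.10, 1.91)]
rr, lo, hi = pool_rr(studies)
print(f"pooled RR = {rr:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```

A random-effects variant would add a between-study variance term to each weight; the fixed-effect version shown is the simplest form of the method.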
Hierarchical atom type definitions and extensible all-atom force fields.
Jin, Zhao; Yang, Chunwei; Cao, Fenglei; Li, Feng; Jing, Zhifeng; Chen, Long; Shen, Zhe; Xin, Liang; Tong, Sijia; Sun, Huai
2016-03-15
The extensibility of a force field is key to solving the missing parameter problem commonly encountered in force field applications. The extensibility of conventional force fields is traditionally managed in the parameterization procedure, which becomes impractical as the coverage of the force field increases above a threshold. A hierarchical atom-type definition (HAD) scheme is proposed to make atom type definitions extensible, which ensures that force fields developed based on the definitions are extensible. To demonstrate how HAD works and to prepare a foundation for future developments, two general force fields based on AMBER and DFF functional forms are parameterized for common organic molecules. The force field parameters are derived from the same set of quantum mechanical data and experimental liquid data using an automated parameterization tool, and validated by calculating molecular and liquid properties. The hydration free energies are calculated successfully by introducing a polarization scaling factor to the dispersion term between the solvent and solute molecules. © 2015 Wiley Periodicals, Inc.
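The core of a hierarchical atom-type scheme is that types form a tree, so a parameter missing for a very specific type can fall back to its nearest typed ancestor. The sketch below is only an illustration of that lookup idea; the type names and parameter values are invented and are not those of the actual AMBER- or DFF-based force fields.

```python
# Illustrative hierarchical atom-type fallback (hypothetical types/parameters).

PARENT = {                     # child type -> parent type
    "C_3_OH": "C_3",           # sp3 carbon bonded to a hydroxyl group
    "C_3": "C",                # generic sp3 carbon
    "C": None,                 # root element type
}
LJ_EPSILON = {"C": 0.10, "C_3": 0.08}  # parameters defined only at some levels

def lookup(atom_type, table, parent=PARENT):
    """Return the parameter for atom_type, climbing the hierarchy if missing."""
    t = atom_type
    while t is not None:
        if t in table:
            return table[t]
        t = parent[t]
    raise KeyError(atom_type)

print(lookup("C_3_OH", LJ_EPSILON))  # no C_3_OH entry: falls back to C_3 -> 0.08
```

Because every type has a path to a root, a query never dead-ends on a missing parameter, which is what makes the resulting force field extensible by construction.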
Storz, J F; Bhat, H R; Kunz, T H
2001-06-01
Variance in reproductive success is a primary determinant of genetically effective population size (Ne), and thus has important implications for the role of genetic drift in the evolutionary dynamics of animal taxa characterized by polygynous mating systems. Here we report the results of a study designed to test the hypothesis that polygynous mating results in significantly reduced Ne in an age-structured population. This hypothesis was tested in a natural population of a harem-forming fruit bat, Cynopterus sphinx (Chiroptera: Pteropodidae), in western India. The influence of the mating system on the ratio of variance Ne to adult census number (N) was assessed using a mathematical model designed for age-structured populations that incorporated demographic and genetic data. Male mating success was assessed by means of direct and indirect paternity analysis using 10-locus microsatellite genotypes of adults and progeny from two consecutive breeding periods (n = 431 individually marked bats). Combined results from both analyses were used to infer the effective number of male parents in each breeding period. The relative proportion of successfully reproducing males and the size distribution of paternal sibships comprising each offspring cohort revealed an extremely high within-season variance in male mating success (up to 9.2 times higher than Poisson expectation). The resultant estimate of Ne/N for the C. sphinx study population was 0.42. As a result of polygynous mating, the predicted rate of drift (1/2Ne per generation) was 17.6% higher than expected from a Poisson distribution of male mating success. However, the estimated Ne/N was well within the 0.25-0.75 range expected for age-structured populations under normal demographic conditions. The life-history schedule of C. sphinx is characterized by a disproportionately short sexual maturation period scaled to adult life span. 
Consequently, the influence of polygynous mating on Ne/N is mitigated by the extensive overlap of generations. In C. sphinx, turnover of breeding males between seasons ensures a broader sampling of the adult male gamete pool than expected from the variance in mating success within a single breeding period.
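The scale of the effect can be illustrated with Wright's classic single-generation approximation, Ne = (4N - 2)/(Vk + 2) for a stable population with mean offspring number 2. This is a simplification, not the age-structured model the study actually used, and the numbers below are illustrative.

```python
# Wright's approximation for effective size under variance in offspring
# number (stable population, mean offspring number = 2). Illustrative only.

def effective_size(n, vk):
    """n: census size; vk: variance in offspring number."""
    return (4 * n - 2) / (vk + 2)

n = 500
poisson_vk = 2.0  # under Poisson reproduction, variance equals the mean (2)
print(effective_size(n, poisson_vk))        # ~ N itself
print(effective_size(n, 9.2 * poisson_vk))  # variance 9.2x Poisson, as in the bats
```

With Poisson reproduction Ne is essentially the census size, while inflating the variance 9.2-fold, the factor reported for C. sphinx males, cuts the single-generation Ne severely; overlapping generations then buffer the population-level Ne/N back toward the 0.42 the study estimated.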
Quality choice in a health care market: a mixed duopoly approach.
Sanjo, Yasuo
2009-05-01
We investigate a health care market with uncertainty in a mixed duopoly, where a partially privatized public hospital competes against a private hospital in terms of quality choice. We use a simple Hotelling-type spatial competition model by incorporating mean-variance analysis and the framework of partial privatization. We show how the variance in the quality perceived by patients affects the true quality of medical care provided by hospitals. In addition, we show that a case exists in which the quality of the partially privatized hospital becomes higher than that of the private hospital when the patient's preference for quality is relatively high.
Zhai, Xuetong; Chakraborty, Dev P
2017-06-01
The objective was to design and implement a bivariate extension to the contaminated binormal model (CBM) to fit paired receiver operating characteristic (ROC) datasets (possibly degenerate) with proper ROC curves. Paired datasets yield two correlated ratings per case. Degenerate datasets have no interior operating points, and proper ROC curves do not inappropriately cross the chance diagonal. The existing method, developed more than three decades ago, utilizes a bivariate extension to the binormal model, implemented in CORROC2 software, which yields improper ROC curves and cannot fit degenerate datasets. CBM can fit proper ROC curves to unpaired (i.e., yielding one rating per case) and degenerate datasets, and there is a clear scientific need to extend it to handle paired datasets. In CBM, nondiseased cases are modeled by a probability density function (pdf) consisting of a unit variance peak centered at zero. Diseased cases are modeled with a mixture distribution whose pdf consists of two unit variance peaks, one centered at positive μ with integrated probability α, the mixing fraction parameter, corresponding to the fraction of diseased cases where the disease was visible to the radiologist, and one centered at zero, with integrated probability (1-α), corresponding to disease that was not visible. It is shown that: (a) for nondiseased cases the bivariate extension is a unit-variance bivariate normal distribution centered at (0,0) with a specified correlation ρ1; (b) for diseased cases the bivariate extension is a mixture distribution with four peaks, corresponding to disease not visible in either condition, disease visible in only one condition, contributing two peaks, and disease visible in both conditions. An expression for the likelihood function is derived. A maximum likelihood estimation (MLE) algorithm, CORCBM, was implemented in the R programming language that yields parameter estimates and the covariance matrix of the parameters, and other statistics.
A limited simulation validation of the method was performed. CORCBM and CORROC2 were applied to two datasets containing nine readers each contributing paired interpretations. CORCBM successfully fitted the data for all readers, whereas CORROC2 failed to fit a degenerate dataset. All fits were visually reasonable. All CORCBM fits were proper, whereas all CORROC2 fits were improper. CORCBM and CORROC2 were in agreement (a) in declaring only one of the nine readers as having significantly different performances in the two modalities; (b) in estimating higher correlations for diseased cases than for nondiseased ones; and (c) in finding that the intermodality correlation estimates for nondiseased cases were consistent between the two methods. All CORCBM fits yielded higher area under curve (AUC) than the CORROC2 fits, consistent with the fact that a proper ROC model like CORCBM is based on a likelihood-ratio-equivalent decision variable, and consequently yields higher performance than the binormal model-based CORROC2. The method gave satisfactory fits to four simulated datasets. CORCBM is a robust method for fitting paired ROC datasets, always yielding proper ROC curves, and able to fit degenerate datasets. © 2017 American Association of Physicists in Medicine.
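The univariate CBM that underlies CORCBM follows directly from the densities stated above. The sketch below (an illustration derived from the abstract's parameterization, not the CORCBM code itself) computes an ROC operating point at a rating threshold z and the closed-form AUC implied by the mixture, for parameters μ and α:

```python
from math import erf, sqrt

def phi(x):
    """Standard normal CDF."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def cbm_operating_point(z, mu, alpha):
    """(FPF, TPF) for the univariate CBM at rating threshold z.

    Nondiseased: N(0,1).  Diseased: mixture alpha*N(mu,1) + (1-alpha)*N(0,1),
    as described in the abstract.
    """
    fpf = 1.0 - phi(z)
    tpf = alpha * (1.0 - phi(z - mu)) + (1.0 - alpha) * (1.0 - phi(z))
    return fpf, tpf

def cbm_auc(mu, alpha):
    """Closed-form AUC = P(X_diseased > X_nondiseased) for the mixture."""
    return alpha * phi(mu / sqrt(2.0)) + (1.0 - alpha) * 0.5

# With alpha = 0 no disease is ever visible and the curve is the chance line.
print(cbm_auc(2.0, 0.0))  # 0.5
```

Note that AUC never drops below 0.5, consistent with CBM always yielding proper ROC curves.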
Calculating stage duration statistics in multistage diseases.
Komarova, Natalia L; Thalhauser, Craig J
2011-01-01
Many human diseases are characterized by multiple stages of progression. While the typical sequence of disease progression can be identified, there may be large individual variations among patients. Identifying mean stage durations and their variations is critical for the statistical hypothesis testing needed to determine if a treatment is having a significant effect on progression, or if a new therapy is showing a delay of progression through a multistage disease. In this paper we focus on two methods for extracting stage duration statistics from longitudinal datasets: an extension of the linear regression technique, and a counting algorithm. Both are non-iterative, non-parametric and computationally cheap methods, which makes them invaluable tools for studying the epidemiology of diseases, with a goal of identifying different patterns of progression by using bioinformatics methodologies. Here we show that the regression method performs well for calculating the mean stage durations under a wide variety of assumptions; however, its generalization to variance calculations fails under realistic assumptions about the data collection procedure. The counting method, on the other hand, yields reliable estimations for both means and variances of stage durations. Applications to Alzheimer's disease progression are discussed.
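For concreteness, a minimal counting-style estimator can be sketched as follows. This is a simplified illustration of the general idea (accumulating completed per-stage durations across patients and then taking their means and variances), not the authors' exact algorithm; the record format of (time, stage) visit pairs with non-decreasing stages is an assumption:

```python
def stage_durations(patients):
    """Collect completed per-stage durations across patients.

    Each patient is a list of (time, stage) visit observations with
    non-decreasing stages.  A stay in stage s is 'completed' when a later
    visit shows a higher stage; its duration is approximated by the time
    between the first visit in s and the first visit in a later stage.
    """
    durations = {}
    for visits in patients:
        entry = {}  # stage -> time of first observed visit in that stage
        for t, s in visits:
            if s not in entry:
                entry[s] = t
                # any earlier stage with a known entry time is now complete
                for s0, t0 in list(entry.items()):
                    if s0 < s:
                        durations.setdefault(s0, []).append(t - t0)
                        del entry[s0]
    return durations

from statistics import mean, pvariance
recs = [[(0, 0), (2, 1), (5, 2)],
        [(0, 0), (4, 1), (6, 2)]]
d = stage_durations(recs)
print(d)  # {0: [2, 4], 1: [3, 2]}
print(mean(d[0]), pvariance(d[0]))
```

Stays still in progress at the last visit are simply dropped; handling such censoring is exactly where the statistical subtleties discussed in the paper arise.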
Connolly, Eric J; Schwartz, Joseph A; Nedelec, Joseph L; Beaver, Kevin M; Barnes, J C
2015-07-01
An extensive line of research has identified delinquent peer association as a salient environmental risk factor for delinquency, especially during adolescence. While previous research has found moderate-to-strong associations between exposure to delinquent peers and a variety of delinquent behaviors, comparatively less scholarship has focused on the genetic architecture of this association over the course of adolescence. Using a subsample of kinship pairs (N = 2379; 52% female) from the National Longitudinal Survey of Youth-Child and Young Adult Supplement (CNLSY), the present study examined the extent to which correlated individual differences in starting levels and developmental growth in delinquent peer pressure and self-reported delinquency were explained by additive genetic and environmental influences. Results from a series of biometric growth models revealed that 37% of the variance in correlated growth between delinquent peer pressure and self-reported delinquency was explained by additive genetic effects, while nonshared environmental effects accounted for the remaining 63% of the variance. Implications of these findings for interpreting the nexus between peer effects and adolescent delinquency are discussed.
Hierarchical multivariate covariance analysis of metabolic connectivity
Carbonell, Felix; Charil, Arnaud; Zijdenbos, Alex P; Evans, Alan C; Bedell, Barry J
2014-01-01
Conventional brain connectivity analysis is typically based on the assessment of interregional correlations. Given that correlation coefficients are derived from both covariance and variance, group differences in covariance may be obscured by differences in the variance terms. To facilitate a comprehensive assessment of connectivity, we propose a unified statistical framework that interrogates the individual terms of the correlation coefficient. We have evaluated the utility of this method for metabolic connectivity analysis using [18F]2-fluoro-2-deoxyglucose (FDG) positron emission tomography (PET) data from the Alzheimer's Disease Neuroimaging Initiative (ADNI) study. As an illustrative example of the utility of this approach, we examined metabolic connectivity in angular gyrus and precuneus seed regions of mild cognitive impairment (MCI) subjects with low and high β-amyloid burdens. This new multivariate method allowed us to identify alterations in the metabolic connectome, which would not have been detected using classic seed-based correlation analysis. Ultimately, this novel approach should be extensible to brain network analysis and broadly applicable to other imaging modalities, such as functional magnetic resonance imaging (MRI). PMID:25294129
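The motivating observation, that correlation is covariance scaled by the variances and can therefore hide covariance differences between groups, is easy to demonstrate numerically. A minimal NumPy sketch with synthetic data standing in for the FDG-PET measurements:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample(scale, n=100_000):
    """Bivariate normal sample with correlation 0.5 and std dev `scale`."""
    cov = scale**2 * np.array([[1.0, 0.5], [0.5, 1.0]])
    return rng.multivariate_normal([0.0, 0.0], cov, size=n)

# Two groups: the second has doubled standard deviations, hence ~4x the
# covariance, yet the correlation coefficient is identical by design.
a, b = sample(1.0), sample(2.0)
for g in (a, b):
    c = np.cov(g.T)
    r = c[0, 1] / np.sqrt(c[0, 0] * c[1, 1])
    print(round(c[0, 1], 2), round(r, 2))
# A correlation-only comparison would report "no group difference" here,
# which is precisely why the framework interrogates the terms separately.
```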
Radiation Transport in Random Media With Large Fluctuations
NASA Astrophysics Data System (ADS)
Olson, Aaron; Prinja, Anil; Franke, Brian
2017-09-01
Neutral particle transport in media exhibiting large and complex material property spatial variation is modeled by representing cross sections as lognormal random functions of space, generated through a nonlinear memory-less transformation of a Gaussian process with covariance uniquely determined by the covariance of the cross section. A Karhunen-Loève decomposition of the Gaussian process is implemented to efficiently generate realizations of the random cross sections, and Woodcock Monte Carlo is used to transport particles on each realization and generate benchmark solutions for the mean and variance of the particle flux as well as probability densities of the particle reflectance and transmittance. A computationally efficient stochastic collocation method is implemented to directly compute statistical moments such as the mean and variance, while a polynomial chaos expansion in conjunction with stochastic collocation provides a convenient surrogate model that also produces probability densities of output quantities of interest. Extensive numerical testing demonstrates that use of stochastic reduced-order modeling provides an accurate and cost-effective alternative to random sampling for particle transport in random media.
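A minimal sketch of the cross-section generation step (truncated Karhunen-Loève expansion of a Gaussian process followed by the memoryless exponential transform) is shown below. The exponential covariance kernel, grid, and 99% truncation level are illustrative assumptions, not the paper's settings:

```python
import numpy as np

rng = np.random.default_rng(1)

# 1-D grid and exponential covariance for the underlying Gaussian process
x = np.linspace(0.0, 1.0, 200)
corr_len, sigma = 0.2, 0.5
C = sigma**2 * np.exp(-np.abs(x[:, None] - x[None, :]) / corr_len)

# Karhunen-Loeve: keep the leading modes carrying 99% of the variance
vals, vecs = np.linalg.eigh(C)
vals = np.clip(vals, 0.0, None)          # guard tiny negative round-off
order = np.argsort(vals)[::-1]
vals, vecs = vals[order], vecs[:, order]
k = int(np.searchsorted(np.cumsum(vals) / vals.sum(), 0.99)) + 1

def realization(mean_log=0.0):
    """One lognormal cross-section realization via the truncated KL sum."""
    xi = rng.standard_normal(k)              # independent standard normals
    g = vecs[:, :k] @ (np.sqrt(vals[:k]) * xi)  # Gaussian field on the grid
    return np.exp(mean_log + g)              # memoryless lognormal transform

sigma_t = realization()  # strictly positive cross section, as required
```

Each realization would then be handed to the Woodcock Monte Carlo transport step; the low mode count k relative to the grid size is what makes the stochastic reduced-order methods tractable.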
Cabrieto, Jedelyn; Tuerlinckx, Francis; Kuppens, Peter; Hunyadi, Borbála; Ceulemans, Eva
2018-01-15
Detecting abrupt correlation changes in multivariate time series is crucial in many application fields such as signal processing, functional neuroimaging, climate studies, and financial analysis. To detect such changes, several promising correlation change tests exist, but they may suffer from severe loss of power when there is actually more than one change point underlying the data. To deal with this drawback, we propose a permutation based significance test for Kernel Change Point (KCP) detection on the running correlations. Given a requested number of change points K, KCP divides the time series into K + 1 phases by minimizing the within-phase variance. The new permutation test looks at how the average within-phase variance decreases when K increases and compares this to the results for permuted data. The results of an extensive simulation study and applications to several real data sets show that, depending on the setting, the new test performs either at par or better than the state-of-the art significance tests for detecting the presence of correlation changes, implying that its use can be generally recommended.
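The segmentation criterion, dividing the series into K + 1 phases by minimizing within-phase variance, can be illustrated with a scalar dynamic-programming analogue. Note that KCP proper operates on running correlations in a kernel-induced space; this sketch applies the same minimum-within-phase-scatter idea directly to a 1-D series:

```python
import numpy as np

def kcp_segment(y, K):
    """Split a 1-D series y into K+1 phases minimizing total within-phase
    scatter (sum over phases of squared deviations from the phase mean),
    by dynamic programming.  Returns the K change-point indices.
    """
    n = len(y)
    s1 = np.concatenate([[0.0], np.cumsum(y)])
    s2 = np.concatenate([[0.0], np.cumsum(np.square(y))])

    def cost(i, j):  # scatter of y[i:j] from prefix sums
        m = j - i
        return s2[j] - s2[i] - (s1[j] - s1[i]) ** 2 / m

    INF = float("inf")
    dp = [[INF] * (n + 1) for _ in range(K + 2)]   # dp[p][j]: y[:j] in p phases
    arg = [[0] * (n + 1) for _ in range(K + 2)]
    dp[0][0] = 0.0
    for p in range(1, K + 2):
        for j in range(p, n + 1):
            for i in range(p - 1, j):
                c = dp[p - 1][i] + cost(i, j)
                if c < dp[p][j]:
                    dp[p][j], arg[p][j] = c, i
    cps, j = [], n
    for p in range(K + 1, 0, -1):                  # backtrack phase starts
        j = arg[p][j]
        cps.append(j)
    return sorted(cps)[1:]                         # drop the leading 0

y = np.array([0.0] * 20 + [5.0] * 20 + [0.0] * 20)
print(kcp_segment(y, 2))  # [20, 40]
```

The permutation test described in the abstract would then compare how this minimized within-phase scatter decreases with K on the observed versus permuted data.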
Ming, L; Yi, L; Sa, R; Wang, Z X; Wang, Z; Ji, R
2017-04-01
The Bactrian camel includes various domestic (Camelus bactrianus) and wild (Camelus ferus) breeds that are important for transportation and for their nutritional value. However, there is a lack of extensive information on their genetic diversity and phylogeographic structure. Here, we studied these parameters by examining an 809-bp mtDNA fragment from 113 individuals, representing 11 domestic breeds, one wild breed and two hybrid individuals. We found 15 different haplotypes, and the phylogenetic analysis suggests that domestic and wild Bactrian camels have two distinct lineages. The analysis of molecular variance placed most of the genetic variance (90.14%, P < 0.01) between wild and domestic camel lineages, suggesting that domestic and wild Bactrian camels do not have the same maternal origin. The analysis of domestic Bactrian camels from different geographical locations found that there was no significant genetic divergence among populations in China, Russia and Mongolia. This suggests a strong gene flow due to wide movement of domestic Bactrian camels. © 2016 The Authors. Animal Genetics published by John Wiley & Sons Ltd on behalf of Stichting International Foundation for Animal Genetics.
Crack layer morphology and toughness characterization in steels
NASA Technical Reports Server (NTRS)
Chudnovsky, A.; Bessendorf, M.
1983-01-01
Both the macro- and micro-scale studies of crack layer propagation are presented. The crack extension resistance parameter R sub 1, based on the morphological study of microdefects, is introduced. Experimental study of the history-dependent nature of G sub c supports the representation of G sub c as a product of the specific enthalpy of damage (a material constant) and R sub 1; the latter accounts for the history dependence. The observation of nonmonotonic crack growth under monotonic changes of J, as well as statistical features of the critical energy release rate (variance of G sub c), indicates the validity of the proposed damage characterization.
Integrating Variances into an Analytical Database
NASA Technical Reports Server (NTRS)
Sanchez, Carlos
2010-01-01
For this project, I enrolled in numerous SATERN courses that taught the basics of database programming. These include: Basic Access 2007 Forms, Introduction to Database Systems, Overview of Database Design, and others. My main job was to create an analytical database that can handle many stored forms and make it easy to interpret and organize. Additionally, I helped improve an existing database and populate it with information. These databases were designed to be used with data from Safety Variances and DCR forms. The research consisted of analyzing the database and comparing the data to find out which entries were repeated the most. If an entry happened to be repeated several times in the database, that would mean that the rule or requirement targeted by that variance has been bypassed many times already and so the requirement may not really be needed, but rather should be changed to allow the variance's conditions permanently. This project did not only restrict itself to the design and development of the database system, but also worked on exporting the data from the database to a different format (e.g. Excel or Word) so it could be analyzed in a simpler fashion. Thanks to the change in format, the data was organized in a spreadsheet that made it possible to sort the data by categories or types and helped speed up searches. Once my work with the database was done, the records of variances could be arranged so that they were displayed in numerical order, or one could search for a specific document targeted by the variances and restrict the search to only include variances that modified a specific requirement. A great part that contributed to my learning was SATERN, NASA's resource for education. Thanks to the SATERN online courses I took over the summer, I was able to learn many new things about computers and databases and also go more in depth into topics I already knew about.
Briat, Corentin; Gupta, Ankit; Khammash, Mustafa
2018-06-01
The ability of a cell to regulate and adapt its internal state in response to unpredictable environmental changes is called homeostasis and this ability is crucial for the cell's survival and proper functioning. Understanding how cells can achieve homeostasis, despite the intrinsic noise or randomness in their dynamics, is fundamentally important for both systems and synthetic biology. In this context, a significant development is the proposed antithetic integral feedback (AIF) motif, which is found in natural systems, and is known to ensure robust perfect adaptation for the mean dynamics of a given molecular species involved in a complex stochastic biomolecular reaction network. From the standpoint of applications, one drawback of this motif is that it often leads to an increased cell-to-cell heterogeneity or variance when compared to a constitutive (i.e. open-loop) control strategy. Our goal in this paper is to show that this performance deterioration can be countered by combining the AIF motif and a negative feedback strategy. Using a tailored moment closure method, we derive approximate expressions for the stationary variance for the controlled network that demonstrate that increasing the strength of the negative feedback can indeed decrease the variance, sometimes even below its constitutive level. Numerical results verify the accuracy of these results and we illustrate them by considering three biomolecular networks with two types of negative feedback strategies. Our computational analysis indicates that there is a trade-off between the speed of the settling-time of the mean trajectories and the stationary variance of the controlled species; i.e. smaller variance is associated with larger settling-time. © 2018 The Author(s).
Biochemical phenotypes to discriminate microbial subpopulations and improve outbreak detection.
Galar, Alicia; Kulldorff, Martin; Rudnick, Wallis; O'Brien, Thomas F; Stelling, John
2013-01-01
Clinical microbiology laboratories worldwide constitute an invaluable resource for monitoring emerging threats and the spread of antimicrobial resistance. We studied the growing number of biochemical tests routinely performed on clinical isolates to explore their value as epidemiological markers. Microbiology laboratory results from January 2009 through December 2011 from a 793-bed hospital stored in WHONET were examined. Variables included patient location, collection date, organism, and 47 biochemical and 17 antimicrobial susceptibility test results reported by Vitek 2. To identify biochemical tests that were particularly valuable (stable with repeat testing, but good variability across the species) or problematic (inconsistent results with repeat testing), three types of variance analyses were performed on isolates of K. pneumoniae: descriptive analysis of discordant biochemical results in same-day isolates, an average within-patient variance index, and generalized linear mixed model variance component analysis. 4,200 isolates of K. pneumoniae were identified from 2,485 patients, 32% of whom had multiple isolates. The first two variance analyses highlighted SUCT, TyrA, GlyA, and GGT as "nuisance" biochemicals for which discordant within-patient test results impacted a high proportion of patient results, while dTAG had relatively good within-patient stability with good heterogeneity across the species. Variance component analyses confirmed the relative stability of dTAG, and identified additional biochemicals such as PHOS with a large between-patient to within-patient variance ratio. A reduced subset of biochemicals improved the robustness of strain definition for carbapenem-resistant K. pneumoniae. Surveillance analyses suggest that the reduced biochemical profile could improve the timeliness and specificity of outbreak detection algorithms.
The statistical approaches explored can improve the robust recognition of microbial subpopulations with routinely available biochemical test results, of value in the timely detection of outbreak clones and evolutionarily important genetic events.
Comparison of and conversion between different implementations of the FORTRAN programming language
NASA Technical Reports Server (NTRS)
Treinish, L.
1980-01-01
A guideline for computer programmers who may need to exchange FORTRAN programs between several computers is presented. The characteristics of the FORTRAN language available on three different types of computers are outlined, and procedures and other considerations for the transfer of programs from one type of FORTRAN to another are discussed. In addition, the variances of these different FORTRANs from the FORTRAN 77 standard are discussed.
Effect of train type on annoyance and acoustic features of the rolling noise.
Kasess, Christian H; Noll, Anton; Majdak, Piotr; Waubke, Holger
2013-08-01
This study investigated the annoyance associated with the rolling noise of different railway stock. Passbys of nine train types (passenger and freight trains) equipped with different braking systems were recorded. Acoustic features showed a clear distinction of the braking system with the A-weighted energy equivalent sound level (LAeq) showing a difference in the range of 10 dB between cast-iron braked trains and trains with disk or K-block brakes. Further, annoyance was evaluated in a psychoacoustic experiment where listeners rated the relative annoyance of the rolling noise for the different train types. Stimuli with and without the original LAeq differences were tested. For the original LAeq differences, the braking system significantly affected the annoyance with cast-iron brakes being most annoying, most likely as a consequence of the increased wheel roughness causing an increased LAeq. Contribution of the acoustic features to the annoyance was investigated revealing that the LAeq explained up to 94% of the variance. For the stimuli without differences in the LAeq, cast-iron braked train types were significantly less annoying and the spectral features explained up to 60% of the variance in the annoyance. The effect of these spectral features on the annoyance of the rolling noise is discussed.
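The LAeq referenced above is an energy average: level differences combine on a squared-pressure (energy) scale rather than arithmetically, which is why a 10 dB gap between braking systems dominates the annoyance regression. A small sketch, assuming the input is a sequence of equal-duration short-interval A-weighted levels in dB:

```python
from math import log10

def leq(levels_db):
    """Energy-equivalent level of equal-duration A-weighted levels (dB).

    Converts each level back to a squared-pressure ratio, averages the
    ratios, and converts the average back to dB.
    """
    ratios = [10.0 ** (level / 10.0) for level in levels_db]
    return 10.0 * log10(sum(ratios) / len(ratios))

# Two equal-length passby segments at 60 and 70 dB combine to ~67.4 dB,
# well above the 65 dB arithmetic mean: the louder segment dominates.
print(round(leq([60.0, 70.0]), 1))  # 67.4
```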
Adjusting stream-sediment geochemical maps in the Austrian Bohemian Massif by analysis of variance
Davis, J.C.; Hausberger, G.; Schermann, O.; Bohling, G.
1995-01-01
The Austrian portion of the Bohemian Massif is a Precambrian terrane composed mostly of highly metamorphosed rocks intruded by a series of granitoids that are petrographically similar. Rocks are exposed poorly and the subtle variations in rock type are difficult to map in the field. A detailed geochemical survey of stream sediments in this region has been conducted and included as part of the Geochemischer Atlas der Republik Österreich, and the variations in stream sediment composition may help refine the geological interpretation. In an earlier study, multivariate analysis of variance (MANOVA) was applied to the stream-sediment data in order to minimize unwanted sampling variation and emphasize relationships between stream sediments and rock types in sample catchment areas. The estimated coefficients were used successfully to correct for the sampling effects throughout most of the region, but also introduced an overcorrection in some areas that seems to result from consistent but subtle differences in composition of specific rock types. By expanding the model to include an additional factor reflecting the presence of a major tectonic unit, the Rohrbach block, the overcorrection is removed. This iterative process simultaneously refines both the geochemical map by removing extraneous variation and the geological map by suggesting a more detailed classification of rock types. © 1995 International Association for Mathematical Geology.
77 FR 25230 - Submission for OMB Review; Comment Request
Federal Register 2010, 2011, 2012, 2013, 2014
2012-04-27
... Number: 1545-1146. Type of Review: Extension without change of a currently approved collection. Title: TD... circumstances when the taxpayer transfer property in certain non- recognition transactions. The information is... other for-profits. Estimated Total Burden Hours: 70. OMB Number: 1545-1959. Type of Review: Extension...
Rapid Training of Information Extraction with Local and Global Data Views
2012-05-01
a relation type extension system based on active learning, a relation type extension system based on semi-supervised learning, and a cross-domain...bootstrapping system for domain-adaptive named entity extraction. The active learning procedure adopts features extracted at the sentence level as the local
Cool, Geneviève; Lebel, Alexandre; Sadiq, Rehan; Rodriguez, Manuel J
2015-12-01
The regional variability of the probability of occurrence of high total trihalomethane (TTHM) levels was assessed using multilevel logistic regression models that incorporate environmental and infrastructure characteristics. The models were structured in a three-level hierarchical configuration: samples (first level), drinking water utilities (DWUs, second level) and natural regions, an ecological hierarchical division from the Quebec ecological framework of reference (third level). They considered six independent variables: precipitation, temperature, source type, seasons, treatment type and pH. The average probability of TTHM concentrations exceeding the targeted threshold was 18.1%. The probability was influenced by seasons, treatment type, precipitation and temperature. The variance at all levels was significant, showing that the probability of TTHM concentrations exceeding the threshold is most likely to be similar if located within the same DWU and within the same natural region. However, most of the variance initially attributed to natural regions was explained by treatment types and clarified by spatial aggregation on treatment types. Nevertheless, even after controlling for treatment type, there was still significant regional variability of the probability of TTHM concentrations exceeding the threshold. Regional variability was particularly important for DWUs using chlorination alone since they lack the appropriate treatment required to reduce the amount of natural organic matter (NOM) in source water prior to disinfection. Results presented herein could be of interest to authorities in identifying regions with specific needs regarding drinking water quality and for epidemiological studies identifying geographical variations in population exposure to disinfection by-products (DBPs).
Human Factors Considerations in the Design of Systems of Computer Managed Instruction
ERIC Educational Resources Information Center
Bozeman, William C.
1978-01-01
The findings of this study indicate that a significant portion of the wide variance in the success of the implementation of the Wisconsin System for Instructional Management is attributable to the psychological type of the user. (Author/IRT)
Bioreactor Landfills State-Of-The Practice Review
Recently approved regulations by the U.S. Environmental Protection Agency (EPA) give approved states the power to grant landfill variance under Subtitle D by allowing these landfills to introduce bulk liquids into the solid waste mass. These types of landfills are called bioreac...
Criterion Predictability: Identifying Differences Between r-squares
ERIC Educational Resources Information Center
Malgady, Robert G.
1976-01-01
An analysis of variance procedure for testing differences in r-squared, the coefficient of determination, across independent samples is proposed and briefly discussed. The principal advantage of the procedure is to minimize Type I error for follow-up tests of pairwise differences. (Author/JKS)
The Relationship Between Maximum Isometric Strength and Ball Velocity in the Tennis Serve.
Baiget, Ernest; Corbi, Francisco; Fuentes, Juan Pedro; Fernández-Fernández, Jaime
2016-12-01
The aims of this study were to analyze the relationship between maximum isometric strength levels in different upper and lower limb joints and serve velocity in competitive tennis players as well as to develop a prediction model based on this information. Twelve male competitive tennis players (mean ± SD; age: 17.2 ± 1.0 years; body height: 180.1 ± 6.2 cm; body mass: 71.9 ± 5.6 kg) were tested using maximum isometric strength levels (i.e., wrist, elbow and shoulder flexion and extension; leg and back extension; shoulder external and internal rotation). Serve velocity was measured using a radar gun. Results showed a strong positive relationship between serve velocity and shoulder internal rotation (r = 0.67; p < 0.05). Low to moderate correlations were also found between serve velocity and wrist, elbow and shoulder flexion - extension, leg and back extension and shoulder external rotation (r = 0.36 - 0.53; p = 0.377 - 0.054). Bivariate and multivariate models for predicting serve velocity were developed, with shoulder flexion and internal rotation explaining 55% of the variance in serve velocity (r = 0.74; p < 0.001). The maximum isometric strength level in shoulder internal rotation was strongly related to serve velocity, and a large part of the variability in serve velocity was explained by the maximum isometric strength levels in shoulder internal rotation and shoulder flexion.
Boudreau, François; Godin, Gaston
2014-12-01
Most people with type 2 diabetes do not engage in regular leisure-time physical activity. The theory of planned behavior and the moral norm construct can enhance our understanding of physical activity intention and behavior among this population. This study aims to identify the determinants of both intention and behavior to participate in regular leisure-time physical activity among individuals with type 2 diabetes who do not meet Canada's physical activity guidelines. Using secondary data analysis of a randomized computer-tailored print-based intervention, participants (n = 200) from the province of Quebec (Canada) completed and returned a baseline questionnaire measuring their attitude, perceived behavioral control, and moral norm. One month later, they self-reported their level of leisure-time physical activity. A hierarchical regression equation showed that attitude (beta = 0.10, P < 0.05), perceived behavioral control (beta = 0.37, P < 0.001), and moral norm (beta = 0.45, P < 0.001) were significant determinants of intention, with the final model explaining 63% of the variance. In terms of behavioral prediction, intention (beta = 0.34, P < 0.001) and perceived behavioral control (beta = 0.16, P < 0.05) added 17% to the variance, after controlling for the effects of the experimental condition (R² = 0.04, P < 0.05) and past participation in leisure-time physical activity (R² = 0.22, P < 0.001). The final model explained 43% of the behavioral variance. Finally, the bootstrapping procedure indicated that the influence of moral norm on behavior was mediated by intention and perceived behavioral control. The determinants investigated offered an excellent starting point for designing appropriate counseling messages to promote leisure-time physical activity among individuals with type 2 diabetes.
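The hierarchical (blockwise) regression logic used here, entering control variables first and then reading off the increment in explained variance when the theoretical constructs are added, can be sketched on synthetic data. The variable names and coefficients below are illustrative assumptions, not the study's data:

```python
import numpy as np

def r2(X, y):
    """R^2 of an ordinary least-squares fit with an intercept."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    return 1.0 - resid @ resid / ((y - y.mean()) @ (y - y.mean()))

rng = np.random.default_rng(2)
n = 200
controls = rng.standard_normal((n, 2))  # e.g. condition, past behavior
block2 = rng.standard_normal((n, 2))    # e.g. intention, perceived control
y = controls @ [0.3, 0.5] + block2 @ [0.4, 0.2] + rng.standard_normal(n)

r2_step1 = r2(controls, y)                               # controls only
r2_step2 = r2(np.column_stack([controls, block2]), y)    # full model
print(round(r2_step2 - r2_step1, 3))  # variance added by block 2
```

The printed difference is the analogue of the "added 17% to the variance" figure reported in the abstract.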
DIFFERENCES IN MENTAL HEALTH AND SEXUAL OUTCOMES BASED ON TYPE OF NONCONSENUAL SEXUAL PENETRATION
Pinsky, Hanna T.; Shepard, Molly E.; Bird, Elizabeth R.; Gilmore, Amanda K.; Norris, Jeanette; Davis, Kelly Cue; George, William H.
2016-01-01
Little is known about how outcomes vary with the type of penetration involved in rape: oral, vaginal, and/or anal. The current study examined associations between type of rape and mental and sexual health symptoms in 865 community women. All penetration types were positively associated with negative mental and sexual health symptoms. Oral and/or anal rape accounted for additional variance in anxiety, depression, some trauma-related symptoms, and dysfunctional sexual behavior beyond that associated with vaginal rape alone. Findings suggest that penetration type can be an important facet of a rape experience and may be useful to assess in research and clinical settings. PMID:27486127
High-efficiency induction motor drives using type-2 fuzzy logic
NASA Astrophysics Data System (ADS)
Khemis, A.; Benlaloui, I.; Drid, S.; Chrifi-Alaoui, L.; Khamari, D.; Menacer, A.
2018-03-01
In this work we propose an algorithm for improving the efficiency of an induction motor using type-2 fuzzy logic. Vector control is used to control this motor because of the high performance of this strategy. The type-2 fuzzy logic regulators are developed to obtain the optimal rotor flux for each torque load by minimizing the copper losses. We compared the performance of our type-2 fuzzy algorithm with the type-1 fuzzy algorithm proposed in the literature. The proposed algorithm was successfully tested on the dSPACE DS1104 system, even in the presence of parameter variation.
Terblanche, John S.; Chown, Steven L.
2006-01-01
Summary Recent reviews of the adaptive hypotheses for animal responses to acclimation have highlighted the importance of distinguishing between developmental and adult (non-developmental) phenotypic plasticity. However, little work has been undertaken separating the effects of developmental plasticity from adult acclimation in physiological traits. Therefore, we investigate the relative contributions of these two distinct forms of plasticity to the environmental physiology of adult tsetse flies by exposing developing pupae or adult flies to different temperatures and comparing their responses. We also exposed flies to different temperatures during development and re-exposed them as adults to the same temperatures to investigate possible cumulative effects. Critical thermal maxima were relatively inflexible in response to acclimation temperatures (21, 25, 29 °C) with plasticity type accounting for the majority of the variation (49-67 %, nested ANOVA). By contrast, acclimation had a larger effect on critical thermal minima with treatment temperature accounting for most of the variance (84-92 %). Surprisingly little of the variance in desiccation rate could be explained by plasticity type (30-47 %). The only significant effect of acclimation on standard (resting) metabolic rate of adult flies occurred in response to 21 °C, resulting in treatment temperature, rather than plasticity type, accounting for the majority of the variance (30-76 %). This study demonstrates that the stage at which acclimation takes place has significant, though often different effects on several adult physiological traits in G. pallidipes, and therefore that it is not only important to consider the form of plasticity but also the direction of the response and its significance from a life-history perspective. PMID:16513933
Appelbaum, Liat; Sosna, Jacob; Pearson, Robert; Perez, Sarah; Nissenbaum, Yizhak; Mertyna, Pawel; Libson, Eugene; Goldberg, S Nahum
2010-02-01
To prospectively optimize multistep algorithms for largest available multitined radiofrequency (RF) electrode system in ex vivo and in vivo tissues, to determine best energy parameters to achieve large predictable target sizes of coagulation, and to compare these algorithms with manufacturer's recommended algorithms. Institutional animal care and use committee approval was obtained for the in vivo portion of this study. Ablation (n = 473) was performed in ex vivo bovine liver; final tine extension was 5-7 cm. Variables in stepped-deployment RF algorithm were interrogated and included initial current ramping to 105 degrees C (1 degrees C/0.5-5.0 sec), the number of sequential tine extensions (2-7 cm), and duration of application (4-12 minutes) for final two to three tine extensions. Optimal parameters to achieve 5-7 cm of coagulation were compared with recommended algorithms. Optimal settings for 5- and 6-cm final tine extensions were confirmed in in vivo perfused bovine liver (n = 14). Multivariate analysis of variance and/or paired t tests were used. Mean RF ablation zones of 5.1 cm +/- 0.2 (standard deviation), 6.3 cm +/- 0.4, and 7 cm +/- 0.3 were achieved with 5-, 6-, and 7-cm final tine extensions in a mean of 19.5 min +/- 0.5, 27.9 min +/- 6, and 37.1 min +/- 2.3, respectively, at optimal settings. With these algorithms, size of ablation at 6- and 7-cm tine extension significantly increased from mean of 5.4 cm +/- 0.4 and 6.1 cm +/- 0.6 (manufacturer's algorithms) (P <.05, both comparisons); two recommended tine extensions were eliminated. In vivo confirmation produced mean diameter in specified time: 5.5 cm +/- 0.4 in 18.5 min +/- 0.5 (5-cm extensions) and 5.7 cm +/- 0.2 in 21.2 min +/- 0.6 (6-cm extensions). Large zones of coagulation of 5-7 cm can be created with optimized RF algorithms that help reduce number of tine extensions compared with manufacturer's recommendations. 
Such algorithms are likely to facilitate the utility of these devices for RF ablation of focal tumors in clinical practice. (c) RSNA, 2010.
Is academic buoyancy anything more than adaptive coping?
Putwain, David W; Connors, Liz; Symes, Wendy; Douglas-Osborn, Erica
2012-05-01
Academic buoyancy refers to a positive, constructive, and adaptive response to the types of challenges and setbacks experienced in a typical and everyday academic setting. In this project we examined whether academic buoyancy explained any additional variance in test anxiety over and above that explained by coping. Two hundred and ninety-eight students in their final two years of compulsory schooling completed self-report measures of academic buoyancy, coping, and test anxiety. Results suggested that buoyancy was inversely related to test anxiety and unrelated to coping. With the exception of test-irrelevant thoughts, test anxiety was positively related to avoidance coping and social support. Test-irrelevant thoughts were inversely related to task focus, unrelated to social support, and positively related to avoidance. A hierarchical regression analysis showed that academic buoyancy explained a significant additional proportion of variance in test anxiety when the variance for coping had already been accounted for. These findings suggest that academic buoyancy can be considered as a distinct construct from that of adaptive coping.
Olthuis, Janine V; Watt, Margo C; Stewart, Sherry H
2014-03-01
Anxiety sensitivity (AS) has been implicated in the development and maintenance of a range of mental health problems. The development of the Anxiety Sensitivity Index-3, a psychometrically sound index of AS, has provided the opportunity to better understand how the lower-order factors of AS - physical, cognitive, and social concerns - are associated with unique forms of psychopathology. The present study investigated these associations among 85 treatment-seeking adults with high AS. Participants completed measures of AS, anxiety, and depression. Multiple regression analyses controlling for other emotional disorder symptoms revealed unique associations between AS subscales and certain types of psychopathology. Only physical concerns predicted unique variance in panic, only cognitive concerns predicted unique variance in depressive symptoms, and social anxiety was predicted only by social concerns. Findings emphasize the importance of considering the multidimensional nature of AS in understanding its role in anxiety and depression and their treatment. Copyright © 2013 Elsevier Ltd. All rights reserved.
Influence of exercise on visceral pain: an explorative study in healthy volunteers
van Weerdenburg, Laura JGM; Brock, Christina; Drewes, Asbjørn Mohr; van Goor, Harry; de Vries, Marjan; Wilder-Smith, Oliver HG
2017-01-01
Background and objectives Contradictory results have been found about the effect of different exercise modalities on pain. The aim of this study was to investigate the early effects of aerobic and isometric exercise on different types of experimental pain, including visceral pain, compared to an active control condition. Methods Fifteen healthy subjects (6 women, mean [standard deviation] age 25 [6.5] years) completed 3 interventions consisting of 20 minutes of aerobic cycling, 12 minutes of isometric knee extension and a deep breathing procedure as active control. At baseline and after each intervention, psychophysical tests were performed, including electrical stimulation of the esophagus, pressure pain thresholds and the cold pressor test as a measure for conditioned pain modulation. Participants completed the Medical Outcome Study Short-Form 36 and State-Trait Anxiety Inventory prior to the experiments. Data were analyzed using two-way repeated measures analysis of variance. Results No significant differences were found for the psychophysical tests after the interventions, compared to baseline pain tests and the control condition. Conclusion No hypoalgesic effect of aerobic and isometric exercise was found. The evidence for exercise-induced hypoalgesia appears to be not as consistent as initially thought, and caution is recommended when interpreting the effects of exercise on pain. PMID:28096689
Cain, Meghan K; Zhang, Zhiyong; Yuan, Ke-Hai
2017-10-01
Nonnormality of univariate data has been extensively examined previously (Blanca et al., Methodology: European Journal of Research Methods for the Behavioral and Social Sciences, 9(2), 78-84, 2013; Micceri, Psychological Bulletin, 105(1), 156, 1989). However, less is known about the potential nonnormality of multivariate data, although multivariate analysis is commonly used in psychological and educational research. Using univariate and multivariate skewness and kurtosis as measures of nonnormality, this study examined 1,567 univariate distributions and 254 multivariate distributions collected from authors of articles published in Psychological Science and the American Education Research Journal. We found that 74% of univariate distributions and 68% of multivariate distributions deviated from normal distributions. In a simulation study using typical values of skewness and kurtosis that we collected, we found that the resulting Type I error rates were as high as 17% in a t-test and 30% in a factor analysis under some conditions. Hence, we argue that it is time to routinely report skewness and kurtosis along with other summary statistics such as means and variances. To facilitate future reporting of skewness and kurtosis, we provide a tutorial on how to compute univariate and multivariate skewness and kurtosis in SAS, SPSS, R, and a newly developed Web application.
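The univariate measures discussed in this abstract are the standardized third and fourth moments. A minimal plain-Python sketch using population-moment formulas (the multivariate Mardia-style versions covered by the paper's tutorial are omitted here):

```python
def skewness(xs):
    """Moment-based skewness: E[(x - mean)^3] / sd^3 (0 for symmetric data)."""
    n = len(xs)
    m = sum(xs) / n
    s2 = sum((x - m) ** 2 for x in xs) / n  # population variance
    return sum((x - m) ** 3 for x in xs) / n / s2 ** 1.5

def kurtosis(xs):
    """Excess kurtosis: E[(x - mean)^4] / sd^4 - 3 (0 for a normal)."""
    n = len(xs)
    m = sum(xs) / n
    s2 = sum((x - m) ** 2 for x in xs) / n
    return sum((x - m) ** 4 for x in xs) / n / s2 ** 2 - 3.0

right_skewed = [1.0, 2.0, 2.0, 3.0, 10.0]
print(skewness(right_skewed) > 0)  # → True: heavy right tail
```

Positive excess kurtosis (leptokurtic, heavy-tailed data) is precisely the condition under which the simulated Type I error rates inflate.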
Prediction of Imagined Single-Joint Movements in a Person with High Level Tetraplegia
Simeral, John D.; Donoghue, John P.; Hochberg, Leigh R.; Kirsch, Robert F.
2013-01-01
Cortical neuroprostheses for movement restoration require developing models for relating neural activity to desired movement. Previous studies have focused on correlating single-unit activities (SUA) in primary motor cortex to volitional arm movements in able-bodied primates. The extent of the cortical information relevant to arm movements remaining in severely paralyzed individuals is largely unknown. We record intracortical signals using a microelectrode array chronically implanted in the precentral gyrus of a person with tetraplegia, and estimate positions of imagined single-joint arm movements. Using visually guided motor imagery, the participant imagined performing eight distinct single-joint arm movements while SUA, multi-spike trains (MSP), multi-unit activity (MUA), and local field potential time (LFPrms) and frequency signals (LFPstft) were recorded. Using linear system identification, imagined joint trajectories were estimated with 20 – 60% variance explained, with wrist flexion/extension predicted the best and pronation/supination the poorest. Statistically, decoding of MSP and LFPstft yielded estimates that equaled those of SUA. Including multiple signal types in a decoder increased prediction accuracy in all cases. We conclude that signals recorded from a single restricted region of the precentral gyrus in this person with tetraplegia contained useful information regarding the intended movements of upper extremity joints. PMID:22851229
The Petersen-Lincoln estimator and its extension to estimate the size of a shared population.
Chao, Anne; Pan, H-Y; Chiang, Shu-Chuan
2008-12-01
The Petersen-Lincoln estimator has been used to estimate the size of a population in a single mark release experiment. However, the estimator is not valid when the capture sample and recapture sample are not independent. We provide an intuitive interpretation for "independence" between samples based on 2 x 2 categorical data formed by capture/non-capture in each of the two samples. From the interpretation, we review a general measure of "dependence" and quantify the correlation bias of the Petersen-Lincoln estimator when two types of dependences (local list dependence and heterogeneity of capture probability) exist. An important implication in the census undercount problem is that instead of using a post enumeration sample to assess the undercount of a census, one should conduct a prior enumeration sample to avoid correlation bias. We extend the Petersen-Lincoln method to the case of two populations. This new estimator of the size of the shared population is proposed and its variance is derived. We discuss a special case where the correlation bias of the proposed estimator due to dependence between samples vanishes. The proposed method is applied to a study of the relapse rate of illicit drug use in Taiwan. ((c) 2008 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim).
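The classical single-population estimator that this abstract builds on is compact enough to sketch; the paper's shared-population extension and its variance formula are not reproduced here, and Chapman's variant is included only as a commonly used companion:

```python
def petersen_lincoln(n1, n2, m):
    """Classical Petersen-Lincoln abundance estimate N = n1 * n2 / m.

    n1: animals marked in the first (capture) sample
    n2: size of the second (recapture) sample
    m:  marked animals seen again in the second sample
    Valid only when the two samples are independent; dependence
    introduces the correlation bias discussed in the abstract.
    """
    if m == 0:
        raise ValueError("no recaptures: estimate is undefined")
    return n1 * n2 / m

def chapman(n1, n2, m):
    """Chapman's modification, nearly unbiased when m is small."""
    return (n1 + 1) * (n2 + 1) / (m + 1) - 1

print(petersen_lincoln(100, 80, 20))  # → 400.0
```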
Comparative study on the gut microbiotas of four economically important Asian carp species.
Li, Xinghao; Yu, Yuhe; Li, Chang; Yan, Qingyun
2018-05-07
Gut microbiota of four economically important Asian carp species (silver carp, Hypophthalmichthys molitrix; bighead carp, Hypophthalmichthys nobilis; grass carp, Ctenopharyngodon idella; common carp, Cyprinus carpio) were compared using 16S rRNA gene pyrosequencing. Analysis of more than 590,000 quality-filtered sequences obtained from the foregut, midgut and hindgut of these four carp species revealed high microbial diversity among the samples. The foregut samples of grass carp exhibited more than 1,600 operational taxonomic units (OTUs) and the highest alpha-diversity index, followed by the silver carp foregut and midgut. Proteobacteria, Firmicutes, Bacteroidetes and Fusobacteria were the predominant phyla regardless of fish species or gut type. Pairwise (weighted) UniFrac distance-based permutational multivariate analysis of variance with fish species as a factor produced a significant association (P<0.01). The gut microbiotas of all four carp species harbored saccharolytic or proteolytic microbes, likely in response to the differences in their feeding habits. In addition, extensive variations were also observed even within the same fish species. Our results indicate that the gut microbiotas of Asian carp depend on the exact species, even when the different species were cohabiting in the same environment. This study provides some new insights into developing commercial fish feeds and improving existing aquaculture strategies.
Stoffenmanager exposure model: company-specific exposure assessments using a Bayesian methodology.
van de Ven, Peter; Fransman, Wouter; Schinkel, Jody; Rubingh, Carina; Warren, Nicholas; Tielemans, Erik
2010-04-01
The web-based tool "Stoffenmanager" was initially developed to assist small- and medium-sized enterprises in the Netherlands to make qualitative risk assessments and to provide advice on control at the workplace. The tool uses a mechanistic model to arrive at a "Stoffenmanager score" for exposure. In a recent study it was shown that variability in exposure measurements given a certain Stoffenmanager score is still substantial. This article discusses an extension to the tool that uses a Bayesian methodology for quantitative workplace/scenario-specific exposure assessment. This methodology allows for real exposure data observed in the company of interest to be combined with the prior estimate (based on the Stoffenmanager model). The output of the tool is a company-specific assessment of exposure levels for a scenario for which data is available. The Bayesian approach provides a transparent way of synthesizing different types of information and is especially preferred in situations where available data is sparse, as is often the case in small- and medium sized-enterprises. Real-world examples as well as simulation studies were used to assess how different parameters such as sample size, difference between prior and data, uncertainty in prior, and variance in the data affect the eventual posterior distribution of a Bayesian exposure assessment.
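The idea of combining a model-based prior with company measurements can be illustrated with a conjugate normal update on log-transformed exposure. This is only a schematic stand-in for the tool's actual methodology; the function name and all numbers below are illustrative:

```python
def posterior_log_exposure(prior_mean, prior_var, log_obs, data_var):
    """Conjugate normal update for the mean log-exposure.

    prior_mean, prior_var: prior belief about the log geometric mean
        (in the tool, this would derive from the Stoffenmanager score).
    log_obs:  log-transformed exposure measurements from the company.
    data_var: assumed known within-scenario variance of log exposure.
    """
    n = len(log_obs)
    post_var = 1.0 / (1.0 / prior_var + n / data_var)
    post_mean = post_var * (prior_mean / prior_var + sum(log_obs) / data_var)
    return post_mean, post_var

# Three measurements pull the posterior toward their mean and
# shrink its variance below the prior's.
post_mean, post_var = posterior_log_exposure(0.0, 1.0, [1.0, 1.2, 0.8], 0.5)
print(round(post_mean, 3), round(post_var, 3))  # → 0.857 0.143
```

With few data points the prior dominates; as measurements accumulate the posterior tracks the data, which is the behavior that makes the approach attractive for data-sparse small enterprises.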
Reduced Pain Sensation and Reduced BOLD Signal in Parietofrontal Networks during Religious Prayer
Elmholdt, Else-Marie; Skewes, Joshua; Dietz, Martin; Møller, Arne; Jensen, Martin S.; Roepstorff, Andreas; Wiech, Katja; Jensen, Troels S.
2017-01-01
Previous studies suggest that religious prayer can alter the experience of pain via expectation mechanisms. While brain processes related to other types of top-down modulation of pain have been studied extensively, no research has been conducted on the potential effects of active religious coping. Here, we aimed at investigating the neural mechanisms during pain modulation by prayer and their dependency on the opioidergic system. Twenty-eight devout Protestants performed religious prayer and a secular contrast prayer during painful electrical stimulation in two fMRI sessions. Naloxone or saline was administered prior to scanning. Results show that pain intensity was reduced by 11% and pain unpleasantness by 26% during religious prayer compared to secular prayer. Expectancy predicted large amounts (70–89%) of the variance in pain intensity. Neuroimaging results revealed reduced neural activity during religious prayer in a large parietofrontal network relative to the secular condition. Naloxone had no significant effect on ratings or neural activity. Our results thus indicate that, under these conditions, pain modulation by prayer is not opioid-dependent. Further studies should employ an optimized design to explore whether reduced engagement of the frontoparietal system could indicate that prayer may attenuate pain through a reduction in processing of pain stimulus saliency and prefrontal control rather than through known descending pain inhibitory systems. PMID:28701940
NASA Astrophysics Data System (ADS)
Miller, G. R.; Gou, S.; Ferguson, I. M.; Maxwell, R. M.
2011-12-01
Savanna ecosystems present a well-known modeling challenge; understory grasses and overstory woody vegetation combine to form an open, heterogeneous canopy that creates strong spatial differences in soil moisture and evapotranspiration rates. In this analysis, we used ParFlow.CLM to create a stand-scale model of the Tonzi Ranch oak savanna, based on extensive topography, vegetation, soil, and hydrogeology data collected at the site. Measurements included canopy distribution and ground surface elevation from airborne Lidar, depth to groundwater from deep piezometers, soil and rock hydraulic conductivity, and leaf area index. We then compared the results to the site's long-term data records of radiative flux partitioning, obtained using the eddy-covariance method, and soil moisture, collected via a distributed network of capacitance probes. In order to obtain good agreement between the measured and modeled values, we identified several necessary modifications to the current CLM parameterization. These changes included the addition of a "winter grass" type and the alteration of the root structure and water stress functions to accommodate uptake of groundwater by deep roots. Finally, we compared variograms of site parameters and response variables and performed a scaling analysis relating ET and soil moisture variance to sampling size.
Action potential amplitude as a noninvasive indicator of motor unit-specific hypertrophy.
Pope, Zachary K; Hester, Garrett M; Benik, Franklin M; DeFreitas, Jason M
2016-05-01
Skeletal muscle fibers hypertrophy in response to strength training, with type II fibers generally demonstrating the greatest plasticity in regards to cross-sectional area (CSA). However, assessing fiber type-specific CSA in humans requires invasive muscle biopsies. With advancements in the decomposition of surface electromyographic (sEMG) signals recorded using multichannel electrode arrays, the firing properties of individual motor units (MUs) can now be detected noninvasively. Since action potential amplitude (APSIZE) has a documented relationship with muscle fiber size, as well as with its parent MU's recruitment threshold (RT) force, our purpose was to examine if MU APSIZE, as a function of its RT (i.e., the size principle), could potentially be used as a longitudinal indicator of MU-specific hypertrophy. By decomposing the sEMG signals from the vastus lateralis muscle of 10 subjects during maximal voluntary knee extensions, we noninvasively assessed the relationship between MU APSIZE and RT before and immediately after an 8-wk strength training intervention. In addition to significant increases in muscle size and strength (P < 0.02), our data show that training elicited an increase in MU APSIZE of high-threshold MUs. Additionally, a large portion of the variance (83.6%) in the change in each individual's relationship between MU APSIZE and RT was explained by training-induced changes in whole muscle CSA (obtained via ultrasonography). Our findings suggest that the noninvasive, electrophysiological assessment of longitudinal changes to MU APSIZE appears to reflect hypertrophy specific to MUs across the RT continuum. Copyright © 2016 the American Physiological Society.
ERIC Educational Resources Information Center
Rivera, William M.
This overview is composed of four major sections. Part I is a map of agricultural extension's "territory," that is, the definitions and systems. It discusses extension functions in agricultural production institutions and varying institutional settings, describes types of extension systems, and considers farmers' degree of influence on extension…
Macros for Educational Research.
ERIC Educational Resources Information Center
Woodrow, Janice E. J.
1988-01-01
Describes the design and operation of two macros written in the programming language of Microsoft's EXCEL for educational research applications. The first macro determines the frequency of responses to a Likert-type questionnaire or multiple-choice test; the second performs a one-way analysis of variance test. (Author/LRW)
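The two macros' tasks translate directly into a few lines in any language; a plain-Python sketch of both (illustrative only, not the original EXCEL macro code):

```python
from collections import Counter

def response_frequencies(responses):
    """First macro's task: frequency of each Likert/multiple-choice option."""
    return dict(Counter(responses))

def one_way_anova_F(groups):
    """Second macro's task: the one-way ANOVA F statistic.

    groups: list of lists of numeric observations, one list per group.
    """
    k = len(groups)
    N = sum(len(g) for g in groups)
    grand = sum(sum(g) for g in groups) / N
    means = [sum(g) / len(g) for g in groups]
    ss_between = sum(len(g) * (m - grand) ** 2 for g, m in zip(groups, means))
    ss_within = sum((x - m) ** 2 for g, m in zip(groups, means) for x in g)
    return (ss_between / (k - 1)) / (ss_within / (N - k))

print(response_frequencies(["A", "A", "B"]))  # → {'A': 2, 'B': 1}
print(one_way_anova_F([[1.0, 2.0, 3.0], [2.0, 3.0, 4.0]]))  # → 1.5
```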
NASA Technical Reports Server (NTRS)
Craig, R. G. (Principal Investigator)
1983-01-01
Richmond, Virginia and Denver, Colorado were study sites in an effort to determine the effect of autocorrelation on the accuracy of a parallelepiped classifier of LANDSAT digital data. The autocorrelation was assumed to decay to insignificant levels when sampled at distances of at least ten pixels. Spectral themes developed using blocks of adjacent pixels, and using groups of pixels spaced at least 10 pixels apart, were used. Effects of geometric distortions were minimized by using only pixels from the interiors of land cover sections. Accuracy was evaluated for three classes: agriculture, residential, and "all other"; both type 1 and type 2 errors were evaluated by means of overall classification accuracy. All classes give comparable results. Accuracy is approximately the same in both techniques; however, the variance in accuracy is significantly higher using the themes developed from autocorrelated data. The vectors of mean spectral response were nearly identical regardless of sampling method used. The estimated variances were much larger when using autocorrelated pixels.
Hansen, John P
2003-01-01
Healthcare quality improvement professionals need to understand and use inferential statistics to interpret sample data from their organizations. In quality improvement and healthcare research studies all the data from a population often are not available, so investigators take samples and make inferences about the population by using inferential statistics. This three-part series will give readers an understanding of the concepts of inferential statistics as well as the specific tools for calculating confidence intervals for samples of data. This article, Part 1, presents basic information about data including a classification system that describes the four major types of variables: continuous quantitative variable, discrete quantitative variable, ordinal categorical variable (including the binomial variable), and nominal categorical variable. A histogram is a graph that displays the frequency distribution for a continuous variable. The article also demonstrates how to calculate the mean, median, standard deviation, and variance for a continuous variable.
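The summary statistics named at the end of this abstract are simple to compute; a short sketch using the sample (n - 1) variance convention:

```python
import math

def describe(values):
    """Mean, median, variance, and standard deviation of a continuous variable."""
    xs = sorted(values)
    n = len(xs)
    mean = sum(xs) / n
    mid = n // 2
    # Median: middle value, or average of the two middle values for even n.
    median = xs[mid] if n % 2 else (xs[mid - 1] + xs[mid]) / 2
    variance = sum((x - mean) ** 2 for x in xs) / (n - 1)  # sample variance
    return {"mean": mean, "median": median,
            "variance": variance, "sd": math.sqrt(variance)}

stats = describe([2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0])
print(stats["mean"], stats["median"])  # → 5.0 4.5
```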
Design of instructions for evacuating disabled adults.
Boyce, Michael W; Al-Awar Smither, Janan; Fisher, Daniel O; Hancock, P A
2017-01-01
We investigated how the design of instructions can affect performance in preparing emergency stair travel devices for the evacuation of disabled individuals. We had three hypotheses: 1) Design of instructions would account for a significant portion of explained performance variance, 2) Improvements in design of instructions would reduce time on task across device type and age group, and 3) There would be a performance decrement for older adults compared to younger adults, based on the slower information processing of older adults. Results showed that design of instructions does indeed account for a large portion of explained variance in the operation of emergency stair travel devices, and that improvements in design of instructions can reduce time on task across device type and age group. However, encouragingly for real-world operations, results did not indicate any significant differences between older and younger adults. We aim to explore ways in which these insights can be applied to make emergency stair travel devices more usable for individuals with disabilities. Copyright © 2016 Elsevier Ltd. All rights reserved.
Burmeister Getz, E; Carroll, K J; Mielke, J; Benet, L Z; Jones, B
2017-03-01
We previously demonstrated pharmacokinetic differences among manufacturing batches of a US Food and Drug Administration (FDA)-approved dry powder inhalation product (Advair Diskus 100/50) large enough to establish between-batch bio-inequivalence. Here, we provide independent confirmation of pharmacokinetic bio-inequivalence among Advair Diskus 100/50 batches, and quantify residual and between-batch variance component magnitudes. These variance estimates are used to consider the type I error rate of the FDA's current two-way crossover design recommendation. When between-batch pharmacokinetic variability is substantial, the conventional two-way crossover design cannot accomplish the objectives of FDA's statistical bioequivalence test (i.e., cannot accurately estimate the test/reference ratio and associated confidence interval). The two-way crossover, which ignores between-batch pharmacokinetic variability, yields an artificially narrow confidence interval on the product comparison. The unavoidable consequence is type I error rate inflation, to ∼25%, when between-batch pharmacokinetic variability is nonzero. This risk of a false bioequivalence conclusion is substantially higher than asserted by regulators as acceptable consumer risk (5%). © 2016 The Authors Clinical Pharmacology & Therapeutics published by Wiley Periodicals, Inc. on behalf of The American Society for Clinical Pharmacology and Therapeutics.
Worrall, T A; Schmeckpeper, B J; Corvera, J S; Cotter, R J
2000-11-01
The primer oligomer base extension (PROBE) reaction, combined with matrix-assisted laser desorption/ionization time-of-flight mass spectrometry, is used to characterize HLA-DR2 polymorphism. Alleles are distinguished rapidly and accurately by measuring the mass of primer extension products at every known variable region of HLA-DR2 alleles. Since differentiation of alleles by PROBE relies on measuring differences in extension product mass rather than differences in hybridization properties, mistyped alleles resulting from nonspecific hybridization are absent. The method shows considerable potential for high-throughput screening of HLA-DR polymorphism in a chip-based format, including rapid tissue typing of unrelated volunteer donors.
Hitting rock bottom: morphological responses of bedrock-confined streams to a catastrophic flood
NASA Astrophysics Data System (ADS)
Baggs Sargood, M.; Cohen, T. J.; Thompson, C. J.; Croke, J.
2015-06-01
The role of extreme events in shaping the Earth's surface is one that has held the interests of Earth scientists for centuries. A catastrophic flood in a tectonically quiescent setting in eastern Australia in 2011 provides valuable insight into how semi-alluvial channels respond to such events. Field survey data (3 reaches) and desktop analyses (10 reaches) with catchment areas ranging from 0.5 to 168 km2 show that the predicted discharge for the 2011 event ranged from 415 to 933 m3 s-1, with unit stream power estimates of up to 1077 W m-2. Estimated entrainment relationships predict the mobility of the entire grain-size population, and field data suggest the localised mobility of boulders up to 4.8 m in diameter. Analysis of repeat lidar data demonstrates that all reaches (field and desktop) were areas of net degradation via extensive scouring of coarse-grained alluvium, with a strong positive relationship between catchment area and normalised erosion (R2 = 0.72-0.74). The extensive scouring in the 2011 flood decreased thalweg variance significantly, removing previous step pools and other coarse-grained in-channel units and forming lengths of plane-bed (cobble) reach morphology. This was also accompanied by the exposure of planar bedrock surfaces, marginal bedrock straths and bedrock steps. Post-flood field data indicate a slight increase in thalweg variance as a result of the smaller 2013 flood rebuilding the alluvial overprint with pool-riffle formation. However, the current form and distribution of channel morphological units does not conform to previous classifications of bedrock or headwater river systems. This variation in post-flood form indicates that in semi-alluvial systems extreme events are significant for resetting the morphology of in-channel units and for exposing the underlying lithology to ongoing erosion.
Russo, Russell R; Burn, Matthew B; Ismaily, Sabir K; Gerrie, Brayden J; Han, Shuyang; Alexander, Jerry; Lenherr, Christopher; Noble, Philip C; Harris, Joshua D; McCulloch, Patrick C
2017-09-07
Accurate measurements of knee and hip motion are required for management of musculoskeletal pathology. The purpose of this investigation was to compare three techniques for measuring motion at the hip and knee. The authors hypothesized that digital photography would be equivalent in accuracy and show higher precision compared to the other two techniques. Using infrared motion capture analysis as the reference standard, hip flexion/abduction/internal rotation/external rotation and knee flexion/extension were measured using visual estimation, goniometry, and photography on 10 fresh frozen cadavers. These measurements were performed by three physical therapists and three orthopaedic surgeons. Accuracy was defined by the difference from the reference standard, while precision was defined by the proportion of measurements within either 5° or 10°. Analysis of variance (ANOVA), t-tests, and chi-squared tests were used. Although two statistically significant differences were found in measurement accuracy between the three techniques, neither of these differences met clinical significance (difference of 1.4° for hip abduction and 1.7° for the knee extension). Precision of measurements was significantly higher for digital photography than: (i) visual estimation for hip abduction and knee extension, and (ii) goniometry for knee extension only. There was no clinically significant difference in measurement accuracy between the three techniques for hip and knee motion. Digital photography only showed higher precision for two joint motions (hip abduction and knee extension). Overall digital photography shows equivalent accuracy and near-equivalent precision to visual estimation and goniometry.
Differential activation of parts of the latissimus dorsi with various isometric shoulder exercises.
Park, Se-yeon; Yoo, Won-gyu
2014-04-01
As no study has examined whether the branches of the latissimus dorsi are activated differently in different exercises, we investigated intramuscular differences of components of the latissimus dorsi during various shoulder isometric exercises. Seventeen male subjects performed four isometric exercises: shoulder extension, adduction, internal rotation, and shoulder depression. Surface electromyography (sEMG) was used to collect data from the medial and lateral components of the latissimus dorsi during the isometric exercises. Two-way repeated-measures analysis of variance with two within-subject factors (exercise condition and muscle branch) was used to determine the significance of differences between the branches, and which branch was activated more with the exercise variation. The root mean squared sEMG values for the muscles were normalized using the modified isolation equation (%Isolation) and maximum voluntary isometric contraction (%MVIC). Neither the %MVIC nor %Isolation data differed significantly between muscle branches, while there was a significant difference with exercise. %MVIC was significantly higher with shoulder extension, compared to the other isometric exercises. There was a significant interaction between exercise condition and muscle branch in the %Isolation data. Shoulder extension, adduction, and internal rotation increased %Isolation of the medial latissimus dorsi more than shoulder depression did. Shoulder depression had the highest value of %Isolation of the lateral latissimus dorsi compared to the other isometric exercises. Comparing the medial and lateral latissimus dorsi, the medial component was predominantly activated with shoulder extension, adduction, and internal rotation, and the lateral component with shoulder depression. Shoulder extension is effective for activating the latissimus dorsi regardless of the intramuscular branch. Copyright © 2014 Elsevier Ltd. All rights reserved.
75 FR 37523 - Submission for OMB Review; Comment Request
Federal Register 2010, 2011, 2012, 2013, 2014
2010-06-29
... before July 29, 2010 to be assured of consideration. Bureau of Public Debt (BPD) OMB Number: 1535-0121... Governments. Estimated Total Burden Hours: 247 hours. OMB Number: 1535-0131. Type of Review: Extension without.... Estimated Total Burden Hours: 2,050 hours. OMB Number: 1535-0094. Type of Review: Extension without change...
Partial Variance of Increments Method in Solar Wind Observations and Plasma Simulations
NASA Astrophysics Data System (ADS)
Greco, A.; Matthaeus, W. H.; Perri, S.; Osman, K. T.; Servidio, S.; Wan, M.; Dmitruk, P.
2018-02-01
The method called "PVI" (Partial Variance of Increments) has been increasingly used in analysis of spacecraft and numerical simulation data since its inception in 2008. The purpose of the method is to study the kinematics and formation of coherent structures in space plasmas, a topic that has gained considerable attention, leading to the development of identification methods, observations, and associated theoretical research based on numerical simulations. This review paper will summarize key features of the method and provide a synopsis of the main results obtained by various groups using the method. This will enable new users or those considering methods of this type to find details and background collected in one place.
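The PVI statistic itself is the increment magnitude normalized by its rms over the record, PVI(s, τ) = |Δb(s, τ)| / ⟨|Δb(s, τ)|²⟩^(1/2); peaks in the series flag candidate coherent structures. A minimal numpy sketch on a synthetic vector time series (the Brownian-like signal is a stand-in, not spacecraft data):

```python
import numpy as np

def pvi(b, tau):
    """PVI(s, tau): increment magnitude at lag tau, normalized by its rms."""
    db = b[tau:] - b[:-tau]                    # vector increments, shape (N - tau, 3)
    mag = np.linalg.norm(db, axis=1)           # |delta b(s, tau)|
    return mag / np.sqrt(np.mean(mag ** 2))    # so that <PVI^2> = 1 by construction

rng = np.random.default_rng(0)
b = np.cumsum(rng.normal(size=(10_000, 3)), axis=0)  # synthetic Brownian-like field
series = pvi(b, tau=10)
# thresholding, e.g. series > 3, selects the strongest (most intermittent) events
```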
Optical double-locked semiconductor lasers
NASA Astrophysics Data System (ADS)
AlMulla, Mohammad
2018-06-01
Self-sustained period-one (P1) nonlinear dynamics of a semiconductor laser are investigated when both optical injection and modulation are applied for stable microwave frequency generation. Locking the P1 oscillation through modulation of the bias current, injection strength, or detuning frequency stabilizes the P1 oscillation. The different modulation types are compared through the phase noise variance. It is demonstrated that locking the P1 oscillation through optical modulation of the output of the master laser outperforms bias-current modulation of the slave laser. Master laser modulation shows a wider P1-oscillation locking range and lower phase noise variance. The locking characteristics of the P1 oscillation also depend on the operating conditions of the optical injection system.
Bujkiewicz, Sylwia; Riley, Richard D
2016-01-01
Multivariate random-effects meta-analysis allows the joint synthesis of correlated results from multiple studies, for example, for multiple outcomes or multiple treatment groups. In a Bayesian univariate meta-analysis of one endpoint, the importance of specifying a sensible prior distribution for the between-study variance is well understood. However, in multivariate meta-analysis, there is little guidance about the choice of prior distributions for the variances or, crucially, the between-study correlation, ρB; for the latter, researchers often use a Uniform(−1,1) distribution assuming it is vague. In this paper, an extensive simulation study and a real illustrative example are used to examine the impact of various (realistically) vague prior distributions for ρB and the between-study variances within a Bayesian bivariate random-effects meta-analysis of two correlated treatment effects. A range of diverse scenarios are considered, including complete and missing data, to examine the impact of the prior distributions on posterior results (for treatment effect and between-study correlation), amount of borrowing of strength, and joint predictive distributions of treatment effectiveness in new studies. Two key recommendations are identified to improve the robustness of multivariate meta-analysis results. First, the routine use of a Uniform(−1,1) prior distribution for ρB should be avoided, if possible, as it is not necessarily vague. Instead, researchers should identify a sensible prior distribution, for example, by restricting values to be positive or negative as indicated by prior knowledge. Second, it remains critical to use sensible (e.g. empirically based) prior distributions for the between-study variances, as an inappropriate choice can adversely impact the posterior distribution for ρB, which may then adversely affect inferences such as joint predictive probabilities. These recommendations are especially important with a small number of studies and missing data.
PMID:26988929
Martin, Jordan S; Suarez, Scott A
2017-08-01
Interest in quantifying consistent among-individual variation in primate behavior, also known as personality, has grown rapidly in recent decades. Although behavioral coding is the most frequently utilized method for assessing primate personality, limitations in current statistical practice prevent researchers from utilizing the full potential of their coding datasets. These limitations include the use of extensive data aggregation, not modeling biologically relevant sources of individual variance during repeatability estimation, not partitioning between-individual (co)variance prior to modeling personality structure, the misuse of principal component analysis, and an over-reliance upon exploratory statistical techniques to compare personality models across populations, species, and data collection methods. In this paper, we propose a statistical framework for primate personality research designed to address these limitations. Our framework synthesizes recently developed mixed-effects modeling approaches for quantifying behavioral variation with an information-theoretic model selection paradigm for confirmatory personality research. After detailing a multi-step analytic procedure for personality assessment and model comparison, we employ this framework to evaluate seven models of personality structure in zoo-housed bonobos (Pan paniscus). We find that differences between sexes, ages, zoos, time of observation, and social group composition contributed to significant behavioral variance. Independently of these factors, however, personality nonetheless accounted for a moderate to high proportion of variance in average behavior across observational periods. A personality structure derived from past rating research receives the strongest support relative to our model set.
This model suggests that personality variation across the measured behavioral traits is best described by two correlated but distinct dimensions reflecting individual differences in affiliation and sociability (Agreeableness) as well as activity level, social play, and neophilia toward non-threatening stimuli (Openness). These results underscore the utility of our framework for quantifying personality in primates and facilitating greater integration between the behavioral ecological and comparative psychological approaches to personality research. © 2017 Wiley Periodicals, Inc.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Adams, Brian M.; Ebeida, Mohamed Salah; Eldred, Michael S.
The Dakota (Design Analysis Kit for Optimization and Terascale Applications) toolkit provides a flexible and extensible interface between simulation codes and iterative analysis methods. Dakota contains algorithms for optimization with gradient and nongradient-based methods; uncertainty quantification with sampling, reliability, and stochastic expansion methods; parameter estimation with nonlinear least squares methods; and sensitivity/variance analysis with design of experiments and parameter study methods. These capabilities may be used on their own or as components within advanced strategies such as surrogate-based optimization, mixed integer nonlinear programming, or optimization under uncertainty. By employing object-oriented design to implement abstractions of the key components required for iterative systems analyses, the Dakota toolkit provides a flexible and extensible problem-solving environment for design and performance analysis of computational models on high performance computers. This report serves as a user's manual for the Dakota software and provides capability overviews and procedures for software execution, as well as a variety of example studies.
Impact of Retrograde Arch Extension in Acute Type B Aortic Dissection on Management and Outcomes.
Nauta, Foeke J H; Tolenaar, Jip L; Patel, Himanshu J; Appoo, Jehangir J; Tsai, Thomas T; Desai, Nimesh D; Montgomery, Daniel G; Mussa, Firas F; Upchurch, Gilbert R; Fattori, Rosella; Hughes, G Chad; Nienaber, Christoph A; Isselbacher, Eric M; Eagle, Kim A; Trimarchi, Santi
2016-12-01
Optimal management of acute type B aortic dissection with retrograde arch extension is controversial. The effect of retrograde arch extension on operative and long-term mortality has not been studied and is not incorporated into clinical treatment pathways. The International Registry of Acute Aortic Dissection was queried for all patients presenting with acute type B dissection and an identifiable primary intimal tear. Outcomes were stratified according to management for patients with and without retrograde arch extension. Kaplan-Meier survival curves were constructed. Between 1996 and 2014, 404 patients (mean age, 63.3 ± 13.9 years) were identified. Retrograde arch extension existed in 67 patients (16.5%). No difference in complicated presentation was noted (36.8% vs 31.7%, p = 0.46), as defined by limb or organ malperfusion, coma, rupture, and shock. Patients with or without retrograde arch extension received similar treatment, with medical management in 53.7% vs 56.5% (p = 0.68), endovascular treatment in 32.8% vs 31.1% (p = 0.78), open operation in 11.9% vs 9.5% (p = 0.54), or hybrid approach in 1.5% vs 3.0% (p = 0.70), respectively. The in-hospital mortality rate was similar for patients with (10.7%) and without (10.4%) retrograde arch extension (p = 0.96), and 5-year survival was also similar at 78.3% and 77.8%, respectively (p = 0.27). The incidence of retrograde arch dissection involves approximately 16% of patients with acute type B dissection. In the International Registry of Acute Aortic Dissection, this entity seems not to affect management strategy or early and late death. Copyright © 2016 The Society of Thoracic Surgeons. Published by Elsevier Inc. All rights reserved.
Agreement among High School Diving Judges.
ERIC Educational Resources Information Center
Stewart, Michael J.; Blair, William O.
1982-01-01
Raters' agreement and the relative consistency of diving judges at a boy's competition were analyzed using intraclass correlations within 16 position x type combinations. Judges' variance was significant for 5 of the 16 combinations. Point estimates were generally greater for consistency than for raters' agreement about scores. (Author/CM)
Park, So-Yeon; Kim, Il Han; Ye, Sung-Joon; Carlson, Joel; Park, Jong Min
2014-11-01
Texture analysis on fluence maps was performed to evaluate the degree of modulation for volumetric modulated arc therapy (VMAT) plans. A total of six textural features including angular second moment, inverse difference moment, contrast, variance, correlation, and entropy were calculated for fluence maps generated from 20 prostate and 20 head and neck VMAT plans. For each of the textural features, particular displacement distances (d) of 1, 5, and 10 were adopted. To investigate the deliverability of each VMAT plan, gamma passing rates of pretreatment quality assurance, and differences in modulating parameters such as multileaf collimator (MLC) positions, gantry angles, and monitor units at each control point between VMAT plans and dynamic log files registered by the Linac control system during delivery were acquired. Furthermore, differences between the original VMAT plan and the plan reconstructed from the dynamic log files were also investigated. To test the performance of the textural features as indicators for the modulation degree of VMAT plans, Spearman's rank correlation coefficients (rs) with the plan deliverability were calculated. For comparison purposes, conventional modulation indices for VMAT including the modulation complexity score for VMAT, leaf travel modulation complexity score, and modulation index supporting station parameter optimized radiation therapy (MISPORT) were calculated, and their correlations were analyzed in the same way. There was no particular textural feature which always showed superior correlations with every type of plan deliverability. Considering the results comprehensively, contrast (d = 1) and variance (d = 1) generally showed considerable correlations with every type of plan deliverability. These textural features always showed higher correlations to the plan deliverability than did the conventional modulation indices, except in the case of modulating parameter differences. 
The rs values of contrast against the global gamma passing rates with criteria of 2%/2 mm, 2%/1 mm, and 1%/2 mm were 0.536, 0.473, and 0.718, respectively. The respective values for variance were 0.551, 0.481, and 0.688. In the case of local gamma passing rates, the rs values of contrast were 0.547, 0.578, and 0.620, respectively, and those of variance were 0.519, 0.527, and 0.569. All of the rs values in those cases were statistically significant (p < 0.003). For both global and local gamma passing rates, MISPORT showed the highest correlations among the conventional modulation indices: its rs values were -0.420, -0.330, and -0.632 for global passing rates, and -0.455, -0.490, and -0.502 for local passing rates. The rs values of contrast, variance, and MISPORT with the MLC errors were -0.863, -0.828, and 0.795, respectively, all statistically significant (p < 0.001). Statistically significant correlations between variance and dose-volumetric differences were observed more frequently than for the other indices. The contrast (d = 1) and variance (d = 1) calculated from fluence maps of VMAT plans showed considerable correlations with plan deliverability, indicating their potential use as indicators for assessing the degree of modulation of VMAT plans. Both contrast and variance consistently showed better performance than the conventional modulation indices for VMAT.
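The contrast and variance features singled out in this study are standard co-occurrence (Haralick-type) texture statistics at displacement d. The abstract does not give the exact formulas, so this numpy sketch assumes the usual definitions, applied to a toy integer-valued "fluence map":

```python
import numpy as np

def cooccurrence(img, d):
    """Normalized gray-level co-occurrence matrix for horizontal displacement d."""
    levels = int(img.max()) + 1
    P = np.zeros((levels, levels))
    for i, j in zip(img[:, :-d].ravel(), img[:, d:].ravel()):
        P[i, j] += 1.0                       # count each (left, right) pixel pair
    return P / P.sum()

def contrast(P):
    """Sum of P(i, j) * (i - j)^2: large when distant gray levels co-occur."""
    i, j = np.indices(P.shape)
    return float(np.sum(P * (i - j) ** 2))

def glcm_variance(P):
    """Variance of gray levels weighted by the co-occurrence distribution."""
    i, _ = np.indices(P.shape)
    mu = float(np.sum(i * P))
    return float(np.sum(P * (i - mu) ** 2))

# toy 4-level map standing in for a quantized fluence map
img = np.array([[0, 0, 1, 1],
                [0, 0, 1, 1],
                [0, 2, 2, 2],
                [2, 2, 3, 3]])
```

A real analysis would quantize the planned fluence into gray levels first and typically average over several displacement directions.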
SOME DUALITY THEOREMS FOR CYCLOTOMIC \\Gamma-EXTENSIONS OF ALGEBRAIC NUMBER FIELDS OF CM TYPE
NASA Astrophysics Data System (ADS)
Kuz'min, L. V.
1980-06-01
For an odd prime l and a cyclotomic \\Gamma{-}l-extension k_\\infty/k of a field k of CM type, a compact periodic \\Gamma-module A_l(k), analogous to the Tate module of a function field, is defined. The analog of the Weil scalar product is constructed on the module A_l(k). The properties of this scalar product are examined, and certain other duality relations are determined on A_l(k). It is proved that, in a finite l-extension k'/k of CM type, the \\mathbf{Z}_l-ranks of A_l(k) and A_l(k') are connected by a relation similar to the Hurwitz formula for the genus of a curve. Bibliography: 7 titles.
Discontinuity of the annuity curves. III. Two types of vital variability in Drosophila melanogaster.
Bychkovskaia, I B; Mylnikov, S V; Mozhaev, G A
2016-01-01
We confirm the five-phase structure of Drosophila annuity curves established earlier. The annuity curves comprised a stable five-phase component and a variable one, the variable component being due to differences in phase durations. Both the stable and the variable components were apparent for 60 generations. A stochastic component was described as well. Viability variance, which characterizes the «reaction norm», was apparent for all generations as well. Thus, both types of variability appear to be inherited.
Rhodes, Kirsty M; Turner, Rebecca M; Higgins, Julian P T
2015-01-01
Estimation of between-study heterogeneity is problematic in small meta-analyses. Bayesian meta-analysis is beneficial because it allows incorporation of external evidence on heterogeneity. To facilitate this, we provide empirical evidence on the likely heterogeneity between studies in meta-analyses relating to specific research settings. Our analyses included 6,492 continuous-outcome meta-analyses within the Cochrane Database of Systematic Reviews. We investigated the influence of meta-analysis settings on heterogeneity by modeling study data from all meta-analyses on the standardized mean difference scale. Meta-analysis setting was described according to outcome type, intervention comparison type, and medical area. Predictive distributions for between-study variance expected in future meta-analyses were obtained, which can be used directly as informative priors. Among outcome types, heterogeneity was found to be lowest in meta-analyses of obstetric outcomes. Among intervention comparison types, heterogeneity was lowest in meta-analyses comparing two pharmacologic interventions. Predictive distributions are reported for different settings. In two example meta-analyses, incorporating external evidence led to a more precise heterogeneity estimate. Heterogeneity was influenced by meta-analysis characteristics. Informative priors for between-study variance were derived for each specific setting. Our analyses thus assist the incorporation of realistic prior information into meta-analyses including few studies. Copyright © 2015 The Authors. Published by Elsevier Inc. All rights reserved.
Glycotoxin and Autoantibodies Are Additive Environmentally Determined Predictors of Type 1 Diabetes
Beyan, Huriya; Riese, Harriette; Hawa, Mohammed I.; Beretta, Guisi; Davidson, Howard W.; Hutton, John C.; Burger, Huibert; Schlosser, Michael; Snieder, Harold; Boehm, Bernhard O.; Leslie, R. David
2012-01-01
In type 1 diabetes, diabetes-associated autoantibodies, including islet cell antibodies (ICAs), reflect adaptive immunity, while increased serum Nε-carboxymethyl-lysine (CML), an advanced glycation end product, is associated with proinflammation. We assessed whether serum CML and autoantibodies predicted type 1 diabetes and to what extent they were determined by genetic or environmental factors. Of 7,287 unselected schoolchildren screened, 115 were ICA+ and were tested for baseline CML and diabetes autoantibodies and followed (for a median of 7 years), whereas a random selection (n = 2,102) had CML tested. CML and diabetes autoantibodies were determined in a classic twin study of twin pairs discordant for type 1 diabetes (32 monozygotic, 32 dizygotic pairs). CML was determined by enzyme-linked immunosorbent assay, autoantibodies were determined by radioimmunoprecipitation, ICA was determined by indirect immunofluorescence, and HLA class II genotyping was determined by sequence-specific oligonucleotides. CML was increased in ICA+ and prediabetic schoolchildren and in diabetic and nondiabetic twins (all P < 0.001). Elevated levels of CML in ICA+ children were a persistent, independent predictor of diabetes progression, in addition to autoantibodies and HLA risk. In twin model fitting, familial environment explained 75% of CML variance, and nonshared environment explained all autoantibody variance. Serum CML, a glycotoxin, emerged as an environmentally determined diabetes risk factor, in addition to autoimmunity and HLA genetic risk, and a potential therapeutic target. PMID:22396204
Identifying Variations in Hydraulic Conductivity on the East River at Crested Butte, CO
NASA Astrophysics Data System (ADS)
Ulmer, K. N.; Malenda, H. F.; Singha, K.
2016-12-01
Slug tests are a widely used method to measure saturated hydraulic conductivity, or how easily water flows through an aquifer, by perturbing the piezometric surface and measuring the time the local groundwater table takes to re-equilibrate. Saturated hydraulic conductivity is crucial to calculating the speed and direction of groundwater movement. Therefore, it is important to document data variance from in situ slug tests. This study addresses two potential sources of data variability: different users and different types of slug used. To test for user variability, two individuals slugged the same six wells with water multiple times at a stream meander on the East River near Crested Butte, CO. To test for variations in type of slug test, multiple water and metal slug tests were performed at a single well in the same meander. The distributions of hydraulic conductivities of each test were then tested for variance using both the Kruskal-Wallis test and the Brown-Forsythe test. When comparing the hydraulic conductivity distributions gathered by the two individuals, we found that they were statistically similar. However, we found that the two types of slug tests produced hydraulic conductivity distributions for the same well that are statistically dissimilar. In conclusion, multiple people should be able to conduct slug tests without creating any considerable variations in the resulting hydraulic conductivity values, but only a single type of slug should be used for those tests.
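The variance comparison described above can be sketched directly: the Brown-Forsythe test is a one-way ANOVA F statistic computed on absolute deviations from each group's median (scipy.stats.levene with center='median' computes the same statistic, and scipy.stats.kruskal gives the Kruskal-Wallis test). The conductivity values below are hypothetical stand-ins for the water-slug and metal-slug samples:

```python
import numpy as np

def brown_forsythe(*groups):
    """Brown-Forsythe statistic: one-way ANOVA F on |x - group median|."""
    z = [np.abs(np.asarray(g, dtype=float) - np.median(g)) for g in groups]
    n = np.array([len(zi) for zi in z])
    k, N = len(z), int(n.sum())
    zbars = np.array([zi.mean() for zi in z])
    grand = np.concatenate(z).mean()
    between = np.sum(n * (zbars - grand) ** 2) / (k - 1)                  # spread between groups
    within = sum(((zi - zb) ** 2).sum() for zi, zb in zip(z, zbars)) / (N - k)  # spread within groups
    return between / within

# hypothetical log-conductivity samples from water-slug vs. metal-slug tests
k_water = [1.0, 2.0, 3.0, 4.0, 5.0]
k_metal = [-5.0, 0.0, 5.0, 10.0, 15.0]
stat = brown_forsythe(k_water, k_metal)
```

A large statistic (compared against the F distribution with k-1 and N-k degrees of freedom) indicates dissimilar spreads, as found between the two slug types in this study.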
Landini, Fernando
2016-12-01
Psychology has great potential for contributing to rural development, particularly through supporting rural extension (RE). In this paper, the types of expectations extensionists have of psychology are identified, as well as possible ways of integrating psychosocial knowledge into the RE context. Rural extensionists from 12 Latin American countries were surveyed (n = 654). Of them, 89.4% considered that psychology could contribute to rural extension and commented on how this would be possible. Expectations were categorised, and the nine mentioned by more than 20% of respondents were used to conduct a two-step cluster analysis. Three types of extensionists' expectations were identified: one wherein working with extensionists was highlighted; another characterised by a focus on working with farmers; and a third featuring a traditional, diffusionist extension approach, which views farmers as objects of psychologists' interventions. With the first type, psychologists should not neglect working with farmers; with the second, working with extensionists. With the third type, reflecting on the expectations themselves and their underlying assumptions seems essential.
Whole-animal metabolic rate is a repeatable trait: a meta-analysis.
Nespolo, Roberto F; Franco, Marcela
2007-06-01
Repeatability studies are gaining considerable interest among physiological ecologists, particularly for traits affected by high environmental/residual variance, such as whole-animal metabolic rate (MR). The original definition of repeatability, known as the intraclass correlation coefficient, is computed from the components of variance obtained in a one-way ANOVA on several individuals from which two or more measurements are taken. An alternative estimation of repeatability, popular among physiological ecologists, is the Pearson product-moment correlation between two consecutive measurements. However, despite the more than 30 studies reporting repeatability of MR, so far there is no definitive synthesis indicating: (1) whether repeatability changes in different types of animals; (2) whether some kinds of metabolism are more repeatable than others; and most importantly, (3) whether metabolic rate is significantly repeatable. We performed a meta-analysis to address these questions, as well as to explore the historical trend in repeatability studies. Our results show that metabolic rate is significantly repeatable and its effect size is not statistically affected by any of the mentioned factors (i.e. repeatability of MR does not change with species, type of metabolism, time between measurements, or number of individuals). The cumulative meta-analysis revealed that repeatability studies of MR have already reached an asymptotic effect size with no further change in either its magnitude or variance (i.e. additional studies will not contribute significantly to the estimator). There was no evidence of strong publication bias.
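The intraclass correlation coefficient mentioned above is computed from the one-way ANOVA mean squares as R = (MS_B − MS_W) / (MS_B + (k − 1) MS_W) for k measurements per individual. A minimal numpy sketch, assuming a balanced design (every individual measured the same number of times):

```python
import numpy as np

def repeatability(x):
    """Intraclass correlation from a one-way ANOVA.

    x: array of shape (n_individuals, k_measurements), balanced design.
    """
    n, k = x.shape
    msb = k * ((x.mean(axis=1) - x.mean()) ** 2).sum() / (n - 1)            # between-individual
    msw = ((x - x.mean(axis=1, keepdims=True)) ** 2).sum() / (n * (k - 1))  # within-individual
    return (msb - msw) / (msb + (k - 1) * msw)

# three individuals measured twice; identical repeats give R = 1
x_perfect = np.array([[1.0, 1.0], [2.0, 2.0], [3.0, 3.0]])
```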
Knotter, Maartje H; Wissink, Inge B; Moonen, Xavier M H; Stams, Geert-Jan J M; Jansen, Gerard J
2013-05-01
Data were collected from 121 staff members (20 direct support staff teams) on background characteristics of the individual staff members and their teams (gender, age, years of work experience, position and education), the frequency and form of aggression of clients with an intellectual disability (verbal or physical), staff members' attitudes towards aggression, and the types of behavioural interventions they executed (providing personal space and behavioural boundary-setting, restricting freedom and the use of coercive measures). Additionally, client group characteristics (age of clients, type of care and client's level of intellectual disability) were assessed. Multilevel analyses (individual and contextual level) were performed to examine the relations between all studied variables and the behavioural interventions. The results showed that for providing personal space and behavioural boundary-setting as well as for restricting freedom, the proportion of variance explained by the context (staff team and client group characteristics) was three times larger than the proportion of variance explained by individual staff member characteristics. For using coercive measures, the context even accounted for 66% of the variance, whereas only 8% was explained by individual staff member characteristics. A negative attitude towards aggression of the direct support team as a whole proved to be an especially strong predictor of using coercive measures. To diminish the use of coercive measures, interventions should therefore be directed towards influencing the attitude of direct support teams instead of individual staff members. Copyright © 2013 Elsevier Ltd. All rights reserved.
Genetic and environmental factors affecting perinatal and preweaning survival of D'man lambs.
Boujenane, Ismaïl; Chikhi, Abdelkader; Lakcher, Oumaïma; Ibnelbachyr, Mustapha
2013-08-01
This study examined the viability of 4,554 D'man lambs born alive at Errachidia research station in south-eastern Morocco between 1988 and 2009. Lamb survival to 1, 10, 30 and 90 days old was 0.95, 0.93, 0.93 and 0.92, respectively. The majority of deaths (85.7%) occurred before 10 days of age. Type and period of birth both had a significant effect on lamb survival traits, whereas age of dam and sex of lamb did not. The study revealed a curvilinear relationship between lamb's birth weight and survival traits from birth to 90 days, with optimal birth weights for maximal perinatal and preweaning survival varying according to type of birth from 2.6 to 3.5 kg. Estimation of variance components, using an animal model including direct and maternal genetic effects, the permanent maternal environment as well as fixed effects, showed that direct and maternal heritability estimates for survival traits between birth and 90 days were mostly low and varied from 0.01 to 0.10; however, direct heritability for survival at 1 day from birth was estimated at 0.63. Genetic correlations between survival traits and birth weight were positive and low to moderate. It was concluded that survival traits of D'man lambs between birth and 90 days could be improved through selection, but genetic progress would be low. However, the high proportion of the residual variance to total variance reinforces the need to improve management and lambing conditions.
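The heritabilities reported above are ratios of variance components from the fitted animal model. The REML fit itself is beyond a sketch, but the final arithmetic, assuming components for direct genetic (v_a), maternal genetic (v_m), permanent maternal environment (v_pe), and residual (v_e) effects, reduces to:

```python
def direct_heritability(v_a, v_m, v_pe, v_e):
    """Direct heritability: additive genetic variance over total phenotypic variance."""
    return v_a / (v_a + v_m + v_pe + v_e)

def maternal_heritability(v_a, v_m, v_pe, v_e):
    """Maternal heritability: maternal genetic variance over total phenotypic variance."""
    return v_m / (v_a + v_m + v_pe + v_e)

# hypothetical variance components (not the study's estimates)
h2 = direct_heritability(0.10, 0.05, 0.05, 0.80)  # 0.10
```

A large v_e relative to the total, as reported here, directly yields the low heritabilities and motivates the management-focused conclusion.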
García-Izquierdo, Mariano; Ríos-Rísquez, María Isabel
2012-01-01
The purpose of this study was to examine the relationship and predictive power of various psychosocial job stressors for the 3 dimensions of burnout in emergency departments. This study used a cross-sectional design, with a questionnaire as the tool. The data were gathered using an anonymous questionnaire in 3 hospitals in Spain. The sample consisted of 191 emergency department nurses. Burnout was evaluated by the Maslach Burnout Inventory and the job stressors by the Nursing Stress Scale. The burnout model in this study consisted of 3 dimensions: emotional exhaustion, cynicism, and reduced professional efficacy. The model that predicted the emotional exhaustion dimension was formed by 2 variables, excessive workload and lack of emotional support, which together explained 19.4% of the variance in emotional exhaustion. Cynicism had 4 predictors that explained 25.8% of the variance: interpersonal conflicts, lack of social support, excessive workload, and type of contract. Finally, variability in reduced professional efficacy was predicted by 3 variables, interpersonal conflicts, lack of social support, and the type of shift worked, which explained 10.4% of the variance. From the point of view of nurse leaders, organizational interventions, and the management of human resources, this analysis of the principal causes of burnout is particularly useful to select, prioritize, and implement preventive measures that will improve the quality of care offered to patients and the well-being of personnel. Copyright © 2012 Elsevier Inc. All rights reserved.
Stretchy binary classification.
Toh, Kar-Ann; Lin, Zhiping; Sun, Lei; Li, Zhengguo
2018-01-01
In this article, we introduce an analytic formulation for compressive binary classification. The formulation seeks to solve the least ℓp-norm of the parameter vector subject to a classification error constraint. An analytic and stretchable estimation is conjectured where the estimation can be viewed as an extension of the pseudoinverse with left and right constructions. Our variance analysis indicates that the estimation based on the left pseudoinverse is unbiased and the estimation based on the right pseudoinverse is biased. Sparseness can be obtained for the biased estimation under certain mild conditions. The proposed estimation is investigated numerically using both synthetic and real-world data. Copyright © 2017 Elsevier Ltd. All rights reserved.
Map synchronization in optical communication systems
NASA Technical Reports Server (NTRS)
Gagliardi, R. M.; Mohanty, N.
1973-01-01
The time synchronization problem in an optical communication system is approached as a problem of estimating the arrival time (delay variable) of a known transmitted field. Maximum a posteriori (MAP) estimation procedures are used to generate optimal estimators, with emphasis placed on their interpretation as practical system devices. Estimation variances are used to aid in the design of the transmitter signals for best synchronization. Extension is made to systems that perform separate acquisition and tracking operations during synchronization. The closely allied problem of maintaining timing during pulse position modulation is also considered. The results have obvious application to optical radar and ranging systems, as well as the time synchronization problem.
Blind identification of image manipulation type using mixed statistical moments
NASA Astrophysics Data System (ADS)
Jeong, Bo Gyu; Moon, Yong Ho; Eom, Il Kyu
2015-01-01
We present a blind identification of image manipulation types such as blurring, scaling, sharpening, and histogram equalization. Motivated by the fact that image manipulations can change the frequency characteristics of an image, we introduce three types of feature vectors composed of statistical moments. The proposed statistical moments are generated from separated wavelet histograms, the characteristic functions of the wavelet variance, and the characteristic functions of the spatial image. Our method can solve the n-class classification problem. Through experimental simulations, we demonstrate that our proposed method can achieve high performance in manipulation type detection. The average rate of the correctly identified manipulation types is as high as 99.22%, using 10,800 test images and six manipulation types including the authentic image.
Patterns and Prevalence of Core Profile Types in the WPPSI Standardization Sample.
ERIC Educational Resources Information Center
Glutting, Joseph J.; McDermott, Paul A.
1990-01-01
Found most representative subtest profiles for 1,200 children comprising standardization sample of Wechsler Preschool and Primary Scale of Intelligence (WPPSI). Grouped scaled scores from WPPSI subtests according to similar level and shape using sequential minimum-variance cluster analysis with independent replications. Obtained final solution of…
Spectral mixture modeling: Further analysis of rock and soil types at the Viking Lander sites
NASA Technical Reports Server (NTRS)
Adams, John B.; Smith, Milton O.
1987-01-01
A new image processing technique was applied to Viking Lander multispectral images. Spectral endmembers were defined that included soil, rock and shade. Mixtures of these endmembers were found to account for nearly all the spectral variance in a Viking Lander image.
New Statistical Techniques for Evaluating Longitudinal Models.
ERIC Educational Resources Information Center
Murray, James R.; Wiley, David E.
A basic methodological approach in developmental studies is the collection of longitudinal data. Behavioral data can take at least two forms, qualitative (or discrete) and quantitative. Both types are fallible. Measurement errors can occur in quantitative data and measures of these are based on error variance. Qualitative or discrete data can…
Scientists, especially environmental scientists, often encounter trace-level concentrations that are typically reported as less than a certain limit of detection, L. Type 1, left-censored data arise when certain low values lying below L are ignored or unknown as they cannot be mea...
Educational Policy Making in the State Legislature: Legislator as Policy Expert.
ERIC Educational Resources Information Center
Weaver, Sue Wells; Geske, Terry G.
1997-01-01
Examines the legislator's role as education policy expert in the legislative policymaking process. In a study of Louisiana state legislators, analysis of variance was used to determine expert legislators' degree of influence in formulating educational policy, given differences in policy types, information sources, and legislators' work roles.…
The error structure of the SMAP single and dual channel soil moisture retrievals
USDA-ARS?s Scientific Manuscript database
Knowledge of the temporal error structure for remotely-sensed surface soil moisture retrievals can improve our ability to exploit them for hydrology and climate studies. This study employs a triple collocation type analysis to investigate both the total variance and temporal auto-correlation of erro...
Reputational Challenges for Business Schools: A Contextual Perspective
ERIC Educational Resources Information Center
Siebert, Sabina; Martin, Graeme
2013-01-01
Purpose: The dominant variance theory approaches to researching business school reputations are based on a positivistic hypothetico-deductive research methodology and do not adequately take into account either the different levels and types of contexts in which business schools operate or the diversity of stakeholder interests. The aim of this…
49 CFR 570.5 - Service brake system.
Code of Federal Regulations, 2010 CFR
2010-10-01
... CFR 571.105, on every new passenger car manufactured on or after January 1, 1968, and on other types... equipment manufacturer's specifications. Note the left to right brake force variance. (2) Road test. The..., inspecting front brake hoses through all wheel positions from full left to full right for conditions...
49 CFR 570.5 - Service brake system.
Code of Federal Regulations, 2011 CFR
2011-10-01
... CFR 571.105, on every new passenger car manufactured on or after January 1, 1968, and on other types... equipment manufacturer's specifications. Note the left to right brake force variance. (2) Road test. The..., inspecting front brake hoses through all wheel positions from full left to full right for conditions...
WE-AB-207A-12: HLCC Based Quantitative Evaluation Method of Image Artifact in Dental CBCT
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Y; Wu, S; Qi, H
Purpose: Image artifacts are usually evaluated qualitatively via visual observation of the reconstructed images, which is susceptible to subjective factors due to the lack of an objective evaluation criterion. In this work, we propose a Helgason-Ludwig consistency condition (HLCC) based evaluation method to quantify the severity level of different image artifacts in dental CBCT. Methods: Our evaluation method consists of four steps: 1) Acquire cone beam CT (CBCT) projections; 2) Convert the 3D CBCT projection to a fan-beam projection by extracting its central plane projection; 3) Convert the fan-beam projection to a parallel-beam projection utilizing a sinogram-based or detail-based rebinning algorithm; 4) Obtain the HLCC profile by integrating the parallel-beam projection per view, and calculate the wave percentage and variance of the HLCC profile, which can be used to describe the severity level of image artifacts. Results: Several sets of dental CBCT projections, each containing only one type of artifact (i.e., geometry, scatter, beam hardening, lag, and noise artifact), were simulated using gDRR, a GPU tool developed for efficient, accurate, and realistic simulation of CBCT projections. These simulated CBCT projections were used to test our proposed method. HLCC profile wave percentage and variance induced by geometry distortion are about 3∼21 times and 16∼393 times as large as those of the artifact-free projection, respectively. The increase factors of wave percentage and variance are 6 and 133 times for beam hardening, 19 and 1184 times for scatter, and 4 and 16 times for lag artifacts, respectively. In contrast, for the noisy projection the wave percentage, variance, and inconsistency level are almost the same as those of the noise-free one. Conclusion: We have proposed a quantitative evaluation method of image artifacts based on HLCC theory. According to our simulation results, the severity of the different artifact types is found to be in the following order: Scatter > Geometry > Beam hardening > Lag > Noise > Artifact-free in dental CBCT.
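A minimal sketch of the zeroth-order consistency idea behind such an HLCC profile: in artifact-free parallel-beam data the per-view integral of the projection is the same in every view, and artifacts make the profile wander. The "wave percentage" definition used below (peak-to-peak excursion relative to the mean) is an assumption for illustration, not necessarily the authors' exact metric.

```python
import numpy as np

def hlcc_profile(sino):
    """Zeroth-order HLCC: integrate each parallel-beam view.
    For consistent data this profile is flat."""
    return sino.sum(axis=1)  # one value per view

def wave_percentage(profile):
    # Assumed definition: peak-to-peak excursion relative to the mean level.
    return 100.0 * (profile.max() - profile.min()) / profile.mean()

# Toy sinogram: a centered disk phantom, 180 parallel-beam views.
n_views, n_det = 180, 129
t = np.linspace(-1, 1, n_det)
chord = 2.0 * np.sqrt(np.clip(0.25 - t**2, 0.0, None))  # chords of a radius-0.5 disk
sino = np.tile(chord, (n_views, 1))                     # consistent: identical views
# Simulated artifact: a slow per-view gain drift (e.g., lag-like inconsistency).
gain = 1.0 + 0.05 * np.sin(np.linspace(0, 2 * np.pi, n_views))
sino_bad = sino * gain[:, None]

p_ok, p_bad = hlcc_profile(sino), hlcc_profile(sino_bad)
print(wave_percentage(p_ok), wave_percentage(p_bad), p_bad.var())
```

The consistent sinogram scores zero wave percentage, while the 5% gain drift produces roughly a 10% peak-to-peak excursion, mirroring how the paper ranks artifact severity by profile inconsistency.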
Mausfeld, Rainer; Andres, Johannes
2002-01-01
We argue, from an ethology-inspired perspective, that the internal concepts 'surface colours' and 'illumination colours' are part of the data format of two different representational primitives. Thus, the internal concept of 'colour' is not a unitary one but rather refers to two different types of 'data structure', each with its own proprietary types of parameters and relations. The relation of these representational structures is modulated by a class of parameterised transformations whose effects are mirrored in the idealised computational achievements of illumination invariance of colour codes, on the one hand, and scene invariance, on the other hand. Because the same characteristics of a light array reaching the eye can be physically produced in many different ways, the visual system, then, has to make an 'inference' whether a chromatic deviation of the space-averaged colour codes from the neutral point is due to a 'non-normal', ie chromatic, illumination or due to an imbalanced spectral reflectance composition. We provide evidence that the visual system uses second-order statistics of chromatic codes of a single view of a scene in order to modulate corresponding transformations. In our experiments we used centre surround configurations with inhomogeneous surrounds given by a random structure of overlapping circles, referred to as Seurat configurations. Each family of surrounds has a fixed space-average of colour codes, but differs with respect to the covariance matrix of colour codes of pixels that defines the chromatic variance along some chromatic axis and the covariance between luminance and chromatic channels. We found that dominant wavelengths of red-green equilibrium settings of the infield exhibited a stable and strong dependence on the chromatic variance of the surround. High variances resulted in a tendency towards 'scene invariance', low variances in a tendency towards 'illumination invariance' of the infield.
FMRI group analysis combining effect estimates and their variances
Chen, Gang; Saad, Ziad S.; Nath, Audrey R.; Beauchamp, Michael S.; Cox, Robert W.
2012-01-01
Conventional functional magnetic resonance imaging (FMRI) group analysis makes two key assumptions that are not always justified. First, the data from each subject is condensed into a single number per voxel, under the assumption that within-subject variance for the effect of interest is the same across all subjects or is negligible relative to the cross-subject variance. Second, it is assumed that all data values are drawn from the same Gaussian distribution with no outliers. We propose an approach that does not make such strong assumptions, and present a computationally efficient frequentist approach to FMRI group analysis, which we term mixed-effects multilevel analysis (MEMA), that incorporates both the variability across subjects and the precision estimate of each effect of interest from individual subject analyses. On average, the more accurate tests result in higher statistical power, especially when conventional variance assumptions do not hold, or in the presence of outliers. In addition, various heterogeneity measures are available with MEMA that may assist the investigator in further improving the modeling. Our method allows group effect t-tests and comparisons among conditions and among groups. In addition, it has the capability to incorporate subject-specific covariates such as age, IQ, or behavioral data. Simulations were performed to illustrate power comparisons and the capability of controlling type I errors among various significance testing methods, and the results indicated that the testing statistic we adopted struck a good balance between power gain and type I error control. Our approach is instantiated in an open-source, freely distributed program that may be used on any dataset stored in the Neuroimaging Informatics Technology Initiative (NIfTI) format. To date, the main impediment for more accurate testing that incorporates both within- and cross-subject variability has been the high computational cost. 
Our efficient implementation makes this approach practical. We recommend its use in lieu of the less accurate approach in the conventional group analysis. PMID:22245637
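The core of combining effect estimates with their variances can be sketched as a fixed-effect, inverse-variance weighted average. This is a deliberate simplification of MEMA, which additionally estimates a cross-subject variance component and heterogeneity measures; the sketch only illustrates why precision weighting tames a subject whose estimate is both extreme and imprecise, whereas the ordinary mean does not.

```python
import numpy as np

def ivw_group_effect(betas, var_within):
    """Precision-weighted group effect and its variance (fixed-effect sketch;
    MEMA itself also models cross-subject variability)."""
    w = 1.0 / np.asarray(var_within, float)
    beta = np.sum(w * np.asarray(betas, float)) / np.sum(w)
    return beta, 1.0 / np.sum(w)

betas      = np.array([1.0, 1.2, 0.9, 1.1, 5.0])   # last subject is outlying...
var_within = np.array([0.1, 0.1, 0.1, 0.1, 10.0])  # ...but also very imprecise

b_ivw, v_ivw = ivw_group_effect(betas, var_within)
b_ols = betas.mean()   # conventional approach: every subject weighted equally
print(b_ivw, b_ols)
```

The weighted estimate stays near the consensus of the four precise subjects (~1.06), while the unweighted mean is dragged to 1.84 by the noisy outlier.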
Federal Register 2010, 2011, 2012, 2013, 2014
2010-12-28
... Partnership, Limited; Notice of Request for Extension of Time to Commence and Complete Construction and.... Application Type: Request for Extension of Time. b. Project No.: 12187-016. c. Date Filed: December 8, 2010. d...-year extension of time from the existing deadline of July 28, 2011 to July 28, 2013 to commence project...
Control algorithms for dynamic attenuators.
Hsieh, Scott S; Pelc, Norbert J
2014-06-01
The authors describe algorithms to control dynamic attenuators in CT and compare their performance using simulated scans. Dynamic attenuators are prepatient beam shaping filters that modulate the distribution of x-ray fluence incident on the patient on a view-by-view basis. These attenuators can reduce dose while improving key image quality metrics such as peak or mean variance. In each view, the attenuator presents several degrees of freedom which may be individually adjusted. The total number of degrees of freedom across all views is very large, making many optimization techniques impractical. The authors develop a theory for optimally controlling these attenuators. Special attention is paid to a theoretically perfect attenuator which controls the fluence for each ray individually, but the authors also investigate and compare three other, practical attenuator designs which have been previously proposed: the piecewise-linear attenuator, the translating attenuator, and the double wedge attenuator. The authors pose and solve the optimization problems of minimizing the mean and peak variance subject to a fixed dose limit. For a perfect attenuator and mean variance minimization, this problem can be solved in simple, closed form. For other attenuator designs, the problem can be decomposed into separate problems for each view to greatly reduce the computational complexity. Peak variance minimization can be approximately solved using iterated, weighted mean variance (WMV) minimization. Also, the authors develop heuristics for the perfect and piecewise-linear attenuators which do not require a priori knowledge of the patient anatomy. The authors compare these control algorithms on different types of dynamic attenuators using simulated raw data from forward projected DICOM files of a thorax and an abdomen. 
The translating and double wedge attenuators reduce dose by an average of 30% relative to current techniques (bowtie filter with tube current modulation) without increasing peak variance. The 15-element piecewise-linear dynamic attenuator reduces dose by an average of 42%, and the perfect attenuator reduces dose by an average of 50%. Improvements in peak variance are several times larger than improvements in mean variance. Heuristic control eliminates the need for a prescan. For the piecewise-linear attenuator, the cost of heuristic control is an increase in dose of 9%. The proposed iterated WMV minimization produces results that are within a few percent of the true solution. Dynamic attenuators show potential for significant dose reduction. A wide class of dynamic attenuators can be accurately controlled using the described methods.
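The "simple, closed form" solution for the perfect attenuator can be sketched under a deliberately simplified dose model: dose taken proportional to total incident fluence, and per-ray variance proportional to exp(L_i)/n_i for line integral L_i and incident fluence n_i. Minimizing mean variance subject to fixed total fluence then gives n_i ∝ exp(L_i/2) by a Lagrange (or Cauchy-Schwarz) argument. The line integrals and dose value below are hypothetical; the real paper's dose model is more detailed.

```python
import numpy as np

def optimal_fluence(L, total_dose):
    """Mean-variance-minimizing incident fluence per ray for an idealized
    'perfect' attenuator: minimize sum(exp(L_i)/n_i) s.t. sum(n_i) = D,
    which yields n_i proportional to exp(L_i / 2)."""
    w = np.exp(np.asarray(L, float) / 2.0)
    return total_dose * w / w.sum()

def mean_variance(L, n):
    # Per-ray reconstruction variance model: exp(line integral) / fluence.
    return float(np.mean(np.exp(L) / n))

L = np.array([0.5, 1.0, 2.0, 3.0, 4.0])   # hypothetical attenuation line integrals
D = 100.0                                 # fixed fluence (dose proxy) budget
n_opt  = optimal_fluence(L, D)
n_unif = np.full_like(L, D / len(L))      # flat-field comparison

print(mean_variance(L, n_opt), mean_variance(L, n_unif))
```

Thicker ray paths receive more fluence under the optimal allocation, and the resulting mean variance is never worse than the flat-field allocation at the same budget.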
Sample and population exponents of generalized Taylor's law.
Giometto, Andrea; Formentin, Marco; Rinaldo, Andrea; Cohen, Joel E; Maritan, Amos
2015-06-23
Taylor's law (TL) states that the variance V of a nonnegative random variable is a power function of its mean M; i.e., V = aM^b. TL has been verified extensively in ecology, where it applies to population abundance, as well as in physics and other natural sciences. Its ubiquitous empirical verification suggests a context-independent mechanism. Sample exponents b measured empirically via the scaling of sample mean and variance typically cluster around the value b = 2. Some theoretical models of population growth, however, predict a broad range of values for the population exponent b pertaining to the mean and variance of population density, depending on details of the growth process. Is the widely reported sample exponent b ≃ 2 the result of ecological processes or could it be a statistical artifact? Here, we apply large deviations theory and finite-sample arguments to show exactly that in a broad class of growth models the sample exponent is b ≃ 2 regardless of the underlying population exponent. We derive a generalized TL in terms of sample and population exponents b_jk for the scaling of the kth vs. the jth cumulants. The sample exponent b_jk depends predictably on the number of samples, and for finite samples we obtain b_jk ≃ k/j asymptotically in time, a prediction that we verify in two empirical examples. Thus, the sample exponent b ≃ 2 may indeed be a statistical artifact and not dependent on population dynamics under conditions that we specify exactly. Given the broad class of models investigated, our results apply to many fields where TL is used although inadequately understood.
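Measuring a sample exponent b by regressing log sample variance on log sample mean can be illustrated in one simple regime: sites whose abundances share a constant coefficient of variation, for which V = (CV^2)·M^2 and the fitted slope lands near b = 2. This is only an illustration of the fitting procedure, not the paper's large-deviations mechanism for growth models.

```python
import numpy as np

rng = np.random.default_rng(42)

# Sites span several orders of magnitude in mean abundance but share a common
# coefficient of variation (CV), so variance scales as the square of the mean.
means = 10.0 ** np.arange(1, 7)   # 10, 100, ..., 1e6
cv = 0.5
M, V = [], []
for m in means:
    # Lognormal with the target mean m and coefficient of variation cv.
    sigma2 = np.log(1.0 + cv**2)
    mu = np.log(m) - sigma2 / 2.0
    x = rng.lognormal(mu, np.sqrt(sigma2), size=500)
    M.append(x.mean())
    V.append(x.var(ddof=1))

# Taylor's law fit: log V = log a + b * log M; the slope is the sample exponent.
b, log_a = np.polyfit(np.log(M), np.log(V), 1)
print(b)
```

With the fixed seed the fitted slope comes out very close to 2, the value the paper argues is to be expected from finite samples across a broad class of growth models.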
Increasing precision of turbidity-based suspended sediment concentration and load estimates.
Jastram, John D; Zipper, Carl E; Zelazny, Lucian W; Hyer, Kenneth E
2010-01-01
Turbidity is an effective tool for estimating and monitoring suspended sediments in aquatic systems. Turbidity can be measured in situ remotely and at fine temporal scales as a surrogate for suspended sediment concentration (SSC), providing opportunity for a more complete record of SSC than is possible with physical sampling approaches. However, there is variability in turbidity-based SSC estimates and in sediment loadings calculated from those estimates. This study investigated the potential to improve turbidity-based SSC, and by extension the resulting sediment loading estimates, by incorporating hydrologic variables that can be monitored remotely and continuously (typically 15-min intervals) into the SSC estimation procedure. On the Roanoke River in southwestern Virginia, hydrologic stage, turbidity, and other water-quality parameters were monitored with in situ instrumentation; suspended sediments were sampled manually during elevated turbidity events; samples were analyzed for SSC and physical properties including particle-size distribution and organic C content; and rainfall was quantified by geologic source area. The study identified physical properties of the suspended-sediment samples that contribute to SSC estimation variance and hydrologic variables that explained variability of those physical properties. Results indicated that the inclusion of any of the measured physical properties in turbidity-based SSC estimation models reduces unexplained variance. Further, the use of hydrologic variables to represent these physical properties, along with turbidity, resulted in a model, relying solely on data collected remotely and continuously, that estimated SSC with less variance than a conventional turbidity-based univariate model, allowing a more precise estimate of sediment loading. Modeling results are consistent with known mechanisms governing sediment transport in hydrologic systems.
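The gain from adding a continuously monitored hydrologic variable to a univariate turbidity model can be sketched with synthetic data. The coefficients and the choice of stage as the auxiliary variable are illustrative assumptions; the point is only that a nested multivariate least-squares fit leaves less unexplained variance than turbidity alone when SSC genuinely depends on both.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 300
turbidity = rng.uniform(5, 200, n)     # surrogate sensor reading (NTU)
stage     = rng.uniform(0.5, 3.0, n)   # hypothetical hydrologic stage (m)
# Synthetic SSC (mg/L): driven by turbidity plus a stage-linked component
# (standing in for particle-size effects), plus noise. Coefficients are made up.
ssc = 1.8 * turbidity + 25.0 * stage + rng.normal(0, 10, n)

def fit_rss(X, y):
    """Ordinary least squares with intercept; returns residual sum of squares."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    r = y - X1 @ beta
    return float(r @ r)

rss_uni   = fit_rss(turbidity.reshape(-1, 1), ssc)               # turbidity only
rss_multi = fit_rss(np.column_stack([turbidity, stage]), ssc)    # + stage
print(rss_uni, rss_multi)
```

The multivariate model's residual variance is smaller, which is the mechanism by which the study's remotely monitored hydrologic variables sharpen SSC, and hence load, estimates.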
NASA Astrophysics Data System (ADS)
Larry, Triaka A.
The need for more diversity in STEM-related careers and college majors is urgent. Self-efficacy and student-teacher relationships are factors that have been linked to influencing students’ pursuit of subject-specific careers and academic achievement. The impact of self-efficacy and student perceptions of teacher interpersonal behaviors on student achievement has been extensively researched in the areas of Mathematics and English; however, most studies using science achievement as a criterion variable were conducted using non-diverse, White upper-middle-class to affluent participants. In order to determine the strength of relationships between perceived science self-efficacy and student perceptions of teacher interpersonal behaviors as factors that influence science achievement (science GPA), the Science Self-Efficacy Questionnaire (SSEQ) and Questionnaire on Teacher Interactions (QTI) were administered to twelfth-grade students enrolled at a highly diverse urban Title I high school, while controlling for demographics, defined as gender, ethnicity, and minority status. Using a hierarchical multiple linear regression analysis, results demonstrated that the predictor variables (i.e., gender, ethnicity, minority status, science self-efficacy, and teacher interpersonal behaviors) accounted for 20.8% of the variance in science GPAs. Science self-efficacy made the strongest unique contribution to explaining science GPA, while minority status and gender were found to be statistically significant contributors to the full model as well. Ethnicity and teacher interpersonal behaviors did not make a statistically significant contribution to the variance in science GPA, and accounted for ≤ 1% of the variance. Implications and recommendations for future research are subsequently given.
2011-01-01
Background Biologists studying adaptation under sexual selection have spent considerable effort assessing the relative importance of two groups of models, which hinge on the idea that females gain indirect benefits via mate discrimination. These are the good genes and genetic compatibility models. Quantitative genetic studies have advanced our understanding of these models by enabling assessment of whether the genetic architectures underlying focal phenotypes are congruent with either model. In this context, good genes models require underlying additive genetic variance, while compatibility models require non-additive variance. Currently, we know very little about how the expression of genotypes comprised of distinct parental haplotypes, or how levels and types of genetic variance underlying key phenotypes, change across environments. Such knowledge is important, however, because genotype-environment interactions can have major implications on the potential for evolutionary responses to selection. Results We used a full diallel breeding design to screen for complex genotype-environment interactions, and genetic architectures underlying key morphological traits, across two thermal environments (the lab standard 27°C, and the cooler 23°C) in the Australian field cricket, Teleogryllus oceanicus. In males, complex three-way interactions between sire and dam parental haplotypes and the rearing environment accounted for up to 23 per cent of the scaled phenotypic variance in the traits we measured (body mass, pronotum width and testes mass), and each trait harboured significant additive genetic variance in the standard temperature (27°C) only. In females, these three-way interactions were less important, with interactions between the paternal haplotype and rearing environment accounting for about ten per cent of the phenotypic variance (in body mass, pronotum width and ovary mass). 
Of the female traits measured, only ovary mass for crickets reared at the cooler temperature (23°C), exhibited significant levels of additive genetic variance. Conclusions Our results show that the genetics underlying phenotypic expression can be complex, context-dependent and different in each of the sexes. We discuss the implications of these results, particularly in terms of the evolutionary processes that hinge on good and compatible genes models. PMID:21791118
Balancing Accession and Retention (Navy Comprehensive Compensation Study)
1982-09-01
[OCR-garbled excerpt of reenlistment-flow formulas.] The recoverable content concerns calculations by rating and quality type: the number of sailors of each rating and quality type who appear in length-of-service (LOS) year 5 is computed from the count of eligibles (ELIG), the probabilities of reenlistment (REUP), extension (EXTE), and short extension of less than 1 year (SEXTE), and bonus multipliers (BMULT), where SEXTE_ij is the probability that an eligible individual of quality type i in rating j chooses to extend his enlistment for less than 1 year.
Sagittal back motion of college football athletes and nonathletes.
Strong, L R; Titlow, L
1997-08-01
The study was designed as an ex post facto study using volunteers. To compare sagittal back motion of male college athletes with that of nonathletes and to compare data from both groups with normative data. Few studies have evaluated athletic demands on the spine. Much of the information on athletic demands comes from electromyographic studies, flexibility comparisons, and lift task studies. Although these studies provide a basis for back testing and evaluation, they do not present direct evidence of athletic low back performance. Fifteen male college football athletes and 15 male college nonathletes volunteered for testing using the IsoStation B-200 BSCAN 2.0 protocol (Isotechnologies, Inc., Hillsborough, NC). Measures were recorded for range of motion, isometric flexion and extension, and moderate and high dynamic flexion and extension. Data were analyzed using multivariate analysis of variance. The results of Hotelling's multivariate test were significant. Univariate follow-up analysis showed that athletes had significantly better isometric flexion, isometric extension, moderate dynamic flexion, high dynamic flexion, and high dynamic extension. Athletic data were compared with the BSCAN population data at the 50th and 80th percentile. Athletes were significantly better (P < 0.007) for all variables at the 50th percentile and for all dynamic variables at the 80th percentile. Within the limitations of the study, college football athletes had better sagittal back motion strength and speed as tested with the B-200 than nonathletes. Population data for the B-200 were representative for nonathletes but nonrepresentative for football players.
The Relationship Between Maximum Isometric Strength and Ball Velocity in the Tennis Serve
Corbi, Francisco; Fuentes, Juan Pedro; Fernández-Fernández, Jaime
2016-01-01
Abstract The aims of this study were to analyze the relationship between maximum isometric strength levels in different upper and lower limb joints and serve velocity in competitive tennis players as well as to develop a prediction model based on this information. Twelve male competitive tennis players (mean ± SD; age: 17.2 ± 1.0 years; body height: 180.1 ± 6.2 cm; body mass: 71.9 ± 5.6 kg) were tested using maximum isometric strength levels (i.e., wrist, elbow and shoulder flexion and extension; leg and back extension; shoulder external and internal rotation). Serve velocity was measured using a radar gun. Results showed a strong positive relationship between serve velocity and shoulder internal rotation (r = 0.67; p < 0.05). Low to moderate correlations were also found between serve velocity and wrist, elbow and shoulder flexion – extension, leg and back extension and shoulder external rotation (r = 0.36 – 0.53; p = 0.377 – 0.054). Bivariate and multivariate models for predicting serve velocity were developed, with shoulder flexion and internal rotation explaining 55% of the variance in serve velocity (r = 0.74; p < 0.001). The maximum isometric strength level in shoulder internal rotation was strongly related to serve velocity, and a large part of the variability in serve velocity was explained by the maximum isometric strength levels in shoulder internal rotation and shoulder flexion. PMID:28149411
Strength, mobility and falling in women referred to a geriatric outpatient clinic.
Janssen, Hennie C J P; Samson, Monique M; Meeuwsen, Ingrid B A E; Duursma, Sijmen A; Verhaar, Harald J J
2004-04-01
Mobility impairment and falling have a multifactorial etiology in frail older people. Muscle weakness is one of the risk factors and is accessible to intervention. The aim of this study was to determine the most important contributors of mobility and indicators of fall occurrence in women referred to a geriatric outpatient clinic. Mobility was assessed using the Timed 'Get-Up-and-Go' test (TGUG) and the modified Coopertest (COOP). Falling was assessed retrospectively and isometric knee extension force was measured using fixed dynamometry. Habitual physical activity was quantified using a questionnaire for the elderly. Height, weight, medical conditions and current medication were recorded. Isometric knee extension strength and habitual physical activity, which consisted predominantly of household work, were independent variables of performance on TGUG and COOP and together explained 57% of the variance in TGUG (r=0.75, p<0.001), and 64% of that in COOP (r=0.80, p<0.001). Age, total number of medical conditions, and presence of cardiovascular disease were not significant in the model. Women in the lowest tertile of knee extension strength had a significantly higher probability of falling (0.75, 95% CI 0.56-0.91) compared with women in the highest tertile (0.27, 95% CI 0.14-0.50). Knee extension strength remains a strong determinant of mobility and fall occurrence in women referred to a geriatric outpatient clinic. Performing light to moderate household work remains independently associated with functional mobility.
Möldner, Meike; Unglaub, Frank; Hahn, Peter; Müller, Lars P; Bruckner, Thomas; Spies, Christian K
2015-02-01
To investigate functional and subjective outcome parameters after arthroscopic debridement of central articular disc lesions (Palmer type 2C) and to correlate these findings with ulna length. Fifty patients (15 men; 35 women; mean age, 47 y) with Palmer type 2C lesions underwent arthroscopic debridement. Nine of these patients (3 men; 6 women; mean static ulnar variance, 2.4 mm; SD, 0.5 mm) later underwent ulnar shortening osteotomy because of persistent pain and had a mean follow-up of 36 months. Mean follow-up was 38 months for patients with debridement only (mean static ulnar variance, 0.5 mm; SD, 1.2 mm). Examination parameters included range of motion, grip and pinch strengths, pain (visual analog scale), and functional outcome scores (Modified Mayo Wrist score [MMWS] and Disabilities of the Arm, Shoulder, and Hand [DASH] questionnaire). Patients who had debridement only reached a DASH questionnaire score of 18 and an MMWS of 89 with significant pain reduction from 7.6 to 2.0 on the visual analog scale. Patients with additional ulnar shortening reached a DASH questionnaire score of 18 and an MMWS of 88, with significant pain reduction from 7.4 to 2.5. Neither surgical treatment compromised grip and pinch strength in comparison with the contralateral side. We identified 1.8 mm or more of positive ulnar variance as an indication for early ulnar shortening in the case of persistent ulnar-sided wrist pain after arthroscopic debridement. Arthroscopic debridement was a sufficient and reliable treatment option for the majority of patients with Palmer type 2C lesions. Because reliable predictors of the necessity for ulnar shortening are lacking, we recommend arthroscopic debridement as a first-line treatment for all triangular fibrocartilage 2C lesions, and, in the presence of persistent ulnar-sided wrist pain, ulnar shortening osteotomy after an interval of 6 months. Ulnar shortening proved to be sufficient and safe for these patients. 
Patients with persistent ulnar-sided wrist pain after debridement who had preoperative static positive ulnar variance of 1.8 mm or more may be treated by ulnar shortening earlier in order to spare them prolonged symptoms. Therapeutic IV. Copyright © 2015 American Society for Surgery of the Hand. Published by Elsevier Inc. All rights reserved.
Federal Register 2010, 2011, 2012, 2013, 2014
2013-07-26
... forms of information technology; and ways to further reduce the information collection burden on small... of 1996--CC Docket No. 96-98. Form Number: N/A. Type of Review: Extension of a currently approved.... 95-116. Form Number: N/A. Type of Review: Extension of a currently approved collection. Respondents...
Effects of emotional valence and arousal on the voice perception network
Kotz, Sonja A.; Belin, Pascal
2017-01-01
Abstract Several theories conceptualise emotions along two main dimensions: valence (a continuum from negative to positive) and arousal (a continuum that varies from low to high). These dimensions are typically treated as independent in many neuroimaging experiments, yet recent behavioural findings suggest that they are actually interdependent. This result has impact on neuroimaging design, analysis and theoretical development. We were interested in determining the extent of this interdependence both behaviourally and neuroanatomically, as well as teasing apart any activation that is specific to each dimension. While we found extensive overlap in activation for each dimension in traditional emotion areas (bilateral insulae, orbitofrontal cortex, amygdalae), we also found activation specific to each dimension with characteristic relationships between modulations of these dimensions and BOLD signal change. Increases in arousal ratings were related to increased activations predominantly in voice-sensitive cortices after variance explained by valence had been removed. In contrast, emotions of extreme valence were related to increased activations in bilateral voice-sensitive cortices, hippocampi, anterior and midcingulum and medial orbito- and superior frontal regions after variance explained by arousal had been accounted for. Our results therefore do not support a complete segregation of brain structures underpinning the processing of affective dimensions. PMID:28449127
Yura, Harold T; Fields, Renny A
2011-06-20
Level crossing statistics is applied to the complex problem of atmospheric turbulence-induced beam wander for laser propagation from ground to space. A comprehensive estimate of the single-axis wander angle temporal autocorrelation function and the corresponding power spectrum is used to develop, for the first time to our knowledge, analytic expressions for the mean angular level crossing rate and the mean duration of such crossings. These results are based on an extension and generalization of a previous seminal analysis of the beam wander variance by Klyatskin and Kon. In the geometrical optics limit, we obtain an expression for the beam wander variance that is valid for both an arbitrarily shaped initial beam profile and transmitting aperture. It is shown that beam wander can disrupt bidirectional ground-to-space laser communication systems whose small apertures do not require adaptive optics to deliver uniform beams at their intended target receivers in space. The magnitude and rate of beam wander are estimated for turbulence profiles enveloping some practical laser communication deployment options, suggesting what level of beam wander effects must be mitigated to demonstrate effective bidirectional laser communication systems.
Landsman, V; Lou, W Y W; Graubard, B I
2015-05-20
We present a two-step approach for estimating hazard rates and, consequently, survival probabilities, by levels of general categorical exposure. The resulting estimator utilizes three sources of data: vital statistics data and census data are used at the first step to estimate the overall hazard rate for a given combination of gender and age group, and cohort data, constructed from a nationally representative complex survey with linked mortality records, are used at the second step to divide the overall hazard rate by exposure levels. We present an explicit expression for the resulting estimator and consider two methods for variance estimation that account for complex multistage sample design: (1) the leaving-one-out jackknife method, and (2) the Taylor linearization method, which provides an analytic formula for the variance estimator. The methods are illustrated with smoking and all-cause mortality data from the US National Health Interview Survey Linked Mortality Files, and the proposed estimator is compared with a previously studied crude hazard rate estimator that uses survey data only. The advantages of a two-step approach and possible extensions of the proposed estimator are discussed. Copyright © 2015 John Wiley & Sons, Ltd.
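The leaving-one-out jackknife named as method (1) above can be sketched generically. This is an illustrative sketch only, not the authors' estimator: the data and the use of the sample mean as the statistic are assumptions for demonstration.

```python
from statistics import mean

def jackknife_variance(data, estimator):
    """Leave-one-out jackknife variance of an arbitrary scalar estimator.

    Recomputes the estimate n times, each time dropping one observation,
    then applies the jackknife formula (n - 1)/n * sum((t_i - t_bar)^2).
    """
    n = len(data)
    loo = [estimator(data[:i] + data[i + 1:]) for i in range(n)]
    loo_mean = mean(loo)
    return (n - 1) / n * sum((t - loo_mean) ** 2 for t in loo)

# Hypothetical hazard-rate estimates; for the sample mean the jackknife
# variance coincides with the usual variance of the mean, s^2 / n.
rates = [0.10, 0.12, 0.09, 0.11, 0.13]
var_hat = jackknife_variance(rates, mean)
```

For more complex estimators (such as the two-step hazard estimator in the paper), the same resampling pattern applies, with whole primary sampling units left out rather than individual observations.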
Pluck or Luck: Does Trait Variation or Chance Drive Variation in Lifetime Reproductive Success?
Snyder, Robin E; Ellner, Stephen P
2018-04-01
While there has been extensive interest in how intraspecific trait variation affects ecological processes, outcomes are highly variable even when individuals are identical: some are lucky, while others are not. Trait variation is therefore important only if it adds substantially to the variability produced by luck. We ask when trait variation has a substantial effect on variability in lifetime reproductive success (LRS), using two approaches: (1) we partition the variation in LRS into contributions from luck and trait variation and (2) we ask what can be inferred about an individual's traits and with what certainty, given their observed LRS. In theoretical stage- and size-structured models and two empirical case studies, we find that luck usually dominates the variance of LRS. Even when individuals differ substantially in ways that affect expected LRS, unless the effects of luck are substantially reduced (e.g., low variability in reproductive life span or annual fecundity), most variance in lifetime outcomes is due to luck, implying that departures from "null" models omitting trait variation will be hard to detect. Luck also obscures the relationship between realized LRS and individual traits. While trait variation may influence the fate of populations, luck often governs the lives of individuals.
Tangen, C M; Koch, G G
1999-03-01
In the randomized clinical trial setting, controlling for covariates is expected to produce variance reduction for the treatment parameter estimate and to adjust for random imbalances of covariates between the treatment groups. However, for the logistic regression model, variance reduction is not obviously obtained. This can lead to concerns about the assumptions of the logistic model. We introduce a complementary nonparametric method for covariate adjustment. It provides results that are usually compatible with expectations for analysis of covariance. The only assumptions required are based on randomization and sampling arguments. The resulting treatment parameter is an (unconditional) population average log-odds ratio that has been adjusted for random imbalance of covariates. Data from a randomized clinical trial are used to compare results from the traditional maximum likelihood logistic method with those from the nonparametric logistic method. We examine treatment parameter estimates, corresponding standard errors, and significance levels in models with and without covariate adjustment. In addition, we discuss differences between unconditional population average treatment parameters and conditional subpopulation average treatment parameters. Additional features of the nonparametric method, including stratified (multicenter) and multivariate (multivisit) analyses, are illustrated. Extensions of this methodology to the proportional odds model are also made.
Krüger, Stephanie; Bagby, R Michael; Höffler, Jürgen; Bräunig, Peter
2003-01-01
Catatonia is a frequent psychomotor syndrome, which has received increasing recognition over the last decade. The assessment of the catatonic syndrome requires systematic rating scales that cover the complex spectrum of catatonic motor signs and behaviors. The Catatonia Rating Scale (CRS) is such an instrument, which has been validated and which has undergone extensive reliability testing. In the present study, to further validate the CRS, the items composing this scale were submitted to principal components factor extraction followed by a varimax rotation. An analysis of variance (ANOVA) was performed to assess group differences on the extracted factors in patients with schizophrenia, pure mania, mixed mania, and major depression (N=165). Four factors were extracted, which accounted for 71.5% of the variance. The factors corresponded to the clinical syndromes of (1) catatonic excitement, (2) abnormal involuntary movements/mannerisms, (3) disturbance of volition/catalepsy, and (4) catatonic inhibition. The ANOVA revealed that each of the groups showed a distinctive catatonic symptom pattern and that the overlap between diagnostic groups was minimal. We conclude that this four-factor symptom structure of catatonia challenges the current conceptualization, which proposes only two symptom subtypes.
Power Measurement Errors on a Utility Aircraft
NASA Technical Reports Server (NTRS)
Bousman, William G.
2002-01-01
Extensive flight test data obtained from two recent performance tests of a UH-60A aircraft are reviewed. A power difference is calculated from the power balance equation and is used to examine power measurement errors. It is shown that the baseline measurement errors are highly non-Gaussian in their frequency distribution and are therefore influenced by additional, unquantified variables. Linear regression is used to examine the influence of other variables and it is shown that a substantial portion of the variance depends upon measurements of atmospheric parameters. Correcting for temperature dependence, although reducing the variance in the measurement errors, still leaves unquantified effects. Examination of the power difference over individual test runs indicates significant errors from drift, although it is unclear how these may be corrected. In an idealized case, where the drift is correctable, it is shown that the power measurement errors are significantly reduced and the error distribution is Gaussian. A new flight test program is recommended that will quantify the thermal environment for all torque measurements on the UH-60. Subsequently, the torque measurement systems will be recalibrated based on the measured thermal environment and a new power measurement assessment performed.
The role of lexical variables in the visual recognition of Chinese characters: A megastudy analysis.
Sze, Wei Ping; Yap, Melvin J; Rickard Liow, Susan J
2015-01-01
Logographic Chinese orthography partially represents both phonology and semantics. By capturing the online processing of a large pool of Chinese characters, we were able to examine the relative salience of specific lexical variables when this nonalphabetic script is read. Using a sample of native mainland Chinese speakers (N = 35), lexical decision latencies for 1560 single characters were collated into a database, before the effects of a comprehensive range of variables were explored. Hierarchical regression analyses determined the unique item-level variance explained by orthographic (frequency, stroke count), semantic (age of learning, imageability, number of meanings), and phonological (consistency, phonological frequency) factors. Orthographic and semantic variables, respectively, accounted for more collective variance than the phonological variables. Significant main effects were further observed for the individual orthographic and semantic predictors. These results are consistent with the idea that skilled readers tend to rely on orthographic and semantic information when processing visually presented characters. This megastudy approach marks an important extension to existing work on Chinese character recognition, which hitherto has relied on factorial designs. Collectively, the findings reported here represent a useful set of empirical constraints for future computational models of character recognition.
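The hierarchical-regression logic described above, where a block of predictors is credited only with the variance it adds beyond an earlier block, can be illustrated with a small sketch. The variable names (`freq`, `strokes`, `latency`) and all data are hypothetical; the incremental R² of a second predictor is computed here as its squared semipartial correlation.

```python
from statistics import mean

def r2_simple(y, x):
    # R^2 of simple linear regression y ~ x (squared Pearson correlation)
    mx, my = mean(x), mean(y)
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy * sxy / (sxx * syy)

def residuals(y, x):
    # Part of y left over after removing its linear dependence on x
    mx, my = mean(x), mean(y)
    beta = sum((a - mx) * (b - my) for a, b in zip(x, y)) / \
           sum((a - mx) ** 2 for a in x)
    return [b - (my + beta * (a - mx)) for a, b in zip(x, y)]

# Toy data: latency ~ frequency (block 1), then add stroke count (block 2)
freq = [1.0, 2.0, 3.0, 4.0, 5.0]
strokes = [2.0, 1.0, 4.0, 3.0, 6.0]
latency = [5.1, 4.2, 6.8, 5.9, 8.0]

r2_block1 = r2_simple(latency, freq)
# Unique item-level variance of strokes = squared semipartial correlation:
# correlate latency with the part of strokes not explained by frequency.
delta_r2 = r2_simple(latency, residuals(strokes, freq))
```

Entering blocks in a different order changes which predictor absorbs the shared variance, which is why megastudies of this kind report the order of entry explicitly.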
Wen, Zaidao; Hou, Zaidao; Jiao, Licheng
2017-11-01
The discriminative dictionary learning (DDL) framework has been widely used in image classification; it aims to learn some class-specific feature vectors as well as a representative dictionary according to a set of labeled training samples. However, interclass similarities and intraclass variances among input samples and learned features will generally weaken the representability of the dictionary and the discrimination of the feature vectors, degrading classification performance. Therefore, how to explicitly represent them becomes an important issue. In this paper, we present a novel DDL framework with a two-level low-rank and group-sparse decomposition model. In the first level, we learn a class-shared and several class-specific dictionaries, where a low-rank and a group-sparse regularization are, respectively, imposed on the corresponding feature matrices. In the second level, the class-specific feature matrix is further decomposed into a low-rank and a sparse matrix so that intraclass variances can be separated to concentrate the corresponding feature vectors. Extensive experimental results demonstrate the effectiveness of our model. Compared with other state-of-the-art methods on several popular image databases, our model achieves competitive or better performance in terms of classification accuracy.
NASA Astrophysics Data System (ADS)
Tjiputra, Jerry F.; Polzin, Dierk; Winguth, Arne M. E.
2007-03-01
An adjoint method is applied to a three-dimensional global ocean biogeochemical cycle model to optimize the ecosystem parameters on the basis of SeaWiFS surface chlorophyll observation. We showed with identical twin experiments that the model-simulated chlorophyll concentration is sensitive to perturbation of phytoplankton and zooplankton exudation, herbivore egestion as fecal pellets, zooplankton grazing, and the assimilation efficiency parameters. The assimilation of SeaWiFS chlorophyll data significantly improved the prediction of chlorophyll concentration, especially in the high-latitude regions. Experiments that considered regional variations of parameters yielded a high seasonal variance of ecosystem parameters in the high latitudes, but a low variance in the tropical regions. These experiments indicate that the adjoint model is, despite the many uncertainties, generally capable of optimizing sensitive parameters and carbon fluxes in the euphotic zone. The best-fit regional parameters predict a global net primary production of 36 Pg C yr-1, which lies within the range suggested by Antoine et al. (1996). Additional constraints of nutrient data from the World Ocean Atlas showed further reduction in the model-data misfit and that assimilation with extensive data sets is necessary.
Jeran, S; Steinbrecher, A; Pischon, T
2016-08-01
Activity-related energy expenditure (AEE) might be an important factor in the etiology of chronic diseases. However, measurement of free-living AEE is usually not feasible in large-scale epidemiological studies but instead has traditionally been estimated based on self-reported physical activity. Recently, accelerometry has been proposed for objective assessment of physical activity, but it is unclear to what extent this method explains the variance in AEE. We conducted a systematic review searching the MEDLINE database (until 2014) on studies that estimated AEE based on accelerometry-assessed physical activity in adults under free-living conditions (using the doubly labeled water method). Extracted study characteristics were sample size, accelerometer (type (uniaxial, triaxial), metrics (for example, activity counts, steps, acceleration), recording period, body position, wear time), explained variance of AEE (R(2)) and number of additional predictors. The relation of univariate and multivariate R(2) with study characteristics was analyzed using nonparametric tests. Nineteen articles were identified. Examination of various accelerometers or subpopulations in one article was treated separately, resulting in 28 studies. Sample sizes ranged from 10 to 149. In most studies the accelerometer was triaxial, worn at the trunk, during waking hours and reported activity counts as output metric. Recording periods ranged from 5 to 15 days. The variance of AEE explained by accelerometer-assessed physical activity ranged from 4 to 80% (median crude R(2)=26%). Sample size was inversely related to the explained variance. Inclusion of 1 to 3 other predictors in addition to accelerometer output significantly increased the explained variance to a range of 12.5-86% (median total R(2)=41%). The increase did not depend on the number of added predictors. We conclude that there is large heterogeneity across studies in the explained variance of AEE when estimated based on accelerometry.
Thus, data on predicted AEE based on accelerometry-assessed physical activity need to be interpreted cautiously.
Biochemical Phenotypes to Discriminate Microbial Subpopulations and Improve Outbreak Detection
Galar, Alicia; Kulldorff, Martin; Rudnick, Wallis; O'Brien, Thomas F.; Stelling, John
2013-01-01
Background Clinical microbiology laboratories worldwide constitute an invaluable resource for monitoring emerging threats and the spread of antimicrobial resistance. We studied the growing number of biochemical tests routinely performed on clinical isolates to explore their value as epidemiological markers. Methodology/Principal Findings Microbiology laboratory results from January 2009 through December 2011 from a 793-bed hospital stored in WHONET were examined. Variables included patient location, collection date, organism, and 47 biochemical and 17 antimicrobial susceptibility test results reported by Vitek 2. To identify biochemical tests that were particularly valuable (stable with repeat testing, but good variability across the species) or problematic (inconsistent results with repeat testing), three types of variance analyses were performed on isolates of K. pneumoniae: descriptive analysis of discordant biochemical results in same-day isolates, an average within-patient variance index, and generalized linear mixed model variance component analysis. Results: 4,200 isolates of K. pneumoniae were identified from 2,485 patients, 32% of whom had multiple isolates. The first two variance analyses highlighted SUCT, TyrA, GlyA, and GGT as “nuisance” biochemicals for which discordant within-patient test results impacted a high proportion of patient results, while dTAG had relatively good within-patient stability with good heterogeneity across the species. Variance component analyses confirmed the relative stability of dTAG, and identified additional biochemicals such as PHOS with a large between patient to within patient variance ratio. A reduced subset of biochemicals improved the robustness of strain definition for carbapenem-resistant K. pneumoniae. Surveillance analyses suggest that the reduced biochemical profile could improve the timeliness and specificity of outbreak detection algorithms.
Conclusions The statistical approaches explored can improve the robust recognition of microbial subpopulations with routinely available biochemical test results, of value in the timely detection of outbreak clones and evolutionarily important genetic events. PMID:24391936
A Posteriori Correction of Forecast and Observation Error Variances
NASA Technical Reports Server (NTRS)
Rukhovets, Leonid
2005-01-01
Proposed method of total observation and forecast error variance correction is based on the assumption of a normal distribution of "observed-minus-forecast" residuals (O-F), where O is an observed value and F is usually a short-term model forecast. This assumption can be accepted for several types of observations (except humidity) which are not grossly in error. The degree of nearness to a normal distribution can be estimated by the skewness (lack of symmetry) a_3 = mu_3/sigma^3 and the kurtosis a_4 = mu_4/sigma^4 - 3, where mu_i is the i-th order central moment and sigma is the standard deviation. It is well known that for a normal distribution a_3 = a_4 = 0.
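The skewness and kurtosis expressions above can be computed directly from sample moments. A minimal sketch, with an illustrative (symmetric) set of O-F residuals rather than real data:

```python
from statistics import mean

def skewness_kurtosis(x):
    """Moment-based skewness a_3 = mu_3/sigma^3 and excess kurtosis
    a_4 = mu_4/sigma^4 - 3 (population moments, no bias correction)."""
    m = mean(x)
    n = len(x)
    mu2 = sum((v - m) ** 2 for v in x) / n
    mu3 = sum((v - m) ** 3 for v in x) / n
    mu4 = sum((v - m) ** 4 for v in x) / n
    return mu3 / mu2 ** 1.5, mu4 / mu2 ** 2 - 3

# Perfectly symmetric residuals: skewness is exactly 0; both statistics
# would be 0 for residuals drawn from a true normal distribution.
a3, a4 = skewness_kurtosis([-2.0, -1.0, 0.0, 1.0, 2.0])
```

In practice, both statistics computed from the O-F sample would be compared against zero to judge how safely the normality assumption holds for a given observation type.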
A VLBI variance-covariance analysis interactive computer program. M.S. Thesis
NASA Technical Reports Server (NTRS)
Bock, Y.
1980-01-01
An interactive computer program (in FORTRAN) for the variance-covariance analysis of VLBI experiments is presented for use in experiment planning, simulation studies and optimal design problems. The interactive mode is especially suited to these types of analyses, providing ease of operation as well as savings in time and cost. The geodetic parameters include baseline vector parameters and variations in polar motion and Earth rotation. A discussion of the theory on which the program is based provides an overview of the VLBI process, emphasizing the areas of interest to geodesy. Special emphasis is placed on the problem of determining correlations between simultaneous observations from a network of stations. A model suitable for covariance analyses is presented. Suggestions towards developing optimal observation schedules are included.
Waist Circumference Adjusted for Body Mass Index and Intra-Abdominal Fat Mass
Berentzen, Tina Landsvig; Ängquist, Lars; Kotronen, Anna; Borra, Ronald; Yki-Järvinen, Hannele; Iozzo, Patricia; Parkkola, Riitta; Nuutila, Pirjo; Ross, Robert; Allison, David B.; Heymsfield, Steven B.; Overvad, Kim; Sørensen, Thorkild I. A.; Jakobsen, Marianne Uhre
2012-01-01
Background The association between waist circumference (WC) and mortality is particularly strong and direct when adjusted for body mass index (BMI). One conceivable explanation for this association is that WC adjusted for BMI is a better predictor of the presumably most harmful intra-abdominal fat mass (IAFM) than WC alone. We studied the prediction of abdominal subcutaneous fat mass (ASFM) and IAFM by WC alone and by addition of BMI as an explanatory factor. Methodology/Principal Findings WC, BMI and magnetic resonance imaging data from 742 men and women who participated in clinical studies in Canada and Finland were pooled. Total adjusted squared multiple correlation coefficients (R2) of ASFM and IAFM were calculated from multiple linear regression models with WC and BMI as explanatory variables. Mean BMI and WC of the participants in the pooled sample were 30 kg/m2 and 102 cm, respectively. WC explained 29% of the variance in ASFM and 51% of the variance in IAFM. Addition of BMI to WC added 28% to the variance explained in ASFM, but only 1% to the variance explained in IAFM. Results in subgroups stratified by study center, sex, age, obesity level and type 2 diabetes status were not systematically different. Conclusion/Significance The prediction of IAFM by WC is not improved by addition of BMI. PMID:22384179
Martin, Bryn A; Yiallourou, Theresia I; Pahlavian, Soroush Heidari; Thyagaraj, Suraj; Bunck, Alexander C; Loth, Francis; Sheffer, Daniel B; Kröger, Jan Robert; Stergiopulos, Nikolaos
2016-05-01
For the first time, inter-operator dependence of MRI based computational fluid dynamics (CFD) modeling of cerebrospinal fluid (CSF) in the cervical spinal subarachnoid space (SSS) is evaluated. In vivo MRI flow measurements and anatomy MRI images were obtained at the cervico-medullary junction of a healthy subject and a Chiari I malformation patient. 3D anatomies of the SSS were reconstructed by manual segmentation by four independent operators for both cases. CFD results were compared at nine axial locations along the SSS in terms of hydrodynamic and geometric parameters. Intraclass correlation (ICC) assessed the inter-operator agreement for each parameter over the axial locations and coefficient of variance (CV) compared the percentage of variance for each parameter between the operators. Greater operator dependence was found for the patient (0.19 < ICC < 0.99) near the craniovertebral junction compared to the healthy subject (ICC > 0.78). For the healthy subject, hydraulic diameter and Womersley number had the least variance (CV = ~2%). For the patient, peak diastolic velocity and Reynolds number had the smallest variance (CV = ~3%). These results show a high degree of inter-operator reliability for MRI-based CFD simulations of CSF flow in the cervical spine for healthy subjects and a lower degree of reliability for patients with Type I Chiari malformation.
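The coefficient of variance (CV) used above to compare operators is the sample standard deviation expressed as a percentage of the mean. A minimal sketch; the per-operator values below are hypothetical, not the study's measurements:

```python
from statistics import mean, stdev

def cv_percent(values):
    # Coefficient of variance: sample SD as a percentage of the mean
    return 100.0 * stdev(values) / mean(values)

# Hypothetical hydraulic-diameter estimates (mm) from four operators
cv = cv_percent([7.9, 8.1, 8.0, 8.0])
```

A CV of a few percent, as reported for hydraulic diameter and Womersley number, indicates that the four independently segmented geometries produced nearly identical values of that parameter.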
Federal Register 2010, 2011, 2012, 2013, 2014
2012-09-26
... Collection for the Workforce Investment Act (WIA) Management Information and Reporting System; Extension With... changes that are being requested in the extension with revisions to the WIA Management Information and.... III. Current Actions Type of Review: Extension with revisions. Title: WIA Management Information and...
Extensions to the Speech Disorders Classification System (SDCS)
ERIC Educational Resources Information Center
Shriberg, Lawrence D.; Fourakis, Marios; Hall, Sheryl D.; Karlsson, Heather B.; Lohmeier, Heather L.; McSweeny, Jane L.; Potter, Nancy L.; Scheer-Cohen, Alison R.; Strand, Edythe A.; Tilkens, Christie M.; Wilson, David L.
2010-01-01
This report describes three extensions to a classification system for paediatric speech sound disorders termed the Speech Disorders Classification System (SDCS). Part I describes a classification extension to the SDCS to differentiate motor speech disorders from speech delay and to differentiate among three sub-types of motor speech disorders.…
Re-estimating sample size in cluster randomised trials with active recruitment within clusters.
van Schie, S; Moerbeek, M
2014-08-30
Often only a limited number of clusters can be obtained in cluster randomised trials, although many potential participants can be recruited within each cluster. Thus, active recruitment is feasible within the clusters. To obtain an efficient sample size in a cluster randomised trial, the cluster level and individual level variance should be known before the study starts, but this is often not the case. We suggest using an internal pilot study design to address this problem of unknown variances. A pilot can be useful to re-estimate the variances and re-calculate the sample size during the trial. Using simulated data, it is shown that an initially low or high power can be adjusted using an internal pilot with the type I error rate remaining within an acceptable range. The intracluster correlation coefficient can be re-estimated with more precision, which has a positive effect on the sample size. We conclude that an internal pilot study design may be used if active recruitment is feasible within a limited number of clusters. Copyright © 2014 John Wiley & Sons, Ltd.
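The sample-size adjustment driven by a re-estimated intracluster correlation coefficient can be sketched with the standard design-effect inflation of the two-sample formula. This is an illustrative sketch only; the effect size, variance, ICC values and cluster size below are assumptions, not figures from the study:

```python
from math import ceil

def individuals_per_arm(delta, sigma, icc, cluster_size,
                        z_alpha=1.96, z_beta=0.84):
    """Per-arm sample size for a cluster randomised trial: the usual
    two-sample normal-approximation formula, inflated by the design
    effect 1 + (m - 1) * icc for clusters of size m."""
    n_simple = 2 * ((z_alpha + z_beta) * sigma / delta) ** 2
    deff = 1 + (cluster_size - 1) * icc
    return ceil(n_simple * deff)

# Trial planned assuming icc = 0.01; the internal pilot re-estimates
# icc = 0.05, so the required sample size must grow mid-trial.
n_planned = individuals_per_arm(delta=0.5, sigma=1.0, icc=0.01, cluster_size=20)
n_revised = individuals_per_arm(delta=0.5, sigma=1.0, icc=0.05, cluster_size=20)
```

Because the cluster count is fixed in the design considered here, the revised total is reached by actively recruiting more individuals within the existing clusters rather than by adding clusters.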
Xiao, Yongling; Abrahamowicz, Michal
2010-03-30
We propose two bootstrap-based methods to correct the standard errors (SEs) from Cox's model for within-cluster correlation of right-censored event times. The cluster-bootstrap method resamples, with replacement, only the clusters, whereas the two-step bootstrap method resamples (i) the clusters, and (ii) individuals within each selected cluster, with replacement. In simulations, we evaluate both methods and compare them with the existing robust variance estimator and the shared gamma frailty model, which are available in statistical software packages. We simulate clustered event time data, with latent cluster-level random effects, which are ignored in the conventional Cox's model. For cluster-level covariates, both proposed bootstrap methods yield accurate SEs, valid type I error rates, and acceptable coverage rates, regardless of the true random-effects distribution, and avoid the serious variance underestimation of conventional Cox-based standard errors. However, the two-step bootstrap method over-estimates the variance for individual-level covariates. We also apply the proposed bootstrap methods to obtain confidence bands around flexible estimates of time-dependent effects in a real-life analysis of cluster event times.
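The cluster-bootstrap scheme described above, resampling whole clusters with replacement, can be sketched generically. This sketch replaces the Cox model coefficient with the sample mean as the statistic and uses toy data; both are assumptions for illustration only:

```python
import random
from statistics import mean, stdev

def cluster_bootstrap_se(clusters, statistic, n_boot=2000, seed=1):
    """SE of a statistic under the cluster bootstrap: draw whole
    clusters with replacement, pool their observations, recompute the
    statistic, and take the SD over bootstrap replicates."""
    rng = random.Random(seed)
    reps = []
    for _ in range(n_boot):
        resampled = [rng.choice(clusters) for _ in clusters]
        pooled = [v for cluster in resampled for v in cluster]
        reps.append(statistic(pooled))
    return stdev(reps)

# Toy clustered data with strong cluster-level effects: observations
# within a cluster are far more alike than observations across clusters.
clusters = [[1.0, 1.2, 0.9], [3.0, 3.1, 2.8], [2.0, 2.2, 1.9]]
se = cluster_bootstrap_se(clusters, mean)
```

Resampling at the cluster level preserves the within-cluster correlation inside each replicate, which is exactly what a naive observation-level bootstrap (like the conventional Cox SEs) fails to do.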
Wang, Yunyun; Liu, Ye; Deng, Xinli; Cong, Yulong; Jiang, Xingyu
2016-12-15
Although conventional enzyme-linked immunosorbent assays (ELISA) and related assays have been widely applied for the diagnosis of diseases, many of them suffer from large error variance for monitoring the concentration of targets over time, and insufficient limit of detection (LOD) for assaying dilute targets. We herein report a readout mode of ELISA based on the binding between peptidic β-sheet structure and Congo Red. The formation of peptidic β-sheet structure is triggered by alkaline phosphatase (ALP). For the detection of P-Selectin, which is a crucial indicator for evaluating thrombus diseases in the clinic, the 'β-sheet and Congo Red' mode significantly decreases both the error variance and the LOD (from 9.7 ng/ml to 1.1 ng/ml) of detection, compared with commercial ELISA (an existing gold-standard method for detecting P-Selectin in the clinic). Considering the wide range of ALP-based antibodies for immunoassays, this novel method could be applicable to the analysis of many types of targets. Copyright © 2016 Elsevier B.V. All rights reserved.
Fuchs, Lynn S; Geary, David C; Compton, Donald L; Fuchs, Douglas; Hamlett, Carol L; Seethaler, Pamela M; Bryant, Joan D; Schatschneider, Christopher
2010-11-01
The purpose of this study was to examine the interplay between basic numerical cognition and domain-general abilities (such as working memory) in explaining school mathematics learning. First graders (N = 280; mean age = 5.77 years) were assessed on 2 types of basic numerical cognition, 8 domain-general abilities, procedural calculations, and word problems in fall and then reassessed on procedural calculations and word problems in spring. Development was indexed by latent change scores, and the interplay between numerical and domain-general abilities was analyzed by multiple regression. Results suggest that the development of different types of formal school mathematics depends on different constellations of numerical versus general cognitive abilities. When controlling for 8 domain-general abilities, both aspects of basic numerical cognition were uniquely predictive of procedural calculations and word problems development. Yet, for procedural calculations development, the additional amount of variance explained by the set of domain-general abilities was not significant, and only counting span was uniquely predictive. By contrast, for word problems development, the set of domain-general abilities did provide additional explanatory value, accounting for about the same amount of variance as the basic numerical cognition variables. Language, attentive behavior, nonverbal problem solving, and listening span were uniquely predictive.
McDermott, Máirtín S; Sharma, Rajeev
2017-12-01
The methods employed to measure behaviour in research testing the theories of reasoned action/planned behaviour (TRA/TPB) within the context of health behaviours have the potential to significantly bias findings. One bias yet to be examined in that literature is that due to common method variance (CMV). CMV introduces a variance in scores attributable to the method used to measure a construct, rather than the construct it represents. The primary aim of this study was to evaluate the impact of method bias on the associations of health behaviours with TRA/TPB variables. Data were sourced from four meta-analyses (177 studies). The method used to measure behaviour for each effect size was coded for susceptibility to bias. The moderating impact of method type was assessed using meta-regression. Method type significantly moderated the associations of intentions, attitudes and social norms with behaviour, but not that between perceived behavioural control and behaviour. The magnitude of the moderating effect of method type appeared consistent between cross-sectional and prospective studies, but varied across behaviours. The current findings strongly suggest that method bias significantly inflates associations in TRA/TPB research, and poses a potentially serious validity threat to the cumulative findings reported in that field.
NASA Astrophysics Data System (ADS)
De Linage, C.; Famiglietti, J. S.; Randerson, J. T.
2013-12-01
Floods and droughts frequently affect the Amazon River basin, impacting the transportation, river navigation, agriculture, economy and the carbon balance and biodiversity of several South American countries. The present study aims to find the main variables controlling the natural interannual variability of terrestrial water storage in the Amazon region and to propose a modeling framework for flood and drought forecasting. We propose three simple empirical models using a linear combination of lagged spatial averages of central Pacific (Niño 4 index) and tropical North Atlantic (TNAI index) sea surface temperatures (SST) to predict a decade-long record of 3°, monthly terrestrial water storage anomalies (TWSA) observed by the Gravity Recovery And Climate Experiment (GRACE) mission. In addition to a SST forcing term, the models included a relaxation term to simulate the memory of water storage anomalies in response to external variability in forcing. Model parameters were spatially-variable and individually optimized for each 3° grid cell. We also investigated the evolution of the predictive capability of our models with increasing minimum lead times for TWSA forecasts. TNAI was the primary external forcing for the central and western regions of the southern Amazon (35% of variance explained with a 3-month forecast), whereas Niño 4 was dominant in the northeastern part of the basin (61% of variance explained with a 3-month forecast). Forcing the model with a combination of the two indices improved the fit significantly (p<0.05) for at least 64% of the grid cells, compared to models forced solely with Niño 4 or TNAI. The combined model was able to explain 43% of the variance in the Amazon basin as a whole with a 3-month lead time. 
While 66% of the observed variance was explained in the northeastern Amazon, only 39% of the variance was captured by the combined model in the central and western regions, suggesting that other, more local, forcing sources were important in these regions. The predictive capability of the combined model was monotonically degraded with increasing lead times. Degradation was smaller in the northeastern Amazon (where 49% of the variance was explained using a 8-month lead time versus 69% for a 1 month lead time) compared to the western and central regions of southern Amazon (where 22% of the variance was explained at 8 months versus 43% at 1 month). Our model may provide early warning information about flooding in the northeastern region of the Amazon basin, where floodplain areas are extensive and the sensitivity of floods to external SST forcing was shown to be high. This work also strengthens our understanding of the mechanisms regulating interannual variability in Amazon fires, as TWSA deficits may subsequently lead to atmospheric water vapor deficits and reduced cloudiness via water-limited evapotranspiration. Finally, this work helps to bridge the gap between the current GRACE mission and the follow-on gravity mission.
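The empirical model described above (lagged SST forcing plus a one-month memory/relaxation term, fit independently per grid cell) can be sketched as a least-squares regression. The data below are synthetic and the coefficients, lags, and series lengths are illustrative assumptions, not values from the study:

```python
import numpy as np

def fit_twsa_model(twsa, nino4, tnai, lag=3):
    """Least-squares fit of a lagged-SST + relaxation model:
    TWSA(t) = a*Nino4(t-lag) + b*TNAI(t-lag) + c*TWSA(t-1) + d."""
    y = twsa[lag:]
    X = np.column_stack([
        nino4[:-lag],        # SST forcing, lagged by `lag` months
        tnai[:-lag],
        twsa[lag - 1:-1],    # 1-month memory (relaxation) term
        np.ones(len(y)),
    ])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ coef
    r2 = 1 - resid.var() / y.var()
    return coef, r2

# Synthetic 120-month demo: TWSA driven by a 3-month-lagged index plus memory
rng = np.random.default_rng(0)
nino4 = rng.standard_normal(120)
tnai = rng.standard_normal(120)
twsa = np.zeros(120)
for t in range(3, 120):
    twsa[t] = 0.6 * twsa[t - 1] + 0.8 * nino4[t - 3] + 0.1 * rng.standard_normal()
coef, r2 = fit_twsa_model(twsa, nino4, tnai, lag=3)
```

In a gridded application of this kind, the fit would be repeated for each 3° cell so that the coefficients and optimal lags are spatially variable.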
Jung, Hungu; Yamasaki, Masahiro
2016-12-08
Reduced lower extremity range of motion (ROM) and muscle strength are related to functional disability in older adults who cannot perform one or more activities of daily living (ADL) independently. The purpose of this study was to determine which factors of seven lower extremity ROMs and two muscle strengths play dominant roles in the physical performance of community-dwelling older women. Ninety-five community-dwelling older women (mean age ± SD, 70.7 ± 4.7 years; age range, 65-83 years) were enrolled in this study. Seven lower extremity ROMs (hip flexion, hip extension, knee flexion, internal and external hip rotation, ankle dorsiflexion, and ankle plantar flexion) and two muscle strengths (knee extension and flexion) were measured. Physical performance tests, including functional reach test (FRT), 5 m gait test, four square step test (FSST), timed up and go test (TUGT), and five times sit-to-stand test (FTSST) were performed. Stepwise regression models for each of the physical performance tests revealed that hip extension ROM and knee flexion strength were important explanatory variables for FRT, FSST, and FTSST. Furthermore, ankle plantar flexion ROM and knee extension strength were significant explanatory variables for the 5 m gait test and TUGT. However, ankle dorsiflexion ROM was a significant explanatory variable for FRT alone. The variance explained by the stepwise multiple regression models for the five physical performance tests ranged from 25% (FSST) to 47% (TUGT). Hip extension, ankle dorsiflexion, and ankle plantar flexion ROMs, as well as knee extension and flexion strengths may play primary roles in the physical performance of community-dwelling older women. 
Further studies should assess whether specific intervention programs targeting older women may achieve improvements in lower extremity ROM and muscle strength, and thereby play an important role in the prevention of dependence on daily activities and loss of physical function, particularly focusing on hip extension, ankle dorsiflexion, and ankle plantar flexion ROMs as well as knee extension and flexion strength.
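Forward stepwise selection of the kind used above can be sketched as a greedy search on the partial F statistic. The predictor names and data below are hypothetical stand-ins for the ROM/strength measures, and the entry threshold is a common default, not the study's setting:

```python
import numpy as np

def forward_select(X, y, names, f_enter=4.0):
    """Greedy forward stepwise selection: at each step add the predictor
    whose partial F statistic is largest, if it exceeds `f_enter`."""
    n = len(y)
    chosen, remaining = [], list(range(X.shape[1]))

    def rss(cols):
        A = np.column_stack([X[:, cols], np.ones(n)]) if cols else np.ones((n, 1))
        beta, *_ = np.linalg.lstsq(A, y, rcond=None)
        r = y - A @ beta
        return r @ r

    while remaining:
        rss0 = rss(chosen)
        scores = []
        for j in remaining:
            rss1 = rss(chosen + [j])
            df = n - len(chosen) - 2  # residual df after adding j (+ intercept)
            scores.append(((rss0 - rss1) / (rss1 / df), j))
        f_best, j_best = max(scores)
        if f_best < f_enter:
            break
        chosen.append(j_best)
        remaining.remove(j_best)
    r2 = 1 - rss(chosen) / ((y - y.mean()) @ (y - y.mean()))
    return [names[j] for j in chosen], r2

# Hypothetical predictors; the outcome loads on columns 0 and 2 only
names = ["hip_ext_rom", "knee_flex_strength", "ankle_pf_rom",
         "hip_flex_rom", "knee_ext_strength"]
rng = np.random.default_rng(1)
X = rng.standard_normal((95, 5))
y = 1.5 * X[:, 0] + 1.0 * X[:, 2] + 0.5 * rng.standard_normal(95)
sel, r2 = forward_select(X, y, names)
```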
Enhancing the prediction of self-handicapping.
Harris, R N; Snyder, C R; Higgins, R L; Schrag, J L
1986-12-01
Levels of test anxiety, Type A and Type B coronary-prone behavior, fear of failure, and covert self-esteem were studied as predictors of self-handicapping performance attributions for college women who were placed in either a high- (N = 49) or low- (N = 49) evaluative test or task situation. We hypothesized that test anxiety, Type A or Type B level, and their interaction would account for reliable variance in the prediction of self-handicapping. However, we also theorized that underlying high fear of failure and low covert self-esteem would explain the self-handicapping claims of test-anxious and Type A subjects. The results indicated that only high levels of test anxiety and high levels of covert self-esteem were related to women's self-handicapping attributions.
Stochastic model of cell rearrangements in convergent extension of ascidian notochord
NASA Astrophysics Data System (ADS)
Lubkin, Sharon; Backes, Tracy; Latterman, Russell; Small, Stephen
2007-03-01
We present a discrete stochastic cell based model of convergent extension of the ascidian notochord. Our work derives from research that clarifies the coupling of invagination and convergent extension in ascidian notochord morphogenesis (Odell and Munro, 2002). We have tested the roles of cell-cell adhesion, cell-extracellular matrix adhesion, random motion, and extension of individual cells, as well as the presence or absence of various tissue types, and determined which factors are necessary and/or sufficient for convergent extension.
Empirical methods in the evaluation of estimators
Gerald S. Walton; C.J. DeMars; C.J. DeMars
1973-01-01
The authors discuss the problem of selecting estimators of density and survival by making use of data on a forest-defoliating larva, the spruce budworm. Various estimators are compared. The results show that, among the estimators considered, ratio-type estimators are superior in terms of bias and variance. The methods used in making comparisons, particularly simulation...
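A minimal simulation comparing a ratio-type estimator with the plain sample mean, in the spirit of the comparisons above; the population model (counts roughly proportional to an auxiliary size variable) is invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical population: larval counts y roughly proportional to
# branch surface area x (the known auxiliary variable)
N = 2000
x = rng.gamma(4.0, 2.0, N)
y = 3.0 * x + rng.normal(0.0, 2.0, N)
Y_true = y.mean()  # population mean density we want to estimate

def simulate(n=30, reps=5000):
    """Repeatedly draw simple random samples and compute both estimators."""
    mean_est, ratio_est = [], []
    for _ in range(reps):
        idx = rng.choice(N, n, replace=False)
        mean_est.append(y[idx].mean())
        # Ratio estimator: (sample ratio) * (known population mean of x)
        ratio_est.append(y[idx].sum() / x[idx].sum() * x.mean())
    return np.array(mean_est), np.array(ratio_est)

m, r = simulate()
```

When y is nearly proportional to x, the ratio estimator's Monte Carlo variance is far below that of the sample mean, at the cost of a small bias; comparing the empirical bias and variance of the two arrays reproduces the kind of evidence the paper describes.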
Planned Comparisons as Better Alternatives to ANOVA Omnibus Tests.
ERIC Educational Resources Information Center
Benton, Roberta L.
Analyses of data are presented to illustrate the advantages of using a priori or planned comparisons rather than omnibus analysis of variance (ANOVA) tests followed by post hoc or posteriori testing. The two types of planned comparisons considered are planned orthogonal non-trend coding contrasts and orthogonal polynomial or trend contrast coding.…
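A planned contrast of the kind described (orthogonal polynomial, i.e. trend, contrasts across ordered groups) can be tested directly with a pooled-variance t statistic instead of an omnibus F. The group means and sizes below are synthetic:

```python
import numpy as np
from scipy import stats

# Orthogonal polynomial (trend) contrast weights for k = 4 ordered groups
linear = np.array([-3, -1, 1, 3])
quadratic = np.array([1, -1, -1, 1])

def contrast_test(groups, w):
    """t test of a single planned contrast, pooled-variance form."""
    means = np.array([g.mean() for g in groups])
    ns = np.array([len(g) for g in groups])
    df = sum(ns) - len(groups)
    ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)
    mse = ss_within / df
    psi = w @ means                      # estimated contrast value
    se = np.sqrt(mse * np.sum(w ** 2 / ns))
    t = psi / se
    p = 2 * stats.t.sf(abs(t), df)
    return t, p

rng = np.random.default_rng(7)
# Population means increase linearly across the four groups
groups = [rng.normal(mu, 1.0, 20) for mu in (0.0, 0.5, 1.0, 1.5)]
t_lin, p_lin = contrast_test(groups, linear)
t_quad, p_quad = contrast_test(groups, quadratic)
```

Because the trend is purely linear here, the planned linear contrast is highly significant while the quadratic contrast carries essentially no signal; each contrast spends one focused degree of freedom rather than the omnibus test's k - 1.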
Documentation for the 2003-04 Schools and Staffing Survey. NCES 2007-337
ERIC Educational Resources Information Center
Tourkin, Steven C.; Warner, Toni; Parmer, Randall; Cole, Cornette; Jackson, Betty; Zukerberg, Andrew; Cox, Shawna; Soderberg, Andrew
2007-01-01
This report serves as the survey documentation for the design and implementation of the 2003-04 Schools and Staffing Survey. Topics covered include the sample design, survey methodology, data collection procedures, data processing, response rates, imputation procedures, weighting and variance estimation, review of the quality of data, the types of…
More Powerful Tests of Simple Interaction Contrasts in the Two-Way Factorial Design
ERIC Educational Resources Information Center
Hancock, Gregory R.; McNeish, Daniel M.
2017-01-01
For the two-way factorial design in analysis of variance, the current article explicates and compares three methods for controlling the Type I error rate for all possible simple interaction contrasts following a statistically significant interaction, including a proposed modification to the Bonferroni procedure that increases the power of…
ERIC Educational Resources Information Center
Ramseyer, Gary C.; Tcheng, Tse-Kia
The present study was directed at determining the extent to which the Type I Error rate is affected by violations in the basic assumptions of the q statistic. Monte Carlo methods were employed, and a variety of departures from the assumptions were examined. (Author)
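A Monte Carlo check of the q statistic's Type I error under assumption violations, similar in spirit to the study above, might look like the sketch below. It requires SciPy >= 1.7 for `studentized_range`; the variance and sample-size patterns are illustrative (negative pairing of a large variance with a small group is a classic worst case):

```python
import numpy as np
from scipy.stats import studentized_range

def type1_rate(sds, ns, alpha=0.05, reps=5000, seed=0):
    """Monte Carlo Type I error of a Tukey-Kramer q test when all group
    means are equal (null true) but group SDs / sample sizes vary."""
    rng = np.random.default_rng(seed)
    k = len(sds)
    df = sum(ns) - k
    q_crit = studentized_range.ppf(1 - alpha, k, df)
    rej = 0
    for _ in range(reps):
        groups = [rng.normal(0.0, sd, n) for sd, n in zip(sds, ns)]
        means = [g.mean() for g in groups]
        mse = sum((n - 1) * g.var(ddof=1) for g, n in zip(groups, ns)) / df
        # largest pairwise q over all group pairs (Tukey-Kramer form)
        q = max(abs(means[i] - means[j]) /
                np.sqrt(mse * 0.5 * (1 / ns[i] + 1 / ns[j]))
                for i in range(k) for j in range(i + 1, k))
        rej += q > q_crit
    return rej / reps

ok = type1_rate([1, 1, 1, 1], [10, 10, 10, 10])    # assumptions met
bad = type1_rate([1, 1, 1, 4], [20, 20, 20, 6])    # heterogeneity + unequal n
```

With the assumptions met the empirical rate sits near the nominal 0.05, while the heterogeneous, unbalanced configuration inflates it substantially.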
Generalizability of Scaling Gradients on Direct Behavior Ratings
ERIC Educational Resources Information Center
Chafouleas, Sandra M.; Christ, Theodore J.; Riley-Tillman, T. Chris
2009-01-01
Generalizability theory is used to examine the impact of scaling gradients on a single-item Direct Behavior Rating (DBR). A DBR refers to a type of rating scale used to efficiently record target behavior(s) following an observation occasion. Variance components associated with scale gradients are estimated using a random effects design for persons…
An Analytic Comparison of Effect Sizes for Differential Item Functioning
ERIC Educational Resources Information Center
Demars, Christine E.
2011-01-01
Three types of effects sizes for DIF are described in this exposition: log of the odds-ratio (differences in log-odds), differences in probability-correct, and proportion of variance accounted for. Using these indices involves conceptualizing the degree of DIF in different ways. This integrative review discusses how these measures are impacted in…
Implicit and Explicit Attitudes of Educators toward the Emotional Disturbance Label
ERIC Educational Resources Information Center
Jones, James Patrick
2009-01-01
This study examined implicit and explicit attitudes of teachers toward the Emotional Disturbance (ED) label, the strength of association between implicit and explicit ratings, and the variance in attitudes between different types of teachers or among teachers in different settings. Ninety-eight teachers (52 regular education and 46 special…
Regional melt-pond fraction and albedo of thin Arctic first-year drift ice in late summer
NASA Astrophysics Data System (ADS)
Divine, D. V.; Granskog, M. A.; Hudson, S. R.; Pedersen, C. A.; Karlsen, T. I.; Divina, S. A.; Renner, A. H. H.; Gerland, S.
2015-02-01
The paper presents a case study of the regional (≈150 km) morphological and optical properties of a relatively thin, 70-90 cm modal thickness, first-year Arctic sea ice pack in an advanced stage of melt. The study combines in situ broadband albedo measurements representative of the four main surface types (bare ice, dark melt ponds, bright melt ponds and open water) and images acquired by a helicopter-borne camera system during ice-survey flights. The data were collected during the 8-day ICE12 drift experiment carried out by the Norwegian Polar Institute in the Arctic, north of Svalbard at 82.3° N, from 26 July to 3 August 2012. A set of > 10 000 classified images covering about 28 km2 revealed a homogeneous melt across the study area with melt-pond coverage of ≈ 0.29 and open-water fraction of ≈ 0.11. A decrease in pond fractions observed in the 30 km marginal ice zone (MIZ) occurred in parallel with an increase in open-water coverage. The moving block bootstrap technique applied to sequences of classified sea-ice images and albedo of the four surface types yielded a regional albedo estimate of 0.37 (0.35; 0.40) and regional sea-ice albedo of 0.44 (0.42; 0.46). Random sampling from the set of classified images allowed assessment of the aggregate scale of at least 0.7 km2 for the study area. For the current setup configuration it implies a minimum set of 300 images to process in order to gain adequate statistics on the state of the ice cover. Variance analysis also emphasized the importance of longer series of in situ albedo measurements conducted for each surface type when performing regional upscaling. The uncertainty in the mean estimates of surface type albedo from in situ measurements contributed up to 95% of the variance of the estimated regional albedo, with the remaining variance resulting from the spatial inhomogeneity of sea-ice cover.
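The moving block bootstrap used above for regional upscaling resamples overlapping blocks so that short-range autocorrelation in the series is preserved. The AR(1) "albedo" series, block length, and confidence level below are illustrative assumptions, not the study's data:

```python
import numpy as np

def moving_block_bootstrap(x, block_len, n_boot=2000, seed=0):
    """Moving block bootstrap of the series mean: resample overlapping
    blocks of length `block_len`, concatenate, and trim to len(x)."""
    rng = np.random.default_rng(seed)
    n = len(x)
    starts = np.arange(n - block_len + 1)
    n_blocks = int(np.ceil(n / block_len))
    means = np.empty(n_boot)
    for b in range(n_boot):
        s = rng.choice(starts, n_blocks)
        sample = np.concatenate([x[i:i + block_len] for i in s])[:n]
        means[b] = sample.mean()
    return np.percentile(means, [2.5, 97.5])

# Demo on a synthetic autocorrelated "albedo" series with mean 0.44
rng = np.random.default_rng(1)
x = np.empty(500)
x[0] = 0.44
for t in range(1, 500):
    x[t] = 0.44 + 0.7 * (x[t - 1] - 0.44) + 0.02 * rng.standard_normal()
lo, hi = moving_block_bootstrap(x, block_len=25)
```

Choosing the block length longer than the decorrelation scale of the series is what keeps the interval honest; an i.i.d. bootstrap on the same data would understate the uncertainty.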
Patton, Susana R; Dolan, Lawrence M; Smith, Laura B; Thomas, Inas H; Powers, Scott W
2011-12-01
Parents of young children with type 1 diabetes (T1DM) maintain full responsibility for their child's daily diabetes self-care and thus may be vulnerable to experiencing parenting stress. This study examined several psychological correlates of pediatric parenting stress in parents of young children with T1DM. Parents of 39 young children with T1DM (ages 2-7 years) completed measures of pediatric parenting stress, mealtime behavior problems, depressive symptoms, and fear of hypoglycemia. For parents of young children, higher stress frequency and difficulty were associated with higher parental depressive symptoms and fear. Regression analyses identified that 58% of the variance in stress frequency was associated with parental depressive symptoms. For stress difficulty, 68% of the variance was associated with parental depressive symptoms and fear. Pediatric parenting stress is common in parents of young children with T1DM. Stress and the psychological correlates measured in this study are amenable to intervention and should be regularly assessed in parents of young children with T1DM.
NASA Technical Reports Server (NTRS)
Hoffer, R. M. (Principal Investigator); Knowlton, D. J.; Dean, M. E.
1981-01-01
Supervised and cluster block training statistics were used to analyze the thematic mapper simulation MSS data (both 1979 and 1980 data sets). Cover information classes identified on SAR imagery include: hardwood, pine, mixed pine hardwood, clearcut, pasture, crops, emergent crops, bare soil, urban, and water. Preliminary analysis of the HH and HV polarized SAR data indicate a high variance associated with each information class except for water and bare soil. The large variance for most spectral classes suggests that while the means might be statistically separable, an overlap may exist between the classes which could introduce a significant classification error. The quantitative values of many cover types are much larger on the HV polarization than on the HH, thereby indicating the relative nature of the digitized data values. The mean values of the spectral classes in the areas with larger look angles are greater than the means of the same cover type in other areas having steeper look angles. Difficulty in accurately overlaying the dual polarization of the SAR data was resolved.
Yan, Mian; Or, Calvin
2017-08-01
This study tested a structural model examining the effects of perceived usefulness, perceived ease of use, attitude, subjective norm, perceived behavioral control, health consciousness, and application-specific self-efficacy on the acceptance (i.e. behavioral intention and actual usage) of a computer-based chronic disease self-monitoring system among patients with type 2 diabetes mellitus and/or hypertension. The model was tested using partial least squares structural equation modeling, with 119 observations that were obtained by pooling data across three time points over a 12-week period. The results indicate that all of the seven constructs examined had a significant total effect on behavioral intention and explained 74 percent of the variance. Also, application-specific self-efficacy and behavioral intention had a significant total effect on actual usage and explained 17 percent of the variance. This study demonstrates that technology acceptance is determined by patient characteristics, technology attributes, and social influences. Applying the findings may increase the likelihood of acceptance.
Tyler, Kimberly A; Gervais, Sarah J; Davidson, M Meghan
2013-02-01
Each year, thousands of female adolescents run away from home due to sexual abuse, yet they continue to be victims of sexual assault once on the street. To date, few studies have examined how various forms of victimization are related to different types of substance use. The purpose of this article is to investigate the relationship between street exposure, childhood abuse, and different forms of street victimization with alcohol and marijuana use among 137 homeless and runaway female adolescents. Results from path analysis revealed that child sexual abuse was positively linked to trading sex and sexual and physical victimization. In addition, those who had traded sex experienced greater physical victimization, and those who had spent more time away from home used alcohol more frequently. Moreover, trading sex and experiencing more types of sexual victimization were positively linked to more frequent marijuana usage. Age, age at first run, longest time away from home, sexual abuse, and trading sex had significant indirect effects on alcohol and/or marijuana use. Together, these factors accounted for 27% of the variance in alcohol use and 37% of the variance in marijuana use.
Szymańska, Anna; Szymański, Marcin; Czekajska-Chehab, Elżbieta; Szczerbo-Trojanowska, Małgorzata
2015-01-01
Juvenile nasopharyngeal angiofibroma is a benign, locally aggressive nasopharyngeal tumor. Apart from anterior lateral extension to the pterygopalatine fossa, it may spread laterally posterior to the pterygoid process, showing posterior lateral growth pattern, which is less common and more difficult to identify during surgery. We analyzed the routes of lateral spread, modalities useful in its diagnosis, the incidence of lateral extension and its influence on outcomes of surgical treatment. The records of 37 patients with laterally extending JNA treated at our institution between 1987 and 2011 were retrospectively evaluated. Computed tomography was performed in all patients and magnetic resonance imaging in 17 (46 %) patients. CT and MRI were evaluated to determine routes and extension of JNA lateral spread. Anterior lateral extension to the pterygopalatine fossa occurred in 36 (97 %) patients and further to the infratemporal fossa in 20 (54 %) patients. In 16 (43 %) cases posterior lateral spread was observed: posterior to the pterygoid process and/or between its plates. The recurrence rate was 29.7 % (11/37). The majority of residual lesions was located behind the pterygoid process (7/11). Recurrent disease occurred in 3/21 patients with anterior lateral extension, in 7/15 patients with both types of lateral extensions and in 1 patient with posterior lateral extension. JNA posterior lateral extension may spread behind the pterygoid process or between its plates. The recurrence rate in patients with anterior and/or posterior lateral extension is significantly higher than in patients with anterior lateral extension only. Both CT and MRI allow identification of the anterior and posterior lateral extensions.
Group sequential designs for stepped-wedge cluster randomised trials
Grayling, Michael J; Wason, James MS; Mander, Adrian P
2017-01-01
Background/Aims: The stepped-wedge cluster randomised trial design has received substantial attention in recent years. Although various extensions to the original design have been proposed, no guidance is available on the design of stepped-wedge cluster randomised trials with interim analyses. In an individually randomised trial setting, group sequential methods can provide notable efficiency gains and ethical benefits. We address this by discussing how established group sequential methodology can be adapted for stepped-wedge designs. Methods: Utilising the error spending approach to group sequential trial design, we detail the assumptions required for the determination of stepped-wedge cluster randomised trials with interim analyses. We consider early stopping for efficacy, futility, or efficacy and futility. We describe first how this can be done for any specified linear mixed model for data analysis. We then focus on one particular commonly utilised model and, using a recently completed stepped-wedge cluster randomised trial, compare the performance of several designs with interim analyses to the classical stepped-wedge design. Finally, the performance of a quantile substitution procedure for dealing with the case of unknown variance is explored. Results: We demonstrate that the incorporation of early stopping in stepped-wedge cluster randomised trial designs could reduce the expected sample size under the null and alternative hypotheses by up to 31% and 22%, respectively, with no cost to the trial’s type-I and type-II error rates. The use of restricted error maximum likelihood estimation was found to be more important than quantile substitution for controlling the type-I error rate. Conclusion: The addition of interim analyses into stepped-wedge cluster randomised trials could help guard against time-consuming trials conducted on poor performing treatments and also help expedite the implementation of efficacious treatments. 
In future, trialists should consider incorporating early stopping of some kind into stepped-wedge cluster randomised trials according to the needs of the particular trial. PMID:28653550
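The error spending approach used above can be illustrated with the Lan-DeMets O'Brien-Fleming-type spending function, which allocates the overall Type I error across interim looks as a function of the information fraction. This sketch computes only the cumulative alpha spent; turning the increments into stopping boundaries additionally requires the joint distribution of the sequential test statistics:

```python
import numpy as np
from scipy.stats import norm

def obf_alpha_spent(t, alpha=0.05):
    """Lan-DeMets O'Brien-Fleming-type spending function:
    cumulative Type I error spent at information fraction t in (0, 1]."""
    z = norm.ppf(1 - alpha / 2)
    return 2 - 2 * norm.cdf(z / np.sqrt(t))

fracs = np.array([0.25, 0.5, 0.75, 1.0])     # four equally spaced analyses
spent = obf_alpha_spent(fracs)               # cumulative alpha at each look
increments = np.diff(np.concatenate([[0.0], spent]))  # alpha spent per look
```

The function spends almost no error at early looks (making early efficacy stopping conservative) and reaches the full alpha at the final analysis, which is the behavior typically wanted in confirmatory trials.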
Acclimation and Institutionalization of the Mouse Microbiota Following Transportation
Montonye, Dan R.; Ericsson, Aaron C.; Busi, Susheel B.; Lutz, Cathleen; Wardwell, Keegan; Franklin, Craig L.
2018-01-01
Using animal models, the gut microbiota has been shown to play a critical role in the health and disease of many organ systems. Unfortunately, animal model studies often lack reproducibility when performed at different institutions. Previous studies in our laboratory have shown that the gut microbiota of mice can vary with a number of husbandry factors leading us to speculate that differing environments may alter gut microbiota, which in turn may influence animal model phenotypes. As an extension of these studies, we hypothesized that the shipping of mice from a mouse producer to an institution will result in changes in the type, relative abundance, and functional composition of the gut microbiota. Furthermore, we hypothesized that mice will develop a microbiota unique to the institution and facility in which they are housed. To test these hypotheses, mice of two strains (C57BL/6J and BALB/cJ), two age groups (4 week and 8 week old), and originating from two types of housing (research animal facility under conventional housing and production facilities under maximum barrier housing) were obtained from The Jackson Laboratory. Fecal samples were collected the day prior to shipping, immediately upon arrival, and then on days 2, 5, 7, and weeks 2, 4, and 9 post-arrival. Following the first post-arrival fecal collection, mice were separated into 2 groups and housed at different facilities at our institution while keeping their caging, diet, and husbandry practices the same. DNA was extracted from the collected fecal pellets and 16S rRNA amplicons were sequenced in order to characterize the type and relative abundance of gut bacteria. Principal component analysis (PCA) and permutational multivariate analysis of variance (PERMANOVA) demonstrated that both the shipping and the institution and facility in which mice were housed altered the gut microbiota. 
Phylogenetic investigation of communities by reconstruction of unobserved states (PICRUSt) predicted differences in functional composition in the gut microbiota of mice based on time of acclimation. PMID:29892276
On the application of under-decimated filter banks
NASA Technical Reports Server (NTRS)
Lin, Y.-P.; Vaidyanathan, P. P.
1994-01-01
Maximally decimated filter banks have been extensively studied in the past. A filter bank is said to be under-decimated if the number of channels is more than the decimation ratio in the subbands. A maximally decimated filter bank is well known for its application in subband coding. Another application of maximally decimated filter banks is in block filtering. Convolution through block filtering has the advantages that parallelism is increased and data are processed at a lower rate. However, the computational complexity is comparable to that of direct convolution. More recently, another type of filter bank convolver has been developed. In this scheme, the convolution is performed in the subbands. Quantization and bit allocation of subband signals are based on signal variance, as in subband coding. Consequently, for a fixed rate, the result of convolution is more accurate than is direct convolution. This type of filter bank convolver also enjoys the advantages of block filtering, parallelism, and a lower working rate. Nevertheless, like block filtering, there is no computational saving. In this article, under-decimated systems are introduced to solve the problem. The new system is decimated only by half the number of channels. Two types of filter banks can be used in the under-decimated system: the discrete Fourier transform (DFT) filter banks and the cosine modulated filter banks. They are well known for their low complexity. In both cases, the system is approximately alias free, and the overall response is equivalent to a tunable multilevel filter. Properties of the DFT filter banks and the cosine modulated filter banks can be exploited to simultaneously achieve parallelism, computational saving, and a lower working rate. Furthermore, for both systems, the implementation cost of the analysis or synthesis bank is comparable to that of one prototype filter plus some low-complexity modulation matrices. 
The individual analysis and synthesis filters have complex coefficients in the DFT filter banks but have real coefficients in the cosine modulated filter banks.
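A toy under-decimated DFT filter bank (M channels, decimation by M/2, rectangular prototype, identity subband processing) illustrates the analysis/synthesis round trip described above. This is a didactic sketch, not the article's polyphase implementation: with a rectangular prototype and 50% overlap, overlap-add synthesis recovers the interior of the signal exactly up to a factor of 2:

```python
import numpy as np

def dft_bank_roundtrip(x, M=8):
    """M-channel DFT filter bank decimated by M/2 (50% overlap).
    Analysis = length-M DFT of each overlapped block; synthesis =
    inverse DFT plus overlap-add. Two overlapping rectangular
    windows sum to 2, hence the final scaling."""
    hop = M // 2
    n_frames = (len(x) - M) // hop + 1
    # Analysis: subband signals, one DFT bin per channel per frame
    sub = np.array([np.fft.fft(x[i * hop:i * hop + M]) for i in range(n_frames)])
    # (per-channel subband processing, e.g. multilevel filtering, would go here)
    y = np.zeros(len(x))
    for i in range(n_frames):
        y[i * hop:i * hop + M] += np.fft.ifft(sub[i]).real
    return y / 2.0

rng = np.random.default_rng(0)
x = rng.standard_normal(256)
y = dft_bank_roundtrip(x, M=8)
```

Interior samples are covered by exactly two frames and reconstruct exactly; the first and last half-frame are covered once and come out halved, which a practical system handles by padding.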
DOE Office of Scientific and Technical Information (OSTI.GOV)
Park, So-Yeon (Institute of Radiation Medicine, Seoul National University Medical Research Center, Seoul 110-744; Biomedical Research Institute, Seoul National University College of Medicine, Seoul 110-744)
Purpose: Texture analysis on fluence maps was performed to evaluate the degree of modulation for volumetric modulated arc therapy (VMAT) plans. Methods: A total of six textural features including angular second moment, inverse difference moment, contrast, variance, correlation, and entropy were calculated for fluence maps generated from 20 prostate and 20 head and neck VMAT plans. For each of the textural features, particular displacement distances (d) of 1, 5, and 10 were adopted. To investigate the deliverability of each VMAT plan, gamma passing rates of pretreatment quality assurance, and differences in modulating parameters such as multileaf collimator (MLC) positions, gantry angles, and monitor units at each control point between VMAT plans and dynamic log files registered by the Linac control system during delivery were acquired. Furthermore, differences between the original VMAT plan and the plan reconstructed from the dynamic log files were also investigated. To test the performance of the textural features as indicators for the modulation degree of VMAT plans, Spearman’s rank correlation coefficients (r_s) with the plan deliverability were calculated. For comparison purposes, conventional modulation indices for VMAT including the modulation complexity score for VMAT, leaf travel modulation complexity score, and modulation index supporting station parameter optimized radiation therapy (MI_SPORT) were calculated, and their correlations were analyzed in the same way. Results: There was no particular textural feature which always showed superior correlations with every type of plan deliverability. Considering the results comprehensively, contrast (d = 1) and variance (d = 1) generally showed considerable correlations with every type of plan deliverability. These textural features always showed higher correlations to the plan deliverability than did the conventional modulation indices, except in the case of modulating parameter differences. 
The r_s values of contrast to the global gamma passing rates with criteria of 2%/2 mm, 2%/1 mm, and 1%/2 mm were 0.536, 0.473, and 0.718, respectively. The respective values for variance were 0.551, 0.481, and 0.688. In the case of local gamma passing rates, the r_s values of contrast were 0.547, 0.578, and 0.620, respectively, and those of variance were 0.519, 0.527, and 0.569. All of the r_s values in those cases were statistically significant (p < 0.003). In the cases of global and local gamma passing rates, MI_SPORT showed the highest correlations among the conventional modulation indices. For global passing rates, r_s values of MI_SPORT were −0.420, −0.330, and −0.632, respectively, and those for local passing rates were −0.455, −0.490, and −0.502. The r_s values of contrast, variance, and MI_SPORT with the MLC errors were −0.863, −0.828, and 0.795, respectively, all statistically significant (p < 0.001). Statistically significant correlations with dose-volumetric differences were observed more frequently for variance than for the other indices. Conclusions: The contrast (d = 1) and variance (d = 1) calculated from fluence maps of VMAT plans showed considerable correlations with the plan deliverability, indicating their potential use as indicators for assessing the degree of modulation of VMAT plans. Both contrast and variance consistently showed better performance than the conventional modulation indices for VMAT.
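The contrast and variance features named above are standard Haralick statistics computed from a grey-level co-occurrence matrix (GLCM). A minimal numpy sketch for a horizontal displacement d is given below; the 8-level quantization and symmetric accumulation are illustrative assumptions, not the authors' exact implementation:

```python
import numpy as np

def glcm(image, d=1, levels=8):
    """Normalized, symmetric grey-level co-occurrence matrix for a
    horizontal displacement of d pixels."""
    q = np.floor(image / (image.max() + 1e-12) * (levels - 1)).astype(int)
    P = np.zeros((levels, levels))
    for i, j in zip(q[:, :-d].ravel(), q[:, d:].ravel()):
        P[i, j] += 1.0
        P[j, i] += 1.0   # symmetric counting
    return P / P.sum()

def contrast(P):
    """Haralick contrast: large when co-occurring grey levels differ."""
    i, j = np.indices(P.shape)
    return float(np.sum(P * (i - j) ** 2))

def glcm_variance(P):
    """Haralick variance: spread of grey levels about the GLCM mean."""
    i, j = np.indices(P.shape)
    mu = np.sum(i * P)
    return float(np.sum(P * (i - mu) ** 2))
```

Spearman rank correlation between such features and per-plan gamma passing rates (e.g. scipy.stats.spearmanr) would then quantify their value as modulation indicators.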
Testing a cognitive model to predict posttraumatic stress disorder following childbirth.
King, Lydia; McKenzie-McHarg, Kirstie; Horsch, Antje
2017-01-14
One third of women describe their childbirth as traumatic, and between 0.8 and 6.9% go on to develop posttraumatic stress disorder (PTSD). The cognitive model of PTSD has been shown to be applicable to a range of trauma samples. However, childbirth is qualitatively different from other trauma types, and special consideration needs to be taken when applying the model to this population. Previous studies have investigated some cognitive variables in isolation, but no study has so far looked at all the key processes described in the cognitive model. This study therefore aimed to investigate whether theoretically derived variables of the cognitive model explain unique variance in postnatal PTSD symptoms when key demographic, obstetric and clinical risk factors are controlled for. One hundred and fifty-seven women who were between 1 and 12 months post-partum (M = 6.5 months) completed validated questionnaires assessing PTSD and depressive symptoms, childbirth experience, postnatal social support, trauma memory, peritraumatic processing, negative appraisals, dysfunctional cognitive and behavioural strategies, and obstetric as well as demographic risk factors in an online survey. A PTSD screening questionnaire suggested that 5.7% of the sample might fulfil diagnostic criteria for PTSD. Overall, risk factors alone predicted 43% of variance in PTSD symptoms and cognitive behavioural factors alone predicted 72.7%. A final model including both risk factors and cognitive behavioural factors explained 73.7% of the variance in PTSD symptoms, 37.1% of which was unique variance predicted by cognitive factors. All variables derived from Ehlers and Clark's cognitive model significantly explained variance in PTSD symptoms following childbirth, even when clinical, demographic, and obstetric factors were controlled for. Our findings suggest that the CBT model is applicable and useful as a way of understanding and informing the treatment of PTSD following childbirth.
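The "unique variance" quantity reported above is a hierarchical-regression increment: R² is computed with the risk-factor block alone, then again after adding the cognitive-behavioural block, and the difference is the unique contribution. A minimal numpy sketch of that computation (the variable names are illustrative, not the study's):

```python
import numpy as np

def r_squared(X, y):
    """Ordinary least squares R^2 with an intercept added automatically."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    ss_res = resid @ resid
    ss_tot = ((y - y.mean()) ** 2).sum()
    return 1.0 - ss_res / ss_tot

def unique_variance(controls, predictors, y):
    """Delta R^2: variance in y explained by `predictors` beyond `controls`."""
    full = r_squared(np.column_stack([controls, predictors]), y)
    return full - r_squared(controls, y)
```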
Estimation of stable boundary-layer height using variance processing of backscatter lidar data
NASA Astrophysics Data System (ADS)
Saeed, Umar; Rocadenbosch, Francesc
2017-04-01
The stable boundary layer (SBL) is one of the most complex and least understood topics in atmospheric science. The type and height of the SBL are important parameters for several applications, such as understanding the formation of haze and fog and assessing the accuracy of chemical and pollutant dispersion models [1]. This work addresses nocturnal Stable Boundary-Layer Height (SBLH) estimation by variance processing of attenuated backscatter lidar measurements, its principles and limitations. It is shown that temporal and spatial variance profiles of the attenuated backscatter signal are related to the stratification of aerosols in the SBL. A minimum-variance SBLH estimator using local minima in the variance profiles of backscatter lidar signals is introduced. The method is validated using data from the HD(CP)2 Observational Prototype Experiment (HOPE) campaign at Jülich, Germany [2], under different atmospheric conditions. This work has received funding from the European Union Seventh Framework Programme, FP7 People, ITN Marie Curie Actions Programme (2012-2016) in the frame of the ITaRS project (GA 289923), the H2020 programme under the ACTRIS-2 project (GA 654109), the Spanish Ministry of Economy and Competitiveness - European Regional Development Funds under the TEC2015-63832-P project, and from the Generalitat de Catalunya (Grup de Recerca Consolidat) 2014-SGR-583. [1] R. B. Stull, An Introduction to Boundary Layer Meteorology, chapter 12, Stable Boundary Layer, pp. 499-543, Springer, Netherlands, 1988. [2] U. Löhnert, J. H. Schween, C. Acquistapace, K. Ebell, M. Maahn, M. Barrera-Verdejo, A. Hirsikko, B. Bohn, A. Knaps, E. O'Connor, C. Simmer, A. Wahner, and S. Crewell, "JOYCE: Jülich Observatory for Cloud Evolution," Bull. Amer. Meteor. Soc., vol. 96, no. 7, pp. 1157-1174, 2015.
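The minimum-variance estimator described can be sketched as: compute the temporal variance of attenuated backscatter at each range gate, then take the first local minimum of the variance profile above some minimum height. The thresholding details below are illustrative assumptions, not the paper's exact algorithm:

```python
import numpy as np

def sblh_from_variance(backscatter, heights, min_height=50.0):
    """Estimate stable boundary-layer height as the range of the first local
    minimum in the temporal variance profile of attenuated backscatter.

    backscatter : array, shape (n_times, n_heights)
    heights     : array, shape (n_heights,), range-gate heights in meters
    """
    var_profile = backscatter.var(axis=0)
    for k in range(1, len(heights) - 1):
        is_local_min = (var_profile[k] < var_profile[k - 1]
                        and var_profile[k] < var_profile[k + 1])
        if is_local_min and heights[k] >= min_height:
            return heights[k]
    return None  # no candidate found
```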
NASA Astrophysics Data System (ADS)
Yu, Daren; Meng, Tianhang; Ning, Zhongxi; Liu, Hui
2017-04-01
A magnetic focusing type Hall thruster was designed with a cylindrical magnetic separatrix. As a hollow cathode crossed the separatrix, the variance of the plume parameter distribution was monitored. Results show that the ion flux at large spatial angles is significantly lower when the hollow cathode is located in the inner magnetic field. This convergence effect is preserved even in a distant area. A mechanism for plume divergence was proposed from the perspective of the cathode-to-plume potential difference, through which the confinement effect of the cylindrical-separatrix-type magnetic field on the thruster plume was confirmed and proposed as a means of plume protection for plasma propulsion devices.
Cho, C. I.; Alam, M.; Choi, T. J.; Choy, Y. H.; Choi, J. G.; Lee, S. S.; Cho, K. H.
2016-01-01
The objectives of the study were to estimate genetic parameters for milk production traits of Holstein cattle using random regression models (RRMs), and to compare the goodness of fit of various RRMs with homogeneous and heterogeneous residual variances. A total of 126,980 test-day milk production records of first-parity Holstein cows between 2007 and 2014 from the Dairy Cattle Improvement Center of the National Agricultural Cooperative Federation in South Korea were used. These records included milk yield (MILK), fat yield (FAT), protein yield (PROT), and solids-not-fat yield (SNF). The statistical models included random effects of genetic and permanent environments using Legendre polynomials (LP) of the third to fifth order (L3–L5), fixed effects of herd-test day and year-season at calving, and a fixed regression for the test-day record (third to fifth order). The residual variances in the models were either homogeneous (HOM) or heterogeneous (15 classes, HET15; 60 classes, HET60). A total of nine models (3 orders of polynomials × 3 types of residual variance), namely L3-HOM, L3-HET15, L3-HET60, L4-HOM, L4-HET15, L4-HET60, L5-HOM, L5-HET15, and L5-HET60, were compared using Akaike information criterion (AIC) and/or Schwarz Bayesian information criterion (BIC) statistics to identify the model(s) of best fit for the respective traits. The lowest BIC values were observed for models L5-HET15 (MILK; PROT; SNF) and L4-HET15 (FAT), which fit best. In general, the BIC value of the HET15 model for a particular polynomial order was lower than that of the corresponding HET60 model. This implies that the order of LP and the type of residual variance affect the goodness of fit of the models. Also, the heterogeneity of residual variances should be considered in test-day analysis.
The heritability estimates from the best-fitting models ranged from 0.08 to 0.15 for MILK, 0.06 to 0.14 for FAT, 0.08 to 0.12 for PROT, and 0.07 to 0.13 for SNF, according to days in milk of the first lactation. Genetic variances for the studied traits tended to decrease during the earlier stages of lactation, followed by increases in the middle and further decreases at the end of lactation. With regard to the fit of the models and the differential genetic parameters across the lactation stages, we could estimate genetic parameters more accurately from RRMs than from lactation models. Therefore, we suggest using RRMs in place of lactation models for national dairy cattle genetic evaluations for milk production traits in Korea. PMID:26954184
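In test-day models of this kind, the random-regression covariates are typically normalized Legendre polynomials of standardized days in milk (DIM). A sketch of building the covariate matrix for a fourth-order fit; the DIM range 5-305 is a common convention assumed here, not necessarily the study's exact choice:

```python
import numpy as np
from numpy.polynomial import legendre

def legendre_covariates(dim, dim_min=5.0, dim_max=305.0, order=4):
    """Rows of normalized Legendre covariates phi_0..phi_order evaluated at
    each DIM value, with DIM first mapped onto [-1, 1]."""
    t = 2.0 * (np.asarray(dim, float) - dim_min) / (dim_max - dim_min) - 1.0
    cols = []
    for k in range(order + 1):
        c = np.zeros(k + 1)
        c[k] = 1.0                      # coefficient vector selecting P_k
        # normalized Legendre: phi_k(t) = sqrt((2k+1)/2) * P_k(t)
        cols.append(np.sqrt((2 * k + 1) / 2.0) * legendre.legval(t, c))
    return np.column_stack(cols)
```

These columns would enter the mixed model as covariates for both the additive-genetic and permanent-environment random regressions.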
Lee, Young-Beom; Lee, Jeonghyeon; Tak, Sungho; Lee, Kangjoo; Na, Duk L; Seo, Sang Won; Jeong, Yong; Ye, Jong Chul
2016-01-15
Recent studies of functional connectivity MR imaging have revealed that the default-mode network activity is disrupted in diseases such as Alzheimer's disease (AD). However, there is not yet a consensus on the preferred method for resting-state analysis. Because the brain is reported to have complex interconnected networks according to graph theoretical analysis, the independency assumption, as in the popular independent component analysis (ICA) approach, often does not hold. Here, rather than using the independency assumption, we present a new statistical parameter mapping (SPM)-type analysis method based on a sparse graph model where temporal dynamics at each voxel position are described as a sparse combination of global brain dynamics. In particular, a new concept of a spatially adaptive design matrix has been proposed to represent local connectivity that shares the same temporal dynamics. If we further assume that local network structures within a group are similar, the estimation problem of global and local dynamics can be solved using sparse dictionary learning for the concatenated temporal data across subjects. Moreover, under the homoscedasticity variance assumption across subjects and groups that is often used in SPM analysis, the aforementioned individual and group analyses using sparse dictionary learning can be accurately modeled by a mixed-effect model, which also facilitates a standard SPM-type group-level inference using summary statistics. Using an extensive resting fMRI data set obtained from normal, mild cognitive impairment (MCI), and Alzheimer's disease patient groups, we demonstrated that the changes in the default mode network extracted by the proposed method are more closely correlated with the progression of Alzheimer's disease. Copyright © 2015 Elsevier Inc. All rights reserved.
Pereira, Tiago V; Mingroni-Netto, Regina C
2011-06-06
The generalized odds ratio (GOR) was recently suggested as a genetic model-free measure for association studies. However, its properties were not extensively investigated. We used Monte Carlo simulations to investigate type-I error rates, power and bias in both effect size and between-study variance estimates of meta-analyses using the GOR as a summary effect, and compared these results to those obtained by usual approaches of model specification. We further applied the GOR in a real meta-analysis of three genome-wide association studies in Alzheimer's disease. For bi-allelic polymorphisms, the GOR performs virtually identical to a standard multiplicative model of analysis (e.g. per-allele odds ratio) for variants acting multiplicatively, but augments slightly the power to detect variants with a dominant mode of action, while reducing the probability to detect recessive variants. Although there were differences among the GOR and usual approaches in terms of bias and type-I error rates, both simulation- and real data-based results provided little indication that these differences will be substantial in practice for meta-analyses involving bi-allelic polymorphisms. However, the use of the GOR may be slightly more powerful for the synthesis of data from tri-allelic variants, particularly when susceptibility alleles are less common in the populations (≤10%). This gain in power may depend on knowledge of the direction of the effects. For the synthesis of data from bi-allelic variants, the GOR may be regarded as a multiplicative-like model of analysis. The use of the GOR may be slightly more powerful in the tri-allelic case, particularly when susceptibility alleles are less common in the populations.
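For ordered genotype categories (e.g. 0, 1, or 2 copies of the susceptibility allele), the generalized odds ratio is the probability that a randomly drawn case carries more copies than a randomly drawn control, divided by the reverse probability. A minimal sketch from genotype count vectors, following that standard definition:

```python
import numpy as np

def generalized_odds_ratio(case_counts, control_counts):
    """GOR = P(case exposure > control exposure) / P(case exposure < control
    exposure), for counts over ordered categories (lowest to highest)."""
    a = np.asarray(case_counts, float)
    b = np.asarray(control_counts, float)
    n_pairs = a.sum() * b.sum()
    p_greater = sum(a[i] * b[j]
                    for i in range(len(a)) for j in range(len(b)) if i > j) / n_pairs
    p_less = sum(a[i] * b[j]
                 for i in range(len(a)) for j in range(len(b)) if i < j) / n_pairs
    return p_greater / p_less
```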
Debray, Thomas P A; Moons, Karel G M; Riley, Richard D
2018-03-01
Small-study effects are a common threat in systematic reviews and may indicate publication bias. Their existence is often verified by visual inspection of the funnel plot. Formal tests to assess the presence of funnel plot asymmetry typically estimate the association between the reported effect size and their standard error, the total sample size, or the inverse of the total sample size. In this paper, we demonstrate that the application of these tests may be less appropriate in meta-analysis of survival data, where censoring influences statistical significance of the hazard ratio. We subsequently propose 2 new tests that are based on the total number of observed events and adopt a multiplicative variance component. We compare the performance of the various funnel plot asymmetry tests in an extensive simulation study where we varied the true hazard ratio (0.5 to 1), the number of published trials (N=10 to 100), the degree of censoring within trials (0% to 90%), and the mechanism leading to participant dropout (noninformative versus informative). Results demonstrate that previous well-known tests for detecting funnel plot asymmetry suffer from low power or excessive type-I error rates in meta-analysis of survival data, particularly when trials are affected by participant dropout. Because our novel test (adopting estimates of the asymptotic precision as study weights) yields reasonable power and maintains appropriate type-I error rates, we recommend its use to evaluate funnel plot asymmetry in meta-analysis of survival data. The use of funnel plot asymmetry tests should, however, be avoided when there are few trials available for any meta-analysis. © 2017 The Authors. Research Synthesis Methods Published by John Wiley & Sons, Ltd.
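The paper's novel tests change the predictor and variance structure; the classic regression-based benchmark they compare against (Egger's test: regress the standardized effect on precision and examine the intercept) can be sketched as follows. This is the standard test, not the authors' new proposal:

```python
import numpy as np

def egger_intercept(effects, se):
    """Classic Egger test for funnel-plot asymmetry: OLS of the standardized
    effect (effect / se) on precision (1 / se). A nonzero intercept signals
    asymmetry (small-study effects). Returns (intercept, its std. error)."""
    effects = np.asarray(effects, float)
    se = np.asarray(se, float)
    z = effects / se
    X = np.column_stack([np.ones_like(se), 1.0 / se])
    beta, *_ = np.linalg.lstsq(X, z, rcond=None)
    resid = z - X @ beta
    s2 = resid @ resid / (len(z) - 2)           # residual variance
    cov = s2 * np.linalg.inv(X.T @ X)
    return beta[0], np.sqrt(cov[0, 0])
```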
Time dependence of solid-particle impingement erosion of an aluminum alloy
NASA Technical Reports Server (NTRS)
Veerabhadrarao, P.; Buckley, D. H.
1983-01-01
Erosion studies were conducted on 6061-T6511 aluminum alloy by using jet impingement of glass beads and crushed glass particles to investigate the influence of exposure time on volume loss rate at different pressures. The results indicate a direct relationship between erosion-versus-time curves and pit morphology (width, depth, and width-to-depth ratio)-versus-time curves for both glass forms. Extensive erosion data from the literature were analyzed to find the variations of erosion-rate-versus-time curves with respect to the type of device, the size and shape of erodent particles, the abrasive charge, the impact velocity, etc. Analysis of the experimental data, obtained with two forms of glass, resulted in three types of erosion-rate-versus-time curves: (1) curves with incubation, acceleration, and steady-state periods (type 1); (2) curves with incubation, acceleration, deceleration, and steady-state periods (type 3); and (3) curves with incubation, acceleration, peak rate, and deceleration periods (type 4). The type 4 curve is seen less frequently and had not been reported in the literature. Analysis of extensive literature data generally indicated three types of erosion-rate-versus-time curves. Two types (types 1 and 3) were observed in the present study; the third type involves incubation (and deposition), acceleration, and steady-state periods (type 2). Examination of the extensive literature data indicated that it is essential to consider the corresponding stages or periods of erosion in correlating and characterizing the erosion resistance of a wide spectrum of ductile materials.
Improving the Impact of Extension through the Use of Anticipation Guides
ERIC Educational Resources Information Center
Smith, Rebecca C.; Lemley, Stephanie M.
2017-01-01
In this article, we present the anticipation guide as a tool for preparing Extension audiences to learn the main points of Extension materials. Anticipation guides improve learner comprehension by appealing to an individual's natural curiosity and helping the individual focus on key ideas. Anticipation guides can be used with all types of…
NASA Astrophysics Data System (ADS)
Bukoski, Alex; Steyn-Ross, D. A.; Pickett, Ashley F.; Steyn-Ross, Moira L.
2018-06-01
The dynamics of a stochastic type-I Hodgkin-Huxley-like point neuron model exposed to inhibitory synaptic noise are investigated as a function of distance from spiking threshold and the inhibitory influence of the general anesthetic agent propofol. The model is biologically motivated and includes the effects of intrinsic ion-channel noise via a stochastic differential equation description as well as inhibitory synaptic noise modeled as multiple Poisson-distributed impulse trains with saturating response functions. The effect of propofol on these synapses is incorporated through this drug's principal influence on fast inhibitory neurotransmission mediated by γ-aminobutyric acid (GABA) type-A receptors via reduction of the synaptic response decay rate. As the neuron model approaches spiking threshold from below, we track membrane voltage fluctuation statistics of numerically simulated stochastic trajectories. We find that for a given distance from spiking threshold, increasing the magnitude of anesthetic-induced inhibition is associated with augmented signatures of critical slowing: fluctuation amplitudes and correlation times grow as spectral power is increasingly focused at 0 Hz. Furthermore, as a function of distance from threshold, anesthesia significantly modifies the power-law exponents for variance and correlation time divergences observable in stochastic trajectories. Compared to the inverse square root power-law scaling of these quantities anticipated for the saddle-node bifurcation of type-I neurons in the absence of anesthesia, increasing anesthetic-induced inhibition results in an observable exponent < -0.5 for variance and > -0.5 for correlation time divergences. However, these behaviors eventually break down as distance from threshold goes to zero, with both the variance and correlation time converging to common values independent of anesthesia.
Compared to the case of no synaptic input, linearization of an approximating multivariate Ornstein-Uhlenbeck model reveals these effects to be the consequence of an additional slow eigenvalue associated with synaptic activity that competes with those of the underlying point neuron in a manner that depends on distance from spiking threshold.
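The critical-slowing signatures and the reference exponent of -0.5 follow from an Ornstein-Uhlenbeck (OU) picture: for dV = -theta*V dt + sigma dW, the stationary variance is sigma^2/(2*theta) and the correlation time is 1/theta, so both diverge as the restoring rate theta approaches 0. A toy Euler-Maruyama check of the variance scaling (not the authors' neuron model):

```python
import numpy as np

def ou_variance(theta, sigma=1.0, dt=1e-3, n=400_000, seed=0):
    """Euler-Maruyama simulation of dV = -theta*V dt + sigma dW; returns the
    empirical stationary variance after discarding a 10% burn-in.
    Theory: sigma**2 / (2 * theta)."""
    rng = np.random.default_rng(seed)
    kicks = rng.standard_normal(n) * sigma * np.sqrt(dt)
    v = np.empty(n)
    v[0] = 0.0
    for k in range(1, n):
        v[k] = v[k - 1] * (1.0 - theta * dt) + kicks[k]
    return v[n // 10:].var()
```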
Plotnikoff, Ronald C; Lubans, David R; Penfold, Chris M; Courneya, Kerry S
2014-05-01
Theory-based interventions to promote physical activity (PA) are more effective than atheoretical approaches; however, the comparative utility of theoretical models is rarely tested in longitudinal designs with multiple time points. Further, there is limited research that has simultaneously tested social-cognitive models with self-report and objective PA measures. The primary aim of this study was to test the predictive ability of three theoretical models (social cognitive theory, theory of planned behaviour, and protection motivation theory) in explaining PA behaviour. Participants were adults with type 2 diabetes (n = 287, 53.8% males, mean age = 61.6 ± 11.8 years). Theoretical constructs across the three theories were tested to prospectively predict PA behaviour (objective and self-report) across three 6-month time intervals (baseline-6, 6-12, 12-18 months) using structural equation modelling. PA outcomes were steps/3 days (objective) and minutes of MET-weighted PA/week (self-report). The mean proportion of variance in PA explained by these models was 6.5% for objective PA and 8.8% for self-report PA. Direct pathways to PA outcomes were stronger for self-report compared with objective PA. These theories explained a small proportion of the variance in longitudinal PA studies. Theory development to guide interventions for increasing and maintaining PA in adults with type 2 diabetes requires further research with objective measures. Theory integration across social-cognitive models and the inclusion of ecological levels are recommended to further explain PA behaviour change in this population. Statement of contribution What is already known on this subject? Social-cognitive theories are able to explain partial variance for physical activity (PA) behaviour. What does this study add? The testing of three theories in a longitudinal design over three 6-month time intervals.
The parallel use and comparison of both objective and self-report PA measures in testing these theories. © 2013 The British Psychological Society.
Sauce, Bruno; Wass, Christopher; Smith, Andrew; Kwan, Stephanie; Matzel, Louis D.
2016-01-01
Attention is a component of the working memory system, and as such, is responsible for protecting task-relevant information from interference. Cognitive performance (particularly outside of the laboratory) is often plagued by interference, and the source of this interference, either external or internal, might influence the expression of individual differences in attentional ability. By definition, external attention (also described as “selective attention”) protects working memory against sensorial distractors of all kinds, while internal attention (also called “inhibition”) protects working memory against emotional impulses, irrelevant information from memory, and automatically-generated responses. At present, it is unclear if these two types of attention are expressed independently in non-human animals, and how they might differentially impact performance on other cognitive processes, such as learning. By using a diverse battery of four attention tests (with varying levels of internal and external sources of interference), here we aimed both to explore this issue, and to obtain a robust and general (less task-specific) measure of attention in mice. Exploratory factor analyses revealed two factors (external and internal attention) that in total, accounted for 73% of the variance in attentional performance. Confirmatory factor analyses found an excellent fit with the data of the model of attention that assumed an external and internal distinction (with a resulting correlation of 0.43). In contrast, a model of attention that assumed one source of variance (i.e., “general attention”) exhibited a poor fit with the data. Regarding the relationship between attention and learning, higher resistance against external sources of interference promoted better new learning, but tended to impair performance when cognitive flexibility was required, such as during the reversal of a previously instantiated response. 
The present results suggest that there can be (at least) two types of attention that contribute to the common variance in attentional performance in mice, and that external and internal attentions might have opposing influences on the rate at which animals learn. PMID:25452087
Messé, Arnaud; Rudrauf, David; Benali, Habib; Marrelec, Guillaume
2014-01-01
Investigating the relationship between brain structure and function is a central endeavor for neuroscience research. Yet, the mechanisms shaping this relationship largely remain to be elucidated and are highly debated. In particular, the existence and relative contributions of anatomical constraints and dynamical physiological mechanisms of different types remain to be established. We addressed this issue by systematically comparing functional connectivity (FC) from resting-state functional magnetic resonance imaging data with simulations from increasingly complex computational models, and by manipulating anatomical connectivity obtained from fiber tractography based on diffusion-weighted imaging. We hypothesized that FC reflects the interplay of at least three types of components: (i) a backbone of anatomical connectivity, (ii) a stationary dynamical regime directly driven by the underlying anatomy, and (iii) other stationary and non-stationary dynamics not directly related to the anatomy. We showed that anatomical connectivity alone accounts for up to 15% of FC variance; that there is a stationary regime accounting for up to an additional 20% of variance and that this regime can be associated to a stationary FC; that a simple stationary model of FC better explains FC than more complex models; and that there is a large remaining variance (around 65%), which must contain the non-stationarities of FC evidenced in the literature. We also show that homotopic connections across cerebral hemispheres, which are typically improperly estimated, play a strong role in shaping all aspects of FC, notably indirect connections and the topographic organization of brain networks. PMID:24651524
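The "percent of FC variance explained" figures reduce, in the simplest case, to a squared correlation between the vectorized upper triangles of a predicted connectivity matrix and the empirical FC matrix. A minimal sketch under that reading (symmetric matrices assumed; this is not the authors' full simulation pipeline):

```python
import numpy as np

def variance_explained(fc, predictor):
    """Fraction of variance in the vectorized upper triangle of an FC matrix
    explained by a predictor matrix (e.g. anatomical connectivity), computed
    as the squared Pearson correlation of the off-diagonal entries."""
    iu = np.triu_indices_from(fc, k=1)   # upper triangle, excluding diagonal
    x, y = predictor[iu], fc[iu]
    r = np.corrcoef(x, y)[0, 1]
    return r * r
```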
NASA Astrophysics Data System (ADS)
Atta, Abdu; Yahaya, Sharipah; Zain, Zakiyah; Ahmed, Zalikha
2017-11-01
The control chart is established as one of the most powerful tools in Statistical Process Control (SPC) and is widely used in industry. Conventional control charts rely on the normality assumption, which is not always satisfied by industrial data. This paper proposes a new S control chart for monitoring process dispersion using a skewness correction method for skewed distributions, named the SC-S control chart. Its performance in terms of false alarm rate is compared with that of various existing control charts for monitoring process dispersion, such as the scaled weighted variance S chart (SWV-S), skewness correction R chart (SC-R), weighted variance R chart (WV-R), weighted variance S chart (WV-S), and standard S chart (STD-S). Comparison with the exact S control chart with regard to the probability of out-of-control detection is also carried out. The Weibull and gamma distributions adopted in this study are assessed along with the normal distribution. The simulation study shows that the proposed SC-S control chart provides good in-control probabilities (Type I error) at almost all skewness levels and sample sizes, n. In terms of the probability of detecting a shift, the proposed SC-S chart is closer to the exact S control chart than the existing charts for skewed distributions, except for the SC-R control chart. In general, the performance of the proposed SC-S control chart is better than that of all the existing control charts for monitoring process dispersion with respect to both Type I error and probability of detecting a shift.
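The exact skewness-correction constants of the SC-S chart are beyond this sketch, but the failure mode it targets (an inflated false alarm rate of the normal-theory S chart under skewed data) is easy to reproduce by Monte Carlo. The limits below are the standard known-sigma, 3-sigma S-chart construction, and the samplers are illustrative:

```python
import numpy as np
from math import gamma, sqrt

def c4(n):
    """Unbiasing constant for the sample standard deviation under normality."""
    return sqrt(2.0 / (n - 1)) * gamma(n / 2.0) / gamma((n - 1) / 2.0)

def s_chart_false_alarm_rate(sampler, n=5, subgroups=100_000, seed=0):
    """Fraction of subgroup standard deviations outside the standard
    (normal-theory) 3-sigma S-chart limits, for unit-variance data."""
    rng = np.random.default_rng(seed)
    s = sampler(rng, (subgroups, n)).std(axis=1, ddof=1)
    c = c4(n)
    half_width = 3.0 * sqrt(1.0 - c * c)
    ucl, lcl = c + half_width, max(0.0, c - half_width)
    return float(np.mean((s > ucl) | (s < lcl)))
```

Under a skewed unit-variance distribution, the achieved Type I error exceeds the normal-theory value, which is the motivation for skewness-corrected limits.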
Temperament and job stress in Japanese company employees.
Sakai, Y; Akiyama, T; Miyake, Y; Kawamura, Y; Tsuda, H; Kurabayashi, L; Tominaga, M; Noda, T; Akiskal, K; Akiskal, H
2005-03-01
This study aims to demonstrate the relevance of temperament to job stress. The subjects were 848 male and 366 female Japanese company employees. The Temperament Evaluation of Memphis, Pisa, Paris and San Diego-Autoquestionnaire version (TEMPS-A) and the Munich Personality Test (MPT) were administered to assess temperaments, and the NIOSH Generic Job Stress Questionnaire (GJSQ) to assess job stress. We used hierarchical multiple linear regression analysis to determine whether temperament variables added any unique variance after controlling for the effects of other predictors such as gender, age and job rank. In all subscales of the GJSQ, temperament predicted a large share of the variance in job stress. Remarkably, for interpersonal relationship stressors, the temperament variables added greater variance than that predicted by gender, age and job rank. The hierarchical linear regression analysis showed that the irritable temperament was associated with the most prominent vulnerability, followed by the cyclothymic and anxious temperaments. The schizoid temperament had difficulty in the area of social support. On the other hand, the hyperthymic temperament displayed significant robustness in facing most job stressors; the melancholic type showed a similar pattern to a lesser degree. The findings may differ in a clinical Japanese sample or in a cohort of healthy employees from a different cultural background. Temperament influences job stress significantly; indeed, it impacts such stress with greater magnitude than age, gender and job rank in most areas examined. Temperament influences interpersonal relationship stressors more than workload-related stressors. Interestingly, in line with previous clinical and theoretical formulations, the hyperthymic and melancholic types actually appear to be "hyper-adapted" to the workplace.
Dartora, Nereu Roque; de Conto Ferreira, Michele Bertoluzi; Moris, Izabela Cristina Maurício; Brazão, Elisabeth Helena; Spazin, Aloísio Oro; Sousa-Neto, Manoel Damião; Silva-Sousa, Yara Terezinha; Gomes, Erica Alves
2018-07-01
Endodontically treated teeth have an increased risk of biomechanical failure because of significant loss of tooth structure. The biomechanical behavior of endodontically treated teeth restored with endocrowns of different extensions inside the pulp chamber was evaluated in vitro and by 3-dimensional finite element analysis (FEA). Thirty mandibular human molars were endodontically treated. Standardized endocrown preparations were performed, and the teeth were randomly divided into 3 groups (n = 10) according to the endocrown extension inside the pulp chamber: G-5 mm, a 5-mm extension; G-3 mm, a 3-mm extension; and G-1 mm, a 1-mm extension. After adhesive cementation, all specimens were subjected to thermocycling and dynamic loading. The surviving specimens were subjected to fracture resistance testing at a crosshead speed of 1 mm/min in a universal testing machine. All fractured specimens were subjected to fractography. Data were analyzed by 1-way analysis of variance and the Tukey post hoc test (P < .05). Stress distribution patterns in each group were analyzed using FEA. Qualitative analyses were performed according to the von Mises criterion. After dynamic loading, a survival rate of 100% was observed in all groups. For static loading, statistically significant differences among the groups were observed (P < .05) (G-5 mm = 2008.61 N, G-3 mm = 1795.41 N, and G-1 mm = 1268.12 N). Fractography showed a higher frequency of compression curls for G-5 mm and G-3 mm than for G-1 mm. FEA explained the results of the fracture strength testing and fractography. Greater extension of endocrowns inside the pulp chamber provided better mechanical performance. Copyright © 2018 American Association of Endodontists. Published by Elsevier Inc. All rights reserved.
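The one-way analysis of variance used above compares the mean fracture loads of the three extension groups. A minimal sketch of the F statistic, on synthetic loads centered at the reported group means (the spread is an assumption; the Tukey post hoc step that locates pairwise differences is omitted):

```python
import numpy as np

def one_way_anova_f(*groups):
    """F statistic for a one-way ANOVA: between-group mean square
    divided by within-group mean square."""
    all_vals = np.concatenate(groups)
    grand = all_vals.mean()
    ss_between = sum(len(g) * (g.mean() - grand) ** 2 for g in groups)
    ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)
    df_between = len(groups) - 1
    df_within = len(all_vals) - len(groups)
    return (ss_between / df_between) / (ss_within / df_within)

rng = np.random.default_rng(1)
g5 = rng.normal(2008.61, 150, 10)  # group means taken from the abstract;
g3 = rng.normal(1795.41, 150, 10)  # the within-group SD of 150 N is assumed
g1 = rng.normal(1268.12, 150, 10)
f_stat = one_way_anova_f(g5, g3, g1)
```

With 2 and 27 degrees of freedom, an F statistic above roughly 3.35 corresponds to P < .05, matching the significance the study reports.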
Blood Type O is not associated with increased blood loss in extensive spine surgery
Komatsu, Ryu; Dalton, Jarrod E.; Ghobrial, Michael; Fu, Alexander Y.; Lee, Jae H.; Egan, Cameron; Sessler, Daniel I.; Kasuya, Yusuke; Turan, Alparslan
2016-01-01
Study Objective To investigate whether Type O blood group status is associated with increased intraoperative blood loss and requirement of blood transfusion in extensive spine surgery. Design Retrospective comparative study. Setting University-affiliated, non-profit teaching hospital. Measurements Data from 1,050 ASA physical status 1, 2, 3, 4, and 5 patients who underwent spine surgeries involving 4 or more vertebral levels were analyzed. Patients with Type O blood were matched to similar patients with other blood types using propensity scores, which were estimated via demographic and morphometric data, medical history variables, and extent of surgery. Intraoperative estimated blood loss (EBL) was compared among matched patients using a linear regression model; intraoperative transfusion requirement in volume of red blood cells, fresh frozen plasma, platelets, cryoprecipitate, and cell-salvaged blood, the volume of intraoperative infusion of hetastarch, 5% albumin, and crystalloids, and hospital length of stay (LOS) were compared using Wilcoxon rank-sum tests. Main Results Intraoperative EBL and requirement of blood product transfusion were similar in patients with Type O blood group and those with other blood groups. Conclusion There was no association between Type O blood and increased intraoperative blood loss or blood transfusion requirement during extensive spine surgery, with similar hospital LOS in Type O and non-O patients. PMID:25172503
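The propensity-score matching design described above pairs each Type O patient with the most similar non-O patient before comparing outcomes. A minimal sketch of greedy 1:1 nearest-neighbor matching without replacement, using synthetic scores (the study estimated its scores from demographics, morphometrics, history, and surgical extent; the greedy rule here is an assumption, not necessarily the algorithm the authors used):

```python
import numpy as np

rng = np.random.default_rng(2)
score_o = rng.uniform(0, 1, 50)       # propensity scores, Type O patients
score_other = rng.uniform(0, 1, 200)  # propensity scores, other blood types

def match_nearest(treated, controls):
    """Greedy 1:1 matching without replacement on the propensity score.
    Returns (treated_index, control_index) pairs."""
    available = list(range(len(controls)))
    pairs = []
    for i, s in enumerate(treated):
        j = min(available, key=lambda k: abs(controls[k] - s))
        pairs.append((i, j))
        available.remove(j)  # each control is used at most once
    return pairs

pairs = match_nearest(score_o, score_other)
```

After matching, outcome comparisons (here, EBL via regression and the other volumes via rank-sum tests) are run only on the matched pairs.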
Imaging of karsts on buried carbonate platform in Central Luconia Province, Malaysia
NASA Astrophysics Data System (ADS)
Nur Fathiyah Jamaludin, Siti; Mubin, Mukhriz; Latiff, Abdul Halim Abdul
2017-10-01
Imaging carbonate rocks in the subsurface with the seismic method is always challenging because of their heterogeneity and fast velocities compared with other rock types. The existence of karst features in the carbonate rocks makes the reflectors even more complicated to interpret. Modern interpretation software such as PETREL and GeoTeric® makes it possible to image the karst morphology and to model the karst network within the buried carbonate platform used in this study. Using a combination of seismic attributes, including Variance, Conformance, Continuity, Amplitude, Frequency, and Edge attributes, we are able to image the karst features present in a proven gas field in the Central Luconia Province, Malaysia. These attributes are excellent for visualizing and imaging stratigraphic features, based on differences in acoustic impedance, as well as structural features, including karst. 2D and 3D karst models were developed to give a better understanding of the characteristics of the identified karsts. From the models, the karsts are found to be concentrated in the top part of the carbonate reservoir (the epikarst) and in the middle layer, with some becoming extensive and creating karst networks, either laterally or vertically. Most of the vertical karst networks are related to faults that displace all the horizons in the carbonate platform.
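Of the attributes listed above, the Variance attribute is the most direct to illustrate: it measures local amplitude variance in a small window, so laterally discontinuous features such as karst edges stand out against coherent reflectors. A minimal sketch on a 2-D amplitude section (the 3 x 3 window size is illustrative; commercial packages compute this on 3-D cubes with dip steering):

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def variance_attribute(section, win=3):
    """Local variance of a 2-D seismic amplitude section.
    The output is trimmed by win - 1 samples in each dimension."""
    windows = sliding_window_view(section, (win, win))
    return windows.var(axis=(-2, -1))

# Toy section: an abrupt lateral amplitude change, standing in for a
# discontinuity; the attribute lights up only along that edge.
section = np.zeros((10, 10))
section[:, 5:] = 1.0
attr = variance_attribute(section)
```

Coherent (locally constant) regions yield variance near zero, while the columns whose windows straddle the discontinuity yield high values, which is why the attribute is useful for mapping karst networks.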
NASA Astrophysics Data System (ADS)
Yang, Xiyan; Wu, Yahao; Yuan, Zhanjiang
2015-06-01
Two-component signaling modules are widespread in bacteria and other microbes. Based on their distinct network structures, these modules can be divided into two types: the monofunctional system (MFS), in which the sensor kinase (SK) modulates only phosphorylation of the response regulator (RR), and the bifunctional system (BFS), in which the SK catalyzes both phosphorylation and dephosphorylation of the RR. Here, we analyze the dynamical behaviors of these two systems based on stability theory, focusing on the differences between them. The analysis of the deterministic behavior indicates no difference between the two modules: each system has a unique stable steady state. However, there are significant differences in stochastic behavior. Specifically, if the mean phosphorylated SK level is kept the same for the two modules, then the variance and the Fano factor for the phosphorylated RR in the BFS are always no less than those in the MFS, indicating that bifunctionality always enhances fluctuations. The correlation between the phosphorylated SK and the phosphorylated RR in the BFS is always positive, mainly due to competition between system components, whereas this correlation in the MFS may be positive, nearly zero, or negative, depending on the ratio between two rate constants. Our overall analysis indicates that the differences between the dynamical behaviors of monofunctional and bifunctional signaling modules are mainly stochastic rather than deterministic.
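The Fano factor used above as the fluctuation measure is simply variance divided by mean of the copy-number distribution. A minimal illustration, using Poisson samples as the neutral baseline for which the Fano factor equals 1 (the abstract's claim is that the BFS pushes this quantity at or above the MFS value; the Poisson process here is only a reference point, not either module):

```python
import numpy as np

def fano(samples):
    """Fano factor: variance over mean of a copy-number sample."""
    samples = np.asarray(samples, dtype=float)
    return samples.var() / samples.mean()

rng = np.random.default_rng(3)
# A Poisson-distributed copy number has Fano factor 1; super-Poissonian
# fluctuations (Fano > 1) indicate enhanced noise, sub-Poissonian (< 1)
# indicate suppressed noise.
poisson_samples = rng.poisson(lam=50, size=100_000)
baseline_fano = fano(poisson_samples)
```

Comparing the Fano factor rather than the raw variance is what allows the MFS/BFS comparison at matched mean phosphorylated SK levels.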
Verma, Shefali S; Lucas, Anastasia M; Lavage, Daniel R; Leader, Joseph B; Metpally, Raghu; Krishnamurthy, Sarathbabu; Dewey, Frederick; Borecki, Ingrid; Lopez, Alexander; Overton, John; Penn, John; Reid, Jeffrey; Pendergrass, Sarah A; Breitwieser, Gerda; Ritchie, Marylyn D
2017-01-01
A wide range of patient health data is recorded in Electronic Health Records (EHR). This data includes diagnoses, surgical procedures, clinical laboratory measurements, and medication information. Together this information reflects the patient's medical history. Many studies have efficiently used this data from the EHR to find associations that are clinically relevant, either by utilizing International Classification of Diseases, version 9 (ICD-9) codes or laboratory measurements, or by designing phenotype algorithms to extract case and control status with accuracy from the EHR. Here we developed a strategy to utilize longitudinal quantitative trait data from the EHR at Geisinger Health System, focusing on outpatient metabolic and complete blood panel data as a starting point. The Comprehensive Metabolic Panel (CMP) as well as Complete Blood Counts (CBC) are parts of routine care and provide a comprehensive, high-level screening picture of patients' overall health and disease. We randomly split our data into two datasets to allow for discovery and replication. We first conducted a genome-wide association study (GWAS) with median values of 25 different clinical laboratory measurements to identify variants from HumanOmniExpressExome BeadChip data that are associated with these measurements. We identified 687 variants that were associated with the tested clinical measurements and replicated at p < 5×10^-8. Since longitudinal data from the EHR provides a record of a patient's medical history, we utilized this information to further investigate the ICD-9 codes that might be associated with differences in variability of the measurements in the longitudinal dataset. We identified low- and high-variance patients by looking at changes within their individual longitudinal EHR laboratory results for each of the 25 clinical lab values (thus creating 50 groups: a high-variance and a low-variance group for each lab variable).
We then performed a PheWAS analysis with ICD-9 diagnosis codes, separately in the high variance group and the low variance group for each lab variable. We found 717 PheWAS associations that replicated at a p-value less than 0.001. Next, we evaluated the results of this study by comparing the association results between the high and low variance groups. For example, we found 39 SNPs (in multiple genes) associated with ICD-9 250.01 (type I diabetes) in patients with high variance of plasma glucose levels, but not in patients with low variance in plasma glucose levels. Another example is the association of 4 SNPs in UMOD with chronic kidney disease in patients with high variance for aspartate aminotransferase (discovery p-value: 8.71×10^-9 and replication p-value: 2.03×10^-6). In general, we see a pattern of many more statistically significant associations from patients with high variance in the quantitative lab variables, in comparison with the low variance group across all of the 25 laboratory measurements. This study is one of the first of its kind to utilize quantitative trait variance from longitudinal laboratory data to find associations among genetic variants and clinical phenotypes obtained from an EHR, integrating laboratory values and diagnosis codes to understand the genetic complexities of common diseases.
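The high- versus low-variance grouping step described above can be sketched as follows: compute each patient's within-patient variance for one lab variable across their longitudinal values, then split the cohort at a threshold. The median split used here is an assumption for illustration; the abstract does not specify the exact cutoff rule.

```python
import numpy as np

def variance_groups(patient_series):
    """patient_series: dict mapping patient_id -> list of longitudinal
    values for one lab variable. Returns (high_variance_ids,
    low_variance_ids) via a median split on within-patient variance."""
    variances = {pid: np.var(vals) for pid, vals in patient_series.items()}
    cutoff = np.median(list(variances.values()))
    high = [pid for pid, v in variances.items() if v > cutoff]
    low = [pid for pid, v in variances.items() if v <= cutoff]
    return high, low

# Toy longitudinal glucose-like values for four hypothetical patients.
series = {"a": [1, 1, 1], "b": [0, 10, 20], "c": [5, 5, 6], "d": [0, 100, 0]}
high, low = variance_groups(series)
```

Repeating this split for each of the 25 lab variables yields the 50 groups (one high-variance and one low-variance group per variable) in which the separate PheWAS analyses were run.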