Sample records for average variance extracted

  1. Extraction and Inhibition of Enzymatic Activity of Botulinum Neurotoxins/A1, /A2, and /A3 by a Panel of Monoclonal Anti-BoNT/A Antibodies

    DTIC Science & Technology

    2009-04-01

    triplicate and results were averaged. MS Detection A master mix was created consisting of 9 parts matrix solution (alpha-cyano-4-hydroxy cinnamic acid ...thus, do not inhibit the catalytic activity. Another feature of BoNT/A is that it exhibits genetic and amino acid variance within the toxin type, or...less amino acid variance [23] and this variance has been reported to affect binding of the toxin to anti-BoNT/A mAbs [24]. For these reasons, it is

  2. Comparison of small diameter stone baskets in an in vitro caliceal and ureteral model.

    PubMed

    Korman, Emily; Hendlin, Kari; Chotikawanich, Ekkarin; Monga, Manoj

    2011-01-01

    Three small diameter (<1.5F) stone baskets have recently been introduced. Our objective was to evaluate the stone capture rate of these baskets in an in vitro ureteral model and an in vitro caliceal model using novice, resident, and expert operators. Sacred Heart Medical Halo™ (1.5F), Cook N-Circle(®) Nitinol Tipless Stone Extractor (1.5F), and Boston Scientific OptiFlex(®) (1.3F) stone baskets were tested in an in vitro ureteral and a caliceal model by three novices, three residents, and three experts. The caliceal model consisted of a 7-cm length of 10-mm O.D. plastic tubing with a convex base. Each operator was timed during removal of a 3-mm calculus from each model, with three repetitions for each basket. Data were analyzed by single-factor analysis of variance and t tests assuming unequal variances. In the ureteral model, the Halo had the fastest average rate of stone extraction for experts and novices (0:02 ± 0:01 and 0:08 ± 0:04 min, respectively), as well as the overall fastest average stone extraction rate (0:08 ± 0:06 min). No statistically significant differences in extraction times between baskets were identified in the resident group. In the novice group, the Halo stone extraction rate was significantly faster than that of the OptiFlex (P=0.029). In the expert group, the OptiFlex had significantly slower average extraction rates than the Halo (P=0.005) and the N-Circle (P=0.017). In the caliceal model, no statistically significant differences were noted. While no significant differences were noted in extraction times for the caliceal model, the extraction times for the ureteral model were slowest with the OptiFlex basket. Other variables important in selecting the appropriate basket include operator preference, clinical setting, and cost.

  3. Psychometric Properties of the Death Anxiety Scale-Extended among Patients with End-Stage Renal Disease.

    PubMed

    Sharif Nia, Hamid; Pahlevan Sharif, Saeed; Koocher, Gerald P; Yaghoobzadeh, Ameneh; Haghdoost, Ali Akbar; Mar Win, Ma Thin; Soleimani, Mohammad Ali

    2017-01-01

    This study aimed to evaluate the validity and reliability of the Persian version of the Death Anxiety Scale-Extended (DAS-E). A total of 507 patients with end-stage renal disease completed the DAS-E. The factor structure of the scale was evaluated using exploratory factor analysis with an oblique rotation and confirmatory factor analysis. The content and construct validity of the DAS-E were assessed. Average variance extracted, maximum shared squared variance, and average shared squared variance were estimated to assess discriminant and convergent validity. Reliability was assessed using Cronbach's alpha coefficient (α = .839 and .831), composite reliability (CR = .845 and .832), theta (θ = .893 and .867), and McDonald's omega (Ω = .796 and .743). The analysis indicated a two-factor solution. Reliability and discriminant validity of the factors were established. The findings revealed that the scale is a valid and reliable instrument that can be used to assess death anxiety in Iranian patients with end-stage renal disease.
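
The average variance extracted (AVE) and maximum shared squared variance (MSV) used in this record for convergent and discriminant validity are simple functions of the standardized loadings and factor correlations. A minimal sketch with hypothetical values (the record does not report the actual loadings):

```python
def average_variance_extracted(loadings):
    """AVE: mean of the squared standardized loadings of a factor's items."""
    return sum(l * l for l in loadings) / len(loadings)

def max_shared_squared_variance(factor_corrs):
    """MSV: largest squared correlation between this factor and any other."""
    return max(r * r for r in factor_corrs)

# Hypothetical two-factor solution: loadings of factor 1's items and the
# factor 1 / factor 2 correlation.
loadings_f1 = [0.78, 0.71, 0.69, 0.74]
corr_f1_f2 = [0.42]

ave = average_variance_extracted(loadings_f1)
msv = max_shared_squared_variance(corr_f1_f2)
# Convergent validity: AVE > 0.5; discriminant validity: AVE > MSV.
print(ave > 0.5, ave > msv)  # → True True
```

The same quantities generalize to any number of factors by computing one AVE per factor and comparing it against that factor's squared correlations with all other factors.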

  4. Compounding approach for univariate time series with nonstationary variances

    NASA Astrophysics Data System (ADS)

    Schäfer, Rudi; Barkhofen, Sonja; Guhr, Thomas; Stöckmann, Hans-Jürgen; Kuhl, Ulrich

    2015-12-01

    A defining feature of nonstationary systems is the time dependence of their statistical parameters. Measured time series may exhibit Gaussian statistics on short time horizons, due to the central limit theorem. The sample statistics for long time horizons, however, averages over the time-dependent variances. To model the long-term statistical behavior, we compound the local distribution with the distribution of its parameters. Here, we consider two concrete, but diverse, examples of such nonstationary systems: the turbulent air flow of a fan and a time series of foreign exchange rates. Our main focus is to empirically determine the appropriate parameter distribution for the compounding approach. To this end, we extract the relevant time scales by decomposing the time signals into windows and determine the distribution function of the thus obtained local variances.
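
The windowing step described above, decomposing the signal into windows and collecting the distribution of local variances, can be sketched as follows; the drifting-scale series is a hypothetical stand-in for the turbulence or exchange-rate data:

```python
import random
import statistics

def local_variances(series, window):
    """Split the series into non-overlapping windows and return the
    variance estimated within each window."""
    return [statistics.pvariance(series[i:i + window])
            for i in range(0, len(series) - window + 1, window)]

# Hypothetical nonstationary signal: Gaussian noise whose scale drifts in time.
random.seed(0)
series = [random.gauss(0.0, 1.0 + 0.5 * (t // 200)) for t in range(1000)]

vars_ = local_variances(series, window=100)
# The compounding approach would next fit a parameter distribution to vars_
# and mix the local Gaussian over it to model the long-horizon statistics.
print(len(vars_), min(vars_) < max(vars_))  # prints: 10 True
```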

  5. Compounding approach for univariate time series with nonstationary variances.

    PubMed

    Schäfer, Rudi; Barkhofen, Sonja; Guhr, Thomas; Stöckmann, Hans-Jürgen; Kuhl, Ulrich

    2015-12-01

    A defining feature of nonstationary systems is the time dependence of their statistical parameters. Measured time series may exhibit Gaussian statistics on short time horizons, due to the central limit theorem. The sample statistics for long time horizons, however, averages over the time-dependent variances. To model the long-term statistical behavior, we compound the local distribution with the distribution of its parameters. Here, we consider two concrete, but diverse, examples of such nonstationary systems: the turbulent air flow of a fan and a time series of foreign exchange rates. Our main focus is to empirically determine the appropriate parameter distribution for the compounding approach. To this end, we extract the relevant time scales by decomposing the time signals into windows and determine the distribution function of the thus obtained local variances.

  6. Seabed mapping and characterization of sediment variability using the usSEABED data base

    USGS Publications Warehouse

    Goff, J.A.; Jenkins, C.J.; Williams, S. Jeffress

    2008-01-01

    We present a methodology for statistical analysis of randomly located marine sediment point data, and apply it to the US continental shelf portions of usSEABED mean grain size records. The usSEABED database, like many modern, large environmental datasets, is heterogeneous and interdisciplinary. We statistically test the database as a source of mean grain size data, and from it provide a first examination of regional seafloor sediment variability across the entire US continental shelf. Data derived from laboratory analyses ("extracted") and from word-based descriptions ("parsed") are treated separately, and they are compared statistically and deterministically. Data records are selected for spatial analysis by their location within sample regions: polygonal areas defined in ArcGIS chosen by geography, water depth, and data sufficiency. We derive isotropic, binned semivariograms from the data, and invert these for estimates of noise variance, field variance, and decorrelation distance. The highly erratic nature of the semivariograms is a result both of the random locations of the data and of the high level of data uncertainty (noise). This decorrelates the data covariance matrix for the inversion, and largely prevents robust estimation of the fractal dimension. Our comparison of the extracted and parsed mean grain size data demonstrates important differences between the two. In particular, extracted measurements generally produce finer mean grain sizes, lower noise variance, and lower field variance than parsed values. Such relationships can be used to derive a regionally dependent conversion factor between the two. Our analysis of sample regions on the US continental shelf revealed considerable geographic variability in the estimated statistical parameters of field variance and decorrelation distance. Some regional relationships are evident, and overall there is a tendency for field variance to be higher where the average mean grain size is finer grained. 
Surprisingly, parsed and extracted noise magnitudes correlate with each other, which may indicate that some portion of the data variability that we identify as "noise" is caused by real grain size variability at very short scales. Our analyses demonstrate that by applying a bias-correction proxy, usSEABED data can be used to generate reliable interpolated maps of regional mean grain size and sediment character. 
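
The binned semivariogram the authors invert for noise variance, field variance, and decorrelation distance can be estimated from randomly located point data as in this sketch; the coordinates and grain-size values below are hypothetical:

```python
import random

def binned_semivariogram(points, bin_width, n_bins):
    """points: list of (x, y, value).  Returns gamma(h) per distance bin:
    half the mean squared value difference over all pairs in that bin."""
    sums = [0.0] * n_bins
    counts = [0] * n_bins
    for i in range(len(points)):
        xi, yi, vi = points[i]
        for j in range(i + 1, len(points)):
            xj, yj, vj = points[j]
            h = ((xi - xj) ** 2 + (yi - yj) ** 2) ** 0.5
            b = int(h // bin_width)
            if b < n_bins:
                sums[b] += 0.5 * (vi - vj) ** 2
                counts[b] += 1
    return [s / c if c else None for s, c in zip(sums, counts)]

# Hypothetical randomly located mean grain size samples in a unit square.
random.seed(1)
pts = [(random.random(), random.random(), random.gauss(3.0, 0.5))
       for _ in range(60)]
print(binned_semivariogram(pts, bin_width=0.2, n_bins=5))
```

The inversion step would then fit a semivariogram model to these binned values, with the fitted intercept interpreted as noise variance and the sill minus intercept as field variance.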

  7. Effect of scene illumination conditions on digital enhancement techniques of multispectral scanner LANDSAT images

    NASA Technical Reports Server (NTRS)

    Parada, N. D. J.; Novo, E. M. L. M.

    1983-01-01

    Two sets of MSS/LANDSAT data with solar elevation ranging from 22 deg to 41 deg were used at the Image-100 System to implement the Eliason et al. technique for extracting the topographic modulation component. An unsupervised cluster analysis was used to obtain an average brightness image for each channel. Analysis of the enhanced images shows that the technique for extracting the topographic modulation component is more appropriate for MSS data obtained under high sun elevation angles. Low sun elevation increases the variance of each cluster, so the average brightness does not represent its albedo properties. The topographic modulation component applied to low sun elevation angles damages rather than enhances topographic information. Better results were produced for channels 4 and 5 than for channels 6 and 7.

  8. Brillouin Frequency Shift of Fiber Distributed Sensors Extracted from Noisy Signals by Quadratic Fitting.

    PubMed

    Zheng, Hanrong; Fang, Zujie; Wang, Zhaoyong; Lu, Bin; Cao, Yulong; Ye, Qing; Qu, Ronghui; Cai, Haiwen

    2018-01-31

    It is a basic task in Brillouin distributed fiber sensors to extract the peak frequency of the scattering spectrum, since the peak frequency shift gives information on the fiber temperature and strain changes. Because of high-level noise, quadratic fitting is often used in the data processing. Formulas of the dependence of the minimum detectable Brillouin frequency shift (BFS) on the signal-to-noise ratio (SNR) and frequency step have been presented in publications, but in different expressions. A detailed deduction of new formulas of BFS variance and its average is given in this paper, showing especially their dependences on the data range used in fitting, including its length and its center respective to the real spectral peak. The theoretical analyses are experimentally verified. It is shown that the center of the data range has a direct impact on the accuracy of the extracted BFS. We propose and demonstrate an iterative fitting method to mitigate such effects and improve the accuracy of BFS measurement. The different expressions of BFS variances presented in previous papers are explained and discussed.
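
The quadratic-fitting step itself, fitting a parabola over a data range around the spectral peak and reading off the vertex frequency, can be sketched as below; the three-point spectrum is an illustrative example, not the authors' data:

```python
def parabolic_peak(freqs, amps):
    """Least-squares quadratic fit a*f^2 + b*f + c to (freqs, amps);
    returns the vertex frequency -b/(2a), the estimated spectral peak."""
    # Normal equations of the 3-parameter fit, solved by Cramer's rule.
    Sx = [sum(f ** k for f in freqs) for k in range(5)]
    Sy = [sum(y * f ** k for f, y in zip(freqs, amps)) for k in range(3)]
    M = [[Sx[4], Sx[3], Sx[2]],
         [Sx[3], Sx[2], Sx[1]],
         [Sx[2], Sx[1], Sx[0]]]
    rhs = [Sy[2], Sy[1], Sy[0]]

    def det3(m):
        return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
                - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
                + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

    def repl(col):
        return [[rhs[r] if c == col else M[r][c] for c in range(3)]
                for r in range(3)]

    d = det3(M)
    a = det3(repl(0)) / d
    b = det3(repl(1)) / d
    return -b / (2 * a)

# Hypothetical noisy-spectrum samples near the Brillouin peak (GHz offsets).
print(round(parabolic_peak([-1.0, 0.0, 1.0], [-1.44, -0.04, -0.64]), 6))  # → 0.2
```

The paper's point about the fitting range can be explored with this sketch directly: shifting the `freqs` window off-center relative to the true peak biases the returned vertex, which is what the proposed iterative fitting mitigates.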

  9. Hierarchical Bayesian Model Averaging for Chance Constrained Remediation Designs

    NASA Astrophysics Data System (ADS)

    Chitsazan, N.; Tsai, F. T.

    2012-12-01

    Groundwater remediation designs rely heavily on simulation models, which are subject to various sources of uncertainty in their predictions. To develop a robust remediation design, it is crucial to understand the effect of the uncertainty sources. In this research, we introduce a hierarchical Bayesian model averaging (HBMA) framework to segregate and prioritize sources of uncertainty in a multi-layer frame, where each layer targets a source of uncertainty. The HBMA framework provides insight into uncertainty priorities and propagation. In addition, HBMA allows evaluating model weights at different hierarchy levels and assessing the relative importance of models at each level. To account for uncertainty, we employ chance constrained (CC) programming for stochastic remediation design. Chance constrained programming has traditionally been used to account for parameter uncertainty; recently, many studies have suggested that model structure uncertainty is not negligible compared to parameter uncertainty. Using chance constrained programming along with HBMA can provide a rigorous tool for groundwater remediation design under uncertainty. In this research, HBMA-CC was applied to a remediation design in a synthetic aquifer. The design used a scavenger well approach to mitigate saltwater intrusion toward production wells. HBMA was employed to assess uncertainties from model structure, parameter estimation, and kriging interpolation. An improved harmony search optimization method was used to find the optimal location of the scavenger well. We evaluated prediction variances of chloride concentration at the production wells through the HBMA framework. The results showed that choosing the single best model may lead to a significant error in evaluating prediction variances, for two reasons. First, considering only the single best model ignores the variances that stem from uncertainty in the model structure. 
Second, considering a best model with a non-dominant model weight may underestimate or overestimate prediction variances by ignoring other plausible propositions. Chance constraints allow developing a remediation design with a desired reliability. However, when only the single best model is considered, the calculated reliability will differ from the desired reliability. We calculated the reliability of the design for the models at different levels of HBMA. The results showed that, moving toward the top layers of HBMA, the calculated reliability converges to the chosen reliability. We employed chance constrained optimization along with the HBMA framework to find the optimal location and pumpage for the scavenger well. The results showed that, using models at different levels of the HBMA framework, the optimal location of the scavenger well remained the same, but the optimal extraction rate changed. Thus, we concluded that the optimal pumping rate was sensitive to the prediction variance. The prediction variance also changed with the extraction rate: a very high extraction rate drives the prediction variances of chloride concentration at the production wells toward zero regardless of which HBMA model is used.
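
The two error sources the authors describe correspond to the standard model-averaging variance decomposition: the total prediction variance is the weighted within-model variance plus the between-model spread of the predictions, the term a single-best-model analysis drops. A minimal sketch with hypothetical model predictions and weights:

```python
def bma_mean_variance(means, variances, weights):
    """Model-averaged prediction mean and total variance.
    Total variance = weighted within-model variance
                   + between-model variance of the predicted means."""
    m = sum(w * mu for w, mu in zip(weights, means))
    within = sum(w * v for w, v in zip(weights, variances))
    between = sum(w * (mu - m) ** 2 for w, mu in zip(weights, means))
    return m, within + between

# Hypothetical chloride predictions (mg/L) from three structural models.
means = [12.0, 15.0, 20.0]
variances = [4.0, 5.0, 3.0]
weights = [0.5, 0.3, 0.2]

m, v = bma_mean_variance(means, variances, weights)
# The between-model spread inflates v beyond any single model's variance,
# which is why a single best model can understate prediction uncertainty.
print(m, v)
```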

  10. Evidence for a Global Sampling Process in Extraction of Summary Statistics of Item Sizes in a Set.

    PubMed

    Tokita, Midori; Ueda, Sachiyo; Ishiguchi, Akira

    2016-01-01

    Several studies have shown that our visual system may construct a "summary statistical representation" over groups of visual objects. Although there is a general understanding that human observers can accurately represent sets of a variety of features, many questions on how summary statistics, such as an average, are computed remain unanswered. This study investigated sampling properties of visual information used by human observers to extract two types of summary statistics of item sets, average and variance. We presented three models of ideal observers to extract the summary statistics: a global sampling model without sampling noise, global sampling model with sampling noise, and limited sampling model. We compared the performance of an ideal observer of each model with that of human observers using statistical efficiency analysis. Results suggest that summary statistics of items in a set may be computed without representing individual items, which makes it possible to discard the limited sampling account. Moreover, the extraction of summary statistics may not necessarily require the representation of individual objects with focused attention when the sets of items are larger than 4.

  11. Improving Signal Detection using Allan and Theo Variances

    NASA Astrophysics Data System (ADS)

    Hardy, Andrew; Broering, Mark; Korsch, Wolfgang

    2017-09-01

    Precision measurements often deal with small signals buried within electronic noise. Extracting these signals can be enhanced through digital signal processing, and improving these techniques improves signal-to-noise ratios. Studies presently performed at the University of Kentucky are utilizing the electro-optic Kerr effect to understand cell charging effects within ultra-cold neutron storage cells. This work is relevant for the neutron electric dipole moment (nEDM) experiment at Oak Ridge National Laboratory. These investigations, and future investigations in general, will benefit from the improved analysis techniques illustrated here. This project showcases various methods for determining the optimum duration over which data should be gathered. Typically, extending the measuring time of an experimental run reduces the averaged noise. However, experiments also encounter drift due to fluctuations, which mitigates the benefits of extended data gathering. By comparing FFT averaging techniques with Allan and Theo variance measurements, quantifiable differences in signal detection are presented. This research is supported by DOE Grants: DE-FG02-99ER411001, DE-AC05-00OR22725.
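
The non-overlapping Allan variance used in such comparisons is half the mean squared difference between successive averages of duration tau; where it stops falling with tau is where drift begins to dominate and longer averaging stops helping. A minimal sketch on hypothetical white noise:

```python
import random

def allan_variance(y, tau):
    """Non-overlapping Allan variance at averaging factor tau (in samples):
    half the mean squared difference of successive tau-sample averages."""
    means = [sum(y[i:i + tau]) / tau
             for i in range(0, len(y) - tau + 1, tau)]
    diffs = [(means[k + 1] - means[k]) ** 2 for k in range(len(means) - 1)]
    return 0.5 * sum(diffs) / len(diffs)

# Hypothetical detector record: pure white noise, no drift.
random.seed(0)
y = [random.gauss(0.0, 1.0) for _ in range(2000)]

# For white noise the Allan variance falls roughly as 1/tau, so longer
# averaging keeps paying off; a drifting signal would flatten or rise.
print(allan_variance(y, 1) > allan_variance(y, 16))  # prints: True
```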

  12. The latitude dependence of the variance of zonally averaged quantities. [in polar meteorology with attention to geometrical effects of earth

    NASA Technical Reports Server (NTRS)

    North, G. R.; Bell, T. L.; Cahalan, R. F.; Moeng, F. J.

    1982-01-01

    Geometric characteristics of the spherical earth are shown to be responsible for the increase of variance with latitude of zonally averaged meteorological statistics. An analytic model is constructed to display the effect of a spherical geometry on zonal averages, employing a sphere labeled with radial unit vectors in a real, stochastic field expanded in complex spherical harmonics. The variance of a zonally averaged field is found to be expressible in terms of the spectrum of the vector field of the spherical harmonics. A maximum variance is then located at the poles, and the ratio of the variance to the zonally averaged grid-point variance, weighted by the cosine of the latitude, yields the zonal correlation typical of the latitude. An example is provided for the 500 mb level in the Northern Hemisphere compared to 15 years of data. Variance is determined to increase north of 60 deg latitude.

  13. Ultrasound-assisted extraction of pectins from grape pomace using citric acid: a response surface methodology approach.

    PubMed

    Minjares-Fuentes, R; Femenia, A; Garau, M C; Meza-Velázquez, J A; Simal, S; Rosselló, C

    2014-06-15

    An ultrasound-assisted procedure for the extraction of pectins from grape pomace with citric acid as the extracting agent was established. A Box-Behnken design (BBD) was employed to optimize the extraction temperature (X1: 35-75°C), extraction time (X2: 20-60 min) and pH (X3: 1.0-2.0) to obtain a high yield of pectins with high average molecular weight (MW) and degree of esterification (DE) from grape pomace. Analysis of variance showed that the contribution of a quadratic model was significant for the pectin extraction yield and for pectin MW, whereas the DE of pectins was more influenced by a linear model. An optimization study using response surface methodology was performed and 3D response surfaces were plotted from the mathematical model. According to the RSM model, the highest pectin yield (∼32.3%) can be achieved when the UAE process is carried out at 75°C for 60 min using a citric acid solution of pH 2.0. These pectic polysaccharides, composed mainly of galacturonic acid units (<97% of total sugars), have an average MW of 163.9 kDa and a DE of 55.2%. Close agreement between experimental and predicted values was found. These results suggest that ultrasound-assisted extraction could be a good option for extracting functional pectins with citric acid from grape pomace at the industrial level. Copyright © 2014 Elsevier Ltd. All rights reserved.

  14. DREEM on: validation of the Dundee Ready Education Environment Measure in Pakistan.

    PubMed

    Khan, Junaid Sarfraz; Tabasum, Saima; Yousafzai, Usman Khalil; Fatima, Mehreen

    2011-09-01

    To validate the DREEM in the medical education environment of Punjab, Pakistan. The DREEM questionnaire was collected anonymously from final-year Baccalaureate of Medicine, Baccalaureate of Surgery students in the private and public medical colleges affiliated with the University of Health Sciences, Lahore. Data were analyzed using principal component analysis with varimax rotation. The response rate was 84.14%. The average DREEM score was 125. Confirmatory and exploratory factor analyses were applied under the conditions of eigenvalues >1 and loadings ≥0.3. In confirmatory factor analysis, five components were extracted accounting for 40.10% of variance; in exploratory factor analysis, ten components were extracted accounting for 52.33% of variance. The 50 items had an internal consistency reliability of 0.91 (Cronbach's alpha). The Spearman-Brown value of 0.868 shows the reliability of the analysis. In both analyses the subscales produced were sensible, but the mismatch with the original was largely due to English-Pakistani contextual and cultural differences. DREEM is a generic instrument that will do well with regional modifications to suit individual, contextual and cultural settings.

  15. Analysis of the variation in OCT measurements of a structural bottleneck for eye-brain transfer of visual information from 3D-volumes of the optic nerve head, PIMD-Average [0;2π]

    NASA Astrophysics Data System (ADS)

    Söderberg, Per G.; Malmberg, Filip; Sandberg-Melin, Camilla

    2016-03-01

    The present study aimed to analyze the clinical usefulness of the thinnest cross section of the nerve fibers in the optic nerve head, averaged over the circumference of the optic nerve head. 3D volumes of the optic nerve head of the same eye were captured at two different visits spaced 1-4 weeks apart, in 13 subjects diagnosed with early to moderate glaucoma. At each visit, 3 volumes containing the optic nerve head were captured independently with a Topcon OCT-2000 system. In each volume, the average shortest distance between the inner surface of the retina and the central limit of the pigment epithelium around the optic nerve head circumference, PIMD-Average [0;2π], was determined semiautomatically. The measurements were analyzed with an analysis of variance to estimate the variance components for subjects, visits, volumes, and semi-automatic measurements of PIMD-Average [0;2π]. It was found that the variance for subjects was on the order of five times the variance for visits, and the variance for visits was on the order of five times the variance for volumes. The variance for semi-automatic measurements of PIMD-Average [0;2π] was 3 orders of magnitude lower than the variance for volumes. A 95% confidence interval for mean PIMD-Average [0;2π] was estimated at 1.00 ± 0.13 mm (d.f. = 12). The variance estimates indicate that PIMD-Average [0;2π] is not suitable for comparing a one-time estimate in a subject against a population reference interval. Cross-sectional independent-group comparisons of PIMD-Average [0;2π] averaged over subjects would require inconveniently large sample sizes. However, cross-sectional independent-group comparison of averages of the within-subject difference between baseline and follow-up can be made with reasonable sample sizes. 
Assuming a loss rate of 0.1 PIMD-Average [0;2π] per year and 4 visits per year, it was found that approximately 18 months of follow-up are required before a significant change of PIMD-Average [0;2π] can be observed with a power of 0.8. This is shorter than what has been observed both for HRT measurements and for automated perimetry measurements with a similar observation rate. It is concluded that PIMD-Average [0;2π] has the potential to detect deterioration of glaucoma more quickly than currently available primary diagnostic instruments. To increase the efficiency of PIMD-Average [0;2π] further, the variation among visits within subjects has to be reduced.

  16. Creating a flipbook as a medium of instruction based on the research on activity test of kencur extract

    NASA Astrophysics Data System (ADS)

    Monika, Icha; Yeni, Laili Fitri; Ariyati, Eka

    2016-02-01

    This research aimed to reveal the validity of a flipbook as a learning medium for the sub-material of environmental pollution in the tenth grade, based on the results of an activity test of kencur (Kaempferia galanga) extract against the growth of the Fusarium oxysporum fungus. The research consisted of two stages. First, testing the validity of the flipbook medium through validation by seven assessors, analyzed based on the total average score of all aspects. Second, testing the activity of the kencur extract against the growth of Fusarium oxysporum using the experimental method with 10 treatments and 3 repetitions, analyzed with a one-way analysis of variance (ANOVA) test. The flipbook medium was developed through the stages of analysis of potential and problems, data collection, design, validation, and revision. The validation analysis of the flipbook gave an average score of 3.7, which is valid, so the flipbook could be used in the teaching and learning process, especially for the sub-material of environmental pollution in the tenth grade of senior high school.

  17. Theoretical and simulated performance for a novel frequency estimation technique

    NASA Technical Reports Server (NTRS)

    Crozier, Stewart N.

    1993-01-01

    A low complexity, open-loop, discrete-time, delay-multiply-average (DMA) technique for estimating the frequency offset for digitally modulated MPSK signals is investigated. A nonlinearity is used to remove the MPSK modulation and generate the carrier component to be extracted. Theoretical and simulated performance results are presented and compared to the Cramer-Rao lower bound (CRLB) for the variance of the frequency estimation error. For all signal-to-noise ratios (SNR's) above threshold, it is shown that the CRLB can essentially be achieved with linear complexity.
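
A delay-multiply-average estimator of the kind described can be sketched as follows; the Mth-power nonlinearity used to strip MPSK modulation and the pure-tone test signal are illustrative assumptions, not the paper's exact algorithm:

```python
import cmath
import math

def dma_frequency(samples, fs, M=4, delay=1):
    """Open-loop delay-multiply-average frequency-offset estimate.
    Raising the complex baseband samples to the Mth power removes M-ary
    PSK modulation; the phase of the averaged delayed conjugate product
    then encodes the frequency offset."""
    z = [s ** M for s in samples]                       # strip modulation
    prods = [z[n] * z[n - delay].conjugate() for n in range(delay, len(z))]
    avg = sum(prods) / len(prods)                       # average to cut noise
    return cmath.phase(avg) * fs / (2 * math.pi * M * delay)

# Hypothetical test: an unmodulated tone at 50 Hz, sampled at 1 kHz.
fs, f0 = 1000.0, 50.0
x = [cmath.exp(2j * math.pi * f0 * n / fs) for n in range(256)]
print(round(dma_frequency(x, fs, M=4), 6))  # → 50.0
```

Note the usual trade-off: the Mth power divides the unambiguous estimation range by M, so the offset must satisfy |f| < fs / (2 * M * delay).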

  18. Is There a Common Summary Statistical Process for Representing the Mean and Variance? A Study Using Illustrations of Familiar Items.

    PubMed

    Yang, Yi; Tokita, Midori; Ishiguchi, Akira

    2018-01-01

    A number of studies revealed that our visual system can extract different types of summary statistics, such as the mean and variance, from sets of items. Although the extraction of such summary statistics has been studied well in isolation, the relationship between these statistics remains unclear. In this study, we explored this issue using an individual differences approach. Observers viewed illustrations of strawberries and lollypops varying in size or orientation and performed four tasks in a within-subject design, namely mean and variance discrimination tasks with size and orientation domains. We found that the performances in the mean and variance discrimination tasks were not correlated with each other and demonstrated that extractions of the mean and variance are mediated by different representation mechanisms. In addition, we tested the relationship between performances in size and orientation domains for each summary statistic (i.e. mean and variance) and examined whether each summary statistic has distinct processes across perceptual domains. The results illustrated that statistical summary representations of size and orientation may share a common mechanism for representing the mean and possibly for representing variance. Introspections for each observer performing the tasks were also examined and discussed.

  19. Procedure optimization for green synthesis of silver nanoparticles by aqueous extract of Eucalyptus oleosa.

    PubMed

    Pourmortazavi, Seied Mahdi; Taghdiri, Mehdi; Makari, Vajihe; Rahimi-Nasrabadi, Mehdi

    2015-02-05

    The present study deals with the green synthesis of silver nanoparticles using the aqueous extract of Eucalyptus oleosa, without any catalyst, template, or surfactant. Colloidal silver nanoparticles were synthesized by reacting aqueous AgNO3 with E. oleosa leaf extract under non-photomediated conditions. The significance of synthesis conditions such as silver nitrate concentration, concentration of the plant extract, time of the synthesis reaction, and temperature of the plant extraction procedure for the particle size of the synthesized silver particles was investigated and optimized. The contributions of the studied factors to controlling the particle size of the reduced silver were quantitatively evaluated via analysis of variance (ANOVA). The results of this investigation showed that silver nanoparticles can be synthesized by tuning the significant parameters; performing the synthesis procedure at the optimum conditions yields silver nanoparticles with an average size of 21 nm. Ultraviolet-visible spectroscopy was used to monitor silver nanoparticle formation. The produced silver nanoparticles were characterized by scanning electron microscopy, energy-dispersive X-ray, and FT-IR techniques. Copyright © 2014 Elsevier B.V. All rights reserved.

  20. In vivo test of bitter (andrographis paniculata nees.) extract to ejaculated sperm quality

    NASA Astrophysics Data System (ADS)

    Sumarmin, R.; Huda, NK; Yuniarti, E.; Violita

    2018-03-01

    Sambiloto, or bitter (Andrographis paniculata Nees.), is often used to treat various diseases such as influenza and cancer, and as an anti-inflammatory, anti-HIV, anti-mitotic, and anti-fertility agent. This study aimed to determine the effects of bitter (Andrographis paniculata Nees.) extract on the quality of ejaculated sperm of mice (Mus musculus L. Swiss Webster). The research was conducted using a completely randomized design with 4 treatments: 0.0 g/b.w. (P0), 0.2 g/b.w. (P1), 0.4 g/b.w. (P2), or 0.6 g/b.w. (P3) of bitter extract given orally for 36 days. After treatment, the mice were decapitated and dissected, and sperm were collected from the vas deferens. Sperm were counted using an improved Neubauer chamber and stained with eosin to count abnormal sperm. Data were analyzed by analysis of variance (ANOVA) followed by DNMRT. The results showed that the average sperm numbers were 28.80 x 10^5 (P0), 19.50 x 10^5 (P1), 12.50 x 10^5 (P2), and 9.50 x 10^5 (P3). The average abnormal sperm numbers were 18.33 x 10^5 (P0), 22.50 x 10^5 (P1), 31.50 x 10^5 (P2), and 39.33 x 10^5 (P3). The results showed that 0.2 g/b.w. of bitter extract was already effective in decreasing sperm number. It can be concluded that bitter (Andrographis paniculata Nees.) extract decreases the quality of the ejaculated sperm of mice (Mus musculus L.).

  1. flowVS: channel-specific variance stabilization in flow cytometry.

    PubMed

    Azad, Ariful; Rajwa, Bartek; Pothen, Alex

    2016-07-28

    Comparing phenotypes of heterogeneous cell populations from multiple biological conditions is at the heart of scientific discovery based on flow cytometry (FC). When the biological signal is measured by the average expression of a biomarker, standard statistical methods require that variance be approximately stabilized in populations to be compared. Since the mean and variance of a cell population are often correlated in fluorescence-based FC measurements, a preprocessing step is needed to stabilize the within-population variances. We present a variance-stabilization algorithm, called flowVS, that removes the mean-variance correlations from cell populations identified in each fluorescence channel. flowVS transforms each channel from all samples of a data set by the inverse hyperbolic sine (asinh) transformation. For each channel, the parameters of the transformation are optimally selected by Bartlett's likelihood-ratio test so that the populations attain homogeneous variances. The optimum parameters are then used to transform the corresponding channels in every sample. flowVS is therefore an explicit variance-stabilization method that stabilizes within-population variances in each channel by evaluating the homoskedasticity of clusters with a likelihood-ratio test. With two publicly available datasets, we show that flowVS removes the mean-variance dependence from raw FC data and makes the within-population variance relatively homogeneous. We demonstrate that alternative transformation techniques such as flowTrans, flowScape, logicle, and FCSTrans might not stabilize variance. Besides flow cytometry, flowVS can also be applied to stabilize variance in microarray data. With a publicly available data set we demonstrate that flowVS performs as well as the VSN software, a state-of-the-art approach developed for microarrays. 
The homogeneity of variance in cell populations across FC samples is desirable when extracting features uniformly and comparing cell populations with different levels of marker expressions. The newly developed flowVS algorithm solves the variance-stabilization problem in FC and microarrays by optimally transforming data with the help of Bartlett's likelihood-ratio test. On two publicly available FC datasets, flowVS stabilizes within-population variances more evenly than the available transformation and normalization techniques. flowVS-based variance stabilization can help in performing comparison and alignment of phenotypically identical cell populations across different samples. flowVS and the datasets used in this paper are publicly available in Bioconductor.
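The core loop of this approach (asinh-transform each channel, choose the cofactor that makes Bartlett's statistic smallest) can be sketched in a few lines. This is an illustrative reconstruction, not flowVS's actual implementation; the cofactor grid and the simulated populations are assumptions, and Bartlett's statistic is coded directly from the textbook formula to keep the sketch dependency-free.

```python
import numpy as np

def bartlett_stat(groups):
    """Bartlett's test statistic for equality of variances across groups
    (smaller statistic = more homogeneous variances)."""
    k = len(groups)
    n = np.array([len(g) for g in groups])
    s2 = np.array([np.var(g, ddof=1) for g in groups])
    N = n.sum()
    sp2 = np.sum((n - 1) * s2) / (N - k)          # pooled variance
    num = (N - k) * np.log(sp2) - np.sum((n - 1) * np.log(s2))
    corr = 1 + (np.sum(1.0 / (n - 1)) - 1.0 / (N - k)) / (3 * (k - 1))
    return num / corr

def optimize_asinh_cofactor(populations, cofactors):
    """Pick the asinh cofactor whose transform makes the within-population
    variances most homogeneous, judged by Bartlett's statistic."""
    stats_ = [bartlett_stat([np.arcsinh(p / c) for p in populations])
              for c in cofactors]
    i = int(np.argmin(stats_))
    return cofactors[i], stats_[i]

rng = np.random.default_rng(0)
# Three populations whose spread grows with their mean, mimicking the
# mean-variance coupling of fluorescence measurements.
pops = [rng.normal(mu, 0.3 * mu, 500) for mu in (20.0, 100.0, 400.0)]
raw_stat = bartlett_stat(pops)
best_c, best_stat = optimize_asinh_cofactor(pops, np.geomspace(0.1, 100.0, 60))
```

For mean-proportional spread, small cofactors push asinh toward a log transform, so the optimized statistic drops far below that of the raw data.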

  2. Mixed emotions: Sensitivity to facial variance in a crowd of faces.

    PubMed

    Haberman, Jason; Lee, Pegan; Whitney, David

    2015-01-01

The visual system automatically represents summary information from crowds of faces, such as the average expression. This is a useful heuristic insofar as it provides critical information about the state of the world, not simply information about the state of one individual. However, the average alone is not sufficient for making decisions about how to respond to a crowd. The variance or heterogeneity of the crowd (the mixture of emotions) conveys information about the reliability of the average, essential for determining whether the average can be trusted. Despite its importance, the representation of variance within a crowd of faces has yet to be examined. This is addressed here in three experiments. In the first experiment, observers viewed a sample set of faces that varied in emotion, and then adjusted a subsequent set to match the variance of the sample set. To isolate variance as the summary statistic of interest, the average emotion of both sets was random. Results suggested that observers had information regarding crowd variance. The second experiment verified that this was indeed a uniquely high-level phenomenon, as observers were unable to derive the variance of an inverted set of faces as precisely as an upright set of faces. The third experiment replicated and extended the first two experiments using the method of constant stimuli. Together, these results show that the visual system is sensitive to emergent information about the emotional heterogeneity, or ambivalence, in crowds of faces.

  3. Is There a Common Summary Statistical Process for Representing the Mean and Variance? A Study Using Illustrations of Familiar Items

    PubMed Central

    Yang, Yi; Tokita, Midori; Ishiguchi, Akira

    2018-01-01

    A number of studies revealed that our visual system can extract different types of summary statistics, such as the mean and variance, from sets of items. Although the extraction of such summary statistics has been studied well in isolation, the relationship between these statistics remains unclear. In this study, we explored this issue using an individual differences approach. Observers viewed illustrations of strawberries and lollypops varying in size or orientation and performed four tasks in a within-subject design, namely mean and variance discrimination tasks with size and orientation domains. We found that the performances in the mean and variance discrimination tasks were not correlated with each other and demonstrated that extractions of the mean and variance are mediated by different representation mechanisms. In addition, we tested the relationship between performances in size and orientation domains for each summary statistic (i.e. mean and variance) and examined whether each summary statistic has distinct processes across perceptual domains. The results illustrated that statistical summary representations of size and orientation may share a common mechanism for representing the mean and possibly for representing variance. Introspections for each observer performing the tasks were also examined and discussed. PMID:29399318

  4. Outcomes of different Class II treatments : Comparisons using the American Board of Orthodontics Model Grading System.

    PubMed

    Akinci Cansunar, Hatice; Uysal, Tancan

    2016-07-01

The aim of this study was to evaluate the clinical outcomes of three different Class II treatment modalities followed by fixed orthodontic therapy, using the American Board of Orthodontics Model Grading System (ABO-MGS). As a retrospective study, files of patients treated at postgraduate orthodontic clinics in different cities in Turkey were randomly selected. From 1684 posttreatment records, 669 patients were divided into three groups: 269 patients treated with extraction of two upper premolars, 198 patients treated with cervical headgear, and 202 patients treated with functional appliances. All the cases were evaluated by one researcher using ABO-MGS. The χ², Z test, and multivariate analysis of variance were used for statistical evaluation (p < 0.05). No significant differences were found among the groups in buccolingual inclination, overjet, occlusal relationship, and root angulation. However, there were significant differences in alignment, marginal ridge height, occlusal contact, interproximal contact measurements, and overall MGS average scores. The mean treatment time between the extraction and functional appliance groups was significantly different (p = 0.017). According to total ABO-MGS scores, headgear treatment had better results than functional appliances. The headgear group had better tooth alignment than the extraction group. Headgear treatment resulted in better occlusal contacts than the functional appliances and had lower average scores for interproximal contact measurements. Functional appliances had the worst average scores for marginal ridge height. Finally, the functional appliance group had the longest treatment times.

  5. Optimization of supercritical carbon dioxide extraction of essential oil from Dracocephalum kotschyi Boiss: An endangered medicinal plant in Iran.

    PubMed

    Nejad-Sadeghi, Masoud; Taji, Saeed; Goodarznia, Iraj

    2015-11-27

Extraction of the essential oil from a medicinal plant called Dracocephalum kotschyi Boiss was performed by the green technology of supercritical carbon dioxide (SC-CO2) extraction. A Taguchi orthogonal array design with an OA16 (4(5)) matrix was used to evaluate the effects of five extraction variables: pressure of 150-310 bar, temperature of 40-60°C, average particle size of 250-1000 μm, CO2 flow rate of 2-10 ml/s, and dynamic extraction time of 30-100 min. The optimal conditions to obtain the maximum extraction yield were 240 bar, 60°C, 500 μm, 10 ml/s and 100 min. The extraction yield under these conditions was 2.72% (w/w), which is more than twice the maximum extraction yield reported for this plant in the literature using traditional extraction techniques. Results from analysis of variance (ANOVA) indicated that the CO2 flow rate and the extraction time were the most significant factors for the extraction yield, with percentage contributions of 44.27 and 28.86, respectively. Finally, the chemical composition of the essential oil was evaluated by gas chromatography-mass spectrometry (GC-MS). Citral, p-mentha-1,3,8-triene, D-3-carene and methyl geranate were the major components identified. Copyright © 2015. Published by Elsevier B.V.
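The "percentage contribution" reported from a Taguchi-style ANOVA is each factor's between-level sum of squares expressed as a share of the total sum of squares. A minimal sketch with a hypothetical two-factor, 16-run design (not the paper's OA16 matrix or data):

```python
import numpy as np

def percent_contribution(levels, y):
    """Taguchi-style percentage contribution of one factor: its between-level
    sum of squares as a percentage of the total sum of squares."""
    y = np.asarray(y, float)
    ss_total = ((y - y.mean()) ** 2).sum()
    ss_factor = sum(
        (levels == lv).sum() * (y[levels == lv].mean() - y.mean()) ** 2
        for lv in np.unique(levels)
    )
    return 100.0 * ss_factor / ss_total

# Hypothetical orthogonal 16-run layout: factor A dominates the response,
# factor B barely matters (noise-free for clarity).
a = np.repeat([1, 2, 3, 4], 4)
b = np.tile([1, 2, 3, 4], 4)
yield_pct = 2.0 * a + 0.1 * b
pc_a = percent_contribution(a, yield_pct)
pc_b = percent_contribution(b, yield_pct)
```

Because the toy design is orthogonal and noise-free, the two contributions sum to 100%; in a real OA analysis the remainder is attributed to error.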

  6. Estimating Variances of Horizontal Wind Fluctuations in Stable Conditions

    NASA Astrophysics Data System (ADS)

    Luhar, Ashok K.

    2010-05-01

    Information concerning the average wind speed and the variances of lateral and longitudinal wind velocity fluctuations is required by dispersion models to characterise turbulence in the atmospheric boundary layer. When the winds are weak, the scalar average wind speed and the vector average wind speed need to be clearly distinguished and both lateral and longitudinal wind velocity fluctuations assume equal importance in dispersion calculations. We examine commonly-used methods of estimating these variances from wind-speed and wind-direction statistics measured separately, for example, by a cup anemometer and a wind vane, and evaluate the implied relationship between the scalar and vector wind speeds, using measurements taken under low-wind stable conditions. We highlight several inconsistencies inherent in the existing formulations and show that the widely-used assumption that the lateral velocity variance is equal to the longitudinal velocity variance is not necessarily true. We derive improved relations for the two variances, and although data under stable stratification are considered for comparison, our analysis is applicable more generally.
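The scalar/vector distinction, and the rotation into longitudinal and lateral components, can be made concrete with a short sketch. The record below is synthetic (a weak, meandering wind); this is not the paper's formulation, only the standard definitions:

```python
import numpy as np

def wind_stats(speed, direction_deg):
    """Scalar vs. vector mean wind speed, plus longitudinal (along-wind) and
    lateral (cross-wind) velocity variances, from the speed/direction record
    a cup anemometer and wind vane would provide."""
    th = np.deg2rad(direction_deg)
    u, v = speed * np.cos(th), speed * np.sin(th)   # velocity components
    scalar_mean = speed.mean()
    um, vm = u.mean(), v.mean()
    vector_mean = np.hypot(um, vm)
    phi = np.arctan2(vm, um)                        # mean wind direction
    along = u * np.cos(phi) + v * np.sin(phi)       # longitudinal component
    cross = -u * np.sin(phi) + v * np.cos(phi)      # lateral component
    return scalar_mean, vector_mean, along.var(), cross.var()

rng = np.random.default_rng(2)
# Weak wind with large direction scatter: the vector mean falls well
# below the scalar mean, and the two variances need not be equal.
speed = rng.gamma(4.0, 0.3, 2000)
direction = rng.normal(90.0, 60.0, 2000)
s_mean, v_mean, var_u, var_v = wind_stats(speed, direction)
```

By the triangle inequality the vector mean never exceeds the scalar mean, and the gap widens as the direction scatter grows, which is exactly the low-wind regime the abstract addresses.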

  7. Optimization of microwave-assisted extraction and supercritical fluid extraction of carbamate pesticides in soil by experimental design methodology.

    PubMed

    Sun, Lei; Lee, Hian Kee

    2003-10-03

Orthogonal array design (OAD) was applied for the first time to optimize microwave-assisted extraction (MAE) and supercritical fluid extraction (SFE) conditions for the analysis of four carbamates (propoxur, propham, methiocarb, chlorpropham) from soil. The theory and methodology of a new OA16 (4(4)) matrix derived from an OA16 (2(15)) matrix were developed during the MAE optimization. An analysis of variance technique was employed as the data analysis strategy in this study. Determinations of analytes were completed using high-performance liquid chromatography (HPLC) with UV detection. Four carbamates were successfully extracted from soil with recoveries ranging from 85 to 105% with good reproducibility (approximately 4.9% RSD) under the optimum MAE conditions: 30 ml methanol, 80°C extraction temperature, and 6-min microwave heating. An OA8 (2(7)) matrix was employed for the SFE optimization. The average recoveries and RSD of the analytes from spiked soil by SFE were 92 and 5.5%, respectively, except for propham (66.3±7.9%), under the following conditions: heating for 30 min at 60°C under supercritical CO2 at 300 kg/cm2 modified with 10% (v/v) methanol. The composition of the supercritical fluid was demonstrated to be a crucial factor in the extraction. The addition of a small volume (10%) of methanol to CO2 greatly enhanced the recoveries of carbamates. A comparison of MAE with SFE was also conducted. The results indicated that >85% average recoveries were obtained by both optimized extraction techniques, and slightly higher recoveries of three carbamates (propoxur, propham and methiocarb) were achieved using MAE. SFE showed slightly higher recovery for chlorpropham (93 vs. 87% for MAE). The effects of time-aged soil on the extraction of analytes were examined and the results obtained by both methods were also compared.

  8. Prediction-error variance in Bayesian model updating: a comparative study

    NASA Astrophysics Data System (ADS)

    Asadollahi, Parisa; Li, Jian; Huang, Yong

    2017-04-01

In Bayesian model updating, the likelihood function is commonly formulated by stochastic embedding, in which the maximum information entropy probability model of the prediction error variances plays an important role; it is a Gaussian distribution subject to the first two moments as constraints. The selection of prediction error variances can be formulated as a model class selection problem, which automatically involves a trade-off between the average data-fit of the model class and the information it extracts from the data. The choice is therefore critical to the robustness of structural model updating, especially in the presence of modeling errors. To date, three ways of considering prediction error variances have been seen in the literature: 1) setting constant values empirically, 2) estimating them based on the goodness-of-fit of the measured data, and 3) updating them as uncertain parameters by applying Bayes' Theorem at the model class level. In this paper, the effect of different strategies to deal with the prediction error variances on the model updating performance is investigated explicitly. A six-story shear building model with six uncertain stiffness parameters is employed as an illustrative example. Transitional Markov Chain Monte Carlo is used to draw samples of the posterior probability density function of the structural model parameters as well as the uncertain prediction variances. The different levels of modeling uncertainty and complexity are modeled through three FE models, including a true model, a model with more complexity, and a model with modeling error. Bayesian updating is performed for the three FE models considering the three aforementioned treatments of the prediction error variances. The effect of the number of measurements on the model updating performance is also examined in the study.
The results are compared based on model class assessment and indicate that updating the prediction error variances as uncertain parameters at the model class level produces more robust results, especially when the number of measurements is small.
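A toy version of two of these treatments can be sketched for a hypothetical one-parameter model y = k·x: treatment 2 plugs in the best-fit (goodness-of-fit) variance for each candidate k, while treatment 3 treats the error standard deviation as an uncertain parameter and marginalizes it out. This is a grid-based sketch with flat priors, not the paper's six-story TMCMC setup:

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(1.0, 10.0, 30)
y = 2.5 * x + rng.normal(0.0, 1.0, x.size)    # synthetic "measurements"

ks = np.linspace(1.0, 4.0, 301)               # candidate stiffness-like values
sigmas = np.linspace(0.3, 3.0, 100)           # candidate error std. deviations

# Treatment 2: concentrated likelihood -- plug in the best-fit variance
# (the mean squared residual) for each candidate k.
resid2 = np.array([np.mean((y - k * x) ** 2) for k in ks])
k_fit = ks[np.argmax(-0.5 * x.size * (np.log(resid2) + 1.0))]

# Treatment 3: joint grid posterior over (k, sigma) with flat priors, then
# marginalize sigma out and take the posterior mean of k.
K, S = np.meshgrid(ks, sigmas, indexing="ij")
sse = np.sum((y[None, None, :] - K[..., None] * x[None, None, :]) ** 2, axis=-1)
loglike = -x.size * np.log(S) - 0.5 * sse / S ** 2
post_k = np.exp(loglike - loglike.max()).sum(axis=1)
k_marg = float(np.sum(ks * post_k) / post_k.sum())
```

With a well-specified model both treatments recover the true parameter; the differences the paper studies emerge once modeling error enters and the variance treatment starts to matter.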

  9. Seasonal and Interannual Variation of Currents and Water Properties off the Mid-East Coast of Korea

    NASA Astrophysics Data System (ADS)

    Park, J. H.; Chang, K. I.; Nam, S.

    2016-02-01

Since 1999, physical parameters such as current, temperature, and salinity off the mid-east coast of Korea have been continuously observed from the long-term buoy station called the East-Sea Real-time Ocean monitoring Buoy (ESROB). Applying harmonic analysis to 6-year-long (2007-2012) depth-averaged current data from the ESROB, a mean seasonal cycle of alongshore currents, characterized by poleward current on average and equatorward current in summer, is extracted, which accounts for 5.8% of the variance of 40-hour low-pass filtered currents. In spite of the small variance explained, a robust seasonality of summertime equatorward reversal typifies the low-passed alongshore currents along with low-density water. To reveal the dynamics underlying the seasonal variation, each term of the linearized, depth-averaged momentum equations is estimated using the data from ESROB, adjacent tide gauge stations, and serial hydrographic stations. The result indicates that the reversal of the alongshore pressure gradient is a major driver of the equatorward reversals in summer. The reanalysis wind product (MERRA) and satellite altimeter-derived sea surface height (AVISO) data show correlated features between positive (negative) wind stress curl and sea surface depression (uplift). Quantitative estimates reveal that the wind-stress curl accounts for 42% of the alongshore sea level variation. Summertime low-density water originating from the northern coastal region is a footprint of the buoyancy-driven equatorward current. An interannual variation (anomalies from the mean seasonal cycle) of alongshore currents and its possible driving mechanisms will be discussed.
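Extracting a mean seasonal cycle by harmonic analysis, and quoting the fraction of variance it explains, amounts to a least-squares fit of an annual harmonic. The record below is synthetic and the amplitudes are illustrative assumptions, not the ESROB values:

```python
import numpy as np

def seasonal_harmonic_fit(t_days, u):
    """Least-squares fit of an annual harmonic (mean + cos + sin) to a
    current record, and the fraction of the record's variance the fitted
    seasonal cycle explains."""
    w = 2.0 * np.pi / 365.25
    A = np.column_stack([np.ones_like(t_days),
                         np.cos(w * t_days), np.sin(w * t_days)])
    coef, *_ = np.linalg.lstsq(A, u, rcond=None)
    resid = u - A @ coef
    frac = 1.0 - resid.var() / u.var()
    return coef, frac

rng = np.random.default_rng(4)
t = np.arange(2192.0)   # six years of daily values
# Mean poleward flow with a summertime reversal, buried in noise
# (illustrative numbers, in m/s).
u = 0.10 + 0.15 * np.cos(2.0 * np.pi * t / 365.25 - 1.0) \
    + rng.normal(0.0, 0.15, t.size)
coef, frac = seasonal_harmonic_fit(t, u)
```

The explained fraction can be small even when the cycle itself is robust, which mirrors the 5.8% quoted above: the harmonic is fixed in phase and amplitude, while most of the variance sits in higher-frequency variability.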

  10. Comprehensive Analysis of LC/MS Data Using Pseudocolor Plots

    NASA Astrophysics Data System (ADS)

    Crutchfield, Christopher A.; Olson, Matthew T.; Gourgari, Evgenia; Nesterova, Maria; Stratakis, Constantine A.; Yergey, Alfred L.

    2013-02-01

    We have developed new applications of the pseudocolor plot for the analysis of LC/MS data. These applications include spectral averaging, analysis of variance, differential comparison of spectra, and qualitative filtering by compound class. These applications have been motivated by the need to better understand LC/MS data generated from analysis of human biofluids. The examples presented use data generated to profile steroid hormones in urine extracts from a Cushing's disease patient relative to a healthy control, but are general to any discovery-based scanning mass spectrometry technique. In addition to new visualization techniques, we introduce a new metric of variance: the relative maximum difference from the mean. We also introduce the concept of substructure-dependent analysis of steroid hormones using precursor ion scans. These new analytical techniques provide an alternative approach to traditional untargeted metabolomics workflow. We present an approach to discovery using MS that essentially eliminates alignment or preprocessing of spectra. Moreover, we demonstrate the concept that untargeted metabolomics can be achieved using low mass resolution instrumentation.
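The new variance metric is named but not defined in this abstract. One plausible reading, sketched here purely as an assumption, is the largest absolute deviation of replicate spectra from their mean, scaled by that mean:

```python
import numpy as np

def rel_max_diff_from_mean(replicates, axis=0):
    """An illustrative reconstruction of a 'relative maximum difference from
    the mean' metric: max |x_i - mean| / mean along the replicate axis.
    (The paper defines the exact form; this is only one plausible reading.)"""
    x = np.asarray(replicates, float)
    m = x.mean(axis=axis, keepdims=True)
    return np.abs(x - m).max(axis=axis) / np.squeeze(m, axis=axis)

# Three replicate intensities for one spectral bin.
r = float(rel_max_diff_from_mean(np.array([9.0, 10.0, 11.0])))
```

Unlike the variance, this statistic is driven entirely by the worst replicate, which makes it a blunt but interpretable screen for irreproducible features.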

  11. Accounting for therapist variability in couple therapy outcomes: what really matters?

    PubMed

    Owen, Jesse; Duncan, Barry; Reese, Robert Jeff; Anker, Morten; Sparks, Jacqueline

    2014-01-01

    This study examined whether therapist gender, professional discipline, experience conducting couple therapy, and average second-session alliance score would account for the variance in outcomes attributed to the therapist. The authors investigated therapist variability in couple therapy with 158 couples randomly assigned to and treated by 18 therapists in a naturalistic setting. Consistent with previous studies in individual therapy, in this study therapists accounted for 8.0% of the variance in client outcomes and 10% of the variance in client alliance scores. Therapist average alliance score and experience conducting couple therapy were salient predictors of client outcomes attributed to therapist. In contrast, therapist gender and discipline did not significantly account for the variance in client outcomes attributed to therapists. Tests of incremental validity demonstrated that therapist average alliance score and therapist experience uniquely accounted for the variance in outcomes attributed to the therapist. Emphasis on improving therapist alliance quality and specificity of therapist experience in couple therapy are discussed.
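The "variance in outcomes attributed to the therapist" is an intraclass correlation. A one-way random-effects sketch on balanced synthetic data (200 hypothetical therapists, 8% true therapist variance; the real study used multilevel models on 18 therapists):

```python
import numpy as np

def icc_oneway(data):
    """One-way random-effects intraclass correlation, ICC(1): the share of
    outcome variance attributable to the grouping factor (here, therapist),
    from ANOVA mean squares. data: (n_groups, n_per_group), balanced."""
    g, k = data.shape
    grand = data.mean()
    msb = k * ((data.mean(axis=1) - grand) ** 2).sum() / (g - 1)
    msw = ((data - data.mean(axis=1, keepdims=True)) ** 2).sum() / (g * (k - 1))
    return (msb - msw) / (msb + (k - 1) * msw)

rng = np.random.default_rng(7)
# Therapist intercepts contribute 8% of the total outcome variance.
therapist = rng.normal(0.0, np.sqrt(0.08), (200, 1))
outcomes = therapist + rng.normal(0.0, np.sqrt(0.92), (200, 50))
icc = icc_oneway(outcomes)
```

An ICC near 0.08 matches the 8% figure in the abstract; with only 18 therapists the estimate would be far noisier, which is why such studies report it with multilevel machinery.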

  12. Statistical-techniques-based computer-aided diagnosis (CAD) using texture feature analysis: application in computed tomography (CT) imaging to fatty liver disease

    NASA Astrophysics Data System (ADS)

    Chung, Woon-Kwan; Park, Hyong-Hu; Im, In-Chul; Lee, Jae-Seung; Goo, Eun-Hoe; Dong, Kyung-Rae

    2012-09-01

This paper proposes a computer-aided diagnosis (CAD) system based on texture feature analysis and statistical wavelet transformation technology to diagnose fatty liver disease with computed tomography (CT) imaging. In the target image, a wavelet transformation was performed for each lesion area to set the region of analysis (ROA, window size: 50 × 50 pixels) and define the texture feature of a pixel. Based on the extracted texture feature values, six parameters (average gray level, average contrast, relative smoothness, skewness, uniformity, and entropy) were determined to calculate the recognition rate for a fatty liver. In addition, a multivariate analysis of variance (MANOVA) method was used to perform a discriminant analysis to verify the significance of the extracted texture feature values and the recognition rate for a fatty liver. According to the results, each texture feature value was significant for a comparison of the recognition rate for a fatty liver (p < 0.05). Furthermore, the F-value, which was used as a scale for the difference in recognition rates, was highest in the average gray level, relatively high in the skewness and the entropy, and relatively low in the uniformity, the relative smoothness and the average contrast. The recognition rate for a fatty liver had the same scale as that for the F-value, showing 100% (average gray level) at the maximum and 80% (average contrast) at the minimum. Therefore, the recognition rate is believed to be a useful clinical value for the automatic detection and computer-aided diagnosis (CAD) using the texture feature value. Nevertheless, further study on various diseases and singular diseases will be needed in the future.
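The six parameters listed are classical first-order (histogram) texture descriptors. A sketch using the common textbook normalizations (the paper's exact definitions may differ):

```python
import numpy as np

def texture_features(roi, levels=256):
    """First-order texture descriptors of a gray-level region of analysis:
    average gray level, average contrast, relative smoothness, skewness,
    uniformity, and entropy, all computed from the gray-level histogram."""
    g = np.asarray(roi, float).ravel()
    hist, _ = np.histogram(g, bins=levels, range=(0, levels))
    p = hist / hist.sum()
    z = np.arange(levels)
    mean = (z * p).sum()                                  # average gray level
    var = ((z - mean) ** 2 * p).sum()
    contrast = np.sqrt(var)                               # average contrast
    smoothness = 1.0 - 1.0 / (1.0 + var / (levels - 1) ** 2)
    skewness = ((z - mean) ** 3 * p).sum() / (levels - 1) ** 2
    uniformity = (p ** 2).sum()
    nonzero = p[p > 0]
    entropy = -(nonzero * np.log2(nonzero)).sum()
    return mean, contrast, smoothness, skewness, uniformity, entropy

rng = np.random.default_rng(8)
flat = texture_features(np.full((50, 50), 100))        # homogeneous ROA
noisy = texture_features(rng.integers(0, 256, (50, 50)))  # maximally textured
```

A homogeneous region gives zero contrast and entropy with uniformity 1, while a noisy region pushes entropy toward its 8-bit maximum, which is the contrast the discriminant analysis exploits.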

  13. Efficiently estimating salmon escapement uncertainty using systematically sampled data

    USGS Publications Warehouse

    Reynolds, Joel H.; Woody, Carol Ann; Gove, Nancy E.; Fair, Lowell F.

    2007-01-01

Fish escapement is generally monitored using nonreplicated systematic sampling designs (e.g., via visual counts from towers or hydroacoustic counts). These sampling designs support a variety of methods for estimating the variance of the total escapement. Unfortunately, all the methods give biased results, with the magnitude of the bias being determined by the underlying process patterns. Fish escapement commonly exhibits positive autocorrelation and nonlinear patterns, such as diurnal and seasonal patterns. For these patterns, poor choice of variance estimator can needlessly increase the uncertainty managers have to deal with in sustaining fish populations. We illustrate the effect of sampling design and variance estimator choice on variance estimates of total escapement for anadromous salmonids from systematic samples of fish passage. Using simulated tower counts of sockeye salmon Oncorhynchus nerka escapement on the Kvichak River, Alaska, five variance estimators for nonreplicated systematic samples were compared to determine the least biased. Using the least biased variance estimator, four confidence interval estimators were compared for expected coverage and mean interval width. Finally, five systematic sampling designs were compared to determine the design giving the smallest average variance estimate for total annual escapement. For nonreplicated systematic samples of fish escapement, all variance estimators were positively biased. Compared to the other estimators, the least biased estimator reduced bias, on average, by 12% to 98%. All confidence intervals gave effectively identical results. Replicated systematic sampling designs consistently provided the smallest average estimated variance among those compared.
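Why estimator choice matters for autocorrelated passage data can be shown with two of the standard candidates: the naive simple-random-sampling (SRS) formula and a successive-difference estimator. The toy passage series below (smooth trend plus a periodic component, sampled every 10th count) is hypothetical, not the Kvichak data:

```python
import numpy as np

def var_total_srs(y, N):
    """Variance of the estimated total, treating the systematic sample of
    size n as if it were a simple random sample from the N counts."""
    n = len(y)
    return N ** 2 * (1.0 - n / N) * np.var(y, ddof=1) / n

def var_total_sd(y, N):
    """Successive-difference estimator: replaces the sample variance with
    half the mean squared difference of adjacent systematic counts, a less
    biased choice when passage is smooth and positively autocorrelated."""
    y = np.asarray(y, float)
    n = len(y)
    sd2 = np.sum(np.diff(y) ** 2) / (2.0 * (n - 1))
    return N ** 2 * (1.0 - n / N) * sd2 / n

# Hypothetical passage record: slow trend plus a periodic component,
# counted systematically (every 10th interval of N = 1000).
t = np.arange(1000.0)
passage = 50.0 + 0.1 * t + 5.0 * np.sin(2.0 * np.pi * t / 250.0)
y = passage[::10]
var_srs = var_total_srs(y, N=1000)
var_sd = var_total_sd(y, N=1000)
```

For a smooth series the SRS formula charges the whole trend to sampling error, while the successive-difference estimator responds only to local variability, hence the large gap between the two.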

  14. Aperture averaging in strong oceanic turbulence

    NASA Astrophysics Data System (ADS)

    Gökçe, Muhsin Caner; Baykal, Yahya

    2018-04-01

The receiver aperture averaging technique is employed in underwater wireless optical communication (UWOC) systems to mitigate the effects of oceanic turbulence and thus improve system performance. The irradiance flux variance is a measure of the intensity fluctuations on a lens of the receiver aperture. Using the modified Rytov theory, which uses small-scale and large-scale spatial filters, and our previously presented expression for the atmospheric structure constant in terms of oceanic turbulence parameters, we evaluate the irradiance flux variance and the aperture averaging factor of a spherical wave in strong oceanic turbulence. Variations of the irradiance flux variance are examined versus the oceanic turbulence parameters and the receiver aperture diameter. The effect of the receiver aperture diameter on the aperture averaging factor is also presented.

  15. Analgesic efficacy of lysine clonixinate, paracetamol and dipyrone in lower third molar extraction: a randomized controlled trial.

    PubMed

    Noronha, Vladimir-Reimar-Augusto-de Souza; Gurgel, Gladson-de Souza; Alves, Luiz-César-Fonseca; Noman-Ferreira, Luiz-Cláudio; Mendonça, Lisette-Lobato; Aguiar, Evandro-Guimarães de; Abdo, Evandro-Neves

    2009-08-01

The purpose of this study is to compare the analgesic effect of lysine clonixinate, paracetamol and dipyrone after lower third molar extraction. The sample consisted of 90 individuals with clinical indication for inferior third molar extraction. The mean age of the sample was 22.3 years (SD ±2.5). The individuals received the medication in unidentified bottles along with the intake instructions. The postoperative pain parameters were measured according to the Visual Analogue Scale (VAS) and the data were evaluated using the Kruskal-Wallis Test and Friedman Test, with the latter used to test different time intervals for each one of the drugs. The final sample consisted of 64 individuals, including 23 males (35.9%) and 41 females (64.1%). The mean age of the entire sample was 22.3 years (±2.5). The average length of the procedures was 33.9 minutes (±9.8). The distribution of mean values for this variable showed little variance for the different drugs (p=0.07). Lysine clonixinate did not show any substantial impact on postoperative pain control when compared to the other drugs.

  16. Nonlinear consolidation in randomly heterogeneous highly compressible aquitards

    NASA Astrophysics Data System (ADS)

    Zapata-Norberto, Berenice; Morales-Casique, Eric; Herrera, Graciela S.

    2018-05-01

Severe land subsidence due to groundwater extraction may occur in multiaquifer systems where highly compressible aquitards are present. The highly compressible nature of the aquitards leads to nonlinear consolidation where the groundwater flow parameters are stress-dependent. The case is further complicated by the heterogeneity of the hydrogeologic and geotechnical properties of the aquitards. The effect of realistic vertical heterogeneity of hydrogeologic and geotechnical parameters on the consolidation of highly compressible aquitards is investigated by means of one-dimensional Monte Carlo numerical simulations where the lower boundary represents the effect of an instant drop in hydraulic head due to groundwater pumping. Two thousand realizations are generated for each of the following parameters: hydraulic conductivity (K), compression index (Cc), void ratio (e) and m (an empirical parameter relating hydraulic conductivity and void ratio). The correlation structure, the mean and the variance for each parameter were obtained from a literature review about field studies in the lacustrine sediments of Mexico City. The results indicate that among the parameters considered, random K has the largest effect on the ensemble average behavior of the system when compared to a nonlinear consolidation model with deterministic initial parameters. The deterministic solution underestimates the ensemble average of total settlement when initial K is random. In addition, random K leads to the largest variance (and therefore largest uncertainty) of total settlement, groundwater flux and time to reach steady-state conditions.

  17. Smoothed Spectra, Ogives, and Error Estimates for Atmospheric Turbulence Data

    NASA Astrophysics Data System (ADS)

    Dias, Nelson Luís

    2018-01-01

    A systematic evaluation is conducted of the smoothed spectrum, which is a spectral estimate obtained by averaging over a window of contiguous frequencies. The technique is extended to the ogive, as well as to the cross-spectrum. It is shown that, combined with existing variance estimates for the periodogram, the variance—and therefore the random error—associated with these estimates can be calculated in a straightforward way. The smoothed spectra and ogives are biased estimates; with simple power-law analytical models, correction procedures are devised, as well as a global constraint that enforces Parseval's identity. Several new results are thus obtained: (1) The analytical variance estimates compare well with the sample variance calculated for the Bartlett spectrum and the variance of the inertial subrange of the cospectrum is shown to be relatively much larger than that of the spectrum. (2) Ogives and spectra estimates with reduced bias are calculated. (3) The bias of the smoothed spectrum and ogive is shown to be negligible at the higher frequencies. (4) The ogives and spectra thus calculated have better frequency resolution than the Bartlett spectrum, with (5) gradually increasing variance and relative error towards the low frequencies. (6) Power-law identification and extraction of the rate of dissipation of turbulence kinetic energy are possible directly from the ogive. (7) The smoothed cross-spectrum is a valid inner product and therefore an acceptable candidate for coherence and spectral correlation coefficient estimation by means of the Cauchy-Schwarz inequality. The quadrature, phase function, coherence function and spectral correlation function obtained from the smoothed spectral estimates compare well with the classical ones derived from the Bartlett spectrum.
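The basic operation, averaging the raw periodogram over windows of contiguous frequencies, can be sketched directly; the normalization and window width below are illustrative choices, and the bias corrections and ogive machinery of the paper are omitted:

```python
import numpy as np

def smoothed_spectrum(x, fs, m):
    """Smoothed spectral estimate: average the one-sided periodogram over
    windows of m contiguous frequencies. Each smoothed value has roughly
    1/m the variance of the raw periodogram (whose relative error is ~100%),
    at the cost of frequency resolution."""
    n = len(x)
    X = np.fft.rfft(x - np.mean(x))
    P = 2.0 * np.abs(X) ** 2 / (fs * n)      # one-sided spectral density
    freqs = np.fft.rfftfreq(n, 1.0 / fs)
    k = len(P) // m                          # number of smoothing windows
    P_s = P[: k * m].reshape(k, m).mean(axis=1)
    f_mid = freqs[: k * m].reshape(k, m).mean(axis=1)
    return f_mid, P_s

rng = np.random.default_rng(6)
fs = 10.0
x = rng.normal(0.0, 1.0, 4096)
f_mid, P_s = smoothed_spectrum(x, fs, m=16)
```

A quick check on any such estimate is Parseval's identity: integrating the smoothed density over frequency should recover the series variance, which is the global constraint the paper enforces.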

  18. Decomposing delta, theta, and alpha time–frequency ERP activity from a visual oddball task using PCA

    PubMed Central

    Bernat, Edward M.; Malone, Stephen M.; Williams, William J.; Patrick, Christopher J.; Iacono, William G.

    2008-01-01

Objective: Time–frequency (TF) analysis has become an important tool for assessing electrical and magnetic brain activity from event-related paradigms. In electrical potential data, theta and delta activities have been shown to underlie P300 activity, and alpha has been shown to be inhibited during P300 activity. Measures of delta, theta, and alpha activity are commonly taken from TF surfaces. However, methods for extracting relevant activity do not commonly go beyond taking means of windows on the surface, analogous to measuring activity within a defined P300 window in time-only signal representations. The current objective was to use a data-driven method to derive relevant TF components from event-related potential data from a large number of participants in an oddball paradigm. Methods: A recently developed PCA approach was employed to extract TF components [Bernat, E. M., Williams, W. J., and Gehring, W. J. (2005). Decomposing ERP time-frequency energy using PCA. Clin Neurophysiol, 116(6), 1314–1334] from an ERP dataset of 2068 17-year-olds (979 males). TF activity was taken from both individual trials and condition averages. Activity including frequencies ranging from 0 to 14 Hz and time ranging from stimulus onset to 1312.5 ms was decomposed. Results: A coordinated set of time–frequency events was apparent across the decompositions. Similar TF components representing earlier theta followed by delta were extracted from both individual trials and averaged data. Alpha activity, as predicted, was apparent only when time–frequency surfaces were generated from trial-level data, and was characterized by a reduction during the P300. Conclusions: Theta, delta, and alpha activities were extracted with predictable time-courses. Notably, this approach was effective at characterizing data from a single electrode. Finally, decomposition of TF data generated from individual trials and condition averages produced similar results, but with predictable differences.
Specifically, trial-level data evidenced more, and more varied, theta measures, and accounted for less overall variance. PMID:17027110
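The core of a TF-PCA decomposition (vectorize each surface, SVD the centered data, reshape loadings back into TF maps) can be sketched generically. This follows the spirit of the Bernat et al. (2005) approach but not its exact implementation (which includes rotation steps); the synthetic surfaces are illustrative:

```python
import numpy as np

def tf_pca(tf_surfaces, n_components=3):
    """PCA-style decomposition of a stack of time-frequency surfaces
    (trials x freq x time): each surface is vectorized, components are
    extracted by SVD of the centered data, and loadings are reshaped back
    into TF maps."""
    n_trials, n_f, n_t = tf_surfaces.shape
    X = tf_surfaces.reshape(n_trials, n_f * n_t)
    Xc = X - X.mean(axis=0)
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    scores = U[:, :n_components] * s[:n_components]      # per-trial weights
    loadings = Vt[:n_components].reshape(n_components, n_f, n_t)
    var_explained = s[:n_components] ** 2 / np.sum(s ** 2)
    return scores, loadings, var_explained

rng = np.random.default_rng(5)
# Synthetic trials built from two fixed TF patterns with random weights,
# standing in for, e.g., a theta and a delta component.
patterns = rng.normal(size=(2, 8, 20))
weights = rng.normal(size=(100, 2))
surfaces = np.tensordot(weights, patterns, axes=1)
scores, loadings, var_explained = tf_pca(surfaces)
```

Because the synthetic data are an exact two-component mixture, the first two components absorb essentially all the variance; real single-trial data, as the abstract notes, spread variance across many more components than condition averages do.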

  19. Development and empirical validation of symmetric component measures of multidimensional constructs: customer and competitor orientation.

    PubMed

    Sørensen, Hans Eibe; Slater, Stanley F

    2008-08-01

Atheoretical measure purification may lead to construct-deficient measures. The purpose of this paper is to provide a theoretically driven procedure for the development and empirical validation of symmetric component measures of multidimensional constructs. Particular emphasis is placed on establishing a formalized three-step procedure for achieving a posteriori content validity. The procedure is then applied to the development and empirical validation of two symmetric component measures of market orientation: customer orientation and competitor orientation. Analysis suggests that average variance extracted is particularly critical to reliability in the respecification of multi-indicator measures. In relation to this, the results also identify possible deficiencies in using Cronbach's alpha for establishing reliable and valid measures.
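For reference, both statistics contrasted here are short computations. A hedged sketch assuming standardized indicator loadings (the Fornell-Larcker form of average variance extracted) and a plain respondents-by-items matrix for alpha:

```python
import numpy as np

def average_variance_extracted(loadings):
    """Fornell-Larcker average variance extracted (AVE) for standardized
    indicator loadings: the mean squared loading, i.e. the share of
    indicator variance captured by the construct (>= 0.5 is the usual
    benchmark)."""
    lam = np.asarray(loadings, float)
    return float(np.mean(lam ** 2))

def cronbach_alpha(items):
    """Cronbach's alpha from an items matrix (respondents x items)."""
    X = np.asarray(items, float)
    k = X.shape[1]
    return k / (k - 1) * (1.0 - X.var(axis=0, ddof=1).sum()
                          / X.sum(axis=1).var(ddof=1))

ave = average_variance_extracted([0.8, 0.7, 0.6])   # hypothetical loadings

rng = np.random.default_rng(3)
latent = rng.normal(0.0, 1.0, (400, 1))
items = latent + rng.normal(0.0, 0.5, (400, 3))     # three congeneric items
alpha = cronbach_alpha(items)
```

The example illustrates the paper's caution: loadings of 0.8/0.7/0.6 give an AVE just below the 0.5 benchmark even though alpha for comparable items can look comfortably high.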

  20. 42 CFR 456.522 - Content of request for variance.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... SERVICES (CONTINUED) MEDICAL ASSISTANCE PROGRAMS UTILIZATION CONTROL Utilization Review Plans: FFP, Waivers, and Variances for Hospitals and Mental Hospitals Ur Plan: Remote Facility Variances from Time..., mental hospital, and ICF located within a 50-mile radius of the facility; (e) The distance and average...

  1. Cross-frequency and band-averaged response variance prediction in the hybrid deterministic-statistical energy analysis method

    NASA Astrophysics Data System (ADS)

    Reynders, Edwin P. B.; Langley, Robin S.

    2018-08-01

    The hybrid deterministic-statistical energy analysis method has proven to be a versatile framework for modeling built-up vibro-acoustic systems. The stiff system components are modeled deterministically, e.g., using the finite element method, while the wave fields in the flexible components are modeled as diffuse. In the present paper, the hybrid method is extended such that the ensemble mean and variance can be computed not only for the harmonic system response but also for the band-averaged system response. This variance represents the uncertainty that is due to the assumption of a diffuse field in the flexible components of the hybrid system. The developments start with a cross-frequency generalization of the reciprocity relationship between the total energy in a diffuse field and the cross spectrum of the blocked reverberant loading at the boundaries of that field. By making extensive use of this generalization in a first-order perturbation analysis, explicit expressions are derived for the cross-frequency and band-averaged variance of the vibrational energies in the diffuse components and for the cross-frequency and band-averaged variance of the cross spectrum of the vibro-acoustic field response of the deterministic components. These expressions are extensively validated against detailed Monte Carlo analyses of coupled plate systems in which diffuse fields are simulated by randomly distributing small point masses across the flexible components, and good agreement is found.

  2. On the Likely Utility of Hybrid Weights Optimized for Variances in Hybrid Error Covariance Models

    NASA Astrophysics Data System (ADS)

    Satterfield, E.; Hodyss, D.; Kuhl, D.; Bishop, C. H.

    2017-12-01

    Because of imperfections in ensemble data assimilation schemes, one cannot assume that the ensemble covariance is equal to the true error covariance of a forecast. Previous work demonstrated how information about the distribution of true error variances given an ensemble sample variance can be revealed from an archive of (observation-minus-forecast, ensemble-variance) data pairs. Here, we derive a simple and intuitively compelling formula to obtain the mean of this distribution of true error variances given an ensemble sample variance from (observation-minus-forecast, ensemble-variance) data pairs produced by a single run of a data assimilation system. This formula takes the form of a Hybrid weighted average of the climatological forecast error variance and the ensemble sample variance. Here, we test the extent to which these readily obtainable weights can be used to rapidly optimize the covariance weights used in Hybrid data assimilation systems that employ weighted averages of static covariance models and flow-dependent ensemble based covariance models. Univariate data assimilation and multi-variate cycling ensemble data assimilation are considered. In both cases, it is found that our computationally efficient formula gives Hybrid weights that closely approximate the optimal weights found through the simple but computationally expensive process of testing every plausible combination of weights.
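The weighted-average form described above can be sketched in a few lines. This is a toy illustration of the idea only, with illustrative numbers; the paper's actual formula for deriving the weights from (observation-minus-forecast, ensemble-variance) pairs is not reproduced here:

```python
# Hedged sketch: posterior-mean true error variance as a Hybrid weighted
# average of the static (climatological) variance and the flow-dependent
# ensemble sample variance. All values below are illustrative.
def hybrid_variance(sample_var, clim_var, w_ens):
    """Weighted average of ensemble and climatological variance estimates."""
    return w_ens * sample_var + (1.0 - w_ens) * clim_var

# An ensemble reporting sample variance 2.0 against a climatological
# variance of 1.0, with 70% weight on the ensemble term:
print(hybrid_variance(2.0, 1.0, 0.7))
```

The paper's claim is that weights estimated this way closely match the optimum found by exhaustively testing weight combinations in a Hybrid data assimilation system.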

  3. Global Distributions of Temperature Variances At Different Stratospheric Altitudes From Gps/met Data

    NASA Astrophysics Data System (ADS)

    Gavrilov, N. M.; Karpova, N. V.; Jacobi, Ch.

    The GPS/MET measurements at altitudes 5 - 35 km are used to obtain global distributions of small-scale temperature variances at different stratospheric altitudes. Individual temperature profiles are smoothed using second order polynomial approximations in 5 - 7 km thick layers centered at 10, 20 and 30 km. Temperature deviations from the averaged values and their variances obtained for each profile are averaged for each month of year during the GPS/MET experiment. Global distributions of temperature variances have inhomogeneous structure. Locations and latitude distributions of the maxima and minima of the variances depend on altitude and season. One of the reasons for the small-scale temperature perturbations in the stratosphere could be internal gravity waves (IGWs). Some assumptions are made about peculiarities of IGW generation and propagation in the tropo-stratosphere based on the results of GPS/MET data analysis.

  4. Variance-based selection may explain general mating patterns in social insects.

    PubMed

    Rueppell, Olav; Johnson, Nels; Rychtár, Jan

    2008-06-23

    Female mating frequency is one of the key parameters of social insect evolution. Several hypotheses have been suggested to explain multiple mating and considerable empirical research has led to conflicting results. Building on several earlier analyses, we present a simple general model that links the number of queen matings to variance in colony performance and this variance to average colony fitness. The model predicts selection for multiple mating if the average colony succeeds in a focal task, and selection for single mating if the average colony fails, irrespective of the proximate mechanism that links genetic diversity to colony fitness. Empirical support comes from interspecific comparisons, e.g. between the bee genera Apis and Bombus, and from data on several ant species, but more comprehensive empirical tests are needed.

  5. [Psychometric validation in Spanish of the Brazilian short version of the Primary Care Assessment Tools-users questionnaire for the evaluation of the orientation of health systems towards primary care].

    PubMed

    Vázquez Peña, Fernando; Harzheim, Erno; Terrasa, Sergio; Berra, Silvina

    2017-02-01

    To validate the Brazilian short version of the PCAT for adult patients in Spanish. Analysis of secondary data from studies made to validate the extended version of the PCAT questionnaire. City of Córdoba, Argentina. Primary health care. The sample consisted of 46% parents whose children were enrolled in secondary education in three institutes in the city of Córdoba; the remaining 54% were adult users of the National University of Córdoba Health Insurance. Pearson's correlation coefficient comparing the extended and short versions. Goodness-of-fit indices in confirmatory factor analysis, composite reliability, average variance extracted, and Cronbach's alpha values, in order to assess the construct validity and the reliability of the short version. The Pearson's correlation coefficient between the short and long versions was high (.818; P<.001), indicating very good criterion validity. The global goodness-of-fit indices in the confirmatory factor analysis were good. The composite reliability was good (.802), but the average variance extracted fell below the recommended threshold (.3306), since three variables had weak factor loadings. Cronbach's alpha was acceptable (.85). The short version of the PCAT-users developed in Brazil showed acceptable psychometric performance in Spanish as a quick assessment tool, in a comparative study with the extended version. Copyright © 2016 Elsevier España, S.L.U. All rights reserved.

  6. Weighting by Inverse Variance or by Sample Size in Random-Effects Meta-Analysis

    ERIC Educational Resources Information Center

    Marin-Martinez, Fulgencio; Sanchez-Meca, Julio

    2010-01-01

    Most of the statistical procedures in meta-analysis are based on the estimation of average effect sizes from a set of primary studies. The optimal weight for averaging a set of independent effect sizes is the inverse variance of each effect size, but in practice these weights have to be estimated, being affected by sampling error. When assuming a…
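The inverse-variance weighting named above is a one-line computation. A minimal sketch with hypothetical effect sizes and sampling variances (in a random-effects analysis the between-study variance tau-squared would be added to each variance before inverting; that step is omitted here):

```python
# Hedged sketch: inverse-variance weighted average of independent effect
# sizes, as in fixed-effect meta-analysis. All numbers are illustrative.
def inverse_variance_mean(effects, variances):
    """Weight each effect size by the reciprocal of its sampling variance."""
    weights = [1.0 / v for v in variances]
    return sum(w * d for w, d in zip(weights, effects)) / sum(weights)

effects = [0.30, 0.50, 0.10]     # hypothetical standardized mean differences
variances = [0.04, 0.08, 0.02]   # their sampling variances
print(inverse_variance_mean(effects, variances))
```

Note how the most precise study (variance 0.02) pulls the average toward its small effect; that precision-driven weighting is exactly what the sample-size alternative discussed in the paper approximates.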

  7. Comparing estimates of genetic variance across different relationship models.

    PubMed

    Legarra, Andres

    2016-02-01

    Use of relationships between individuals to estimate genetic variances and heritabilities via mixed models is standard practice in human, plant and livestock genetics. Different models or information for relationships may give different estimates of genetic variances. However, comparing these estimates across different relationship models is not straightforward as the implied base populations differ between relationship models. In this work, I present a method to compare estimates of variance components across different relationship models. I suggest referring genetic variances obtained using different relationship models to the same reference population, usually a set of individuals in the population. Expected genetic variance of this population is the estimated variance component from the mixed model times a statistic, Dk, which is the average self-relationship minus the average (self- and across-) relationship. For most typical models of relationships, Dk is close to 1. However, this is not true for very deep pedigrees, for identity-by-state relationships, or for non-parametric kernels, which tend to overestimate the genetic variance and the heritability. Using mice data, I show that heritabilities from identity-by-state and kernel-based relationships are overestimated. Weighting these estimates by Dk scales them to a base comparable to genomic or pedigree relationships, avoiding wrong comparisons, for instance, "missing heritabilities". Copyright © 2015 Elsevier Inc. All rights reserved.
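The statistic described above (average self-relationship minus the average of all self- and across-relationships) is straightforward to compute from a relationship matrix. A minimal sketch on a toy matrix with illustrative values:

```python
# Hedged sketch of the D_k statistic described above, computed from a
# relationship matrix K. Matrix entries and sigma2_hat are illustrative.
def d_k(K):
    """Average self-relationship minus average (self- and across-) relationship."""
    n = len(K)
    avg_self = sum(K[i][i] for i in range(n)) / n
    avg_all = sum(K[i][j] for i in range(n) for j in range(n)) / (n * n)
    return avg_self - avg_all

K = [[1.00, 0.10, 0.05],
     [0.10, 1.00, 0.20],
     [0.05, 0.20, 1.00]]
sigma2_hat = 2.0  # hypothetical variance component from the mixed model
# Expected genetic variance in the reference population = sigma2_hat * D_k:
print(d_k(K), sigma2_hat * d_k(K))
```

For a pedigree-type matrix like this one D_k is close to 1, so the rescaling barely matters; the paper's point is that for identity-by-state or kernel relationships it can deviate enough to distort heritability comparisons.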

  8. In vitro Study of Noni Juice Extract Waste (Morinda citrifolia L.) and Pineapple Industrial Wastes (Ananas comosus L. Merr) as Energy Supplement in Dairy Goat Ration

    NASA Astrophysics Data System (ADS)

    Evvyernie, D.; Tjakradidjaja, A. S.; Permana, I. G.; Toharmat, T.; Insani, A.

    2018-02-01

    The aim of the study was to evaluate the potency of noni juice extract waste (Morinda citrifolia L.) and pineapple industrial wastes (Ananas comosus L. Merr) as an energy supplement in dairy goat ration through an in vitro study. This study used a complete randomized design with 5 treatments and 3 rumen fluid groups. The treatments were R0 as control (60% Napier grass (NG) + 40% concentrate), R1 (45% NG + 15% noni juice extract waste + 40% concentrate), R2 (45% NG + 15% ammoniated noni juice extract waste + 40% concentrate), R3 (45% NG + 15% pineapple peel + 40% concentrate), and R4 (45% NG + 15% pineapple crown + 40% concentrate). The variables were total bacterial population, protozoal population, fermentation characteristics (total VFA and NH3 concentrations), and digestibility (dry matter and organic matter). Data were analyzed with analysis of variance (ANOVA), and differences among treatments were determined by orthogonal contrasts. The results showed that total VFA concentration increased significantly (P<0.05) when 25% of the Napier grass was replaced with noni juice extract waste (R1), and highly significantly (P<0.01) when it was replaced with pineapple peel (R3). The average increase in total VFA concentration was 74% relative to the control. In conclusion, 15% pineapple peel or 15% noni juice extract waste can be used as an energy supplement, replacing 25% of the Napier grass in a lactating dairy goat ration.

  9. A quantitative study of gully erosion based on object-oriented analysis techniques: a case study in Beiyanzikou catchment of Qixia, Shandong, China.

    PubMed

    Wang, Tao; He, Fuhong; Zhang, Anding; Gu, Lijuan; Wen, Yangmao; Jiang, Weiguo; Shao, Hongbo

    2014-01-01

    This paper took a subregion in a small watershed gully system at Beiyanzikou catchment of Qixia, China, as a study area and, using object-orientated image analysis (OBIA), extracted the shoulder line of gullies from high spatial resolution digital orthophoto map (DOM) aerial photographs. Next, it proposed an accuracy assessment method based on the adjacent distance between the boundary classified by remote sensing and points measured by RTK-GPS along the shoulder line of gullies. Finally, the original surface was fitted using linear regression in accordance with the elevation of the two extracted edges of the experimental gullies, named Gully 1 and Gully 2, and the erosion volume was calculated. The results indicate that OBIA can effectively extract information on gullies; the average distance between field-measured points along the gully edges and the classified boundary is 0.3166 m, with a variance of 0.2116 m. The erosion area and volume of the two gullies are 2141.6250 m(2), 5074.1790 m(3) and 1316.1250 m(2), 1591.5784 m(3), respectively. The results of the study provide a new method for the quantitative study of small gully erosion.

  10. Amplification and dampening of soil respiration by changes in temperature variability

    USGS Publications Warehouse

    Sierra, C.A.; Harmon, M.E.; Thomann, E.; Perakis, S.S.; Loescher, H.W.

    2011-01-01

    Accelerated release of carbon from soils is one of the most important feedbacks related to anthropogenically induced climate change. Studies addressing the mechanisms for soil carbon release through organic matter decomposition have focused on the effect of changes in the average temperature, with little attention to changes in temperature variability. Anthropogenic activities are likely to modify both the average state and the variability of the climatic system; therefore, studies of the effects of future warming on decomposition should not only focus on trends in the average temperature, but also on variability, expressed as a change in the probability distribution of temperature. Using analytical and numerical analyses we tested common relationships between temperature and respiration and found that the variability of temperature plays an important role in determining respiration rates of soil organic matter. Changes in temperature variability, without changes in the average temperature, can affect the amount of carbon released through respiration over the long term. Furthermore, simultaneous changes in the average and variance of temperature can either amplify or dampen the release of carbon through soil respiration as climate regimes change. The effects depend on the degree of convexity of the relationship between temperature and respiration and the magnitude of the change in temperature variance. A potential consequence of this effect of variability would be higher respiration in regions where both the mean and variance of temperature are expected to increase, such as in some low latitude regions, and lower respiration where the average temperature is expected to increase and the variance to decrease, such as in northern high latitudes.
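The convexity argument above is an instance of Jensen's inequality: for a convex temperature-respiration curve, raising temperature variance raises mean respiration even when mean temperature is unchanged. A small simulation sketch with a generic exponential (Q10-type) response; all parameter values are hypothetical, not from the study:

```python
# Hedged sketch: Monte Carlo illustration that larger temperature variance
# yields larger mean respiration under a convex (Q10-type) response curve.
import random

def respiration(t, base=1.0, q10=2.0, t_ref=10.0):
    """Exponential Q10-type respiration response to temperature t (deg C)."""
    return base * q10 ** ((t - t_ref) / 10.0)

random.seed(0)
mean_t = 10.0
means = []
for sd in (1.0, 5.0):  # same mean temperature, two variability regimes
    temps = [random.gauss(mean_t, sd) for _ in range(100_000)]
    means.append(sum(respiration(t) for t in temps) / len(temps))

# Both exceed respiration at the mean temperature, and the high-variance
# regime exceeds the low-variance one.
print(means)
```

For a concave response the inequality would flip, which is why the abstract stresses the degree of convexity.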

  11. Anthropometry as a predictor of high speed performance.

    PubMed

    Caruso, J F; Ramey, E; Hastings, L P; Monda, J K; Coday, M A; McLagan, J; Drummond, J

    2009-07-01

    To assess anthropometry as a predictor of high-speed performance, subjects performed four seated knee- and hip-extension workouts with their left leg on an inertial exercise trainer (Impulse Technologies, Newnan GA). Workouts, done exclusively in either the tonic or phasic contractile mode, entailed two one-minute sets separated by a 90-second rest period and yielded three performance variables: peak force, average force, and work. Subjects provided the following anthropometric data: height, weight, body mass index, as well as total, upper, and lower left leg lengths. Multiple regression was used to determine how much of the variance in each performance variable anthropometry could predict. Anthropometry explained a modest (R2=0.27-0.43) yet significant degree of variance from inertial exercise trainer workouts. Anthropometry was a better predictor of peak force variance from phasic workouts, while it accounted for a significant degree of average force and work variance solely from tonic workouts. Future research should identify variables that account for the unexplained variance in high-speed exercise performance.

  12. flowVS: channel-specific variance stabilization in flow cytometry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Azad, Ariful; Rajwa, Bartek; Pothen, Alex

    Comparing phenotypes of heterogeneous cell populations from multiple biological conditions is at the heart of scientific discovery based on flow cytometry (FC). When the biological signal is measured by the average expression of a biomarker, standard statistical methods require that variance be approximately stabilized in populations to be compared. Since the mean and variance of a cell population are often correlated in fluorescence-based FC measurements, a preprocessing step is needed to stabilize the within-population variances.
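A common variance-stabilizing step in flow cytometry is an inverse hyperbolic sine transform, which is roughly linear near zero and logarithmic for large intensities. The sketch below uses a single fixed cofactor for illustration; flowVS itself derives channel-specific cofactors from the data, which this toy example does not attempt:

```python
# Hedged sketch: asinh variance-stabilizing transform with an illustrative
# cofactor. flowVS tunes the cofactor per channel; 150.0 here is arbitrary.
import math

def asinh_transform(values, cofactor=150.0):
    """asinh(x / cofactor): ~linear for |x| << cofactor, ~log for large x."""
    return [math.asinh(v / cofactor) for v in values]

print([round(x, 3) for x in asinh_transform([0.0, 150.0, 1500.0])])
```

After such a transform, within-population variances are roughly comparable across intensity ranges, which is the precondition the abstract names for standard statistical comparisons of population means.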

  13. flowVS: channel-specific variance stabilization in flow cytometry

    DOE PAGES

    Azad, Ariful; Rajwa, Bartek; Pothen, Alex

    2016-07-28

    Comparing phenotypes of heterogeneous cell populations from multiple biological conditions is at the heart of scientific discovery based on flow cytometry (FC). When the biological signal is measured by the average expression of a biomarker, standard statistical methods require that variance be approximately stabilized in populations to be compared. Since the mean and variance of a cell population are often correlated in fluorescence-based FC measurements, a preprocessing step is needed to stabilize the within-population variances.

  14. Precipitation estimation in mountainous terrain using multivariate geostatistics. Part II: isohyetal maps

    USGS Publications Warehouse

    Hevesi, Joseph A.; Flint, Alan L.; Istok, Jonathan D.

    1992-01-01

    Values of average annual precipitation (AAP) may be important for hydrologic characterization of a potential high-level nuclear-waste repository site at Yucca Mountain, Nevada. Reliable measurements of AAP are sparse in the vicinity of Yucca Mountain, and estimates of AAP were needed for an isohyetal mapping over a 2600-square-mile watershed containing Yucca Mountain. Estimates were obtained with a multivariate geostatistical model developed using AAP and elevation data from a network of 42 precipitation stations in southern Nevada and southeastern California. An additional 1531 elevations were obtained to improve estimation accuracy. Isohyets representing estimates obtained using univariate geostatistics (kriging) defined a smooth and continuous surface. Isohyets representing estimates obtained using multivariate geostatistics (cokriging) defined an irregular surface that more accurately represented expected local orographic influences on AAP. Cokriging results included a maximum estimate within the study area of 335 mm at an elevation of 7400 ft, an average estimate of 157 mm for the study area, and an average estimate of 172 mm at eight locations in the vicinity of the potential repository site. Kriging estimates tended to be lower in comparison because the increased AAP expected for remote mountainous topography was not adequately represented by the available sample. Regression results between cokriging estimates and elevation were similar to regression results between measured AAP and elevation. The position of the cokriging 250-mm isohyet relative to the boundaries of pinyon pine and juniper woodlands provided indirect evidence of improved estimation accuracy because the cokriging result agreed well with investigations by others concerning the relationship between elevation, vegetation, and climate in the Great Basin. Calculated estimation variances were also mapped and compared to evaluate improvements in estimation accuracy. 
Cokriging estimation variances were reduced by an average of 54% relative to kriging variances within the study area. Cokriging reduced estimation variances at the potential repository site by 55% relative to kriging. The usefulness of an existing network of stations for measuring AAP within the study area was evaluated using cokriging variances, and twenty additional stations were located for the purpose of improving the accuracy of future isohyetal mappings. Using the expanded network of stations, the maximum cokriging estimation variance within the study area was reduced by 78% relative to the existing network, and the average estimation variance was reduced by 52%.

  15. Variance in the chemical composition of dry beans determined from UV spectral fingerprints

    USDA-ARS?s Scientific Manuscript database

    Nine varieties of dry beans representing 5 market classes were grown in 3 states (Maryland, Michigan, and Nebraska) and sub-samples were collected for each variety (row composites from each plot). Aqueous methanol extracts were analyzed in triplicate by UV spectrophotometry. Analysis of variance-p...

  16. Extracting quantitative measures from EAP: a small clinical study using BFOR.

    PubMed

    Hosseinbor, A Pasha; Chung, Moo K; Wu, Yu-Chien; Fleming, John O; Field, Aaron S; Alexander, Andrew L

    2012-01-01

    The ensemble average propagator (EAP) describes the 3D average diffusion process of water molecules, capturing both its radial and angular contents, and hence providing rich information about complex tissue microstructure properties. Bessel Fourier orientation reconstruction (BFOR) is one of several analytical, non-Cartesian EAP reconstruction schemes employing multiple shell acquisitions that have recently been proposed. Such modeling bases have not yet been fully exploited in the extraction of rotationally invariant q-space indices that describe the degree of diffusion anisotropy/restrictivity. Such quantitative measures include the zero-displacement probability (P(o)), mean squared displacement (MSD), q-space inverse variance (QIV), and generalized fractional anisotropy (GFA), and all are simply scalar features of the EAP. In this study, a general relationship between MSD and q-space diffusion signal is derived and an EAP-based definition of GFA is introduced. A significant part of the paper is dedicated to utilizing BFOR in a clinical dataset, comprised of 5 multiple sclerosis (MS) patients and 4 healthy controls, to estimate P(o), MSD, QIV, and GFA of corpus callosum, and specifically, to see if such indices can detect changes between normal appearing white matter (NAWM) and healthy white matter (WM). Although the sample size is small, this study is a proof of concept that can be extended to larger sample sizes in the future.

  17. Complexity Variability Assessment of Nonlinear Time-Varying Cardiovascular Control

    NASA Astrophysics Data System (ADS)

    Valenza, Gaetano; Citi, Luca; Garcia, Ronald G.; Taylor, Jessica Noggle; Toschi, Nicola; Barbieri, Riccardo

    2017-02-01

    The application of complex systems theory to physiology and medicine has provided meaningful information about the nonlinear aspects underlying the dynamics of a wide range of biological processes and their disease-related aberrations. However, no studies have investigated whether meaningful information can be extracted by quantifying second-order moments of time-varying cardiovascular complexity. To this extent, we introduce a novel mathematical framework termed complexity variability, in which the variance of instantaneous Lyapunov spectra estimated over time serves as a reference quantifier. We apply the proposed methodology to four exemplary studies involving disorders which stem from cardiology, neurology and psychiatry: Congestive Heart Failure (CHF), Major Depression Disorder (MDD), Parkinson’s Disease (PD), and Post-Traumatic Stress Disorder (PTSD) patients with insomnia under a yoga training regime. We show that complexity assessments derived from simple time-averaging are not able to discern pathology-related changes in autonomic control, and we demonstrate that between-group differences in measures of complexity variability are consistent across pathologies. Pathological states such as CHF, MDD, and PD are associated with an increased complexity variability when compared to healthy controls, whereas wellbeing derived from yoga in PTSD is associated with lower time-variance of complexity.

  18. A question of separation: disentangling tracer bias and gravitational non-linearity with counts-in-cells statistics

    NASA Astrophysics Data System (ADS)

    Uhlemann, C.; Feix, M.; Codis, S.; Pichon, C.; Bernardeau, F.; L'Huillier, B.; Kim, J.; Hong, S. E.; Laigle, C.; Park, C.; Shin, J.; Pogosyan, D.

    2018-02-01

    Starting from a very accurate model for density-in-cells statistics of dark matter based on large deviation theory, a bias model for the tracer density in spheres is formulated. It adopts a mean bias relation based on a quadratic bias model to relate the log-densities of dark matter to those of mass-weighted dark haloes in real and redshift space. The validity of the parametrized bias model is established using a parametrization-independent extraction of the bias function. This average bias model is then combined with the dark matter PDF, neglecting any scatter around it: it nevertheless yields an excellent model for densities-in-cells statistics of mass tracers that is parametrized in terms of the underlying dark matter variance and three bias parameters. The procedure is validated on measurements of both the one- and two-point statistics of subhalo densities in the state-of-the-art Horizon Run 4 simulation showing excellent agreement for measured dark matter variance and bias parameters. Finally, it is demonstrated that this formalism allows for a joint estimation of the non-linear dark matter variance and the bias parameters using solely the statistics of subhaloes. Having verified that galaxy counts in hydrodynamical simulations sampled on a scale of 10 Mpc h-1 closely resemble those of subhaloes, this work provides important steps towards making theoretical predictions for density-in-cells statistics applicable to upcoming galaxy surveys like Euclid or WFIRST.

  19. Psychometric evaluation of the Persian version of the Templer's Death Anxiety Scale in cancer patients.

    PubMed

    Soleimani, Mohammad Ali; Yaghoobzadeh, Ameneh; Bahrami, Nasim; Sharif, Saeed Pahlevan; Sharif Nia, Hamid

    2016-10-01

    In this study, 398 Iranian cancer patients completed the 15-item Templer's Death Anxiety Scale (TDAS). Tests of internal consistency, principal components analysis, and confirmatory factor analysis were conducted to assess the internal consistency and factorial validity of the Persian TDAS. The construct reliability statistic and average variance extracted were also calculated to measure construct reliability, convergent validity, and discriminant validity. Principal components analysis indicated a 3-component solution, which was generally supported in the confirmatory analysis. However, acceptable cutoffs for construct reliability, convergent validity, and discriminant validity were not fulfilled for the three subscales that were derived from the principal component analysis. This study demonstrated both the advantages and potential limitations of using the TDAS with Persian-speaking cancer patients.

  20. Constructive Epistemic Modeling: A Hierarchical Bayesian Model Averaging Method

    NASA Astrophysics Data System (ADS)

    Tsai, F. T. C.; Elshall, A. S.

    2014-12-01

    Constructive epistemic modeling is the idea that our understanding of a natural system through a scientific model is a mental construct that continually develops through learning about and from the model. Using the hierarchical Bayesian model averaging (HBMA) method [1], this study shows that segregating different uncertain model components through a BMA tree of posterior model probabilities, model prediction, within-model variance, between-model variance and total model variance serves as a learning tool [2]. First, the BMA tree of posterior model probabilities permits the comparative evaluation of the candidate propositions of each uncertain model component. Second, systemic model dissection is imperative for understanding the individual contribution of each uncertain model component to the model prediction and variance. Third, the hierarchical representation of the between-model variance facilitates the prioritization of the contribution of each uncertain model component to the overall model uncertainty. We illustrate these concepts using the groundwater modeling of a siliciclastic aquifer-fault system. The sources of uncertainty considered are from geological architecture, formation dip, boundary conditions and model parameters. The study shows that the HBMA analysis helps in advancing knowledge about the model rather than forcing the model to fit a particular understanding or merely averaging several candidate models. [1] Tsai, F. T.-C., and A. S. Elshall (2013), Hierarchical Bayesian model averaging for hydrostratigraphic modeling: Uncertainty segregation and comparative evaluation. Water Resources Research, 49, 5520-5536, doi:10.1002/wrcr.20428. [2] Elshall, A.S., and F. T.-C. Tsai (2014). Constructive epistemic modeling of groundwater flow with geological architecture and boundary condition uncertainty under Bayesian paradigm, Journal of Hydrology, 517, 105-119, doi: 10.1016/j.jhydrol.2014.05.027.
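The variance segregation described above rests on the law of total variance: at a single level of the BMA tree, total predictive variance splits into a within-model and a between-model term (HBMA applies this recursively through the tree; only one level is sketched here). A minimal sketch with illustrative numbers, not taken from the cited study:

```python
# Hedged sketch: one-level BMA variance decomposition. probs, means and
# variances below are illustrative, not from the groundwater application.
def bma_moments(probs, means, variances):
    """Model-averaged mean, within-model, between-model and total variance."""
    mean = sum(p * m for p, m in zip(probs, means))
    within = sum(p * v for p, v in zip(probs, variances))
    between = sum(p * (m - mean) ** 2 for p, m in zip(probs, means))
    return mean, within, between, within + between

probs = [0.5, 0.3, 0.2]      # posterior model probabilities
means = [10.0, 12.0, 9.0]    # per-model predictions
variances = [1.0, 2.0, 1.5]  # per-model (within-model) predictive variances
print(bma_moments(probs, means, variances))
```

Comparing the within and between terms is what lets the analysis prioritize which uncertain model component dominates the overall uncertainty.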

  1. Monogamy has a fixation advantage based on fitness variance in an ideal promiscuity group.

    PubMed

    Garay, József; Móri, Tamás F

    2012-11-01

    We consider an ideal promiscuity group of females, which implies that all males have the same average mating success. If females have concealed ovulation, then the males' paternity chances are equal. We find that male-based monogamy will be fixed in the females' promiscuity group when stochastic Darwinian selection is described by a Markov chain. We point out that in huge populations the relative advantage (the difference between the average fitness of different strategies) primarily determines the outcome of evolution; in the case of neutrality (equal means), the smallest variance guarantees a fixation (absorption) advantage; when the means and variances are the same, the higher third moment determines which types will be fixed in the Markov chains.

  2. Multi-objective Optimization of Solar Irradiance and Variance at Pertinent Inclination Angles

    NASA Astrophysics Data System (ADS)

    Jain, Dhanesh; Lalwani, Mahendra

    2018-05-01

    The performance of a photovoltaic panel is strongly affected by changes in atmospheric conditions and the angle of inclination. This article evaluates the optimum tilt angle and orientation angle (surface azimuth angle) for a solar photovoltaic array in order to obtain maximum solar irradiance and to reduce the variance of radiation over different sets or subsets of time periods. Non-linear regression and adaptive neuro-fuzzy inference system (ANFIS) methods are used for predicting the solar radiation. The results of ANFIS are more accurate in comparison to non-linear regression. These results are further used for evaluating the correlation and applied for estimating the optimum combination of tilt angle and orientation angle with the help of the general algebraic modelling system and a multi-objective genetic algorithm. The hourly average solar irradiation is calculated at different combinations of tilt angle and orientation angle with the help of horizontal surface radiation data of Jodhpur (Rajasthan, India). The hourly average solar irradiance is calculated for three cases: zero variance, actual variance, and double variance at different time scenarios. It is concluded that monthly collected solar radiation produces better results than bimonthly, seasonally, half-yearly, and yearly collected solar radiation. The profit obtained with a monthly varying angle is 4.6% higher with zero variance and 3.8% higher with actual variance than with an annually fixed angle.

  3. [Assessment of Couples' Communication in Patients with Advanced Cancer: Validation of a German Version of the Couple Communication Scale (CCS)].

    PubMed

    Conrad, Martina; Engelmann, Dorit; Friedrich, Michael; Scheffold, Katharina; Philipp, Rebecca; Schulz-Kindermann, Frank; Härter, Martin; Mehnert, Anja; Koranyi, Susan

    2018-04-13

There are only a few valid instruments for measuring couples' communication in patients with cancer in German-speaking countries. The Couple Communication Scale (CCS) is an established instrument for assessing couples' communication. However, there has been no evidence so far on the psychometric properties of the German version of the CCS, and its assumed one-factor structure had not yet been verified in patients with advanced cancer. The CCS was validated as part of the study "Managing Cancer and Living Meaningfully" (CALM) on N=136 patients with advanced cancer (≥18 years, UICC stage III/IV). The psychometric properties of the scale were calculated (factor reliability, item reliability, average variance extracted [DEV]) and a confirmatory factor analysis was conducted (maximum likelihood estimation). Concurrent validity was tested against symptoms of anxiety (GAD-7), depression (BDI-II) and attachment insecurity (ECR-M16). In the confirmatory factor analysis, the one-factor structure showed a low but acceptable model fit and explained on average 49% of each item's variance (DEV). The CCS has an excellent internal consistency (Cronbach's α=0.91) and was negatively associated with attachment insecurity (ECR-M16: anxiety: r=-0.55, p<0.01; avoidance: r=-0.42, p<0.01) as well as with anxiety (GAD-7: r=-0.20, p<0.05) and depression (BDI-II: r=-0.27, p<0.01). The CCS is a reliable and valid instrument for measuring couples' communication in patients with advanced cancer. © Georg Thieme Verlag KG Stuttgart · New York.
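The average variance extracted (the German abbreviation DEV) reported above is simply the mean squared standardized loading of the items on their factor. A minimal Python sketch, using hypothetical loadings (the CCS loadings are not given in the abstract) chosen so the AVE comes out near the reported 49%:

```python
import numpy as np

# Hypothetical standardized factor loadings for a one-factor model;
# the actual CCS loadings are not reported in the abstract.
loadings = np.array([0.65, 0.72, 0.70, 0.68, 0.75])

# Average variance extracted (AVE / DEV): the mean share of each item's
# variance explained by the factor, i.e. the mean squared loading.
ave = np.mean(loadings ** 2)
print(round(ave, 3))  # 0.491, i.e. ~49% as in the abstract
```

An AVE of at least 0.50 is the conventional threshold for convergent validity, which is why a value near 0.49 is described as low but acceptable.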

  4. A Quantitative Study of Gully Erosion Based on Object-Oriented Analysis Techniques: A Case Study in Beiyanzikou Catchment of Qixia, Shandong, China

    PubMed Central

    Wang, Tao; He, Fuhong; Zhang, Anding; Gu, Lijuan; Wen, Yangmao; Jiang, Weiguo; Shao, Hongbo

    2014-01-01

This paper took a subregion of a small-watershed gully system in the Beiyanzikou catchment of Qixia, China, as a study area and, using object-oriented image analysis (OBIA), extracted the shoulder lines of gullies from high-spatial-resolution digital orthophoto map (DOM) aerial photographs. Next, it proposed an accuracy assessment method based on the adjacent distance between the boundary classified by remote sensing and points measured by RTK-GPS along the shoulder lines of gullies. Finally, the original surface was fitted using linear regression in accordance with the elevation of the two extracted edges of the experimental gullies, named Gully 1 and Gully 2, and the erosion volume was calculated. The results indicate that OBIA can effectively extract information on gullies; the average range difference between points field-measured along the edges of gullies and the classified boundary is 0.3166 m, with a variance of 0.2116 m. The erosion areas and volumes of the two gullies are 2141.6250 m2, 5074.1790 m3 and 1316.1250 m2, 1591.5784 m3, respectively. The results of the study provide a new method for the quantitative study of small gully erosion. PMID:24616626

  5. Mobile Phones-An asset or a liability: A study based on characterization and assessment of metals in waste mobile phone components using leaching tests.

    PubMed

    Hira, Meenakshi; Yadav, Sudesh; Morthekai, P; Linda, Anurag; Kumar, Sushil; Sharma, Anupam

    2018-01-15

The use of old-fashioned gadgets, especially mobile phones, is declining rapidly with the advancement of technology, which ultimately leads to the generation of e-waste. The present study investigates the concentrations of nine metals (Ba, Cd, Cr, Cu, Fe, Ni, Pb, Sn, and Zn) in various components of mobile phones using the Toxicity Characteristic Leaching Procedure (TCLP), Waste Extraction Test (WET) and Synthetic Precipitation Leaching Procedure (SPLP). The results were compared with the threshold limits for hazardous waste defined by the California Department of Toxic Substances Control (CDTSC) and the United States Environmental Protection Agency (USEPA). The average concentrations of metals were found to be high in printed wiring boards (PWBs). WET was found to be relatively aggressive compared with TCLP and SPLP. Redundancy analysis (RDA) suggests that the part of the mobile phone, extraction test, manufacturer, mobile model and year of manufacturing explain 34.66% of the variance. According to the present study, waste mobile phones must be considered hazardous due to the potential adverse impact of toxic metals on human health and the environment. However, mobile phones can be an asset, as systematic extraction and recycling could reduce the demand for primary metal mining and conserve natural resources. Copyright © 2017 Elsevier B.V. All rights reserved.

  6. Mesoscale Gravity Wave Variances from AMSU-A Radiances

    NASA Technical Reports Server (NTRS)

    Wu, Dong L.

    2004-01-01

A variance analysis technique is developed here to extract gravity wave (GW)-induced temperature fluctuations from NOAA AMSU-A (Advanced Microwave Sounding Unit-A) radiance measurements. By carefully removing the instrument/measurement noise, the algorithm can produce reliable GW variances with a minimum detectable value as small as 0.1 K². Preliminary analyses with AMSU-A data show that GW variance maps in the stratosphere have distributions very similar to those found with the UARS MLS (Upper Atmosphere Research Satellite Microwave Limb Sounder). However, the AMSU-A offers better horizontal and temporal resolution for observing regional GW variability, such as activity over sub-Antarctic islands.
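The noise-removal step can be illustrated with synthetic numbers (these are illustrative values, not AMSU-A data): the measured radiance variance is the sum of the wave-induced variance and the instrument noise variance, so subtracting an independently known noise variance recovers the GW part.

```python
import numpy as np

rng = np.random.default_rng(5)

# Synthetic wave-induced fluctuation plus uncorrelated instrument noise.
signal = 0.5 * np.sin(np.linspace(0, 20 * np.pi, 10_000))  # wave, variance 0.5**2/2
noise = rng.normal(0.0, 0.3, signal.size)                  # instrument noise, variance 0.09

measured_var = (signal + noise).var()  # variance of what the instrument records
noise_var = 0.3 ** 2                   # assumed known from calibration
gw_var = measured_var - noise_var      # close to the true wave variance 0.125
print(gw_var)
```

This works because the wave and the noise are uncorrelated, so their variances add; the careful part in practice is obtaining an unbiased estimate of the noise variance.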

  7. Kelvin waves: a comparison study between SABER and normal mode analysis of ECMWF data

    NASA Astrophysics Data System (ADS)

    Blaauw, Marten; Garcia, Rolando; Zagar, Nedjeljka; Tribbia, Joe

    2014-05-01

Equatorial Kelvin wave spectra are sensitive to the multi-scale variability of their source of tropical convective forcing. Moreover, Kelvin wave spectra are modified, as the waves propagate upward, by changes in the background winds and stability. Recent high-resolution data from observations as well as analyses are capable of resolving the slower Kelvin waves with shorter vertical wavelengths near the tropical tropopause. In this presentation, results from a quantitative comparison study of stratospheric Kelvin waves in satellite data (SABER) and analysis data from the ECMWF operational archive are shown. Temperature data from SABER are extracted over a six-year period (2007-2012) with an effective vertical resolution of 2 km. The spectral power of stratospheric Kelvin waves in SABER data is isolated by selecting symmetric, eastward spectral components in the 8-20 day range. Global data from the ECMWF operational analysis are extracted for the same six years on 91 model levels (top level at 0.01 hPa) and 25 km horizontal resolution. Using three-dimensional orthogonal normal-mode expansions, the input mass and wind data from ECMWF are projected onto balanced rotational modes and unbalanced inertia-gravity modes, including spectral data for pure Kelvin waves. The results show good agreement between Kelvin waves in SABER and ECMWF analysis data for (i) the frequency shift of Kelvin wave variance with height and (ii) vertical wavelengths. Variability with respect to the QBO will also be discussed. Discrepancies in the upper stratosphere, found to be 60% in a previous study, are found here to be 10% (averaged over the 8-20 day range), which can be explained by the better representation of the stratosphere in the 91-level version of the ECMWF operational model. New discrepancies in Kelvin wave variance are found in the lower stratosphere at 20 km: averaged spectral power over the 8-20 day range is 35% higher in ECMWF than in SABER data. We compared results at 20 km with additional satellite data from HIRDLS (1 km effective resolution) and conclude, preliminarily, that SABER does not represent the shortest 20-day Kelvin waves as well as HIRDLS and the ECMWF operational analysis do.

  8. A Visual Model for the Variance and Standard Deviation

    ERIC Educational Resources Information Center

    Orris, J. B.

    2011-01-01

    This paper shows how the variance and standard deviation can be represented graphically by looking at each squared deviation as a graphical object--in particular, as a square. A series of displays show how the standard deviation is the size of the average square.
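The idea can be reproduced numerically: each squared deviation is the area of a square whose side is the distance from the mean, the variance is the average of those areas, and the standard deviation is the side length of that "average square". A small Python sketch with made-up data:

```python
import numpy as np

data = np.array([2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0])

# Each squared deviation is the area of a square with side |x - mean|.
squares = (data - data.mean()) ** 2

# The (population) variance is the average of those square areas, and the
# standard deviation is the side of the square with that average area.
variance = squares.mean()
std_dev = np.sqrt(variance)
print(variance, std_dev)  # 4.0 2.0 for this data set
```

The geometric reading is that the standard deviation answers: "if all the squared-deviation squares were replaced by identical squares with the same total area, how long would each side be?"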

  9. Isolating the cow-specific part of residual energy intake in lactating dairy cows using random regressions.

    PubMed

    Fischer, A; Friggens, N C; Berry, D P; Faverdin, P

    2018-07-01

    The ability to properly assess and accurately phenotype true differences in feed efficiency among dairy cows is key to the development of breeding programs for improving feed efficiency. The variability among individuals in feed efficiency is commonly characterised by the residual intake approach. Residual feed intake is represented by the residuals of a linear regression of intake on the corresponding quantities of the biological functions that consume (or release) energy. However, the residuals include both, model fitting and measurement errors as well as any variability in cow efficiency. The objective of this study was to isolate the individual animal variability in feed efficiency from the residual component. Two separate models were fitted, in one the standard residual energy intake (REI) was calculated as the residual of a multiple linear regression of lactation average net energy intake (NEI) on lactation average milk energy output, average metabolic BW, as well as lactation loss and gain of body condition score. In the other, a linear mixed model was used to simultaneously fit fixed linear regressions and random cow levels on the biological traits and intercept using fortnight repeated measures for the variables. This method split the predicted NEI in two parts: one quantifying the population mean intercept and coefficients, and one quantifying cow-specific deviations in the intercept and coefficients. The cow-specific part of predicted NEI was assumed to isolate true differences in feed efficiency among cows. NEI and associated energy expenditure phenotypes were available for the first 17 fortnights of lactation from 119 Holstein cows; all fed a constant energy-rich diet. Mixed models fitting cow-specific intercept and coefficients to different combinations of the aforementioned energy expenditure traits, calculated on a fortnightly basis, were compared. 
The variance of REI estimated with the lactation average model represented only 8% of the variance of measured NEI. Among all compared mixed models, the variance of the cow-specific part of predicted NEI represented between 53% and 59% of the variance of REI estimated from the lactation average model or between 4% and 5% of the variance of measured NEI. The remaining 41% to 47% of the variance of REI estimated with the lactation average model may therefore reflect model fitting errors or measurement errors. In conclusion, the use of a mixed model framework with cow-specific random regressions seems to be a promising method to isolate the cow-specific component of REI in dairy cows.

  10. Image variance and spatial structure in remotely sensed scenes. [South Dakota, California, Missouri, Kentucky, Louisiana, Tennessee, District of Columbia, and Oregon

    NASA Technical Reports Server (NTRS)

    Woodcock, C. E.; Strahler, A. H.

    1984-01-01

Digital images derived by scanning air photos and by acquiring aircraft and spacecraft scanner data were studied. Results show that spatial structure in scenes can be measured and logically related to texture and image variance. Imagery data were used of a South Dakota forest; a housing development in Canoga Park, California; an agricultural area in Mississippi, Louisiana, Kentucky, and Tennessee; the city of Washington, D.C.; and the Klamath National Forest. Local variance, measured as the average standard deviation of brightness values within a three-by-three moving window, reaches a peak at a resolution cell size about two-thirds to three-fourths the size of the objects within the scene. If objects are smaller than the resolution cell size of the image, this peak does not occur and local variance simply decreases with increasing cell size as spatial averaging occurs. Variograms can also reveal the size, shape, and density of objects in the scene.
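The local-variance measure described above (the average standard deviation in a three-by-three moving window) is straightforward to sketch; the toy image below is hypothetical, not one of the study scenes:

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

# Toy "image": bright 2x2 objects on a noisy dark background.
rng = np.random.default_rng(0)
image = rng.normal(10.0, 1.0, (12, 12))
image[2:4, 2:4] += 20.0
image[7:9, 7:9] += 20.0

# Standard deviation of brightness inside every 3x3 moving window,
# then averaged over the image: the paper's "local variance" measure.
windows = sliding_window_view(image, (3, 3))  # shape (10, 10, 3, 3)
local_sd = windows.std(axis=(2, 3))
local_variance = float(local_sd.mean())
print(local_variance)
```

Recomputing this after block-averaging the image to coarser and coarser cells would trace out the peak the authors describe near two-thirds to three-fourths of the object size.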

  11. On statistical analysis of factors affecting anthocyanin extraction from Ixora siamensis

    NASA Astrophysics Data System (ADS)

    Mat Nor, N. A.; Arof, A. K.

    2016-10-01

This study focused on designing an experimental model to evaluate the influence of the operative extraction parameters employed for anthocyanin extraction from Ixora siamensis on CIE color measurements (a*, b* and color saturation). Extractions were conducted at temperatures of 30, 55 and 80°C, soaking times of 60, 120 and 180 min, using acidified methanol solvent with different trifluoroacetic acid (TFA) contents of 0.5, 1.75 and 3% (v/v). The statistical evaluation was performed by running analysis of variance (ANOVA) and regression calculations to investigate the significance of the generated model. Results show that the generated regression models adequately explain the data variation and significantly represent the actual relationship between the independent variables and the responses. ANOVA showed high coefficient-of-determination values (R²) of 0.9687 for a*, 0.9621 for b* and 0.9758 for color saturation, ensuring a satisfactory fit of the developed models to the experimental data. The interaction between TFA content and extraction temperature exhibited the most significant influence on the CIE color parameters.

  12. Measuring self-rated productivity: factor structure and variance component analysis of the Health and Work Questionnaire.

    PubMed

    von Thiele Schwarz, Ulrica; Sjöberg, Anders; Hasson, Henna; Tafvelin, Susanne

    2014-12-01

To test the factor structure and variance components of the productivity subscales of the Health and Work Questionnaire (HWQ). A total of 272 individuals from one company answered the HWQ, which includes three dimensions (efficiency, quality, and quantity) that the respondent rated from three perspectives: their own, their supervisor's, and their coworkers'. A confirmatory factor analysis was performed, and common and unique variance components were evaluated. A common factor explained 81% of the variance (reliability 0.95). All dimensions and rater perspectives contributed unique variance. The final model provided a perfect fit to the data. Efficiency, quality, and quantity and the three rater perspectives are valid parts of the self-rated productivity measurement model, but with a large common factor. Thus, the HWQ can be analyzed either as one factor or by extracting the unique variance for each subdimension.

  13. Two is better than one: joint statistics of density and velocity in concentric spheres as a cosmological probe

    NASA Astrophysics Data System (ADS)

    Uhlemann, C.; Codis, S.; Hahn, O.; Pichon, C.; Bernardeau, F.

    2017-08-01

An analytical formalism to obtain the probability distribution functions (PDFs) of spherically averaged cosmic densities and velocity divergences in the mildly non-linear regime is presented. A large-deviation principle is applied to those cosmic fields, assuming their most likely dynamics in spheres is set by the spherical collapse model. We validate our analytical results using state-of-the-art dark matter simulations with a phase-space-resolved velocity field, finding 2 per cent level agreement for a wide range of velocity divergences and densities in the mildly non-linear regime (~10 Mpc h-1 at redshift zero), usually inaccessible to perturbation theory. From the joint PDF of densities and velocity divergences measured in two concentric spheres, we extract with the same accuracy velocity profiles and the conditional velocity PDF subject to a given over/underdensity, which are of interest for understanding the non-linear evolution of velocity flows. Both PDFs are used to build a simple but accurate maximum likelihood estimator for the redshift evolution of the variance of both the density and velocity divergence fields, which has smaller relative errors than the sample variances when non-linearities appear. Given the dependence of the velocity divergence on the growth rate, there is a significant gain in using the full knowledge of both PDFs to derive constraints on the equation of state of dark energy. Thanks to the insensitivity of the velocity divergence to bias, its PDF can be used to obtain unbiased constraints on the growth of structures (σ8, f), or it can be combined with the galaxy density PDF to extract bias parameters.

  14. Development of the major trauma case review tool.

    PubMed

    Curtis, Kate; Mitchell, Rebecca; McCarthy, Amy; Wilson, Kellie; Van, Connie; Kennedy, Belinda; Tall, Gary; Holland, Andrew; Foster, Kim; Dickinson, Stuart; Stelfox, Henry T

    2017-02-28

As many as half of all patients with major traumatic injuries do not receive the recommended care, with variance in preventable mortality reported across the globe. This variance highlights the need for a comprehensive process for monitoring and reviewing patient care, central to which is a consistent peer-review process that includes trauma system safety and human factors. There is no published, evidence-informed standardised tool that considers these factors for use in adult or paediatric trauma case peer review. The aim of this research was to develop and validate a trauma case review tool to facilitate clinical review of paediatric trauma patient care by extracting information that enables monitoring, informs change and enables loop closure. Development of the trauma case review tool was multi-faceted, beginning with a review of the trauma audit tool literature. Data were extracted from the literature to inform iterative tool development using a consensus approach. Inter-rater agreement was assessed for both the pilot and finalised versions of the tool. The final trauma case review tool contained ten sections, including patient factors (such as pre-existing conditions), the presenting problem, a timeline of events, factors contributing to the care delivery problem (including equipment, work environment, staff action and organizational factors), positive aspects of care and the outcome of the panel discussion. After refinement, the inter-rater reliability of the human factors and outcome components of the tool improved, with an average 86% agreement between raters. This research developed an evidence-informed tool for use in paediatric trauma case review that considers both system safety and human factors to facilitate clinical review of trauma patient care. The tool can be used to identify opportunities for improvement in trauma care and to guide quality assurance activities. Validation is required in the adult population.

  15. Validation Evidence of the Motivation for Teaching Scale in Secondary Education.

    PubMed

    Abós, Ángel; Sevil, Javier; Martín-Albo, José; Aibar, Alberto; García-González, Luis

    2018-04-10

Grounded in self-determination theory, the aim of this study was to develop a scale with adequate psychometric properties to assess motivation for teaching and to explain some outcomes of secondary education teachers at work. The sample comprised 584 secondary education teachers. Analyses supported the five-factor model (intrinsic motivation, identified regulation, introjected regulation, external regulation and amotivation) and indicated the presence of a continuum of self-determination. Evidence of reliability was provided by Cronbach's alpha, composite reliability and average variance extracted. Multigroup confirmatory factor analyses supported the partial invariance (configural and metric) of the scale in different sub-samples, in terms of gender and type of school. Concurrent validity was analyzed by a structural equation model that explained 71% of the work dedication variance and 69% of the boredom at work variance. Work dedication was positively predicted by intrinsic motivation (β = .56, p < .001) and external regulation (β = .29, p < .001) and negatively predicted by introjected regulation (β = -.22, p < .001) and amotivation (β = -.49, p < .001). Boredom at work was negatively predicted by intrinsic motivation (β = -.28, p < .005) and positively predicted by amotivation (β = .68, p < .001). The Motivation for Teaching Scale in Secondary Education (Spanish acronym EME-ES, Escala de Motivación por la Enseñanza en Educación Secundaria) is discussed as a valid and reliable instrument. This is the first specific scale in the work context of secondary teachers to integrate the five-factor structure together with teachers' dedication and boredom at work.

  16. Jensen's Inequality Predicts Effects of Environmental Variation

    Treesearch

    Jonathan J. Ruel; Matthew P. Ayres

    1999-01-01

Many biologists now recognize that environmental variance can exert important effects on patterns and processes in nature that are independent of average conditions. Jensen's inequality is a mathematical proof that is seldom mentioned in the ecological literature but which provides a powerful tool for predicting some direct effects of environmental variance in...
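Jensen's inequality states that E[f(X)] >= f(E[X]) for any convex function f, which is exactly why variance matters independently of the mean. A quick numerical check in Python, with an assumed convex response curve (the exponential form is illustrative, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(42)

# Variable environment: temperatures with mean 20 and nonzero variance.
temps = rng.normal(20.0, 5.0, 100_000)

# Assumed convex performance response to temperature (illustrative form).
f = lambda t: np.exp(0.1 * t)

mean_of_f = f(temps).mean()  # average performance in a variable environment
f_of_mean = f(temps.mean())  # performance at the average temperature
print(mean_of_f > f_of_mean)  # True: Jensen's inequality
```

For a concave response the inequality flips, so environmental variance can either help or hurt average performance depending on the curvature of the response function.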

  17. Smooth empirical Bayes estimation of observation error variances in linear systems

    NASA Technical Reports Server (NTRS)

    Martz, H. F., Jr.; Lian, M. W.

    1972-01-01

    A smooth empirical Bayes estimator was developed for estimating the unknown random scale component of each of a set of observation error variances. It is shown that the estimator possesses a smaller average squared error loss than other estimators for a discrete time linear system.

  18. Bioequivalence evaluation of two brands of amoxicillin/clavulanic acid 250/125 mg combination tablets in healthy human volunteers: use of replicate design approach.

    PubMed

    Idkaidek, Nasir M; Al-Ghazawi, Ahmad; Najib, Naji M

    2004-12-01

The purpose of this study was to apply a replicate design approach to a bioequivalence study of an amoxicillin/clavulanic acid combination following a 250/125 mg oral dose in 23 subjects, and to compare individual bioequivalence analysis with average bioequivalence. The study was conducted as a 2-treatment, 2-sequence, 4-period crossover. Average bioequivalence was shown, while the individual bioequivalence approach failed to show bioequivalence. In conclusion, compared with the average bioequivalence approach, the individual bioequivalence approach is a strong statistical tool for testing intra-subject variances and the subject-by-formulation interaction variance. Copyright © 2004 John Wiley & Sons, Ltd.

  19. 99aa/99ac data sets

    Science.gov Websites

    using five different instruments, extending from day -11 to day +58 (in this archive all phases are expressed with respect to B-band maximum). In most cases, the spectra were acquired using different . The supernova spectrum was extracted using the variance weighted optimal aperture extraction method

  20. Some variance reduction methods for numerical stochastic homogenization

    PubMed Central

    Blanc, X.; Le Bris, C.; Legoll, F.

    2016-01-01

    We give an overview of a series of recent studies devoted to variance reduction techniques for numerical stochastic homogenization. Numerical homogenization requires that a set of problems is solved at the microscale, the so-called corrector problems. In a random environment, these problems are stochastic and therefore need to be repeatedly solved, for several configurations of the medium considered. An empirical average over all configurations is then performed using the Monte Carlo approach, so as to approximate the effective coefficients necessary to determine the macroscopic behaviour. Variance severely affects the accuracy and the cost of such computations. Variance reduction approaches, borrowed from other contexts in the engineering sciences, can be useful. Some of these variance reduction techniques are presented, studied and tested here. PMID:27002065
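One of the simplest techniques in this family, antithetic variates, can be shown on a scalar toy integral rather than a corrector problem: each uniform sample U is paired with 1 - U, so the fluctuations of a monotone integrand partly cancel within each pair.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50_000
f = lambda u: np.exp(u)  # estimate the integral of e^u over [0, 1], i.e. e - 1

u = rng.random(n)
plain = f(u)                      # standard Monte Carlo samples
anti = 0.5 * (f(u) + f(1.0 - u))  # antithetic-pair averages

print(plain.mean(), anti.mean())  # both near e - 1 ~ 1.718
print(anti.var() < plain.var())   # True: the paired estimator fluctuates far less
```

For a monotone integrand the pair (f(U), f(1-U)) is negatively correlated, so the per-sample variance of the antithetic estimator is much smaller than that of plain Monte Carlo at essentially the same cost.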

  1. SU-D-BRA-07: A Phantom Study to Assess the Variability in Radiomics Features Extracted From Cone-Beam CT Images

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fave, X; Fried, D; UT Health Science Center Graduate School of Biomedical Sciences, Houston, TX

    2015-06-15

Purpose: Several studies have demonstrated the prognostic potential of texture features extracted from CT images of non-small cell lung cancer (NSCLC) patients. The purpose of this study was to determine whether these features could be extracted with high reproducibility from cone-beam CT (CBCT) images, so that features can be easily tracked throughout a patient's treatment. Methods: Two materials in a radiomics phantom, designed to approximate NSCLC tumor texture, were used to assess the reproducibility of 26 features. This phantom was imaged on 9 CBCT scanners, including Elekta and Varian machines. Thoracic and head imaging protocols were acquired on each machine. CBCT images from 27 NSCLC patients imaged using the thoracic protocol on Varian machines were obtained for comparison. The variance of each texture measured from these patients was compared to the variance of phantom values for different manufacturer/protocol subsets. Levene's test was used to identify features that had a significantly smaller variance in the phantom scans versus the patient data. Results: Approximately half of the features (13/26 for material 1 and 15/26 for material 2) had a significantly smaller variance (p<0.05) between Varian thoracic scans of the phantom compared to patient scans. Many of these same features remained significant for the head scans on Varian (12/26 and 8/26). However, when thoracic scans from Elekta and Varian were combined, only a few features were still significant (4/26 and 5/26). Three features (skewness, coarsely filtered mean and standard deviation) were significant in almost all manufacturer/protocol subsets. Conclusion: Texture features extracted from CBCT images of a radiomics phantom are reproducible and show significantly less variation than the same features measured from patient images when images from the same manufacturer or with similar parameters are used. Reproducibility between CBCT scanners may be high enough to allow the extraction of meaningful texture values for patients. This project was funded in part by the Cancer Prevention Research Institute of Texas (CPRIT). Xenia Fave is a recipient of the American Association of Physicists in Medicine Graduate Fellowship.

  2. How the variance of some extraction variables may affect the quality of espresso coffees served in coffee shops.

    PubMed

    Severini, Carla; Derossi, Antonio; Fiore, Anna G; De Pilli, Teresa; Alessandrino, Ofelia; Del Mastro, Arcangela

    2016-07-01

To improve the quality of espresso coffee, the variables under the control of the barista, such as grinding grade, coffee quantity and the pressure applied to the coffee cake, as well as their variance, are of great importance. Nonlinear mixed-effects modeling was used to obtain information on the changes in the chemical attributes of espresso coffee (EC) as a function of the variability of the extraction conditions. During extraction, the changes in volume were well described by a logistic model, whereas the chemical attributes were better fit by first-order kinetics. The major source of information was contained in the grinding grade, which accounted for 87-96% of the variance of the experimental data. The variability of the grinding produced changes in caffeine content in the range of 80.03 mg to 130.36 mg even at a constant grinding grade of 6.5. The variability in the volume and chemical attributes of EC is large. Grinding had the most important effect, as the variability in particle size distribution observed at each grinding level had a profound effect on the quality of EC. Standardization of grinding would be of crucial importance for obtaining espresso coffees of consistently high quality. © 2015 Society of Chemical Industry.

  3. An interplanetary magnetic field ensemble at 1 AU

    NASA Technical Reports Server (NTRS)

    Matthaeus, W. H.; Goldstein, M. L.; King, J. H.

    1985-01-01

A method for calculating ensemble averages from magnetic field data is described. A data set comprising approximately 16 months of nearly continuous ISEE-3 magnetic field data is used in this study. Individual subintervals of these data, ranging from 15 hours to 15.6 days, comprise the ensemble. The sole condition for including each subinterval in the averages is the degree to which it represents a weakly time-stationary process. Averages obtained by this method are appropriate for a turbulence description of the interplanetary medium. The ensemble-average correlation length obtained from all subintervals is found to be 4.9 × 10^11 cm. The average values of the variances of the magnetic field components are in the approximate ratio 8:9:10, where the third component is along the local mean field direction. The correlation lengths and variances are found to vary systematically with subinterval duration, reflecting the important role of low-frequency fluctuations in the interplanetary medium.

  4. One-shot estimate of MRMC variance: AUC.

    PubMed

    Gallas, Brandon D

    2006-03-01

    One popular study design for estimating the area under the receiver operating characteristic curve (AUC) is the one in which a set of readers reads a set of cases: a fully crossed design in which every reader reads every case. The variability of the subsequent reader-averaged AUC has two sources: the multiple readers and the multiple cases (MRMC). In this article, we present a nonparametric estimate for the variance of the reader-averaged AUC that is unbiased and does not use resampling tools. The one-shot estimate is based on the MRMC variance derived by the mechanistic approach of Barrett et al. (2005), as well as the nonparametric variance of a single-reader AUC derived in the literature on U statistics. We investigate the bias and variance properties of the one-shot estimate through a set of Monte Carlo simulations with simulated model observers and images. The different simulation configurations vary numbers of readers and cases, amounts of image noise and internal noise, as well as how the readers are constructed. We compare the one-shot estimate to a method that uses the jackknife resampling technique with an analysis of variance model at its foundation (Dorfman et al. 1992). The name one-shot highlights that resampling is not used. The one-shot and jackknife estimators behave similarly, with the one-shot being marginally more efficient when the number of cases is small. We have derived a one-shot estimate of the MRMC variance of AUC that is based on a probabilistic foundation with limited assumptions, is unbiased, and compares favorably to an established estimate.
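The building block of the one-shot estimator is the nonparametric (Wilcoxon/Mann-Whitney) AUC for a single reader: the fraction of diseased/non-diseased score pairs that are ranked correctly. A sketch with simulated scores (the MRMC variance formula itself is not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical single-reader confidence scores for 500 non-diseased and
# 500 diseased cases (higher score = more suspicious).
healthy = rng.normal(0.0, 1.0, 500)
diseased = rng.normal(1.0, 1.0, 500)

# Nonparametric AUC: probability that a diseased case outscores a
# non-diseased one, counting ties as half (a two-sample U statistic).
pairs = diseased[:, None] - healthy[None, :]
auc = np.mean(pairs > 0) + 0.5 * np.mean(pairs == 0)
print(round(auc, 2))
```

In a fully crossed MRMC study this AUC is computed per reader and averaged; the one-shot estimator then decomposes the variance of that reader-averaged AUC into reader, case, and reader-by-case components without resampling.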

  5. PCA feature extraction for change detection in multidimensional unlabeled data.

    PubMed

    Kuncheva, Ludmila I; Faithfull, William J

    2014-01-01

    When classifiers are deployed in real-world applications, it is assumed that the distribution of the incoming data matches the distribution of the data used to train the classifier. This assumption is often incorrect, which necessitates some form of change detection or adaptive classification. While there has been a lot of work on change detection based on the classification error monitored over the course of the operation of the classifier, finding changes in multidimensional unlabeled data is still a challenge. Here, we propose to apply principal component analysis (PCA) for feature extraction prior to the change detection. Supported by a theoretical example, we argue that the components with the lowest variance should be retained as the extracted features because they are more likely to be affected by a change. We chose a recently proposed semiparametric log-likelihood change detection criterion that is sensitive to changes in both mean and variance of the multidimensional distribution. An experiment with 35 datasets and an illustration with a simple video segmentation demonstrate the advantage of using extracted features compared to raw data. Further analysis shows that feature extraction through PCA is beneficial, specifically for data with multiple balanced classes.
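The paper's key point, that the lowest-variance principal components are the ones most likely to reveal a change, can be sketched as follows; the data and the injected shift are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(7)

# Reference window: correlated 2-D data with one dominant direction.
ref = rng.normal(size=(2000, 2)) @ np.array([[3.0, 0.0], [2.0, 0.3]])

# PCA via SVD of the centered data; vt rows are sorted by variance,
# so the last row is the direction with the SMALLEST variance.
mean = ref.mean(axis=0)
_, _, vt = np.linalg.svd(ref - mean, full_matrices=False)
minor = vt[-1]

# Inject a change along the minor axis: tiny relative to the dominant
# spread, but obvious after projecting onto the low-variance component.
changed = ref + 0.5 * minor
score_ref = (ref - mean) @ minor
score_new = (changed - mean) @ minor
print(score_ref.mean().round(2), score_new.mean().round(2))
```

Monitoring the projections with a change-detection criterion (the authors use a semiparametric log-likelihood test sensitive to mean and variance shifts) then flags the drift that the high-variance components would mask.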

  6. Testing the psychometric properties of the Environmental Attitudes Inventory on undergraduate students in the Arab context: A test-retest approach.

    PubMed

    AlMenhali, Entesar Ali; Khalid, Khalizani; Iyanna, Shilpa

    2018-01-01

    The Environmental Attitudes Inventory (EAI) was developed to evaluate the multidimensional nature of environmental attitudes; however, it is based on a dataset from outside the Arab context. This study reinvestigated the construct validity of the EAI with a new dataset and confirmed the feasibility of applying it in the Arab context. One hundred and forty-eight subjects in Study 1 and 130 in Study 2 provided valid responses. An exploratory factor analysis (EFA) was used to extract a new factor structure in Study 1, and confirmatory factor analysis (CFA) was performed in Study 2. Both studies generated a seven-factor model, and the model fit was discussed for both studies. Study 2 exhibited satisfactory model fit indices compared to Study 1. Factor loading values of a few items in Study 1 affected the reliability values and average variance extracted values, which demonstrated low discriminant validity. Based on the results of the EFA and CFA, this study showed sufficient model fit and suggested the feasibility of applying the EAI in the Arab context with good construct validity and internal consistency.

  7. Testing the psychometric properties of the Environmental Attitudes Inventory on undergraduate students in the Arab context: A test-retest approach

    PubMed Central

    2018-01-01

    The Environmental Attitudes Inventory (EAI) was developed to evaluate the multidimensional nature of environmental attitudes; however, it is based on a dataset from outside the Arab context. This study reinvestigated the construct validity of the EAI with a new dataset and confirmed the feasibility of applying it in the Arab context. One hundred and forty-eight subjects in Study 1 and 130 in Study 2 provided valid responses. An exploratory factor analysis (EFA) was used to extract a new factor structure in Study 1, and confirmatory factor analysis (CFA) was performed in Study 2. Both studies generated a seven-factor model, and the model fit was discussed for both studies. Study 2 exhibited satisfactory model fit indices compared to Study 1. Factor loading values of a few items in Study 1 affected the reliability values and average variance extracted values, which demonstrated low discriminant validity. Based on the results of the EFA and CFA, this study showed sufficient model fit and suggested the feasibility of applying the EAI in the Arab context with good construct validity and internal consistency. PMID:29758021

  8. A Comparison of Distribution Free and Non-Distribution Free Factor Analysis Methods

    ERIC Educational Resources Information Center

    Ritter, Nicola L.

    2012-01-01

    Many researchers recognize that factor analysis can be conducted on both correlation matrices and variance-covariance matrices. Although most researchers extract factors from non-distribution free or parametric methods, researchers can also extract factors from distribution free or non-parametric methods. The nature of the data dictates the method…

  9. Control algorithms for dynamic attenuators.

    PubMed

    Hsieh, Scott S; Pelc, Norbert J

    2014-06-01

    The authors describe algorithms to control dynamic attenuators in CT and compare their performance using simulated scans. Dynamic attenuators are prepatient beam shaping filters that modulate the distribution of x-ray fluence incident on the patient on a view-by-view basis. These attenuators can reduce dose while improving key image quality metrics such as peak or mean variance. In each view, the attenuator presents several degrees of freedom which may be individually adjusted. The total number of degrees of freedom across all views is very large, making many optimization techniques impractical. The authors develop a theory for optimally controlling these attenuators. Special attention is paid to a theoretically perfect attenuator which controls the fluence for each ray individually, but the authors also investigate and compare three other, practical attenuator designs which have been previously proposed: the piecewise-linear attenuator, the translating attenuator, and the double wedge attenuator. The authors pose and solve the optimization problems of minimizing the mean and peak variance subject to a fixed dose limit. For a perfect attenuator and mean variance minimization, this problem can be solved in simple, closed form. For other attenuator designs, the problem can be decomposed into separate problems for each view to greatly reduce the computational complexity. Peak variance minimization can be approximately solved using iterated, weighted mean variance (WMV) minimization. Also, the authors develop heuristics for the perfect and piecewise-linear attenuators which do not require a priori knowledge of the patient anatomy. The authors compare these control algorithms on different types of dynamic attenuators using simulated raw data from forward projected DICOM files of a thorax and an abdomen. 
The translating and double wedge attenuators reduce dose by an average of 30% relative to current techniques (bowtie filter with tube current modulation) without increasing peak variance. The 15-element piecewise-linear dynamic attenuator reduces dose by an average of 42%, and the perfect attenuator reduces dose by an average of 50%. Improvements in peak variance are several times larger than improvements in mean variance. Heuristic control eliminates the need for a prescan. For the piecewise-linear attenuator, the cost of heuristic control is an increase in dose of 9%. The proposed iterated WMV minimization produces results that are within a few percent of the true solution. Dynamic attenuators show potential for significant dose reduction. A wide class of dynamic attenuators can be accurately controlled using the described methods.

  10. Analysis of aperture averaging measurements. [laser scintillation data on the effect of atmospheric turbulence on signal fluctuations

    NASA Technical Reports Server (NTRS)

    Fried, D. L.

    1975-01-01

    Laser scintillation data obtained by the NASA Goddard Space Flight Center balloon flight no. 5 from White Sands Missile Range on 19 October 1973 are analyzed. The measurement data, taken with various size receiver apertures, were related to predictions of aperture averaging theory, and it is concluded that the data are in reasonable agreement with theory. The following parameters are assigned to the vertical distribution of the strength of turbulence during the period of the measurements (daytime), for lambda = 0.633 microns and a source at the zenith: the aperture averaging length is d_0 = 0.125 m, and the log-amplitude variance is beta_l^2 = 0.084 square nepers. This corresponds to a normalized point intensity variance of 0.40.
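    The quoted point intensity variance can be checked against the standard weak-turbulence lognormal relation sigma_I^2 = exp(4 beta_l^2) - 1; the assumption here is that the reported value follows this textbook relation.

```python
import math

beta_l_sq = 0.084                           # log-amplitude variance (nepers^2)
sigma_I_sq = math.exp(4 * beta_l_sq) - 1    # lognormal weak-turbulence relation
print(round(sigma_I_sq, 2))                 # prints 0.4
```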

  11. Quantifying predictability variations in a low-order ocean-atmosphere model - A dynamical systems approach

    NASA Technical Reports Server (NTRS)

    Nese, Jon M.; Dutton, John A.

    1993-01-01

    The predictability of the weather and climatic states of a low-order moist general circulation model is quantified using a dynamic systems approach, and the effect of incorporating a simple oceanic circulation on predictability is evaluated. The predictability and the structure of the model attractors are compared using Liapunov exponents, local divergence rates, and the correlation and Liapunov dimensions. It was found that the activation of oceanic circulation increases the average error doubling time of the atmosphere and the coupled ocean-atmosphere system by 10 percent and decreases the variance of the largest local divergence rate by 20 percent. When an oceanic circulation develops, the average predictability of annually averaged states is improved by 25 percent and the variance of the largest local divergence rate decreases by 25 percent.

  12. A geometrical optics approach for modeling aperture averaging in free space optical communication applications

    NASA Astrophysics Data System (ADS)

    Yuksel, Heba; Davis, Christopher C.

    2006-09-01

    Intensity fluctuations at the receiver in free space optical (FSO) communication links lead to a received power variance that depends on the size of the receiver aperture. Increasing the size of the receiver aperture reduces the power variance. This effect of the receiver size on power variance is called aperture averaging. If there were no aperture size limitation at the receiver, then there would be no turbulence-induced scintillation. In practice, there is always a tradeoff between aperture size, transceiver weight, and potential transceiver agility for pointing, acquisition and tracking (PAT) of FSO communication links. We have developed a geometrical simulation model to predict the aperture averaging factor. This model is used to simulate the aperture averaging effect at a given range by using a large number of rays, Gaussian as well as uniformly distributed, propagating through simulated turbulence into a circular receiver of varying aperture size. Turbulence is simulated by filling the propagation path with spherical bubbles of varying sizes and refractive index discontinuities statistically distributed according to various models. For each statistical representation of the atmosphere, the three-dimensional trajectory of each ray is analyzed using geometrical optics. These Monte Carlo techniques have proved capable of assessing the aperture averaging effect, in particular, the quantitative expected reduction in intensity fluctuations with increasing aperture diameter. In addition, beam wander results have demonstrated the range-cubed dependence of mean-squared beam wander. An effective turbulence parameter can also be determined by correlating beam wander behavior with the path length.

  13. Kalman filter for statistical monitoring of forest cover across sub-continental regions [Symposium

    Treesearch

    Raymond L. Czaplewski

    1991-01-01

    The Kalman filter is a generalization of the composite estimator. The univariate composite estimate combines two prior estimates of a population parameter in a weighted average whose scalar weights are inversely proportional to the estimates' variances. The composite estimator is a minimum variance estimator that requires no distributional assumptions other than estimates of the...
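    The univariate composite estimate described above is just an inverse-variance weighted average, est = (x1/v1 + x2/v2) / (1/v1 + 1/v2). The helper below is a hypothetical illustration, not code from the paper.

```python
def composite_estimate(x1, v1, x2, v2):
    """Minimum-variance combination of two independent estimates:
    weights are inversely proportional to the variances."""
    w1, w2 = 1.0 / v1, 1.0 / v2
    est = (w1 * x1 + w2 * x2) / (w1 + w2)
    var = 1.0 / (w1 + w2)   # never larger than min(v1, v2)
    return est, var
```

    With equal variances the result is the plain average; with unequal variances the more precise estimate dominates, and the combined variance is always below either input variance.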

  14. Some variance reduction methods for numerical stochastic homogenization.

    PubMed

    Blanc, X; Le Bris, C; Legoll, F

    2016-04-28

    We give an overview of a series of recent studies devoted to variance reduction techniques for numerical stochastic homogenization. Numerical homogenization requires that a set of problems is solved at the microscale, the so-called corrector problems. In a random environment, these problems are stochastic and therefore need to be repeatedly solved, for several configurations of the medium considered. An empirical average over all configurations is then performed using the Monte Carlo approach, so as to approximate the effective coefficients necessary to determine the macroscopic behaviour. Variance severely affects the accuracy and the cost of such computations. Variance reduction approaches, borrowed from other contexts in the engineering sciences, can be useful. Some of these variance reduction techniques are presented, studied and tested here. © 2016 The Author(s).
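    As a generic illustration of this family of techniques (not the homogenization setting itself), the following toy antithetic-variates estimate of E[exp(U)] shows how pairing each sample with its mirror reduces variance at equal computational cost.

```python
import numpy as np

# Toy antithetic-variates demo on E[exp(U)], U ~ Uniform(0, 1).
# For a monotone integrand, f(U) and f(1 - U) are negatively
# correlated, so each pair average has much less variance than
# two independent draws.
rng = np.random.default_rng(42)
n = 10_000
f = np.exp

u = rng.random(n)
plain = f(rng.random(2 * n))          # 2n independent evaluations
anti = 0.5 * (f(u) + f(1.0 - u))      # n antithetic pairs (also 2n evaluations)

exact = np.e - 1.0                    # true value of the integral
```

    Both estimators are unbiased for exp(1) - 1, but the per-sample variance of the antithetic pairs is far smaller, which is exactly the kind of gain variance reduction aims at in the Monte Carlo step of numerical homogenization.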

  15. Concept of Aided Phytostabilization of Contaminated Soils in Postindustrial Areas

    PubMed Central

    Koda, Eugeniusz; Bilgin, Ayla; Vaverková, Magdalena D.

    2017-01-01

    The experiment was carried out in order to evaluate the effects of trace element immobilizing soil amendments, i.e., chalcedonite, dolomite, halloysite, and diatomite on the chemical characteristics of soil contaminated with Cr and the uptake of metals by plants. The study utilized analysis of variance (ANOVA), principal component analysis (PCA) and Factor Analysis (FA). The content of trace elements in plants, pseudo-total and extracted by 0.01 M CaCl2, were determined using the method of spectrophotometry. All of the investigated element contents in the tested parts of Indian mustard (Brassica juncea L.) differed significantly in the case of applying amendments to the soil, as well as Cr contamination. The greatest average above-ground biomass was observed when halloysite and dolomite were amended to the soil. Halloysite caused significant increases of Cr concentrations in the roots. The obtained values of bioconcentration and translocation factors observed for halloysite treatment indicate the effectiveness of using Indian mustard in phytostabilization techniques. The addition of diatomite significantly increased soil pH. Halloysite and chalcedonite were shown to be the most effective and decreased the average Cr, Cu and Zn contents in soil. PMID:29295511

  16. Design and Test Research on Cutting Blade of Corn Harvester Based on Bionic Principle.

    PubMed

    Tian, Kunpeng; Li, Xianwang; Zhang, Bin; Chen, Qiaomin; Shen, Cheng; Huang, Jicheng

    2017-01-01

    Existing corn harvester cutting blades suffer from large cutting resistance, high energy consumption, and poor cut quality. Using bionics principles, a bionic blade was designed by extracting the cutting tooth profile curve of the B. horsfieldi palate. Using a double-blade cutting device testing system, a single-stalk cutting performance contrast test on corn stalks obtained at harvest time was carried out. Results show that bionic blades have superior performance, demonstrated by strong cutting ability and good cut quality. Statistical analysis of the two groups of cutting test data gave average maximum cutting forces of 480.24 N for the bionic blade and 551.31 N for the ordinary blade, and cutting energies of 3.91 J and 4.38 J, respectively. The average maximum cutting force and cutting energy consumption of the bionic blade were thus reduced by 12.89% and 10.73%, respectively. Analysis of variance showed that blade type had a significant effect on both the maximum cutting force and the cutting energy required to cut a corn stalk. This demonstrates that bionic blades outperform ordinary blades in reducing cutting force and energy consumption.

  17. Concept of Aided Phytostabilization of Contaminated Soils in Postindustrial Areas.

    PubMed

    Radziemska, Maja; Koda, Eugeniusz; Bilgin, Ayla; Vaverková, Magdalena D

    2017-12-23

    The experiment was carried out in order to evaluate the effects of trace element immobilizing soil amendments, i.e., chalcedonite, dolomite, halloysite, and diatomite on the chemical characteristics of soil contaminated with Cr and the uptake of metals by plants. The study utilized analysis of variance (ANOVA), principal component analysis (PCA) and Factor Analysis (FA). The content of trace elements in plants, pseudo-total and extracted by 0.01 M CaCl₂, were determined using the method of spectrophotometry. All of the investigated element contents in the tested parts of Indian mustard ( Brassica juncea L.) differed significantly in the case of applying amendments to the soil, as well as Cr contamination. The greatest average above-ground biomass was observed when halloysite and dolomite were amended to the soil. Halloysite caused significant increases of Cr concentrations in the roots. The obtained values of bioconcentration and translocation factors observed for halloysite treatment indicate the effectiveness of using Indian mustard in phytostabilization techniques. The addition of diatomite significantly increased soil pH. Halloysite and chalcedonite were shown to be the most effective and decreased the average Cr, Cu and Zn contents in soil.

  18. Iris recognition based on key image feature extraction.

    PubMed

    Ren, X; Tian, Q; Zhang, J; Wu, S; Zeng, Y

    2008-01-01

    In iris recognition, feature extraction can be influenced by factors such as illumination and contrast, and thus the features extracted may be unreliable, which can cause a high rate of false results in iris pattern recognition. In order to obtain stable features, an algorithm was proposed in this paper to extract key features of a pattern from multiple images. The proposed algorithm built an iris feature template by extracting key features and performed iris identity enrolment. Simulation results showed that the selected key features have high recognition accuracy on the CASIA Iris Set, where both contrast and illumination variance exist.

  19. Sampling design optimisation for rainfall prediction using a non-stationary geostatistical model

    NASA Astrophysics Data System (ADS)

    Wadoux, Alexandre M. J.-C.; Brus, Dick J.; Rico-Ramirez, Miguel A.; Heuvelink, Gerard B. M.

    2017-09-01

    The accuracy of spatial predictions of rainfall by merging rain-gauge and radar data is partly determined by the sampling design of the rain-gauge network. Optimising the locations of the rain-gauges may increase the accuracy of the predictions. Existing spatial sampling design optimisation methods are based on minimisation of the spatially averaged prediction error variance under the assumption of intrinsic stationarity. Over the past years, substantial progress has been made to deal with non-stationary spatial processes in kriging. Various well-documented geostatistical models relax the assumption of stationarity in the mean, while recent studies show the importance of considering non-stationarity in the variance for environmental processes occurring in complex landscapes. We optimised the sampling locations of rain-gauges using an extension of the Kriging with External Drift (KED) model for prediction of rainfall fields. The model incorporates both non-stationarity in the mean and in the variance, which are modelled as functions of external covariates such as radar imagery, distance to radar station and radar beam blockage. Spatial predictions are made repeatedly over time, each time recalibrating the model. The space-time averaged KED variance was minimised by Spatial Simulated Annealing (SSA). The methodology was tested using a case study predicting daily rainfall in the north of England for a one-year period. Results show that (i) the proposed non-stationary variance model outperforms the stationary variance model, and (ii) a small but significant decrease of the rainfall prediction error variance is obtained with the optimised rain-gauge network. In particular, it pays off to place rain-gauges at locations where the radar imagery is inaccurate, while keeping the distribution over the study area sufficiently uniform.

  20. Variance estimation when using inverse probability of treatment weighting (IPTW) with survival analysis.

    PubMed

    Austin, Peter C

    2016-12-30

    Propensity score methods are used to reduce the effects of observed confounding when using observational data to estimate the effects of treatments or exposures. A popular method of using the propensity score is inverse probability of treatment weighting (IPTW). When using this method, a weight is calculated for each subject that is equal to the inverse of the probability of receiving the treatment that was actually received. These weights are then incorporated into the analyses to minimize the effects of observed confounding. Previous research has found that these methods result in unbiased estimation when estimating the effect of treatment on survival outcomes. However, conventional methods of variance estimation were shown to result in biased estimates of standard error. In this study, we conducted an extensive set of Monte Carlo simulations to examine different methods of variance estimation when using a weighted Cox proportional hazards model to estimate the effect of treatment. We considered three variance estimation methods: (i) a naïve model-based variance estimator; (ii) a robust sandwich-type variance estimator; and (iii) a bootstrap variance estimator. We considered estimation of both the average treatment effect and the average treatment effect in the treated. We found that the use of a bootstrap estimator resulted in approximately correct estimates of standard errors and confidence intervals with the correct coverage rates. The other estimators resulted in biased estimates of standard errors and confidence intervals with incorrect coverage rates. Our simulations were informed by a case study examining the effect of statin prescribing on mortality. © 2016 The Authors. Statistics in Medicine published by John Wiley & Sons Ltd.
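    The IPTW weight construction and a generic bootstrap variance estimator (the approach the study found approximately correct) can be sketched as follows. The helper names are illustrative, and the weighted Cox model itself is out of scope here.

```python
import numpy as np

def iptw_weights(treated, ps, estimand="ATE"):
    """Inverse-probability-of-treatment weights from propensity scores.
    ATE: 1/ps for treated, 1/(1 - ps) for controls.
    ATT: 1 for treated, ps/(1 - ps) for controls."""
    treated = np.asarray(treated, dtype=bool)
    ps = np.asarray(ps, dtype=float)
    if estimand == "ATE":
        return np.where(treated, 1.0 / ps, 1.0 / (1.0 - ps))
    if estimand == "ATT":
        return np.where(treated, 1.0, ps / (1.0 - ps))
    raise ValueError(f"unknown estimand: {estimand}")

def bootstrap_se(stat, data, n_boot=200, seed=0):
    """Bootstrap standard error of `stat`: resample subjects with
    replacement and take the standard deviation of the replicates."""
    rng = np.random.default_rng(seed)
    n = len(data)
    reps = [stat(data[rng.integers(0, n, n)]) for _ in range(n_boot)]
    return float(np.std(reps, ddof=1))
```

    In the survival setting of the study, `stat` would refit the weighted Cox model on each resample; the sketch uses a plain statistic only to keep the example self-contained.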

  1. Classification of Birds and Bats Using Flight Tracks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cullinan, Valerie I.; Matzner, Shari; Duberstein, Corey A.

    Classification of birds and bats that use areas targeted for offshore wind farm development and the inference of their behavior is essential to evaluating the potential effects of development. The current approach to assessing the number and distribution of birds at sea involves transect surveys using trained individuals in boats or airplanes or using high-resolution imagery. These approaches are costly and have safety concerns. Based on a limited annotated library extracted from a single-camera thermal video, we provide a framework for building models that classify birds and bats and their associated behaviors. As an example, we developed a discriminant model for theoretical flight paths and applied it to data (N = 64 tracks) extracted from 5-min video clips. The agreement between model- and observer-classified path types was initially only 41%, but it increased to 73% when small-scale jitter was censored and path types were combined. Classification of 46 tracks of bats, swallows, gulls, and terns on average was 82% accurate, based on a jackknife cross-validation. Model classification of bats and terns (N = 4 and 2, respectively) was 94% and 91% correct, respectively; however, the variance associated with the tracks from these targets is poorly estimated. Model classification of gulls and swallows (N ≥ 18) was on average 73% and 85% correct, respectively. The models developed here should be considered preliminary because they are based on a small data set both in terms of the numbers of species and the identified flight tracks. Future classification models would be greatly improved by including a measure of distance between the camera and the target.

  2. Cross-bispectrum computation and variance estimation

    NASA Technical Reports Server (NTRS)

    Lii, K. S.; Helland, K. N.

    1981-01-01

    A method for the estimation of cross-bispectra of discrete real time series is developed. The asymptotic variance properties of the bispectrum are reviewed, and a method for the direct estimation of bispectral variance is given. The symmetry properties are described which minimize the computations necessary to obtain a complete estimate of the cross-bispectrum in the right-half-plane. A procedure is given for computing the cross-bispectrum by subdividing the domain into rectangular averaging regions which help reduce the variance of the estimates and allow easy application of the symmetry relationships to minimize the computational effort. As an example of the procedure, the cross-bispectrum of a numerically generated, exponentially distributed time series is computed and compared with theory.
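    A minimal direct estimator following the segment-averaging idea might look like this; the fixed-bin interface is an illustrative simplification, and the paper additionally exploits symmetry relations and rectangular averaging regions that are omitted here.

```python
import numpy as np

def cross_bispectrum(x, y, z, nseg, k1, k2):
    """Direct cross-bispectrum estimate at frequency bins (k1, k2):
    average X[k1] * Y[k2] * conj(Z[k1 + k2]) over `nseg` segments.
    Segment averaging reduces the estimate's variance roughly as 1/nseg."""
    n = len(x) // nseg
    acc = 0.0 + 0.0j
    for s in range(nseg):
        sl = slice(s * n, (s + 1) * n)
        X, Y, Z = (np.fft.fft(v[sl]) for v in (x, y, z))
        acc += X[k1] * Y[k2] * np.conj(Z[k1 + k2])
    return acc / (nseg * n)
```

    For a signal with quadratic phase coupling (components at bins k1, k2, and k1 + k2), the estimate is large at (k1, k2) and essentially zero at uncoupled bin pairs.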

  3. Investigation of the factor structure of the Wechsler Adult Intelligence Scale--Fourth Edition (WAIS-IV): exploratory and higher order factor analyses.

    PubMed

    Canivez, Gary L; Watkins, Marley W

    2010-12-01

    The present study examined the factor structure of the Wechsler Adult Intelligence Scale--Fourth Edition (WAIS-IV; D. Wechsler, 2008a) standardization sample using exploratory factor analysis, multiple factor extraction criteria, and higher order exploratory factor analysis (J. Schmid & J. M. Leiman, 1957) not included in the WAIS-IV Technical and Interpretation Manual (D. Wechsler, 2008b). Results indicated that the WAIS-IV subtests were properly associated with the theoretically proposed first-order factors, but all but one factor-extraction criterion recommended extraction of one or two factors. Hierarchical exploratory analyses with the Schmid and Leiman procedure found that the second-order g factor accounted for large portions of total and common variance, whereas the four first-order factors accounted for small portions of total and common variance. It was concluded that the WAIS-IV provides strong measurement of general intelligence, and clinical interpretation should be primarily at that level.

  4. Iron deficiency chlorosis in plants as related to Fe sources in soil

    NASA Astrophysics Data System (ADS)

    Díaz, I.; Delgado, A.; de Santiago, A.; del Campillo, M. C.; Torrent, J.

    2012-04-01

    Iron deficiency chlorosis (IDC) is a relevant agricultural problem in many areas of the world where calcareous soils are dominant. Although this problem has traditionally been ascribed to the pH-buffering effect of soil carbonates, the content and type of Fe oxides in soil help to explain Fe uptake by plants and the incidence of this problem. During the last two decades, it has been demonstrated that Fe extraction with oxalate, which is related to the content of poorly crystalline Fe oxides, correlates well with the chlorophyll content of plants and thus with the incidence of IDC. This reveals the contribution of poorly crystalline Fe oxides in soil to Fe availability to plants in calcareous soils, previously shown in microcosm experiments using ferrihydrite as the Fe source in the growing media. In order to supply additional information about the contribution of Fe sources in soil to the incidence of IDC, and to develop accurate methods to predict it, a set of experiments was performed involving different methods to extract soil Fe, together with plant cultivation in pots to correlate the amounts of extracted Fe with the chlorophyll content of plants (measured using the SPAD chlorophyll meter). The first experiment involved 21 soils and white lupin cultivation, sequential Fe extraction in soil to study Fe forms, and single extractions (DTPA, rapid oxalate and non-buffered hydroxylamine). After that, a set of pot experiments was performed involving grapevine rootstocks, chickpea, and sunflower, although in this case only single extractions in soil were done. The Fe fraction most closely related to chlorophyll content in plants (r = 0.5, p < 0.05) was the citrate + ascorbate (CA) extraction, which was the fraction that releases most of the Fe related to poorly crystalline Fe oxides, thus revealing the key role of these compounds in Fe supply to plants. Fe extracted with CA was more strongly correlated with chlorophyll content in plants than oxalate-extractable Fe, probably due to a more selective dissolution of poorly crystalline oxides by the former extractant. In general terms, the best correlation between extractable Fe and chlorophyll content in plants was observed with hydroxylamine, which explained from 21 to 72% of the variance observed in chlorophyll content, greater than the variance explained by the rapid oxalate (11 to 60%, not always significant) or the classical active calcium carbonate content determination (6 to 56%, not always significant). Extraction with DTPA provided the worst results, explaining from 18 to 36% of the variance in chlorophyll content. The good predictive value of the hydroxylamine extraction was explained by its correlation with Fe in poorly crystalline Fe oxides (estimated as CA-extractable Fe) and by its negative correlation with the active calcium carbonate content of soils.

  5. Novel images extraction model using improved delay vector variance feature extraction and multi-kernel neural network for EEG detection and prediction.

    PubMed

    Ge, Jing; Zhang, Guoping

    2015-01-01

    Advanced intelligent methodologies could help detect and predict diseases from EEG signals in cases where manual analysis is inefficient or unavailable, for instance, epileptic seizure detection and prediction. This is because the diversity and evolution of epileptic seizures make them very difficult to detect and identify. Fortunately, the determinism and nonlinearity of a time series can characterize state changes. A literature review indicates that Delay Vector Variance (DVV) can examine nonlinearity to gain insight into EEG signals, but very limited work has been done on a quantitative DVV approach; hence, the outcomes of quantitative DVV should be evaluated for detecting epileptic seizures. The objective was to develop a new epileptic seizure detection method based on quantitative DVV. The method employed an improved delay vector variance (IDVV) to extract the nonlinearity value as a distinct feature, and a multi-kernel strategy was proposed in the extreme learning machine (ELM) network to provide precise disease detection and prediction. The nonlinearity proved more sensitive than the energy and entropy features: 87.5% overall recognition accuracy and 75.0% overall forecasting accuracy were achieved. The proposed IDVV and multi-kernel ELM based method is feasible and effective for epileptic EEG detection, and hence has importance for practical applications.

  6. Comparison of amplitude-decorrelation, speckle-variance and phase-variance OCT angiography methods for imaging the human retina and choroid

    PubMed Central

    Gorczynska, Iwona; Migacz, Justin V.; Zawadzki, Robert J.; Capps, Arlie G.; Werner, John S.

    2016-01-01

    We compared the performance of three OCT angiography (OCTA) methods: speckle variance, amplitude decorrelation and phase variance for imaging of the human retina and choroid. Two averaging methods, split spectrum and volume averaging, were compared to assess the quality of the OCTA vascular images. All data were acquired using a swept-source OCT system at 1040 nm central wavelength, operating at 100,000 A-scans/s. We performed a quantitative comparison using a contrast-to-noise ratio (CNR) metric to assess the capability of the three methods to visualize the choriocapillaris layer. For evaluation of the static tissue noise suppression in OCTA images we proposed to calculate CNR between the photoreceptor/RPE complex and the choriocapillaris layer. Finally, we demonstrated that implementation of intensity-based OCT imaging and OCT angiography methods allows for visualization of retinal and choroidal vascular layers known from anatomic studies in retinal preparations. OCT projection imaging of data flattened to selected retinal layers was implemented to visualize retinal and choroidal vasculature. User guided vessel tracing was applied to segment the retinal vasculature. The results were visualized in the form of a skeletonized 3D model. PMID:27231598
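    Per pixel, the speckle-variance method compared in the study reduces to a variance across repeated B-scans acquired at the same position: moving blood decorrelates the speckle, so vessels show high variance while static tissue stays low. A toy intensity-based sketch with synthetic data:

```python
import numpy as np

def speckle_variance(bscans):
    """Speckle-variance OCTA: per-pixel intensity variance across N
    repeated B-scans at the same location (toy sketch; real pipelines
    add registration, bulk-motion correction, and thresholding)."""
    return np.var(np.asarray(bscans, dtype=float), axis=0)

rng = np.random.default_rng(1)
static = np.full((8, 32, 32), 100.0)              # static tissue: no change
flow = 100.0 + rng.normal(0, 10, (8, 32, 32))     # flow: frame-to-frame change
```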

  7. A comparison of coronal and interplanetary current sheet inclinations

    NASA Technical Reports Server (NTRS)

    Behannon, K. W.; Burlaga, L. F.; Hundhausen, A. J.

    1983-01-01

    The HAO white-light K-coronameter observations show that the inclination of the heliospheric current sheet at the base of the corona can be either large (nearly vertical with respect to the solar equator) or small during Carrington rotations 1660-1666, and can vary even within a single solar rotation. Voyager 1 and 2 magnetic field observations of crossings of the heliospheric current sheet at distances from the Sun of 1.4 and 2.8 AU are examined. Two cases are considered, one in which the corresponding coronameter data indicate a nearly vertical (north-south) current sheet and another in which a nearly horizontal, near-equatorial current sheet is indicated. For the crossings of the vertical current sheet, a variance analysis based on hour averages of the magnetic field data gave a minimum variance direction consistent with a steep inclination. The horizontal current sheet was observed by Voyager as a region of mixed polarity and low speeds lasting several days, consistent with multiple crossings of a horizontal but irregular and fluctuating current sheet at 1.4 AU. However, variance analysis of individual current sheet crossings in this interval using 1.92 s averages did not give minimum variance directions consistent with a horizontal current sheet.

  8. Method and system for turbomachinery surge detection

    DOEpatents

    Faymon, David K.; Mays, Darrell C.; Xiong, Yufei

    2004-11-23

    A method and system for surge detection within a gas turbine engine comprises: measuring the compressor discharge pressure (CDP) of the gas turbine over a period of time; determining a time derivative (CDP.sub.D) of the measured CDP; correcting CDP.sub.D for altitude (CDP.sub.DCOR); estimating a short-term average of CDP.sub.DCOR.sup.2; estimating a short-term average of CDP.sub.DCOR; and determining a short-term variance of the corrected CDP rate of change (CDP.sub.roc) based upon the short-term average of CDP.sub.DCOR and the short-term average of CDP.sub.DCOR.sup.2. The method and system then compares the short-term variance of the corrected CDP rate of change with a pre-determined threshold (CDP.sub.proc) and signals an output when CDP.sub.roc > CDP.sub.proc. The method and system signals a surge within the gas turbine engine when CDP.sub.roc remains > CDP.sub.proc for a pre-determined period of time.
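    The short-term variance test in the claim can be sketched as follows, using the identity var = E[x²] − E[x]² with exponential moving averages standing in for the patent's short-term averages. The smoothing constant, threshold value, and function names are illustrative placeholders, and the altitude correction is omitted.

```python
import numpy as np

def surge_flags(cdp, dt, alpha=0.1, threshold=2.0e4):
    """Sketch of the patent's variance test (names/values illustrative).

    From a compressor-discharge-pressure trace we take the time
    derivative, track short-term averages of the rate and its square
    with an exponential moving average, and form the short-term
    variance  var = E[x^2] - E[x]^2,  flagging samples where it
    exceeds a threshold. The altitude correction is omitted here.
    """
    rate = np.gradient(np.asarray(cdp, dtype=float), dt)
    mean = 0.0
    mean_sq = 0.0
    flags = []
    for x in rate:
        mean = (1 - alpha) * mean + alpha * x
        mean_sq = (1 - alpha) * mean_sq + alpha * x * x
        variance = mean_sq - mean * mean
        flags.append(variance > threshold)
    return flags
```

    A steady CDP trace produces no flags, while an abrupt pressure excursion drives the short-term variance over the threshold; the patent additionally requires the condition to persist for a pre-determined period before signalling a surge.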

  9. Metallicity fluctuation statistics in the interstellar medium and young stars - I. Variance and correlation

    NASA Astrophysics Data System (ADS)

    Krumholz, Mark R.; Ting, Yuan-Sen

    2018-04-01

    The distributions of a galaxy's gas and stars in chemical space encode a tremendous amount of information about that galaxy's physical properties and assembly history. However, present methods for extracting information from chemical distributions are based either on coarse averages measured over galactic scales (e.g. metallicity gradients) or on searching for clusters in chemical space that can be identified with individual star clusters or gas clouds on ˜1 pc scales. These approaches discard most of the information, because in galaxies gas and young stars are observed to be distributed fractally, with correlations on all scales, and the same is likely to be true of metals. In this paper we introduce a first theoretical model, based on stochastically forced diffusion, capable of predicting the multiscale statistics of metal fields. We derive the variance, correlation function, and power spectrum of the metal distribution from first principles, and determine how these quantities depend on elements' astrophysical origin sites and on the large-scale properties of galaxies. Among other results, we explain for the first time why the typical abundance scatter observed in the interstellar media of nearby galaxies is ≈0.1 dex, and we predict that this scatter will be correlated on spatial scales of ˜0.5-1 kpc, and over time-scales of ˜100-300 Myr. We discuss the implications of our results for future chemical tagging studies.

  10. Refractive index variance of cells and tissues measured by quantitative phase imaging.

    PubMed

    Shan, Mingguang; Kandel, Mikhail E; Popescu, Gabriel

    2017-01-23

    The refractive index distribution of cells and tissues governs their interaction with light and can report on morphological modifications associated with disease. Through intensity-based measurements, refractive index information can be extracted only via scattering models that approximate light propagation. As a result, current knowledge of refractive index distributions across various tissues and cell types remains limited. Here we use quantitative phase imaging and the statistical dispersion relation (SDR) to extract information about the refractive index variance in a variety of specimens. Due to the phase-resolved measurement in three dimensions, our approach yields refractive index results without prior knowledge of the tissue thickness. With the recent progress in quantitative phase imaging systems, we anticipate that using SDR will become routine in assessing tissue optical properties.

  11. Family members' unique perspectives of the family: examining their scope, size, and relations to individual adjustment.

    PubMed

    Jager, Justin; Bornstein, Marc H; Putnick, Diane L; Hendricks, Charlene

    2012-06-01

    Using the McMaster Family Assessment Device (Epstein, Baldwin, & Bishop, 1983) and incorporating the perspectives of adolescent, mother, and father, this study examined each family member's "unique perspective" or nonshared, idiosyncratic view of the family. We used a modified multitrait-multimethod confirmatory factor analysis that (a) isolated for each family member's 6 reports of family dysfunction the nonshared variance (a combination of variance idiosyncratic to the individual and measurement error) from variance shared by 1 or more family members and (b) extracted common variance across each family member's set of nonshared variances. The sample included 128 families from a U.S. East Coast metropolitan area. Each family member's unique perspective generalized across his or her different reports of family dysfunction and accounted for a sizable proportion of his or her own variance in reports of family dysfunction. In addition, after holding level of dysfunction constant across families and controlling for a family's shared variance (agreement regarding family dysfunction), each family member's unique perspective was associated with his or her own adjustment. Future applications and competing alternatives for what these "unique perspectives" reflect about the family are discussed. PsycINFO Database Record (c) 2012 APA, all rights reserved.

  12. Family Members' Unique Perspectives of the Family: Examining their Scope, Size, and Relations to Individual Adjustment

    PubMed Central

    Jager, Justin; Bornstein, Marc H.; Putnick, Diane L.; Hendricks, Charlene

    2012-01-01

    Using the Family Assessment Device (FAD; Epstein, Baldwin, & Bishop, 1983) and incorporating the perspectives of adolescent, mother, and father, this study examined each family member's “unique perspective” or non-shared, idiosyncratic view of the family. To do so we used a modified multitrait-multimethod confirmatory factor analysis that (1) isolated for each family member's six reports of family dysfunction the non-shared variance (a combination of variance idiosyncratic to the individual and measurement error) from variance shared by one or more family members and (2) extracted common variance across each family member's set of non-shared variances. The sample included 128 families from a U.S. East Coast metropolitan area. Each family member's unique perspective generalized across his or her different reports of family dysfunction and accounted for a sizable proportion of his or her own variance in reports of family dysfunction. Additionally, after holding level of dysfunction constant across families and controlling for a family's shared variance (agreement regarding family dysfunction), each family member's unique perspective was associated with his or her own adjustment. Future applications and competing alternatives for what these “unique perspectives” reflect about the family are discussed. PMID:22545933

  13. Spatial variation of ultrafine particles and black carbon in two cities: results from a short-term measurement campaign.

    PubMed

    Klompmaker, Jochem O; Montagne, Denise R; Meliefste, Kees; Hoek, Gerard; Brunekreef, Bert

    2015-03-01

    Recently, short-term monitoring campaigns have been carried out to investigate the spatial variation of air pollutants within cities. Typically, such campaigns are based on short-term measurements at relatively large numbers of locations. It is largely unknown how well these studies capture the spatial variation of long-term average concentrations. The aim of this study was to evaluate the within-site temporal and between-site spatial variation of the concentrations of ultrafine particles (UFPs) and black carbon (BC) in a short-term monitoring campaign. In Amsterdam and Rotterdam (the Netherlands), measurements of number counts of particles larger than 10 nm (as a surrogate for UFP) and of BC were performed at 80 sites per city. Each site was measured in three different seasons of 2013 (winter, spring, summer). Sites were selected from busy urban streets, urban background, regional background, and near highways, waterways and green areas, to obtain sufficient spatial contrast. Continuous measurements were performed for 30 min per site between 9:00 and 16:00 to avoid rush-hour traffic spikes. Concentrations were simultaneously measured at a reference site to correct for temporal variation. We calculated within- and between-site variance components reflecting temporal and spatial variations. Variance ratios were compared with previous campaigns with longer sampling durations per sample (24 h to 14 days). The within-site variance was 2.17 and 2.44 times higher than the between-site variance for UFP and BC, respectively. In two previous studies based upon longer sampling durations, much smaller variance ratios were found (0.31 and 0.09 for UFP and BC). Correction for temporal variation from a reference site was less effective for the short-term monitoring campaign compared to the campaigns of longer duration. Concentrations of BC and UFP were on average 1.6 and 1.5 times higher at urban street sites compared to urban background sites. No significant differences between the other site types and urban background were found. The high within- to between-site concentration variance ratios may result in loss of precision and low explained variance when average concentrations from short-term campaigns are used to develop land use regression models. Copyright © 2014 Elsevier B.V. All rights reserved.
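    The within- and between-site variance components used in this kind of campaign can be estimated from a balanced one-way random-effects ANOVA over repeated visits to each site; a minimal sketch (balanced design assumed, names illustrative):

```python
import numpy as np

def variance_components(samples):
    """Within- and between-site variance components from repeated visits.

    `samples` is a list of per-site measurement lists (one entry per
    visit). For a balanced one-way random-effects ANOVA, the
    within-site variance is MS_within and the between-site variance is
    (MS_between - MS_within) / n_visits, truncated at zero.
    """
    data = np.asarray(samples, dtype=float)   # shape (n_sites, n_visits)
    n_sites, n_visits = data.shape
    site_means = data.mean(axis=1)
    grand_mean = data.mean()
    ms_within = ((data - site_means[:, None]) ** 2).sum() / (n_sites * (n_visits - 1))
    ms_between = n_visits * ((site_means - grand_mean) ** 2).sum() / (n_sites - 1)
    between = max((ms_between - ms_within) / n_visits, 0.0)
    return ms_within, between
```

    The ratio `ms_within / between` corresponds to the within- to between-site variance ratios (2.17 and 2.44) reported in the abstract.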

  14. Control algorithms for dynamic attenuators

    PubMed Central

    Hsieh, Scott S.; Pelc, Norbert J.

    2014-01-01

    Purpose: The authors describe algorithms to control dynamic attenuators in CT and compare their performance using simulated scans. Dynamic attenuators are prepatient beam shaping filters that modulate the distribution of x-ray fluence incident on the patient on a view-by-view basis. These attenuators can reduce dose while improving key image quality metrics such as peak or mean variance. In each view, the attenuator presents several degrees of freedom which may be individually adjusted. The total number of degrees of freedom across all views is very large, making many optimization techniques impractical. The authors develop a theory for optimally controlling these attenuators. Special attention is paid to a theoretically perfect attenuator which controls the fluence for each ray individually, but the authors also investigate and compare three other, practical attenuator designs which have been previously proposed: the piecewise-linear attenuator, the translating attenuator, and the double wedge attenuator. Methods: The authors pose and solve the optimization problems of minimizing the mean and peak variance subject to a fixed dose limit. For a perfect attenuator and mean variance minimization, this problem can be solved in simple, closed form. For other attenuator designs, the problem can be decomposed into separate problems for each view to greatly reduce the computational complexity. Peak variance minimization can be approximately solved using iterated, weighted mean variance (WMV) minimization. Also, the authors develop heuristics for the perfect and piecewise-linear attenuators which do not require a priori knowledge of the patient anatomy. The authors compare these control algorithms on different types of dynamic attenuators using simulated raw data from forward projected DICOM files of a thorax and an abdomen. Results: The translating and double wedge attenuators reduce dose by an average of 30% relative to current techniques (bowtie filter with tube current modulation) without increasing peak variance. The 15-element piecewise-linear dynamic attenuator reduces dose by an average of 42%, and the perfect attenuator reduces dose by an average of 50%. Improvements in peak variance are several times larger than improvements in mean variance. Heuristic control eliminates the need for a prescan. For the piecewise-linear attenuator, the cost of heuristic control is an increase in dose of 9%. The proposed iterated WMV minimization produces results that are within a few percent of the true solution. Conclusions: Dynamic attenuators show potential for significant dose reduction. A wide class of dynamic attenuators can be accurately controlled using the described methods. PMID:24877818

  15. Relationship Between Depression and Specific Health Indicators Among Hypertensive African American Parents and Grandparents

    PubMed Central

    Taylor, Jacquelyn Y.; Washington, Olivia G. M.; Artinian, Nancy T.; Lichtenberg, Peter

    2010-01-01

    African Americans are at greater risk for hypertension than are other ethnic groups. This study examined relationships among hypertension, stress, and depression among 120 urban African American parents and grandparents. This study is a secondary analysis of a larger nurse-managed randomized clinical trial testing the effectiveness of a telemonitoring intervention. Baseline data used in analyses, with the exception of medication compliance, were collected at 3 months' follow-up. Health indicators, perceived stress, and social support were examined to determine their relationship with depressive symptoms. A total of 48% of the variance in depressive symptomology was explained by perceived stress and support. Health indicators including average systolic blood pressure explained 21% of the variance in depressive symptomology. The regression analysis using average diastolic blood pressure explained 26% of the variance in depressive symptomology. Based on study results, African Americans should be assessed for perceived stress and social support to alleviate depressive symptomology. PMID:18843828

  16. Conceptualizing and Testing Random Indirect Effects and Moderated Mediation in Multilevel Models: New Procedures and Recommendations

    ERIC Educational Resources Information Center

    Bauer, Daniel J.; Preacher, Kristopher J.; Gil, Karen M.

    2006-01-01

    The authors propose new procedures for evaluating direct, indirect, and total effects in multilevel models when all relevant variables are measured at Level 1 and all effects are random. Formulas are provided for the mean and variance of the indirect and total effects and for the sampling variances of the average indirect and total effects.…

  17. How the Weak Variance of Momentum Can Turn Out to be Negative

    NASA Astrophysics Data System (ADS)

    Feyereisen, M. R.

    2015-05-01

    Weak values are average quantities; therefore, investigating their associated variance is crucial in understanding their place in quantum mechanics. We develop the concept of a position-postselected weak variance of momentum as cohesively as possible, building primarily on material from Moyal (Mathematical Proceedings of the Cambridge Philosophical Society, Cambridge University Press, Cambridge, 1949) and Sonego (Found Phys 21(10):1135, 1991). The weak variance is defined in terms of the Wigner function, using a standard construction from probability theory. We show this corresponds to a measurable quantity, which is not itself a weak value. It also leads naturally to a connection between the imaginary part of the weak value of momentum and the quantum potential. We study how the negativity of the Wigner function causes negative weak variances, and the implications this has on a class of `subquantum' theories. We also discuss the role of weak variances in studying determinism, deriving the classical limit from a variational principle.

  18. Synthesis of correlation filters: a generalized space-domain approach for improved filter characteristics

    NASA Astrophysics Data System (ADS)

    Sudharsanan, Subramania I.; Mahalanobis, Abhijit; Sundareshan, Malur K.

    1990-12-01

    Discrete frequency domain design of Minimum Average Correlation Energy filters for optical pattern recognition introduces an implementational limitation of circular correlation. An alternative methodology which uses space domain computations to overcome this problem is presented. The technique is generalized to construct an improved synthetic discriminant function which satisfies the conflicting requirements of reduced noise variance and sharp correlation peaks to facilitate ease of detection. A quantitative evaluation of the performance characteristics of the new filter is conducted and is shown to compare favorably with the well known Minimum Variance Synthetic Discriminant Function and the space domain Minimum Average Correlation Energy filter, which are special cases of the present design.

  19. An application of the LC-LSTM framework to the self-esteem instability case.

    PubMed

    Alessandri, Guido; Vecchione, Michele; Donnellan, Brent M; Tisak, John

    2013-10-01

    The present research evaluates the stability of self-esteem as assessed by a daily version of the Rosenberg (Society and the adolescent self-image, Princeton University Press, Princeton, 1965) general self-esteem scale (RGSE). The scale was administered to 391 undergraduates for five consecutive days. The longitudinal data were analyzed using the integrated LC-LSTM framework that allowed us to evaluate: (1) the measurement invariance of the RGSE, (2) its stability and change across the 5-day assessment period, (3) the amount of variance attributable to stable and transitory latent factors, and (4) the criterion-related validity of these factors. Results provided evidence for measurement invariance, mean-level stability, and rank-order stability of daily self-esteem. Latent state-trait analyses revealed that variances in scores of the RGSE can be decomposed into six components: stable self-esteem (40 %), ephemeral (or temporal-state) variance (36 %), stable negative method variance (9 %), stable positive method variance (4 %), specific variance (1 %) and random error variance (10 %). Moreover, latent factors associated with daily self-esteem were associated with measures of depression, implicit self-esteem, and grade point average.

  20. Multisite Reliability of Cognitive BOLD Data

    PubMed Central

    Brown, Gregory G.; Mathalon, Daniel H.; Stern, Hal; Ford, Judith; Mueller, Bryon; Greve, Douglas N.; McCarthy, Gregory; Voyvodic, Jim; Glover, Gary; Diaz, Michele; Yetter, Elizabeth; Burak Ozyurt, I.; Jorgensen, Kasper W.; Wible, Cynthia G.; Turner, Jessica A.; Thompson, Wesley K.; Potkin, Steven G.

    2010-01-01

    Investigators perform multi-site functional magnetic resonance imaging studies to increase statistical power, to enhance generalizability, and to improve the likelihood of sampling relevant subgroups. Yet undesired site variation in imaging methods could offset these potential advantages. We used variance components analysis to investigate sources of variation in the blood oxygen level dependent (BOLD) signal across four 3T magnets in voxelwise and region of interest (ROI) analyses. Eighteen participants traveled to four magnet sites to complete eight runs of a working memory task involving emotional or neutral distraction. Person variance was more than 10 times larger than site variance for five of six ROIs studied. Person-by-site interactions, however, contributed sizable unwanted variance to the total. Averaging over runs increased between-site reliability, with many voxels showing good to excellent between-site reliability when eight runs were averaged and regions of interest showing fair to good reliability. Between-site reliability depended on the specific functional contrast analyzed in addition to the number of runs averaged. Although median effect size was correlated with between-site reliability, dissociations were observed for many voxels. Brain regions where the pooled effect size was large but between-site reliability was poor were associated with reduced individual differences. Brain regions where the pooled effect size was small but between-site reliability was excellent were associated with a balance of participants who displayed consistently positive or consistently negative BOLD responses. Although between-site reliability of BOLD data can be good to excellent, acquiring highly reliable data requires robust activation paradigms, ongoing quality assurance, and careful experimental control. PMID:20932915

  1. Analytical approximations for effective relative permeability in the capillary limit

    NASA Astrophysics Data System (ADS)

    Rabinovich, Avinoam; Li, Boxiao; Durlofsky, Louis J.

    2016-10-01

    We present an analytical method for calculating two-phase effective relative permeability, k_rj^eff, where j designates phase (here CO2 and water), under steady-state and capillary-limit assumptions. These effective relative permeabilities may be applied in experimental settings and for upscaling in the context of numerical flow simulations, e.g., for CO2 storage. An exact solution for effective absolute permeability, k^eff, in two-dimensional log-normally distributed isotropic permeability (k) fields is the geometric mean. We show that this does not hold for k_rj^eff since log normality is not maintained in the capillary-limit phase permeability field (K_j = k·k_rj) when capillary pressure, and thus the saturation field, is varied. Nevertheless, the geometric mean is still shown to be suitable for approximating k_rj^eff when the variance of ln k is low. For high-variance cases, we apply a correction to the geometric-average gas effective relative permeability using a Winsorized mean, which neglects large and small K_j values symmetrically. The analytical method is extended to anisotropically correlated log-normal permeability fields using power-law averaging. In these cases, the Winsorized mean treatment is applied to the gas curves for cases described by negative power-law exponents (flow across incomplete layers). The accuracy of our analytical expressions for k_rj^eff is demonstrated through extensive numerical tests, using low-variance and high-variance permeability realizations with a range of correlation structures. We also present integral expressions for geometric-mean and power-law average k_rj^eff for the systems considered, which enable derivation of closed-form series solutions for k_rj^eff without generating permeability realizations.
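    The averaging operators named in the abstract can be sketched as follows: the power-law average (whose p → 0 limit is the geometric mean) and a symmetric Winsorized mean. The sketch below clips rather than discards the extreme values, which is the standard Winsorization and may differ in detail from the paper's treatment.

```python
import numpy as np

def power_law_average(k, p):
    """Power-law average of permeability values: p = 1 is the
    arithmetic mean, p = -1 the harmonic mean, and the p -> 0 limit
    is the geometric mean."""
    k = np.asarray(k, dtype=float)
    if p == 0:
        return np.exp(np.log(k).mean())
    return (np.mean(k ** p)) ** (1.0 / p)

def winsorized_mean(k, frac=0.1):
    """Symmetric Winsorized mean: clip the lowest and highest `frac`
    of values to the remaining extremes before averaging."""
    k = np.sort(np.asarray(k, dtype=float))
    cut = int(frac * len(k))
    if cut > 0:
        k[:cut] = k[cut]
        k[-cut:] = k[-cut - 1]
    return k.mean()
```

    In the paper's setting these averages are applied to the capillary-limit phase permeability field K_j = k·k_rj rather than to k itself.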

  2. Young bamboo culm: Potential food as source of fiber and starch.

    PubMed

    Felisberto, Mária Herminia Ferrari; Miyake, Patricia Satie Endo; Beraldo, Antonio Ludovico; Clerici, Maria Teresa Pedrosa Silva

    2017-11-01

    With the objective of widening the use of bamboo in the food industry, the present work aimed to produce and characterize young bamboo culm flours from the varieties Dendrocalamus asper, Bambusa tuldoides and Bambusa vulgaris as potential sources of fiber and starch. The young culms were collected, cut into three sections (bottom, middle, top), processed into flour, and analyzed physically, chemically and technologically. The data were obtained in triplicate and evaluated by comparison of means, using analysis of variance (ANOVA) and the Scott-Knott test (p<0.05). The young bamboo culm flours presented low values for moisture content (<10 g/100 g) and for protein, lipid and ash contents (<3 g/100 g). Regarding the carbohydrate profile, the flours differed significantly in their sugar, starch and total fiber contents. All flour samples presented potential for fiber extraction (>60 g/100 g), and the varieties B. vulgaris and D. asper presented additional potential for starch extraction (16 and 10 g/100 g, respectively). Regarding technological characteristics, all flours presented a bright yellow color, slightly acidic pH (>5.0), and a water solubility index (WSI) lower than 2.5%, except D. asper, which presented a WSI above 7.5%. In this way, the evaluated young bamboo culms show potential application in the food industry as flours and as a source of fiber; in addition, the varieties D. asper and B. vulgaris can also be used for starch extraction. Copyright © 2017 Elsevier Ltd. All rights reserved.

  3. Snow cover and temperature relationships in North America and Eurasia

    NASA Technical Reports Server (NTRS)

    Foster, J.; Owe, M.; Rango, A.

    1983-01-01

    In this study the snow cover extent during the autumn months in both North America and Eurasia has been related to the ensuing winter temperature as measured at several locations near the center of each continent. The relationship between autumn snow cover and the ensuing winter temperatures was found to be much better for Eurasia than for North America. For Eurasia the average snow cover extent during the autumn explained as much as 52 percent of the variance in the winter (December-February) temperatures compared to only 12 percent for North America. However, when the average winter snow cover was correlated with the average winter temperature it was found that the relationship was better for North America than for Eurasia. As much as 46 percent of the variance in the winter temperature was explained by the winter snow cover in North America compared to only 12 percent in Eurasia.

  4. Analysis of Realized Volatility for Nikkei Stock Average on the Tokyo Stock Exchange

    NASA Astrophysics Data System (ADS)

    Takaishi, Tetsuya; Watanabe, Toshiaki

    2016-04-01

    We calculate the realized volatility of the Nikkei Stock Average (Nikkei225) Index on the Tokyo Stock Exchange and investigate the return dynamics. To avoid bias in the realized volatility from non-trading hours, we calculate realized volatility separately in the two trading sessions, i.e. morning and afternoon, of the Tokyo Stock Exchange, and find that microstructure noise decreases the realized volatility at small sampling frequencies. Using realized volatility as a proxy for the integrated volatility, we standardize returns in the morning and afternoon sessions and investigate the normality of the standardized returns by calculating the variance, kurtosis and 6th moment. We find that the variance, kurtosis and 6th moment are consistent with those of the standard normal distribution, which indicates that the return dynamics of the Nikkei Stock Average are well described by a Gaussian random process with time-varying volatility.
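    The per-session realized volatility and standardized returns described here can be sketched as below (variable names illustrative); realized volatility is the square root of the sum of squared intraday log returns within a session.

```python
import numpy as np

def realized_volatility(prices):
    """Realized volatility of one trading session: the square root of
    the sum of squared intraday log returns."""
    logp = np.log(np.asarray(prices, dtype=float))
    r = np.diff(logp)
    return np.sqrt((r ** 2).sum())

def standardized_return(open_price, close_price, rv):
    """Session return standardized by realized volatility; under the
    mixture-of-distributions view this should be close to N(0, 1)."""
    return (np.log(close_price) - np.log(open_price)) / rv
```

    The normality check in the paper then compares the variance, kurtosis and 6th moment of the standardized returns against the standard normal values.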

  5. Further analysis of clinical feasibility of OCT-based glaucoma diagnosis with Pigment epithelium central limit- Inner limit of the retina Minimal Distance (PIMD)

    NASA Astrophysics Data System (ADS)

    Söderberg, Per G.; Malmberg, Filip; Sandberg-Melin, Camilla

    2017-02-01

    The present study aimed to elucidate whether comparison of angular segments of Pigment epithelium central limit - Inner limit of the retina Minimal Distance, measured over 2π radians in the frontal plane (PIMD-2π), between visits of a patient renders sufficient precision for detection of loss of nerve fibers in the optic nerve head. An optic nerve head raster-scanned cube was captured with a TOPCON 3D OCT 2000 (Topcon, Japan) device in one early- to moderate-stage glaucoma eye of each of 13 patients. All eyes were recorded at two visits less than 1 month apart. At each visit, 3 volumes were captured. Each volume was extracted from the OCT device for analysis. Then, angular PIMD was segmented three times over 2π radians in the frontal plane, resolved with a semi-automatic algorithm in 500 equally separated steps (PIMD-2π). It was found that individual segmentations within volumes, within visits, within subjects can be phase-adjusted to each other in the frontal plane using cross-correlation. Cross-correlation was also used to phase-adjust volumes within visits within subjects, and visits to each other within subjects. Then, PIMD-2π for each subject was split into 250 bundles of 2 adjacent PIMDs. Finally, the sources of variation for estimates of segments of PIMD-2π were derived with analysis of variance assuming a mixed model. The variation among adjacent PIMDs was found to be very small in relation to the variation among segmentations. The variation among visits was found to be insignificant in relation to the variation among volumes, and the variance for segmentations was found to be on the order of 20% of that for volumes. The estimated variances imply that, if 3 segmentations are averaged within a volume and at least 10 volumes are averaged within a visit, it is possible to detect around a 10% reduction of a PIMD-2π segment from baseline to a subsequent visit as significant. Considering a loss rate for a PIMD-2π segment of 23 μm/yr, 4 visits per year, and averaging 3 segmentations per volume and 3 volumes per visit, a significant reduction from baseline can be detected with a power of 80% in about 18 months. At a higher loss rate for a PIMD-2π segment, a significant difference from baseline can be detected earlier. Averaging over more volumes per visit considerably decreases the time for detection of a significant reduction of a segment of PIMD-2π. Increasing the number of segmentations averaged per visit only slightly reduces the time for detection of a significant reduction. It is concluded that phase adjustment in the frontal plane with cross-correlation allows high-precision estimates of a segment of PIMD-2π that imply a substantially shorter follow-up time for detection of a significant change than mean deviation (MD) in a visual field estimated with the Humphrey perimeter or neural rim area (NRA) estimated with the Heidelberg retinal tomograph.

  6. Optimization of Pressurized Liquid Extraction of Three Major Acetophenones from Cynanchum bungei Using a Box-Behnken Design

    PubMed Central

    Li, Wei; Zhao, Li-Chun; Sun, Yin-Shi; Lei, Feng-Jie; Wang, Zi; Gui, Xiong-Bin; Wang, Hui

    2012-01-01

    In this work, pressurized liquid extraction (PLE) of three acetophenones (4-hydroxyacetophenone, baishouwubenzophenone, and 2,4-dihydroxyacetophenone) from Cynanchum bungei (ACB) was investigated. The optimal conditions for extraction of ACB were obtained using a Box-Behnken design, consisting of 17 experimental points, as follows: ethanol (100%) as the extraction solvent at a temperature of 120 °C and an extraction pressure of 1500 psi, using one extraction cycle with a static extraction time of 17 min. The extracted samples were analyzed by high-performance liquid chromatography using a UV detector. Under these optimal conditions, the experimental values agreed with the values predicted by analysis of variance. The ACB extraction yield with optimal PLE was higher than that obtained by Soxhlet extraction and heat-reflux extraction methods. The results suggest that the PLE method provides a good alternative for acetophenone extraction. PMID:23203079

  7. Host nutrition alters the variance in parasite transmission potential

    PubMed Central

    Vale, Pedro F.; Choisy, Marc; Little, Tom J.

    2013-01-01

    The environmental conditions experienced by hosts are known to affect their mean parasite transmission potential. How different conditions may affect the variance of transmission potential has received less attention, but is an important question for disease management, especially if specific ecological contexts are more likely to foster a few extremely infectious hosts. Using the obligate-killing bacterium Pasteuria ramosa and its crustacean host Daphnia magna, we analysed how host nutrition affected the variance of individual parasite loads, and, therefore, transmission potential. Under low food, individual parasite loads showed similar mean and variance, following a Poisson distribution. By contrast, among well-nourished hosts, parasite loads were right-skewed and overdispersed, following a negative binomial distribution. Abundant food may, therefore, yield individuals causing potentially more transmission than the population average. Measuring both the mean and variance of individual parasite loads in controlled experimental infections may offer a useful way of revealing risk factors for potential highly infectious hosts. PMID:23407498

  8. Host nutrition alters the variance in parasite transmission potential.

    PubMed

    Vale, Pedro F; Choisy, Marc; Little, Tom J

    2013-04-23

    The environmental conditions experienced by hosts are known to affect their mean parasite transmission potential. How different conditions may affect the variance of transmission potential has received less attention, but is an important question for disease management, especially if specific ecological contexts are more likely to foster a few extremely infectious hosts. Using the obligate-killing bacterium Pasteuria ramosa and its crustacean host Daphnia magna, we analysed how host nutrition affected the variance of individual parasite loads, and, therefore, transmission potential. Under low food, individual parasite loads showed similar mean and variance, following a Poisson distribution. By contrast, among well-nourished hosts, parasite loads were right-skewed and overdispersed, following a negative binomial distribution. Abundant food may, therefore, yield individuals causing potentially more transmission than the population average. Measuring both the mean and variance of individual parasite loads in controlled experimental infections may offer a useful way of revealing risk factors for potential highly infectious hosts.

  9. Factors associated with feed intake of Angus steers

    USDA-ARS?s Scientific Manuscript database

    Estimates of variance components were obtained from 475 records of average (AFI) and residual feed intake (RFI). Covariates in various (8) models included average daily gain (G), age (A) and weight (W) on test, and slaughter (S) and ultrasound (U) carcass measures (fat thickness, ribeye area and ma...

  10. Bayesian Model Averaging of Artificial Intelligence Models for Hydraulic Conductivity Estimation

    NASA Astrophysics Data System (ADS)

    Nadiri, A.; Chitsazan, N.; Tsai, F. T.; Asghari Moghaddam, A.

    2012-12-01

    This research presents a Bayesian artificial intelligence model averaging (BAIMA) method that incorporates multiple artificial intelligence (AI) models to estimate hydraulic conductivity and evaluate estimation uncertainties. Uncertainty in the AI model outputs stems from error in the model input as well as from non-uniqueness in selecting different AI methods. Using a single AI model tends to bias the estimation and underestimate uncertainty. BAIMA employs the Bayesian model averaging (BMA) technique to address the issue of relying on a single AI model for estimation. BAIMA estimates hydraulic conductivity by averaging the outputs of the AI models according to their model weights. In this study, the model weights were determined using the Bayesian information criterion (BIC), which follows the parsimony principle. BAIMA calculates the within-model variances to account for uncertainty propagation from input data to AI model output. Between-model variances are evaluated to account for uncertainty due to model non-uniqueness. We employed Takagi-Sugeno fuzzy logic (TS-FL), an artificial neural network (ANN) and neurofuzzy (NF) models to estimate hydraulic conductivity for the Tasuj plain aquifer, Iran. BAIMA combined the three AI models and produced a better fit than the individual models. While NF was expected to be the best AI model owing to its utilization of both TS-FL and ANN models, the NF model was nearly discarded by the parsimony principle. The TS-FL model and the ANN model showed equal importance although their hydraulic conductivity estimates were quite different. This resulted in significant between-model variances that are normally ignored when using a single AI model.
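
    The BIC-weighting and variance-decomposition steps can be sketched as follows; the BIC values and per-model conductivity estimates are made-up placeholders, not results from the study.

```python
import numpy as np

# BIC-based BMA: smaller BIC -> larger weight; the total predictive
# variance is the weighted within-model variance plus the between-model
# variance of the individual estimates around the BMA mean.
def bma_combine(bic, means, within_var):
    bic = np.asarray(bic, float)
    means = np.asarray(means, float)
    within_var = np.asarray(within_var, float)
    w = np.exp(-0.5 * (bic - bic.min()))
    w /= w.sum()
    mean = np.sum(w * means)
    total_var = np.sum(w * within_var) + np.sum(w * (means - mean) ** 2)
    return mean, total_var, w

# Placeholder inputs for three AI models (e.g. TS-FL, ANN, NF):
mean, total_var, w = bma_combine(
    bic=[210.0, 212.5, 230.0],
    means=[3.1, 4.0, 3.5],        # conductivity estimates, m/day
    within_var=[0.20, 0.25, 0.30],
)
```

    A model whose BIC sits far above the minimum (the third entry here) receives a near-zero weight, mirroring how the NF model was nearly discarded by the parsimony principle, while two near-equal weights with different estimates inflate the between-model variance.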

  11. Minimum variance optimal rate allocation for multiplexed H.264/AVC bitstreams.

    PubMed

    Tagliasacchi, Marco; Valenzise, Giuseppe; Tubaro, Stefano

    2008-07-01

    Consider the problem of transmitting multiple video streams to fulfill a constant bandwidth constraint. The available bit budget needs to be distributed across the sequences in order to meet some optimality criteria. For example, one might want to minimize the average distortion or, alternatively, minimize the distortion variance, in order to keep almost constant quality among the encoded sequences. By working in the rho-domain, we propose a low-delay rate allocation scheme that, at each time instant, provides a closed-form solution for either of the aforementioned problems. We show that minimizing the distortion variance instead of the average distortion leads, for each of the multiplexed sequences, to a coding penalty of less than 0.5 dB in terms of average PSNR. In addition, our analysis provides an explicit relationship between model parameters and this loss. In order to smooth the distortion along time as well, we accommodate a shared encoder buffer to compensate for rate fluctuations. Although the proposed scheme is general, and it can be adopted for any video and image coding standard, we provide experimental evidence by transcoding bitstreams encoded using the state-of-the-art H.264/AVC standard. The results of our simulations reveal that it is possible to achieve distortion smoothing both in time and across the sequences, without sacrificing coding efficiency.

  12. Improving EEG-Based Motor Imagery Classification for Real-Time Applications Using the QSA Method.

    PubMed

    Batres-Mendoza, Patricia; Ibarra-Manzano, Mario A; Guerra-Hernandez, Erick I; Almanza-Ojeda, Dora L; Montoro-Sanjose, Carlos R; Romero-Troncoso, Rene J; Rostro-Gonzalez, Horacio

    2017-01-01

    We present an improvement to the quaternion-based signal analysis (QSA) technique to extract electroencephalography (EEG) signal features with a view to developing real-time applications, particularly for motor imagery (MI) cognitive processes. The proposed methodology (iQSA, improved QSA) extracts features such as the average, variance, homogeneity, and contrast of EEG signals related to motor imagery in a more efficient manner (i.e., by reducing the number of samples needed to classify the signal and improving the classification percentage) compared to the original QSA technique. Specifically, we can sample the signal in variable time periods (from 0.5 s to 3 s, in half-second intervals) to determine the relationship between the number of samples and their effectiveness in classifying signals. In addition, to strengthen the classification process, a number of boosting-technique-based decision trees were implemented. The results show an 82.30% accuracy rate for 0.5 s samples and 73.16% for 3 s samples. This is a significant improvement over the original QSA technique, which offered results from 33.31% to 40.82% without a sampling window and from 33.44% to 41.07% with a sampling window. We can thus conclude that iQSA is better suited to developing real-time applications.
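
    A sliding-window version of the four features named above can be sketched as follows. The abstract does not give the exact iQSA formulas, so "contrast" and "homogeneity" are defined here from successive-sample differences (a GLCM-style assumption), and the 128 Hz sampling rate is likewise assumed.

```python
import numpy as np

# Extract average, variance, contrast and homogeneity from an EEG-like
# signal in fixed windows. Contrast/homogeneity definitions and the
# sampling rate are assumptions, not the published iQSA formulas.
def window_features(signal, fs=128, window_s=0.5):
    step = int(fs * window_s)
    feats = []
    for start in range(0, len(signal) - step + 1, step):
        w = np.asarray(signal[start:start + step], float)
        d = np.abs(np.diff(w))
        feats.append({
            "average": w.mean(),
            "variance": w.var(),
            "contrast": np.mean(d ** 2),
            "homogeneity": np.mean(1.0 / (1.0 + d)),
        })
    return feats

feats = window_features(np.sin(np.linspace(0, 4 * np.pi, 384)))  # 3 s at 128 Hz
```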

  13. Improving EEG-Based Motor Imagery Classification for Real-Time Applications Using the QSA Method

    PubMed Central

    Batres-Mendoza, Patricia; Guerra-Hernandez, Erick I.; Almanza-Ojeda, Dora L.; Montoro-Sanjose, Carlos R.

    2017-01-01

    We present an improvement to the quaternion-based signal analysis (QSA) technique to extract electroencephalography (EEG) signal features with a view to developing real-time applications, particularly for motor imagery (MI) cognitive processes. The proposed methodology (iQSA, improved QSA) extracts features such as the average, variance, homogeneity, and contrast of EEG signals related to motor imagery in a more efficient manner (i.e., by reducing the number of samples needed to classify the signal and improving the classification percentage) compared to the original QSA technique. Specifically, we can sample the signal in variable time periods (from 0.5 s to 3 s, in half-second intervals) to determine the relationship between the number of samples and their effectiveness in classifying signals. In addition, to strengthen the classification process, a number of boosting-technique-based decision trees were implemented. The results show an 82.30% accuracy rate for 0.5 s samples and 73.16% for 3 s samples. This is a significant improvement over the original QSA technique, which offered results from 33.31% to 40.82% without a sampling window and from 33.44% to 41.07% with a sampling window. We can thus conclude that iQSA is better suited to developing real-time applications. PMID:29348744

  14. You are what you eat: diet shapes body composition, personality and behavioural stability.

    PubMed

    Han, Chang S; Dingemanse, Niels J

    2017-01-10

    Behavioural phenotypes vary within and among individuals. While early-life experiences have repeatedly been proposed to underpin interactions between these two hierarchical levels, the environmental factors causing such effects remain under-studied. We tested whether an individual's diet affected its body composition, its average behaviour (thereby causing among-individual variation or 'personality'), and its within-individual variability in behaviour and body weight (thereby causing among-individual differences in residual within-individual variance or 'stability'), using the Southern field cricket Gryllus bimaculatus as a model. We further asked whether the effects of diet on the expression of these variance components were sex-specific. Manipulating both juvenile and adult diet in a full factorial design, individuals were put, in each life-stage, on a diet that was either relatively high in carbohydrates or relatively high in protein. We subsequently measured the expression of multiple behavioural (exploration, aggression and mating activity) and morphological traits (body weight and lipid mass) during adulthood. Dietary history affected both average phenotype and level of within-individual variability: males raised as juveniles on high-protein diets were heavier, more aggressive, more active during mating, and behaviourally less stable than conspecifics raised on high-carbohydrate diets. Females preferred more protein in their diet than males did, and dietary history affected average phenotype and within-individual variability in a sex-specific manner: individuals raised on high-protein diets were behaviourally less stable in their aggressiveness, but this effect was present only in males. Diet also influenced individual differences in male body weight, but within-individual variance in female body weight.
This study thereby provides experimental evidence that dietary history explains both heterogeneous residual within-individual variance (i.e., individual variation in 'behavioural stability') and individual differences in average behaviour (i.e., 'personality'), though dietary effects were notably trait-specific. These findings call for future studies integrating proximate and ultimate perspectives on the role of diet in the evolution of repeatedly expressed traits, such as behaviour and body weight.

  15. Estimation of genetic parameters for milk yield in Murrah buffaloes by Bayesian inference.

    PubMed

    Breda, F C; Albuquerque, L G; Euclydes, R F; Bignardi, A B; Baldi, F; Torres, R A; Barbosa, L; Tonhati, H

    2010-02-01

    Random regression models were used to estimate genetic parameters for test-day milk yield in Murrah buffaloes using Bayesian inference. Data comprised 17,935 test-day milk records from 1,433 buffaloes. Twelve models were tested using different combinations of third-, fourth-, fifth-, sixth-, and seventh-order orthogonal polynomials of weeks of lactation for additive genetic and permanent environmental effects. All models included the fixed effects of contemporary group and number of daily milkings, with age of cow at calving as a covariate (linear and quadratic effects). In addition, residual variances were considered to be heterogeneous, with 6 classes of variance. Models were selected based on the residual mean square error, the weighted average of residual variance estimates, and estimates of variance components, heritabilities, correlations, eigenvalues, and eigenfunctions. Results indicated that changes in the order of fit for the additive genetic and permanent environmental random effects influenced the estimation of genetic parameters. Heritability estimates ranged from 0.19 to 0.31. Genetic correlation estimates were close to unity between adjacent test-day records, but decreased gradually as the interval between test-days increased. Results from mean squared error and weighted averages of residual variance estimates suggested that a model considering sixth- and seventh-order Legendre polynomials for additive and permanent environmental effects, respectively, and 6 classes for residual variances, provided the best fit. Nevertheless, this model presented the largest degree of complexity. A more parsimonious model, with fourth- and sixth-order polynomials, respectively, for these same effects, yielded very similar genetic parameter estimates. Therefore, this last model is recommended for routine applications. Copyright 2010 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
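
    The Legendre-polynomial covariates used by such random regression models can be built directly with numpy; the 44-week lactation length below is an assumed example value, not taken from the study.

```python
import numpy as np
from numpy.polynomial import legendre

# Design matrix of normalized Legendre polynomials for a random
# regression model: weeks of lactation are mapped to [-1, 1] and
# column k holds the k-th polynomial scaled by sqrt((2k+1)/2).
def legendre_basis(weeks, order):
    t = np.asarray(weeks, float)
    x = 2.0 * (t - t.min()) / (t.max() - t.min()) - 1.0
    cols = []
    for k in range(order + 1):
        coef = np.zeros(k + 1)
        coef[k] = 1.0
        cols.append(np.sqrt((2.0 * k + 1.0) / 2.0) * legendre.legval(x, coef))
    return np.column_stack(cols)

Phi = legendre_basis(np.arange(1, 45), order=4)  # fourth-order fit, shape (44, 5)
```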

  16. General object recognition is specific: Evidence from novel and familiar objects.

    PubMed

    Richler, Jennifer J; Wilmer, Jeremy B; Gauthier, Isabel

    2017-09-01

    In tests of object recognition, individual differences typically correlate modestly but nontrivially across familiar categories (e.g. cars, faces, shoes, birds, mushrooms). In theory, these correlations could reflect either global, non-specific mechanisms, such as general intelligence (IQ), or more specific mechanisms. Here, we introduce two separate methods for effectively capturing category-general performance variation, one that uses novel objects and one that uses familiar objects. In each case, we show that category-general performance variance is unrelated to IQ, thereby implicating more specific mechanisms. The first approach examines three newly developed novel object memory tests (NOMTs). We predicted that NOMTs would exhibit more shared, category-general variance than familiar object memory tests (FOMTs) because novel objects, unlike familiar objects, lack category-specific environmental influences (e.g. exposure to car magazines or botany classes). This prediction held, and remarkably, virtually none of the substantial shared variance among NOMTs was explained by IQ. Also, while NOMTs correlated nontrivially with two FOMTs (faces, cars), these correlations were smaller than among NOMTs and no larger than between the face and car tests themselves, suggesting that the category-general variance captured by NOMTs is specific not only relative to IQ, but also, to some degree, relative to both face and car recognition. The second approach averaged performance across multiple FOMTs, which we predicted would increase category-general variance by averaging out category-specific factors. This prediction held, and as with NOMTs, virtually none of the shared variance among FOMTs was explained by IQ. Overall, these results support the existence of object recognition mechanisms that, though category-general, are specific relative to IQ and substantially separable from face and car recognition. 
They also add sensitive, well-normed NOMTs to the tools available to study object recognition. Copyright © 2017 Elsevier B.V. All rights reserved.

  17. Pedigree- and SNP-Associated Genetics and Recent Environment are the Major Contributors to Anthropometric and Cardiometabolic Trait Variation.

    PubMed

    Xia, Charley; Amador, Carmen; Huffman, Jennifer; Trochet, Holly; Campbell, Archie; Porteous, David; Hastie, Nicholas D; Hayward, Caroline; Vitart, Veronique; Navarro, Pau; Haley, Chris S

    2016-02-01

    Genome-wide association studies have successfully identified thousands of loci for a range of human complex traits and diseases. The proportion of phenotypic variance explained by significant associations is, however, limited. Given the same dense SNP panels, mixed model analyses capture a greater proportion of phenotypic variance than single SNP analyses but the total is generally still less than the genetic variance estimated from pedigree studies. Combining information from pedigree relationships and SNPs, we examined 16 complex anthropometric and cardiometabolic traits in a Scottish family-based cohort comprising up to 20,000 individuals genotyped for ~520,000 common autosomal SNPs. The inclusion of related individuals provides the opportunity to also estimate the genetic variance associated with pedigree as well as the effects of common family environment. Trait variation was partitioned into SNP-associated and pedigree-associated genetic variation, shared nuclear family environment, shared couple (partner) environment and shared full-sibling environment. Results demonstrate that trait heritabilities vary widely but, on average across traits, SNP-associated and pedigree-associated genetic effects each explain around half the genetic variance. For most traits the recently-shared environment of couples is also significant, accounting for ~11% of the phenotypic variance on average. On the other hand, the environment shared largely in the past by members of a nuclear family or by full-siblings, has a more limited impact. Our findings point to appropriate models to use in future studies as pedigree-associated genetic effects and couple environmental effects have seldom been taken into account in genotype-based analyses. Appropriate description of the trait variation could help understand causes of intra-individual variation and in the detection of contributing loci and environmental factors.

  18. Minimizing the Standard Deviation of Spatially Averaged Surface Cross-Sectional Data from the Dual-Frequency Precipitation Radar

    NASA Technical Reports Server (NTRS)

    Meneghini, Robert; Kim, Hyokyung

    2016-01-01

    For an airborne or spaceborne radar, the precipitation-induced path attenuation can be estimated from the measurements of the normalized surface cross section, sigma 0, in the presence and absence of precipitation. In one implementation, the mean rain-free estimate and its variability are found from a lookup table (LUT) derived from previously measured data. For the dual-frequency precipitation radar aboard the global precipitation measurement satellite, the nominal table consists of the statistics of the rain-free sigma 0 over a 0.5 deg x 0.5 deg latitude-longitude grid using a three-month set of input data. However, a problem with the LUT is an insufficient number of samples in many cells. An alternative table is constructed by a stepwise procedure that begins with the statistics over a 0.25 deg x 0.25 deg grid. If the number of samples at a cell is too few, the area is expanded, cell by cell, choosing at each step that cell that minimizes the variance of the data. The question arises, however, as to whether the selected region corresponds to the smallest variance. To address this question, a second type of variable-averaging grid is constructed using all possible spatial configurations and computing the variance of the data within each region. Comparisons of the standard deviations for the fixed and variable-averaged grids are given as a function of incidence angle and surface type using a three-month set of data. The advantage of variable spatial averaging is that the average standard deviation can be reduced relative to the fixed grid while satisfying the minimum sample requirement.
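
    The stepwise cell-by-cell expansion can be sketched as a greedy search; the grid, samples, and minimum sample count below are synthetic stand-ins for the sigma 0 statistics.

```python
import numpy as np

# Greedy variable-averaging sketch: grow a region from a start cell,
# at each step absorbing the neighboring cell that minimizes the
# variance of the pooled samples, until a minimum count is reached.
def grow_region(samples, start, neighbors, min_samples):
    region = {start}
    pooled = list(samples[start])
    while len(pooled) < min_samples:
        candidates = {n for c in region for n in neighbors(c)} - region
        if not candidates:
            break
        best = min(candidates, key=lambda c: np.var(pooled + list(samples[c])))
        region.add(best)
        pooled += list(samples[best])
    return region, float(np.mean(pooled)), float(np.std(pooled))

# Synthetic 1-D strip of grid cells, 5 samples each:
rng = np.random.default_rng(0)
samples = {c: rng.normal(10.0 + c, 1.0, size=5) for c in range(6)}
neighbors = lambda c: [n for n in (c - 1, c + 1) if n in samples]
region, mean0, sd0 = grow_region(samples, start=2, neighbors=neighbors, min_samples=20)
```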

  19. Low genetic diversity and minimal population substructure in the endangered Florida manatee: implications for conservation

    USGS Publications Warehouse

    Tucker, Kimberly Pause; Hunter, Margaret E.; Bonde, Robert K.; Austin, James D.; Clark, Ann Marie; Beck, Cathy A.; McGuire, Peter M.; Oli, Madan K.

    2012-01-01

    Species of management concern that have been affected by human activities typically are characterized by low genetic diversity, which can adversely affect their ability to adapt to environmental changes. We used 18 microsatellite markers to genotype 362 Florida manatees (Trichechus manatus latirostris), and investigated genetic diversity, population structure, and estimated genetically effective population size (Ne). The observed and expected heterozygosity and average number of alleles were 0.455 ± 0.04, 0.479 ± 0.04, and 4.77 ± 0.51, respectively. All measures of Florida manatee genetic diversity were less than averages reported for placental mammals, including fragmented or nonideal populations. Overall estimates of differentiation were low, though significantly greater than zero, and analysis of molecular variance revealed that over 95% of the total variance was among individuals within predefined management units or among individuals along the coastal subpopulations, with only minor portions of variance explained by between group variance. Although genetic issues, as inferred by neutral genetic markers, appear not to be critical at present, the Florida manatee continues to face demographic challenges due to anthropogenic activities and stochastic factors such as red tides, oil spills, and disease outbreaks; these can further reduce genetic diversity of the manatee population.

  20. Non-stationary internal tides observed with satellite altimetry

    NASA Astrophysics Data System (ADS)

    Ray, R. D.; Zaron, E. D.

    2011-09-01

    Temporal variability of the internal tide is inferred from a 17-year combined record of Topex/Poseidon and Jason satellite altimeters. A global sampling of along-track sea-surface height wavenumber spectra finds that non-stationary variance is generally 25% or less of the average variance at wavenumbers characteristic of mode-1 tidal internal waves. With some exceptions the non-stationary variance does not exceed 0.25 cm2. The mode-2 signal, where detectable, contains a larger fraction of non-stationary variance, typically 50% or more. Temporal subsetting of the data reveals interannual variability barely significant compared with tidal estimation error from 3-year records. Comparison of summer vs. winter conditions shows only one region of noteworthy seasonal changes, the northern South China Sea. Implications for the anticipated SWOT altimeter mission are briefly discussed.

  1. Non-Stationary Internal Tides Observed with Satellite Altimetry

    NASA Technical Reports Server (NTRS)

    Ray, Richard D.; Zaron, E. D.

    2011-01-01

    Temporal variability of the internal tide is inferred from a 17-year combined record of Topex/Poseidon and Jason satellite altimeters. A global sampling of along-track sea-surface height wavenumber spectra finds that non-stationary variance is generally 25% or less of the average variance at wavenumbers characteristic of mode-1 tidal internal waves. With some exceptions the non-stationary variance does not exceed 0.25 sq cm. The mode-2 signal, where detectable, contains a larger fraction of non-stationary variance, typically 50% or more. Temporal subsetting of the data reveals interannual variability barely significant compared with tidal estimation error from 3-year records. Comparison of summer vs. winter conditions shows only one region of noteworthy seasonal changes, the northern South China Sea. Implications for the anticipated SWOT altimeter mission are briefly discussed.

  2. Radiomics Evaluation of Histological Heterogeneity Using Multiscale Textures Derived From 3D Wavelet Transformation of Multispectral Images.

    PubMed

    Chaddad, Ahmad; Daniel, Paul; Niazi, Tamim

    2018-01-01

    Colorectal cancer (CRC) is markedly heterogeneous and develops progressively toward malignancy through several stages, which include stroma (ST), benign hyperplasia (BH), intraepithelial neoplasia (IN) or precursor cancerous lesion, and carcinoma (CA). Identification of the malignancy stage of CRC pathology tissues (PT) allows the most appropriate therapeutic intervention. This study investigates multiscale texture features extracted from CRC pathology sections using a 3D wavelet transform (3D-WT) filter. Multiscale features were extracted from digital whole-slide images of 39 patients that were segmented in a pre-processing step using an active contour model. The capacity of multiscale texture to compare and classify PTs was investigated using an ANOVA significance test and random forest classifier models, respectively. Twelve significant features derived from the multiscale texture (i.e., variance, entropy, and energy) were found to discriminate between CRC grades at a significance value of p < 0.01 after correction. Combining multiscale texture features led to better predictive capacity than prediction models based on individual-scale features, with an average (±SD) classification accuracy of 93.33 (±3.52)%, sensitivity of 88.33 (±4.12)%, and specificity of 96.89 (±3.88)%. Entropy was found to be the best classifier feature across all the PT grades, with average area under the curve (AUC) values of 91.17%, 94.21%, 97.70%, and 100% for ST, BH, IN, and CA, respectively. Our results suggest that multiscale texture features based on 3D-WT are sensitive enough to discriminate between CRC grades, with the entropy feature being the best predictor of pathology grade.

  3. Inter-individual Differences in the Effects of Aircraft Noise on Sleep Fragmentation.

    PubMed

    McGuire, Sarah; Müller, Uwe; Elmenhorst, Eva-Maria; Basner, Mathias

    2016-05-01

    Environmental noise exposure disturbs sleep and impairs recuperation, and may contribute to the increased risk for (cardiovascular) disease. Noise policy and regulation are usually based on average responses despite potentially large inter-individual differences in the effects of traffic noise on sleep. In this analysis, we investigated what percentage of the total variance in noise-induced awakening reactions can be explained by stable inter-individual differences. We investigated 69 healthy subjects polysomnographically (mean ± standard deviation 40 ± 13 years, range 18-68 years, 32 male) in this randomized, balanced, double-blind, repeated measures laboratory study. The study included one adaptation night, 9 nights with exposure to 40, 80, or 120 road, rail, and/or air traffic noise events (including one noise-free control night), and one recovery night. Mixed-effects models of variance controlling for reaction probability in noise-free control nights, age, sex, number of noise events, and study night showed that 40.5% of the total variance in awakening probability and 52.0% of the total variance in EEG arousal probability were explained by inter-individual differences. If the data set was restricted to the 4 exposure nights with 80 noise events per night, 46.7% of the total variance in awakening probability and 57.9% of the total variance in EEG arousal probability were explained by inter-individual differences. The results thus demonstrate that, even in this relatively homogeneous, healthy, adult study population, a considerable amount of the variance observed in noise-induced sleep disturbance can be explained by inter-individual differences that cannot be attributed to age, sex, or specific study design aspects. It will be important to identify those at higher risk for noise-induced sleep disturbance. Furthermore, the custom of basing noise policy and legislation on average responses should be re-assessed based on these findings.
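
    The quantity reported above, the share of total variance explained by stable inter-individual differences, is an intraclass correlation. A minimal one-way random-effects sketch on synthetic data (the effect sizes are made up and chosen so the true share is about 50%):

```python
import numpy as np

# Intraclass correlation: between-subject variance divided by
# between- plus within-subject variance, from a balanced
# subjects x nights table of responses.
def icc_oneway(data):
    k = data.shape[1]
    ms_between = k * np.var(data.mean(axis=1), ddof=1)
    ms_within = np.mean(np.var(data, axis=1, ddof=1))
    var_between = max((ms_between - ms_within) / k, 0.0)
    return var_between / (var_between + ms_within)

rng = np.random.default_rng(1)
stable = rng.normal(0.0, 1.0, size=(69, 1))            # stable subject effects
nights = stable + rng.normal(0.0, 1.0, size=(69, 8))   # night-to-night noise
icc = icc_oneway(nights)  # close to 0.5 by construction
```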
© 2016 Associated Professional Sleep Societies, LLC.

  4. Climate, canopy disturbance, and radial growth averaging in a second-growth mixed-oak forest in West Virginia, USA

    Treesearch

    James S. Rentch; B. Desta Fekedulegn; Gary W. Miller

    2002-01-01

    This study evaluated the use of radial growth averaging as a technique of identifying canopy disturbances in a thinned 55-year-old mixed-oak stand in West Virginia. We used analysis of variance to determine the time interval (averaging period) and lag period (time between thinning and growth increase) that best captured the growth increase associated with different...

  5. Ferroelectric hydration shells around proteins: electrostatics of the protein-water interface.

    PubMed

    LeBard, David N; Matyushov, Dmitry V

    2010-07-22

    Numerical simulations of hydrated proteins show that protein hydration shells are polarized into a ferroelectric layer with large values of the average dipole moment magnitude and the dipole moment variance. The emergence of this new polarized mesophase dramatically alters the statistics of electrostatic fluctuations at the protein-water interface. The linear response relation between the average electrostatic potential and its variance breaks down, with the breadth of the electrostatic fluctuations far exceeding the expectations of linear response theories. The dynamics of these non-Gaussian electrostatic fluctuations are dominated by a slow (≈1 ns) component that freezes in at the temperature of the dynamical transition of proteins. The ferroelectric shell propagates 3-5 water diameters into the bulk.

  6. FORTRAN programs to process Magsat data for lithospheric, external field, and residual core components

    NASA Technical Reports Server (NTRS)

    Alsdorf, Douglas E.; Vonfrese, Ralph R. B.

    1994-01-01

    The FORTRAN programs supplied in this document provide a complete processing package for statistically extracting residual core, external field, and lithospheric components in Magsat observations. To process the individual passes: (1) orbits are separated into dawn and dusk local times and by altitude, (2) passes are selected based on the variance of the magnetic field observations after a least-squares fit of the core field is removed from each pass over the study area, and (3) spatially adjacent passes are processed with a Fourier correlation coefficient filter to separate coherent and non-coherent features between neighboring tracks. In the second stage of map processing: (1) data from the passes are normalized to a common altitude and gridded into dawn and dusk maps with least-squares collocation, (2) dawn and dusk maps are correlated with a Fourier correlation coefficient filter to separate coherent and non-coherent features; the coherent features are averaged to produce a total field grid, (3) total field grids from all altitudes are continued to a common altitude, correlation filtered for coherent anomaly features, and subsequently averaged to produce the final total field grid for the study region, and (4) the total field map is differentially reduced to the pole.

  7. Spatial distribution and partition of perfluoroalkyl acids (PFAAs) in rivers of the Pearl River Delta, southern China.

    PubMed

    Liu, Baolin; Zhang, Hong; Xie, Liuwei; Li, Juying; Wang, Xinxuan; Zhao, Liang; Wang, Yanping; Yang, Bo

    2015-08-15

    This study investigated the occurrence of perfluoroalkyl acids (PFAAs) in surface water from 67 sampling sites along rivers of the Pearl River Delta in southern China. Sixteen PFAAs, including perfluoroalkyl carboxylic acids (PFCAs, C5-14, C16 and C18) and perfluoroalkyl sulfonic acids (PFSAs, C4, C6, C8 and C10), were determined by high performance liquid chromatography-negative electrospray ionization-tandem mass spectrometry (HPLC/ESI-MS/MS). Total PFAA concentrations (∑PFAAs) in the surface water ranged from 1.53 to 33.5 ng·L(-1) with an average of 7.58 ng·L(-1). Perfluorobutane sulfonic acid (PFBS), perfluorooctanoic acid (PFOA), and perfluorooctane sulfonic acid (PFOS) were the three most abundant PFAAs and on average accounted for 28%, 16% and 10% of ∑PFAAs, respectively. Higher concentrations of ∑PFAAs were found in samples collected from the Jiangmen section of the Xijiang River, the Dongguan section of the Dongjiang River, and the stretches of the Pearl River flowing through cities with well-developed manufacturing industries. A PCA model was employed to quantitatively calculate the contributions of the extracted sources. Factor 1 (72.48% of the total variance) had high loadings for perfluorohexanoic acid (PFHxA), perfluoropentanoic acid (PFPeA), PFBS and PFOS. Factor 2 (10.93% of the total variance) had high loadings for perfluorononanoic acid (PFNA) and perfluoroundecanoic acid (PFUdA). The sorption of PFCAs on suspended particulate matter (SPM) increased by approximately 0.1 log units for each additional CF2 moiety, and sorption on sediment was approximately 0.8 log units lower than the SPM logKd values. In addition, the differences in the partition coefficients were influenced by the structural discrepancy of the absorbents and the influx of fresh river water. These data are essential for modeling the transport and environmental fate of PFAAs. Copyright © 2015 Elsevier B.V. All rights reserved.
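
    The source-apportionment step (PCA with a percentage of total variance per factor) can be sketched on a synthetic concentration matrix; the data and the single dominant-factor structure below are invented for illustration.

```python
import numpy as np

# PCA on standardized concentrations: eigenvalues of the correlation
# matrix give each factor's share of the total variance; eigenvectors
# give the loadings used to interpret sources. Data are synthetic.
rng = np.random.default_rng(2)
source = rng.normal(size=(67, 1))                       # one shared source signal
X = source @ rng.normal(size=(1, 16)) + 0.3 * rng.normal(size=(67, 16))

Z = (X - X.mean(axis=0)) / X.std(axis=0)                # standardize each analyte
eigvals, eigvecs = np.linalg.eigh(np.corrcoef(Z, rowvar=False))
order = np.argsort(eigvals)[::-1]
explained = eigvals[order] / eigvals.sum()              # variance share per factor
loadings = eigvecs[:, order]                            # columns = factor loadings
```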

  8. Global Sensitivity Analysis and Parameter Calibration for an Ecosystem Carbon Model

    NASA Astrophysics Data System (ADS)

    Safta, C.; Ricciuto, D. M.; Sargsyan, K.; Najm, H. N.; Debusschere, B.; Thornton, P. E.

    2013-12-01

    We present uncertainty quantification results for a process-based ecosystem carbon model. The model employs 18 parameters and is driven by meteorological data corresponding to years 1992-2006 at the Harvard Forest site. Daily Net Ecosystem Exchange (NEE) observations were available to calibrate the model parameters and test the performance of the model. A global sensitivity analysis was first performed to determine the important model parameters based on their contribution to the variance of NEE. We then calibrated the model parameters in a Bayesian framework. The daily discrepancies between measured and predicted NEE values were modeled as independent and identically distributed Gaussians with prescribed daily variance according to the recorded instrument error. All model parameters were assumed to have uninformative priors with bounds set according to expert opinion. The global sensitivity results show that the rate of leaf fall (LEAFALL) is responsible for approximately 25% of the total variance in the average NEE for 1992-2005. A set of 4 other parameters, nitrogen use efficiency (NUE), base rate for maintenance respiration (BR_MR), growth respiration fraction (RG_FRAC), and allocation to plant stem pool (ASTEM), contribute between 5% and 12% to the variance in average NEE, while the remaining parameters have smaller contributions. The posterior distributions, sampled with a Markov Chain Monte Carlo algorithm, show good predictive capabilities for the calibrated model and exhibit significant correlations between model parameters. However, LEAFALL, the most important parameter for the average NEE, is not informed by the observational data, while less important parameters show significant updates between their prior and posterior densities. The Fisher information matrix values, indicating which parameters are most informed by the experimental observations, are examined to augment the comparison between the calibration and global sensitivity analysis results.
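The variance-based sensitivity measure used above, the fraction of output variance attributable to a single parameter, can be estimated crudely by binning Monte Carlo samples on that parameter. The surrogate model below is invented purely for illustration (it stands in for the ecosystem carbon model, whose parameters and output are far more complex):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000

# Toy surrogate: an NEE-like output driven by three independent parameters.
x = rng.normal(size=(n, 3))
y = 4.0 * x[:, 0] + 2.0 * x[:, 1] + 1.0 * x[:, 2]

def first_order_index(xi, y, bins=50):
    """Estimate S_i = Var(E[Y | X_i]) / Var(Y) by binning X_i into
    equal-probability bins and taking conditional means."""
    edges = np.quantile(xi, np.linspace(0.0, 1.0, bins + 1))
    idx = np.clip(np.searchsorted(edges, xi, side="right") - 1, 0, bins - 1)
    cond_means = np.array([y[idx == b].mean() for b in range(bins)])
    return float(cond_means.var() / y.var())

s = [first_order_index(x[:, i], y) for i in range(3)]
print(np.round(s, 2))  # analytically 16/21, 4/21, 1/21 for this toy model
```

For the linear toy model the indices are known in closed form, which makes the binned estimate easy to check; real models require proper Sobol' sampling schemes.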

  9. Range camera on conveyor belts: estimating size distribution and systematic errors due to occlusion

    NASA Astrophysics Data System (ADS)

    Blomquist, Mats; Wernersson, Ake V.

    1999-11-01

    When range cameras are used for analyzing irregular material on a conveyor belt, there will be complications such as missing segments caused by occlusion, and a number of range discontinuities will be present. Within a framework drawing on stochastic geometry, conditions are derived for the cases in which range discontinuities occur. The test objects in this paper are pellets for the steel industry. An illuminating laser plane gives range discontinuities at the edges of each individual object. These discontinuities are used to detect and measure the chord created by the intersection of the laser plane and the object. From the measured chords we derive the average diameter and its variance. An improved method is to use a pair of parallel illuminating light planes to extract two chords. The estimation error for this method is no larger than the natural shape fluctuations (the difference in diameter) of the pellets. The laser-camera optronics is sensitive enough both for material on a conveyor belt and for free-falling material leaving the conveyor.
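The chord-to-diameter step can be illustrated for an idealized circular cross-section: a plane at a uniformly random offset u in [0, R] cuts a chord of length 2√(R²−u²), whose mean is (π/4)·D, so the diameter follows from the average measured chord. This is a hedged geometric sketch under that idealization, not the paper's full estimator for 3-D pellets:

```python
import math
import random

random.seed(2)

D_true = 12.0  # pellet diameter in mm (made-up value)
R = D_true / 2.0

# Simulate many random chords of a circle of radius R.
chords = []
for _ in range(100_000):
    u = random.uniform(0.0, R)                 # offset of the laser plane
    chords.append(2.0 * math.sqrt(R * R - u * u))

# E[chord] = (pi/4) * D, so invert the relation to recover the diameter.
mean_chord = sum(chords) / len(chords)
D_est = 4.0 * mean_chord / math.pi
print(round(D_est, 2))  # close to 12.0
```

The spread of the chord lengths around their mean is what feeds the variance estimate mentioned in the abstract.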

  10. Vertical land motion controls regional sea level rise patterns on the United States east coast since 1900

    NASA Astrophysics Data System (ADS)

    Piecuch, C. G.; Huybers, P. J.; Hay, C.; Mitrovica, J. X.; Little, C. M.; Ponte, R. M.; Tingley, M.

    2017-12-01

    Understanding observed spatial variations in centennial relative sea level trends on the United States east coast has important scientific and societal applications. Past studies based on models and proxies variously suggest roles for crustal displacement, ocean dynamics, and melting of the Greenland ice sheet. Here we perform joint Bayesian inference on regional relative sea level, vertical land motion, and absolute sea level fields based on tide gauge records and GPS data. Posterior solutions show that regional vertical land motion explains most (80% median estimate) of the spatial variance in the large-scale relative sea level trend field on the east coast over 1900-2016. The posterior estimate for coastal absolute sea level rise is remarkably spatially uniform compared to previous studies, with a spatial average of 1.4-2.3 mm/yr (95% credible interval). Results corroborate glacial isostatic adjustment models and reveal that meaningful long-period, large-scale vertical velocity signals can be extracted from short GPS records.

  11. [Development of a cell phone addiction scale for Korean adolescents].

    PubMed

    Koo, Hyun Young

    2009-12-01

    This study was done to develop a cell phone addiction scale for Korean adolescents. The process included construction of a conceptual framework, generation of initial items, verification of content validity, selection of secondary items, preliminary study, and extraction of final items. The participants were 577 adolescents in two middle schools and three high schools. Item analysis, factor analysis, criterion related validity, and internal consistency were used to analyze the data. Twenty items were selected for the final scale, and categorized into 3 factors explaining 55.45% of total variance. The factors were labeled as withdrawal/tolerance (7 items), life dysfunction (6 items), and compulsion/persistence (7 items). The scores for the scale were significantly correlated with self-control, impulsiveness, and cell phone use. Cronbach's alpha coefficient for the 20 items was .92. Scale scores identified students as cell phone addicted, heavy users, or average users. The above findings indicate that the cell phone addiction scale has good validity and reliability when used with Korean adolescents.
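Internal-consistency figures such as the Cronbach's alpha of .92 reported above come from the standard item-variance formula. A sketch on synthetic responses (one latent trait plus item noise; not the study's data):

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, n_items) score matrix:
    alpha = k/(k-1) * (1 - sum of item variances / variance of total score)."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return float((k / (k - 1)) * (1.0 - item_vars.sum() / total_var))

# Synthetic responses: 500 respondents, 20 items (as in the final scale),
# each item = latent trait + noise. Illustrative only.
rng = np.random.default_rng(3)
trait = rng.normal(size=(500, 1))
items = trait + 0.5 * rng.normal(size=(500, 20))
print(round(cronbach_alpha(items), 2))  # high internal consistency
```

Because all 20 items share one latent trait and the noise is modest, alpha lands near 1; uncorrelated items would push it toward 0.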

  12. Is residual memory variance a valid method for quantifying cognitive reserve? A longitudinal application.

    PubMed

    Zahodne, Laura B; Manly, Jennifer J; Brickman, Adam M; Narkhede, Atul; Griffith, Erica Y; Guzman, Vanessa A; Schupf, Nicole; Stern, Yaakov

    2015-10-01

    Cognitive reserve describes the mismatch between brain integrity and cognitive performance. Older adults with high cognitive reserve are more resilient to age-related brain pathology. Traditionally, cognitive reserve is indexed indirectly via static proxy variables (e.g., years of education). More recently, cross-sectional studies have suggested that reserve can be expressed as residual variance in episodic memory performance that remains after accounting for demographic factors and brain pathology (whole brain, hippocampal, and white matter hyperintensity volumes). The present study extends these methods to a longitudinal framework in a community-based cohort of 244 older adults who underwent two comprehensive neuropsychological and structural magnetic resonance imaging sessions over 4.6 years. On average, residual memory variance decreased over time, consistent with the idea that cognitive reserve is depleted over time. Individual differences in change in residual memory variance predicted incident dementia, independent of baseline residual memory variance. Multiple-group latent difference score models revealed tighter coupling between brain and language changes among individuals with decreasing residual memory variance. These results suggest that changes in residual memory variance may capture a dynamic aspect of cognitive reserve and could be a useful way to summarize individual cognitive responses to brain changes. Change in residual memory variance among initially non-demented older adults was a better predictor of incident dementia than residual memory variance measured at one time-point. Copyright © 2015. Published by Elsevier Ltd.

  13. Is residual memory variance a valid method for quantifying cognitive reserve? A longitudinal application

    PubMed Central

    Zahodne, Laura B.; Manly, Jennifer J.; Brickman, Adam M.; Narkhede, Atul; Griffith, Erica Y.; Guzman, Vanessa A.; Schupf, Nicole; Stern, Yaakov

    2016-01-01

    Cognitive reserve describes the mismatch between brain integrity and cognitive performance. Older adults with high cognitive reserve are more resilient to age-related brain pathology. Traditionally, cognitive reserve is indexed indirectly via static proxy variables (e.g., years of education). More recently, cross-sectional studies have suggested that reserve can be expressed as residual variance in episodic memory performance that remains after accounting for demographic factors and brain pathology (whole brain, hippocampal, and white matter hyperintensity volumes). The present study extends these methods to a longitudinal framework in a community-based cohort of 244 older adults who underwent two comprehensive neuropsychological and structural magnetic resonance imaging sessions over 4.6 years. On average, residual memory variance decreased over time, consistent with the idea that cognitive reserve is depleted over time. Individual differences in change in residual memory variance predicted incident dementia, independent of baseline residual memory variance. Multiple-group latent difference score models revealed tighter coupling between brain and language changes among individuals with decreasing residual memory variance. These results suggest that changes in residual memory variance may capture a dynamic aspect of cognitive reserve and could be a useful way to summarize individual cognitive responses to brain changes. Change in residual memory variance among initially non-demented older adults was a better predictor of incident dementia than residual memory variance measured at one time-point. PMID:26348002
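The residual-variance notion of reserve described in these two records amounts to regressing memory scores on demographic and brain variables and keeping what is left over. A simplified cross-sectional sketch with synthetic data and hypothetical predictors (the real model is longitudinal and latent-variable based):

```python
import numpy as np

rng = np.random.default_rng(4)
n = 244  # same size as the cohort, but the data below are synthetic

# Hypothetical predictors: two demographic variables and three brain volumes.
X = rng.normal(size=(n, 5))
memory = X @ np.array([-0.4, 0.3, 0.5, 0.3, -0.2]) + rng.normal(scale=0.8, size=n)

# "Residual memory" score: what remains of memory performance after
# demographics and brain pathology are regressed out.
Xd = np.column_stack([np.ones(n), X])            # design matrix with intercept
beta, *_ = np.linalg.lstsq(Xd, memory, rcond=None)
residual = memory - Xd @ beta
print(abs(float(residual.mean())) < 1e-8)        # centred by construction
```

In the longitudinal extension, it is the change in this residual across visits, not its level at one visit, that predicts incident dementia.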

  14. RepExplore: addressing technical replicate variance in proteomics and metabolomics data analysis.

    PubMed

    Glaab, Enrico; Schneider, Reinhard

    2015-07-01

    High-throughput omics datasets often contain technical replicates included to account for technical sources of noise in the measurement process. Although summarizing these replicate measurements by using robust averages may help to reduce the influence of noise on downstream data analysis, the information on the variance across the replicate measurements is lost in the averaging process and is therefore typically disregarded in subsequent statistical analyses. We introduce RepExplore, a web service dedicated to exploiting the information captured in the technical replicate variance to provide more reliable and informative differential expression and abundance statistics for omics datasets. The software builds on previously published statistical methods, which have been applied successfully to biomedical omics data but are difficult to use without prior experience in programming or scripting. RepExplore facilitates the analysis by providing fully automated data processing and interactive ranking tables, whisker plots, heat maps and principal component analysis visualizations to interpret omics data and derived statistics. Availability: freely available at http://www.repexplore.tk. Contact: enrico.glaab@uni.lu. Supplementary data are available at Bioinformatics online. © The Author 2015. Published by Oxford University Press.
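The core idea, keeping the spread across technical replicates instead of discarding it after averaging, can be sketched with inverse-variance weights. This is a generic illustration of replicate-aware summarization, not RepExplore's published statistics:

```python
import numpy as np

def replicate_summary(reps):
    """Summarize an (n_features, n_replicates) matrix of technical replicates.
    Keeps the per-feature variance of the replicate mean and derives a weight
    that down-ranks features with noisy replicates."""
    mean = reps.mean(axis=1)
    var_of_mean = reps.var(axis=1, ddof=1) / reps.shape[1]
    weight = 1.0 / var_of_mean
    return mean, var_of_mean, weight

# Synthetic abundances: 200 features x 3 replicates; the first 100 features
# have quiet replicates (sd 0.1), the rest noisy ones (sd 1.0).
rng = np.random.default_rng(5)
n_features, n_reps = 200, 3
true_level = rng.normal(10.0, 2.0, size=n_features)
noise_sd = np.where(np.arange(n_features) < 100, 0.1, 1.0)
reps = true_level[:, None] + rng.normal(size=(n_features, n_reps)) * noise_sd[:, None]

mean, var_of_mean, weight = replicate_summary(reps)
print(np.median(weight[:100]) > np.median(weight[100:]))  # True
```

A downstream test statistic can then use `var_of_mean` directly rather than assuming all features are equally well measured.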

  15. Predicting Bradycardia in Preterm Infants Using Point Process Analysis of Heart Rate.

    PubMed

    Gee, Alan H; Barbieri, Riccardo; Paydarfar, David; Indic, Premananda

    2017-09-01

    Episodes of bradycardia are common and recur sporadically in preterm infants, posing a threat to the developing brain and other vital organs. We hypothesize that bradycardias are a result of transient temporal destabilization of the cardiac autonomic control system and that fluctuations in the heart rate signal might contain information that precedes bradycardia. We investigate infant heart rate fluctuations with a novel application of point process theory. In ten preterm infants, we estimate instantaneous linear measures of the heart rate signal, use these measures to extract statistical features of bradycardia, and propose a simple framework for prediction of bradycardia. We present the performance of a prediction algorithm using instantaneous linear measures (mean area under the curve = 0.79 ± 0.018) for over 440 bradycardia events. The algorithm achieves an average forecast time of 116 s prior to bradycardia onset (FPR = 0.15). Our analysis reveals that increased variance in the heart rate signal is a precursor of severe bradycardia. This increase in variance is associated with an increase in power from low-frequency content in the LF band (0.04-0.2 Hz) and lower multiscale entropy values prior to bradycardia. Point process analysis of the heartbeat time series reveals instantaneous measures that can be used to predict infant bradycardia prior to onset. Our findings are relevant to risk stratification, predictive monitoring, and implementation of preventative strategies for reducing morbidity and mortality associated with bradycardia in neonatal intensive care units.

  16. Profiling physicochemical and planktonic features from discretely/continuously sampled surface water.

    PubMed

    Oita, Azusa; Tsuboi, Yuuri; Date, Yasuhiro; Oshima, Takahiro; Sakata, Kenji; Yokoyama, Akiko; Moriya, Shigeharu; Kikuchi, Jun

    2018-04-24

    There is an increasing need for assessing aquatic ecosystems, which are globally endangered. Because these ecosystems are complex, integrated consideration of multiple factors utilizing omics technologies can help us understand them better. An integrated strategy linking three analytical approaches (machine learning, factor mapping, and forecast-error-variance decomposition) for extracting the features of surface water from datasets comprising ions, metabolites, and microorganisms is proposed herein. The three developed approaches can be employed for datasets of diverse sample sizes and experimentally analyzed factors, and are applied to explore the features of bay water surrounding Odaiba, Tokyo, Japan, as a case study. First, the machine learning approach separated 681 surface water samples within Japan into three clusters, categorizing Odaiba water as seawater with relatively low levels of inorganic ions, including Mg, Ba, and B. Second, the factor mapping approach, based on seasonal dynamics, showed that Odaiba water samples from the summer were rich in multiple amino acids and some other metabolites and poor in inorganic ions relative to other seasons. Finally, forecast-error-variance decomposition using vector autoregressive models indicated that a type of microalgae (Raphidophyceae) grows in close correlation with alanine, succinic acid, and valine on filters, and with isobutyric acid and 4-hydroxybenzoic acid in filtrate, Ba, and average wind speed. Our integrated strategy can be used to examine many biological, chemical, and environmental physical factors to analyze surface water. Copyright © 2018. Published by Elsevier B.V.

  17. Assessment of the reliability of human corneal endothelial cell-density estimates using a noncontact specular microscope.

    PubMed

    Doughty, M J; Müller, A; Zaman, M L

    2000-03-01

    We sought to determine the variance in endothelial cell density (ECD) estimates for human corneal endothelia. Noncontact specular micrographs were obtained from white subjects without any history of contact lens wear or major eye disease or surgery; subjects were within four age groups (children, young adults, older adults, senior citizens). The endothelial image was scanned, and the areas of ≥75 cells were measured from an overlay by planimetry. The cell-area values were used to calculate the ECD repeatedly so that the intra- and intersubject variation in an average ECD estimate could be assessed using different numbers of cells (5, 10, 15, etc.). An average ECD of 3,519 cells/mm2 (range, 2,598-5,312 cells/mm2) was obtained from counts of 75 cells/endothelium from individuals aged 6-83 years. Average ECD estimates in each age group were 4,124, 3,457, 3,360, and 3,113 cells/mm2, respectively. Analysis of intersubject variance revealed that ECD estimates would be expected to be no better than +/-10% if only 25 cells were measured per endothelium, but approach +/-2% if 75 cells are measured. In assessing the corneal endothelium by noncontact specular microscopy, the cell count should be given, and it should be ≥75 cells/endothelium for the expected variance to be at a level close to that recommended for monitoring age-, stress-, or surgery-related changes.
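The dependence of ECD precision on the number of cells measured can be illustrated by simulation: the ECD estimate is the reciprocal of the mean cell area, so its spread shrinks roughly as 1/√n as more cells are measured. The cell-area distribution below is a made-up stand-in, not the study's micrograph data:

```python
import numpy as np

rng = np.random.default_rng(6)

# Simulated endothelia: each row is one endothelium with 75 measured cell
# areas (µm²; a mean of 284 µm² gives an ECD near the reported 3,519 cells/mm²).
cell_areas = rng.normal(284.0, 60.0, size=(10_000, 75))

def ecd_spread(n_cells):
    """SD of ECD estimates (cells/mm²) when only n_cells are measured."""
    ecd = 1e6 / cell_areas[:, :n_cells].mean(axis=1)
    return float(ecd.std())

print(ecd_spread(25) > ecd_spread(75))  # more cells -> tighter ECD estimate
```

With these assumed parameters the 75-cell spread comes out near the few-percent level the abstract recommends, while 25-cell counts are noticeably worse.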

  18. Null-space and statistical significance of first-arrival traveltime inversion

    NASA Astrophysics Data System (ADS)

    Morozov, Igor B.

    2004-03-01

    The strong uncertainty inherent in the traveltime inversion of first arrivals from surface sources is usually removed by using a priori constraints or regularization. This leads to the null-space (data-independent model variability) being inadequately sampled, and consequently, model uncertainties may be underestimated in traditional (such as checkerboard) resolution tests. To measure the full null-space model uncertainties, we use unconstrained Monte Carlo inversion and examine the statistics of the resulting model ensembles. In an application to 1-D first-arrival traveltime inversion, the τ-p method is used to build a set of models that are equivalent to the IASP91 model within small, ~0.02 per cent, time deviations. The resulting velocity variances are much larger, ~2-3 per cent within the regions above the mantle discontinuities, and are interpreted as being due to the null-space. Depth-variant depth averaging is required for constraining the velocities within meaningful bounds, and the averaging scalelength could also be used as a measure of depth resolution. Velocity variances show structure-dependent, negative correlation with the depth-averaging scalelength. Neither the smoothest (Herglotz-Wiechert) nor the mean velocity-depth functions reproduce the discontinuities in the IASP91 model; however, the discontinuities can be identified by the increased null-space velocity (co-)variances. Although derived for a 1-D case, the above conclusions also relate to higher dimensions.

  19. Examination of Variables That May Affect the Relationship Between Cognition and Functional Status in Individuals with Mild Cognitive Impairment: A Meta-Analysis

    PubMed Central

    Mcalister, Courtney; Schmitter-Edgecombe, Maureen; Lamb, Richard

    2016-01-01

    The objective of this meta-analysis was to improve understanding of the heterogeneity in the relationship between cognition and functional status in individuals with mild cognitive impairment (MCI). Demographic, clinical, and methodological moderators were examined. Cognition explained an average of 23% of the variance in functional outcomes. Executive function measures explained the largest amount of variance (37%), whereas global cognitive status and processing speed measures explained the least (20%). Short- and long-delayed memory measures accounted for more variance (35% and 31%) than immediate memory measures (18%), and the relationship between cognition and functional outcomes was stronger when assessed with informant-report (28%) compared with self-report (21%). Demographics, sample characteristics, and type of everyday functioning measures (i.e., questionnaire, performance-based) explained relatively little variance compared with cognition. Executive functioning, particularly measured by Trails B, was a strong predictor of everyday functioning in individuals with MCI. A large proportion of variance remained unexplained by cognition. PMID:26743326

  20. Sustainable Leadership and Future-Oriented Decision Making in the Educational Governance--A Finnish Case

    ERIC Educational Resources Information Center

    Metsamuuronen, Jari; Kuosa, Tuomo; Laukkanen, Reijo

    2013-01-01

    Purpose: During the new millennium the Finnish educational system has faced a new challenge: how to explain its outstanding PISA results, produced with only a small variance between schools, average national costs and, with regard to the average duration of studies, relative efficiency. Explanations for this issue can be searched for in many different…

  1. Object-based vegetation classification with high resolution remote sensing imagery

    NASA Astrophysics Data System (ADS)

    Yu, Qian

    Vegetation species are valuable indicators for understanding the earth system. Information from mapping of vegetation species and community distribution at large scales provides important insight for studying the phenological (growth) cycles of vegetation and plant physiology. Such information plays an important role in land process modeling, including climate, ecosystem and hydrological models. Rapid growth in remote sensing technology has increased its potential for vegetation species mapping. However, extracting information at the species level is still a challenging research topic. I proposed an effective method for extracting vegetation species distribution from remotely sensed data and investigated ways to improve its accuracy. The study consists of three phases. First, a statistical analysis was conducted to explore the spatial variation and class separability of vegetation as a function of image scale. This analysis aimed to confirm that high resolution imagery contains information on spatial vegetation variation and that the species classes are potentially separable. The second phase was a major effort in advancing classification by proposing a method for extracting vegetation species from high spatial resolution remote sensing data. The proposed classification employs an object-based approach that integrates GIS and remote sensing data and explores the usefulness of ancillary information. The whole process includes image segmentation, feature generation and selection, and nearest neighbor classification. The third phase introduces a spatial regression model for evaluating the mapping quality of the above vegetation classification results. The effects of six categories of sample characteristics on the classification uncertainty are examined: topography, sample membership, sample density, spatial composition characteristics, training reliability and sample object features.
    This evaluation analysis answered several interesting scientific questions, such as (1) whether the sample characteristics affect the classification accuracy, and how significantly if they do; and (2) how much of the variance in classification uncertainty can be explained by the above factors. This research is carried out on a hilly peninsular area in a Mediterranean climate, Point Reyes National Seashore (PRNS) in Northern California. The area mainly consists of heterogeneous, semi-natural broadleaf and conifer woodland, shrubland, and annual grassland. A detailed list of vegetation alliances is used in this study. Research results from the first phase indicate that vegetation spatial variation as reflected by the average local variance (ALV) remains high in magnitude between 1 m and 4 m resolution. (Abstract shortened by UMI.)
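The average local variance (ALV) statistic used in the first phase is the mean of the variances computed inside small moving windows of the image. A plain-numpy sketch, with the window size assumed to be 3×3:

```python
import numpy as np

def average_local_variance(img, w=3):
    """Mean of the variance inside every w x w window of a 2-D image
    (the local-variance scale statistic; a minimal sketch)."""
    wins = np.lib.stride_tricks.sliding_window_view(img, (w, w))
    return float(wins.reshape(-1, w * w).var(axis=1).mean())

rng = np.random.default_rng(7)
flat = np.full((64, 64), 5.0)          # homogeneous scene
texture = rng.normal(size=(64, 64))    # fine-grained texture

print(average_local_variance(flat))           # 0.0
print(average_local_variance(texture) > 0.5)  # True
```

Computing ALV at a series of coarsened resolutions traces out the scale at which spatial variation peaks, which is how the 1 m to 4 m statement above is obtained.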

  2. Bridges in complex networks

    NASA Astrophysics Data System (ADS)

    Wu, Ang-Kun; Tian, Liang; Liu, Yang-Yu

    2018-01-01

    A bridge in a graph is an edge whose removal disconnects the graph and increases the number of connected components. We calculate the fraction of bridges in a wide range of real-world networks and their randomized counterparts. We find that real networks typically have more bridges than their completely randomized counterparts, but they have a fraction of bridges that is very similar to their degree-preserving randomizations. We define an edge centrality measure, called bridgeness, to quantify the importance of a bridge in damaging a network. We find that certain real networks have a very large average and variance of bridgeness compared to their degree-preserving randomizations and other real networks. Finally, we offer an analytical framework to calculate the bridge fraction and the average and variance of bridgeness for uncorrelated random networks with arbitrary degree distributions.
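Bridges can be found in linear time with a low-link depth-first search: a tree edge (u, v) is a bridge exactly when no back edge from v's subtree reaches u or above, i.e. low[v] > disc[u]. A self-contained sketch of that classic algorithm (not the paper's code):

```python
from collections import defaultdict

def find_bridges(edges):
    """Return the bridges of an undirected graph given as a list of (u, v)
    edges, using Tarjan's low-link depth-first search."""
    graph = defaultdict(list)
    for i, (u, v) in enumerate(edges):
        graph[u].append((v, i))
        graph[v].append((u, i))

    disc, low, bridges = {}, {}, []
    timer = [0]

    def dfs(u, parent_edge):
        disc[u] = low[u] = timer[0]
        timer[0] += 1
        for v, eid in graph[u]:
            if eid == parent_edge:          # don't go back along the tree edge
                continue
            if v in disc:                   # back edge: update low-link
                low[u] = min(low[u], disc[v])
            else:                           # tree edge: recurse
                dfs(v, eid)
                low[u] = min(low[u], low[v])
                if low[v] > disc[u]:        # v's subtree can't reach above u
                    bridges.append(edges[eid])

    for node in list(graph):
        if node not in disc:
            dfs(node, None)
    return bridges

# Two triangles joined by a single edge: that edge is the only bridge.
edges = [(0, 1), (1, 2), (2, 0), (2, 3), (3, 4), (4, 5), (5, 3)]
print(find_bridges(edges))  # [(2, 3)]
```

Removing a bridge splits its component in two; the paper's "bridgeness" measure then quantifies how damaging each such removal is.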

  3. Foundational Performance Analyses of Pressure Gain Combustion Thermodynamic Benefits for Gas Turbines

    NASA Technical Reports Server (NTRS)

    Paxson, Daniel E.; Kaemming, Thomas A.

    2012-01-01

    A methodology is described whereby the work extracted by a turbine exposed to the fundamentally nonuniform flowfield from a representative pressure gain combustor (PGC) may be assessed. The method uses an idealized constant volume cycle, often referred to as an Atkinson or Humphrey cycle, to model the PGC. Output from this model is used as input to a scalable turbine efficiency function (i.e., a map), which in turn allows for the calculation of useful work throughout the cycle. Integration over the entire cycle yields mass-averaged work extraction. The unsteady turbine work extraction is compared to steady work extraction calculations based on various averaging techniques for characterizing the combustor exit pressure and temperature. It is found that averages associated with momentum flux (as opposed to entropy or kinetic energy) provide the best match. This result suggests that momentum-based averaging is the most appropriate figure-of-merit to use as a PGC performance metric. Using the mass-averaged work extraction methodology, it is also found that the design turbine pressure ratio for maximum work extraction is significantly higher than that for a turbine fed by a constant pressure combustor with similar inlet conditions and equivalence ratio. Limited results are presented whereby the constant volume cycle is replaced by output from a detonation-based PGC simulation. The results in terms of averaging techniques and design pressure ratio are similar.

  4. Knowledge about dietary fibres (KADF): development and validation of an evaluation instrument through structural equation modelling (SEM).

    PubMed

    Guiné, R P F; Duarte, J; Ferreira, M; Correia, P; Leal, M; Rumbak, I; Barić, I C; Komes, D; Satalić, Z; Sarić, M M; Tarcea, M; Fazakas, Z; Jovanoska, D; Vanevski, D; Vittadini, E; Pellegrini, N; Szűcs, V; Harangozó, J; El-Kenawy, A; El-Shenawy, O; Yalçın, E; Kösemeci, C; Klava, D; Straumite, E

    2016-09-01

    Because there is scientific evidence that an appropriate intake of dietary fibre should be part of a healthy diet, given its importance in promoting health, the present study aimed to develop and validate an instrument to evaluate the knowledge of the general population about dietary fibres. The present study was a cross-sectional study. The methodological study of psychometric validation was conducted with 6010 participants, residing in 10 countries from three continents. The instrument is a self-response questionnaire aimed at collecting information on knowledge about dietary fibres. Exploratory factor analysis (EFA) was carried out as a principal components analysis using varimax orthogonal rotation, retaining factors with eigenvalues greater than 1. Confirmatory factor analysis by structural equation modelling (SEM) considered the covariance matrix and adopted the maximum likelihood algorithm for parameter estimation. Exploratory factor analysis retained two factors. The first was called dietary fibre and promotion of health (DFPH) and included seven questions that explained 33.94% of total variance (α = 0.852). The second was named sources of dietary fibre (SDF) and included four questions that explained 22.46% of total variance (α = 0.786). The model was tested by SEM, giving a final solution with four questions in each factor. This model showed a very good fit on practically all the indexes considered, except for the ratio χ(2)/df. The values of average variance extracted (0.458 and 0.483) demonstrate the existence of convergent validity; the results also prove the existence of discriminant validity of the factors (r(2) = 0.028); and finally, good internal consistency was confirmed by the values of composite reliability (0.854 and 0.787). This study allowed validating the KADF scale, increasing the degree of confidence in the information obtained through this instrument in this and in future studies. Copyright © 2016 The Royal Society for Public Health. Published by Elsevier Ltd. All rights reserved.
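The convergent-validity figures above follow directly from standardized factor loadings: average variance extracted (AVE) is the mean squared loading, and composite reliability (CR) compares the squared sum of loadings against the summed error variances (taken as 1 − loading² per item). The loadings below are illustrative, not the study's:

```python
def ave_and_cr(loadings):
    """Average variance extracted and composite reliability from a factor's
    standardized loadings, with error variance of item i taken as 1 - l_i**2."""
    lam2 = [l * l for l in loadings]
    ave = sum(lam2) / len(loadings)
    s = sum(loadings)
    cr = s * s / (s * s + sum(1.0 - x for x in lam2))
    return ave, cr

# Four items loading on one factor (made-up loadings for illustration).
ave, cr = ave_and_cr([0.70, 0.68, 0.66, 0.65])
print(round(ave, 3), round(cr, 3))  # 0.453 0.768
```

The conventional cut-offs are AVE ≥ 0.5 and CR ≥ 0.7; as in the study, a factor can fall slightly below the AVE threshold while still showing acceptable composite reliability.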

  5. Principal Components Analysis Studies of Martian Clouds

    NASA Astrophysics Data System (ADS)

    Klassen, D. R.; Bell, J. F., III

    2001-11-01

    We present the principal components analysis (PCA) of absolutely calibrated multi-spectral images of Mars as a function of Martian season. The PCA technique is a mathematical rotation and translation of the data from a brightness/wavelength space to a vector space of principal "traits" that lie along the directions of maximal variance. The first of these traits, accounting for over 90% of the data variance, is overall brightness and is represented by an average Mars spectrum. Interpretation of the remaining traits, which account for the remaining ~10% of the variance, is not always the same; it depends upon what other components are in the scene and thus varies with Martian season. For example, during seasons with large amounts of water ice in the scene, the second trait correlates with the ice and anti-correlates with temperature. We investigate the interpretation of the second and successive important PCA traits. Although these PCA traits are orthogonal in their own vector space, it is unlikely that any one trait represents a singular, mineralogic, spectral endmember. It is more likely that many spectral endmembers vary identically to within the noise level, such that the PCA technique cannot distinguish them. Another possibility is that similar absorption features among spectral endmembers may be tied to one PCA trait, for example "amount of 2 μm absorption". We thus attempt to extract spectral endmembers by matching linear combinations of the PCA traits to USGS, JHU, and JPL spectral libraries as acquired through the JPL ASTER project. The recovered spectral endmembers are then linearly combined to model the multi-spectral image set. We present here the spectral abundance maps of the water ice/frost endmember, which allow us to track Martian clouds and ground frosts. This work was supported in part through NASA Planetary Astronomy Grant NAG5-6776. All data were gathered at the NASA Infrared Telescope Facility in collaboration with the telescope operators and with thanks to the support staff and day crew.

  6. A de-noising method using the improved wavelet threshold function based on noise variance estimation

    NASA Astrophysics Data System (ADS)

    Liu, Hui; Wang, Weida; Xiang, Changle; Han, Lijin; Nie, Haizhao

    2018-01-01

    Precise and efficient noise variance estimation is very important for processing all kinds of signals when using the wavelet transform to analyze signals and extract signal features. In view of the problem that the accuracy of traditional noise variance estimation is greatly affected by fluctuations in the noise values, this study puts forward a strategy of using a two-state Gaussian mixture model to classify the high-frequency wavelet coefficients at the minimum scale, which takes both efficiency and accuracy into account. Based on the noise variance estimate, a novel improved wavelet threshold function is proposed by combining the advantages of the hard and soft threshold functions, and on the basis of the noise variance estimation algorithm and the improved threshold function, a novel wavelet threshold de-noising method is put forward. The method is tested and validated using random signals and bench test data of an electro-mechanical transmission system. The test results indicate that the wavelet threshold de-noising method based on noise variance estimation shows preferable performance in processing the test signals of the electro-mechanical transmission system: it can effectively eliminate the interference of transient signals, including voltage, current, and oil pressure, while favorably maintaining the dynamic characteristics of the signals.
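In outline, the de-noising step needs a noise-variance estimate plus a threshold function applied to the detail coefficients. The sketch below uses the common MAD noise estimator and a generic soft/hard compromise threshold; it is not the paper's specific mixture-model estimator or its exact improved function:

```python
import numpy as np

def mad_noise_sigma(detail_coeffs):
    """Robust noise estimate: median absolute deviation scaled for Gaussians."""
    return float(np.median(np.abs(detail_coeffs)) / 0.6745)

def hybrid_threshold(c, t, alpha=0.5):
    """Soft/hard compromise: coefficients below t are zeroed; larger ones are
    shrunk by only alpha*t, approaching hard thresholding as alpha -> 0."""
    return np.where(np.abs(c) <= t, 0.0, np.sign(c) * (np.abs(c) - alpha * t))

rng = np.random.default_rng(8)
clean = np.concatenate([np.zeros(500), 5.0 * np.ones(50)])   # toy "signal"
noisy = clean + 0.3 * rng.normal(size=clean.size)

sigma = mad_noise_sigma(noisy[:500])            # noise-dominated region here
t = sigma * np.sqrt(2.0 * np.log(noisy.size))   # universal threshold
den = hybrid_threshold(noisy, t)
print(np.abs(den - clean).mean() < np.abs(noisy - clean).mean())  # True
```

The `alpha` parameter is the design knob: soft thresholding (alpha = 1) over-shrinks large coefficients, hard thresholding (alpha = 0) is discontinuous, and a compromise aims to keep the advantages of both, which is the spirit of the improved function described above.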

  7. A comparison of maximum likelihood and other estimators of eigenvalues from several correlated Monte Carlo samples

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Beer, M.

    1980-12-01

    The maximum likelihood method for the multivariate normal distribution is applied to the case of several individual eigenvalues. Correlated Monte Carlo estimates of the eigenvalue are assumed to follow this prescription and aspects of the assumption are examined. Monte Carlo cell calculations using the SAM-CE and VIM codes for the TRX-1 and TRX-2 benchmark reactors, and SAM-CE full core results, are analyzed with this method. Variance reductions of a few percent to a factor of 2 are obtained from maximum likelihood estimation as compared with the simple average and the minimum variance individual eigenvalue. The numerical results verify that the use of sample variances and correlation coefficients in place of the corresponding population statistics still leads to nearly minimum variance estimation for a sufficient number of histories and aggregates.
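
    The maximum likelihood combination of correlated, jointly normal estimates of a single eigenvalue has a closed form: with covariance matrix Sigma between the estimators, the ML (generalized least squares) estimate is mu_hat = (1' Sigma^-1 x) / (1' Sigma^-1 1), with variance 1 / (1' Sigma^-1 1). A sketch with invented numbers (not SAM-CE or VIM output) shows the variance reduction relative to the simple average:

```python
import numpy as np

# Three correlated Monte Carlo estimates of the same eigenvalue, with an
# assumed covariance matrix Sigma between the estimators (illustrative only).
x = np.array([1.002, 0.998, 1.005])
Sigma = np.array([
    [4.0, 1.5, 1.0],
    [1.5, 3.0, 0.8],
    [1.0, 0.8, 5.0],
]) * 1e-6

# ML estimate of the common mean under multivariate normality:
#   mu_hat = (1' Sigma^-1 x) / (1' Sigma^-1 1),  var = 1 / (1' Sigma^-1 1)
ones = np.ones_like(x)
w = np.linalg.solve(Sigma, ones)          # Sigma^-1 1
mu_hat = w @ x / (w @ ones)
var_hat = 1.0 / (w @ ones)

# Variance of the plain (unweighted) average, for comparison.
simple_avg_var = ones @ Sigma @ ones / len(x) ** 2
print(f"ML estimate: {mu_hat:.5f}, variance {var_hat:.2e} vs {simple_avg_var:.2e}")
```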

  8. Empirical Bayes estimation of undercount in the decennial census.

    PubMed

    Cressie, N

    1989-12-01

    Empirical Bayes methods are used to estimate the extent of the undercount at the local level in the 1980 U.S. census. "Grouping of like subareas from areas such as states, counties, and so on into strata is a useful way of reducing the variance of undercount estimators. By modeling the subareas within a stratum to have a common mean and variances inversely proportional to their census counts, and by taking into account sampling of the areas (e.g., by dual-system estimation), empirical Bayes estimators that compromise between the (weighted) stratum average and the sample value can be constructed. The amount of compromise is shown to depend on the relative importance of stratum variance to sampling variance. These estimators are evaluated at the state level (51 states, including Washington, D.C.) and stratified on race/ethnicity (3 strata) using data from the 1980 postenumeration survey (PEP 3-8, for the noninstitutional population)." (excerpt)
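
    The compromise estimator described in the excerpt can be sketched as follows; the undercount values, the sampling variances, and the assumption that the stratum variance tau2 is known are all illustrative stand-ins, not the paper's dual-system estimates:

```python
import numpy as np

# Hypothetical undercount rates (percent) for subareas within one stratum,
# each measured with its own sampling variance (values are illustrative).
sample_vals = np.array([1.8, 2.5, 0.9, 3.1, 1.2])
sampling_var = np.array([0.40, 0.25, 0.60, 0.30, 0.50])

# Stratum-level model: subareas share a common mean; the between-subarea
# (stratum) variance tau2 is assumed known here for simplicity.
tau2 = 0.35
stratum_mean = np.average(sample_vals, weights=1.0 / (tau2 + sampling_var))

# Empirical Bayes compromise: shrink each sample value toward the stratum
# mean; the weight depends on sampling variance relative to stratum variance.
shrink = sampling_var / (tau2 + sampling_var)
eb_est = shrink * stratum_mean + (1.0 - shrink) * sample_vals
print(np.round(eb_est, 3))
```

Each estimate lands between the raw sample value and the stratum average, with noisier subareas (larger sampling variance) pulled further toward the stratum mean, which is the "amount of compromise" the excerpt describes.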

  9. Factor analysis of the catatonia rating scale and catatonic symptom distribution across four diagnostic groups.

    PubMed

    Krüger, Stephanie; Bagby, R Michael; Höffler, Jürgen; Bräunig, Peter

    2003-01-01

    Catatonia is a frequent psychomotor syndrome, which has received increasing recognition over the last decade. The assessment of the catatonic syndrome requires systematic rating scales that cover the complex spectrum of catatonic motor signs and behaviors. The Catatonia Rating Scale (CRS) is such an instrument, which has been validated and which has undergone extensive reliability testing. In the present study, to further validate the CRS, the items composing this scale were submitted to principal components factor extraction followed by a varimax rotation. An analysis of variance (ANOVA) was performed to assess group differences on the extracted factors in patients with schizophrenia, pure mania, mixed mania, and major depression (N=165). Four factors were extracted, which accounted for 71.5% of the variance. The factors corresponded to the clinical syndromes of (1) catatonic excitement, (2) abnormal involuntary movements/mannerisms, (3) disturbance of volition/catalepsy, and (4) catatonic inhibition. The ANOVA revealed that each of the groups showed a distinctive catatonic symptom pattern and that the overlap between diagnostic groups was minimal. We conclude that this four-factor symptom structure of catatonia challenges the current conceptualization, which proposes only two symptom subtypes.

  10. Combining NMR spectral and structural data to form models of polychlorinated dibenzodioxins, dibenzofurans, and biphenyls binding to the AhR

    NASA Astrophysics Data System (ADS)

    Beger, Richard D.; Buzatu, Dan A.; Wilkes, Jon G.

    2002-10-01

    A three-dimensional quantitative spectrometric data-activity relationship (3D-QSDAR) modeling technique which uses NMR spectral and structural information combined in a 3D-connectivity matrix has been developed. A 3D-connectivity matrix was built by displaying all possible assigned carbon NMR chemical shifts, carbon-to-carbon connections, and distances between the carbons. Two-dimensional 13C-13C COSY and 2D slices from the distance dimension of the 3D-connectivity matrix were used to produce a relationship among the 2D spectral patterns for polychlorinated dibenzofurans, dibenzodioxins, and biphenyls (PCDFs, PCDDs, and PCBs, respectively) binding to the aryl hydrocarbon receptor (AhR). We refer to this technique as comparative structural connectivity spectral analysis (CoSCoSA) modeling. All CoSCoSA models were developed using forward multiple linear regression analysis of the predicted 13C NMR structure-connectivity spectral bins. A CoSCoSA model for 26 PCDFs had an explained variance (r²) of 0.93 and an average leave-four-out cross-validated variance (q4²) of 0.89. A CoSCoSA model for 14 PCDDs produced an r² of 0.90 and an average leave-two-out cross-validated variance (q2²) of 0.79. One CoSCoSA model for 12 PCBs gave an r² of 0.91 and an average q2² of 0.80. Another CoSCoSA model for all 52 compounds had an r² of 0.85 and an average q4² of 0.52. Major benefits of CoSCoSA modeling include ease of development, since the technique does not use molecular docking routines.

  11. Heritability of methane emissions from dairy cows over a lactation measured on commercial farms.

    PubMed

    Pszczola, M; Rzewuska, K; Mucha, S; Strabel, T

    2017-11-01

    Methane emission is currently an important trait in studies on ruminants due to its environmental and economic impact. Recent studies were based on short-time measurements on individual cows. As methane emission is a longitudinal trait, it is important to investigate its changes over a full lactation. In this study, we aimed to estimate the heritability of the estimated methane emissions from dairy cows using Fourier-transform infrared spectroscopy during milking in an automated milking system by implementing the random regression method. The methane measurements were taken on 485 Polish Holstein-Friesian cows at 2 commercial farms located in western Poland. The overall daily estimated methane emission was 279 g/d. Genetic variance fluctuated over the course of lactation around the average level of 1,509 (g/d)², with the highest level, 1,866 (g/d)², at the end of the lactation. The permanent environment variance values started at 2,865 (g/d)² and then dropped to around 846 (g/d)² at 100 d in milk (DIM) to reach the level of 2,444 (g/d)² at the end of lactation. The residual variance was estimated at 2,620 (g/d)². The average repeatability was 0.25. The heritability level fluctuated over the course of lactation, starting at 0.23 (SE 0.12) and then increasing to its maximum value of 0.3 (SE 0.08) at 212 DIM and ending at the level of 0.27 (SE 0.12). Average heritability was 0.27 (average SE 0.09). We have shown that estimated methane emission is a heritable trait and that the heritability level changes over the course of lactation. The observed changes and low genetic correlations between distant DIM suggest that it may be important to consider the period in which methane phenotypes are collected.

  12. Prenatal Influences on Human Sexual Orientation: Expectations versus Data.

    PubMed

    Breedlove, S Marc

    2017-08-01

    In non-human vertebrate species, sexual differentiation of the brain is primarily driven by androgens such as testosterone organizing the brains of males in a masculine fashion early in life, while the lower levels of androgen in developing females organize their brains in a feminine fashion. These principles may be relevant to the development of sexual orientation in humans, because retrospective markers of prenatal androgen exposure, namely digit ratios and otoacoustic emissions, indicate that lesbians, on average, were exposed to greater prenatal androgen than were straight women. Thus, the even greater levels of prenatal androgen exposure experienced by fetal males may explain why the vast majority of them grow up to be attracted to women. However, the same markers indicate no significant differences between gay and straight men in terms of average prenatal androgen exposure, so the variance in orientation in men cannot be accounted for by variance in prenatal androgen exposure, but may be due to variance in response to prenatal androgens. These data contradict several popular notions about human sexual orientation. Sexual orientation in women is said to be fluid, sometimes implying that only social influences in adulthood are at work, yet the data indicate prenatal influences matter as well. Gay men are widely perceived as under-masculinized, yet the data indicate they are exposed to as much prenatal androgen as straight men. There is growing sentiment to reject "binary" conceptions of human sexual orientations, to emphasize instead a spectrum of orientations. Yet the data indicate that human sexual orientation is sufficiently polarized that groups of lesbians, on average, show evidence of greater prenatal androgen exposure than groups of straight women, while groups of gay men have, on average, a greater proportion of brothers among their older siblings than do straight men.

  13. Differences in the Predictors of Reading Comprehension in First Graders from Low Socio-Economic Status Families with Either Good or Poor Decoding Skills

    PubMed Central

    Gentaz, Edouard; Sprenger-Charolles, Liliane; Theurel, Anne

    2015-01-01

    Based on the assumption that good decoding skills constitute a bootstrapping mechanism for reading comprehension, the present study investigated the relative contribution of the former skill to the latter compared to that of three other predictors of reading comprehension (listening comprehension, vocabulary and phonemic awareness) in 392 French-speaking first graders from low SES families. This large sample was split into three groups according to their level of decoding skills assessed by pseudoword reading. Using a cutoff of 1 SD above or below the mean of the entire population, there were 63 good decoders, 267 average decoders and 62 poor decoders. 58% of the variance in reading comprehension was explained by our four predictors, with decoding skills proving to be the best predictor (12.1%, 7.3% for listening comprehension, 4.6% for vocabulary and 3.3% for phonemic awareness). Interaction between group versus decoding skills, listening comprehension and phonemic awareness accounted for significant additional variance (3.6%, 1.1% and 1.0%, respectively). The effects on reading comprehension of decoding skills and phonemic awareness were higher in poor and average decoders than in good decoders whereas listening comprehension accounted for more variance in good and average decoders than in poor decoders. Furthermore, the percentage of children with impaired reading comprehension skills was higher in the group of poor decoders (55%) than in the two other groups (average decoders: 7%; good decoders: 0%) and only 6 children (1.5%) had impaired reading comprehension skills with unimpaired decoding skills, listening comprehension or vocabulary. These results challenge the outcomes of studies on “poor comprehenders” by showing that, at least in first grade, poor reading comprehension is strongly linked to the level of decoding skills. PMID:25793519

  14. Differences in the predictors of reading comprehension in first graders from low socio-economic status families with either good or poor decoding skills.

    PubMed

    Gentaz, Edouard; Sprenger-Charolles, Liliane; Theurel, Anne

    2015-01-01

    Based on the assumption that good decoding skills constitute a bootstrapping mechanism for reading comprehension, the present study investigated the relative contribution of the former skill to the latter compared to that of three other predictors of reading comprehension (listening comprehension, vocabulary and phonemic awareness) in 392 French-speaking first graders from low SES families. This large sample was split into three groups according to their level of decoding skills assessed by pseudoword reading. Using a cutoff of 1 SD above or below the mean of the entire population, there were 63 good decoders, 267 average decoders and 62 poor decoders. 58% of the variance in reading comprehension was explained by our four predictors, with decoding skills proving to be the best predictor (12.1%, 7.3% for listening comprehension, 4.6% for vocabulary and 3.3% for phonemic awareness). Interaction between group versus decoding skills, listening comprehension and phonemic awareness accounted for significant additional variance (3.6%, 1.1% and 1.0%, respectively). The effects on reading comprehension of decoding skills and phonemic awareness were higher in poor and average decoders than in good decoders whereas listening comprehension accounted for more variance in good and average decoders than in poor decoders. Furthermore, the percentage of children with impaired reading comprehension skills was higher in the group of poor decoders (55%) than in the two other groups (average decoders: 7%; good decoders: 0%) and only 6 children (1.5%) had impaired reading comprehension skills with unimpaired decoding skills, listening comprehension or vocabulary. These results challenge the outcomes of studies on "poor comprehenders" by showing that, at least in first grade, poor reading comprehension is strongly linked to the level of decoding skills.

  15. Inter-individual Differences in the Effects of Aircraft Noise on Sleep Fragmentation

    PubMed Central

    McGuire, Sarah; Müller, Uwe; Elmenhorst, Eva-Maria; Basner, Mathias

    2016-01-01

    Study Objectives: Environmental noise exposure disturbs sleep and impairs recuperation, and may contribute to the increased risk for (cardiovascular) disease. Noise policy and regulation are usually based on average responses despite potentially large inter-individual differences in the effects of traffic noise on sleep. In this analysis, we investigated what percentage of the total variance in noise-induced awakening reactions can be explained by stable inter-individual differences. Methods: We investigated 69 healthy subjects polysomnographically (mean ± standard deviation 40 ± 13 years, range 18–68 years, 32 male) in this randomized, balanced, double-blind, repeated measures laboratory study. This study included one adaptation night, 9 nights with exposure to 40, 80, or 120 road, rail, and/or air traffic noise events (including one noise-free control night), and one recovery night. Results: Mixed-effects models of variance controlling for reaction probability in noise-free control nights, age, sex, number of noise events, and study night showed that 40.5% of the total variance in awakening probability and 52.0% of the total variance in EEG arousal probability were explained by inter-individual differences. If the data set was restricted to the 4 exposure nights with 80 noise events per night, 46.7% of the total variance in awakening probability and 57.9% of the total variance in EEG arousal probability were explained by inter-individual differences. The results thus demonstrate that, even in this relatively homogeneous, healthy, adult study population, a considerable amount of the variance observed in noise-induced sleep disturbance can be explained by inter-individual differences that cannot be explained by age, gender, or specific study design aspects. Conclusions: It will be important to identify those at higher risk for noise-induced sleep disturbance. 
Furthermore, the custom of basing noise policy and legislation on average responses should be re-assessed based on these findings. Citation: McGuire S, Müller U, Elmenhorst EM, Basner M. Inter-individual differences in the effects of aircraft noise on sleep fragmentation. SLEEP 2016;39(5):1107–1110. PMID:26856901

  16. Perceiving groups: The people perception of diversity and hierarchy.

    PubMed

    Phillips, L Taylor; Slepian, Michael L; Hughes, Brent L

    2018-05-01

    The visual perception of individuals has received considerable attention (visual person perception), but little social psychological work has examined the processes underlying the visual perception of groups of people (visual people perception). Ensemble-coding is a visual mechanism that automatically extracts summary statistics (e.g., average size) of lower-level sets of stimuli (e.g., geometric figures), and also extends to the visual perception of groups of faces. Here, we consider whether ensemble-coding supports people perception, allowing individuals to form rapid, accurate impressions about groups of people. Across nine studies, we demonstrate that people visually extract high-level properties (e.g., diversity, hierarchy) that are unique to social groups, as opposed to individual persons. Observers rapidly and accurately perceived group diversity and hierarchy, or variance across race, gender, and dominance (Studies 1-3). Further, results persist when observers are given very short display times, backward pattern masks, color- and contrast-controlled stimuli, and absolute versus relative response options (Studies 4a-7b), suggesting robust effects supported specifically by ensemble-coding mechanisms. Together, we show that humans can rapidly and accurately perceive not only individual persons, but also emergent social information unique to groups of people. These people perception findings demonstrate the importance of visual processes for enabling people to perceive social groups and behave effectively in group-based social interactions.

  17. Biasing and High-Order Statistics from the Southern-Sky Redshift Survey

    NASA Astrophysics Data System (ADS)

    Benoist, C.; Cappi, A.; da Costa, L. N.; Maurogordato, S.; Bouchet, F. R.; Schaeffer, R.

    1999-04-01

    We analyze different volume-limited samples extracted from the Southern-Sky Redshift Survey (SSRS2), using counts-in-cells to compute the count probability distribution function (CPDF). From the CPDF we derive volume-averaged correlation functions to fourth order and the normalized skewness and kurtosis, S3 = ξ̄3/ξ̄2² and S4 = ξ̄4/ξ̄2³. We find that the data satisfy the hierarchical relations in the range 0.3 ≲ ξ̄2 ≲ 10. In this range we find S3 to be scale independent, with a value of ~1.8, in good agreement with the values measured from other optical redshift surveys probing different volumes, but significantly smaller than that inferred from the Automatic Plate Measuring Facility (APM) angular catalog. In addition, the measured values of S3 do not show a significant dependence on the luminosity of the galaxies considered. This result is supported by several tests of systematic errors that could affect our measurements and by estimates of the cosmic variance determined from mock catalogs extracted from N-body simulations. This result is in marked contrast to what would be expected from the strong dependence of the two-point correlation function on luminosity in the framework of a linear biasing model. We discuss the implications of our results and compare them to some recent models of the galaxy distribution that address the problem of bias.

  18. Extraction of natural anthocyanin and colors from pulp of jamun fruit.

    PubMed

    Maran, J Prakash; Sivakumar, V; Thirugnanasambandham, K; Sridhar, R

    2015-06-01

    In the present study, natural pigment and colors from the pulp of jamun fruit were extracted under different extraction conditions, namely extraction temperature (40-60 ˚C), time (20-100 min) and solid-liquid ratio (1:10-1:15 g/ml), by an aqueous extraction method. A three-factor, three-level Box-Behnken response surface design was employed to optimize and investigate the effect of the process variables on the responses (total anthocyanin and color). The results were analyzed by Pareto analysis of variance (ANOVA), and second-order polynomial models were developed to predict the responses. The optimum extraction conditions for maximizing the extraction yield of total anthocyanin (10.58 mg/100 g) and colors (10618.3 mg/l) were found to be: an extraction temperature of 44 °C, an extraction time of 93 min and a solid-liquid ratio of 1:15 g/ml. Under these conditions, the experimental values closely agreed with the predicted values.

  19. Developing an item bank to measure the coping strategies of people with hereditary retinal diseases.

    PubMed

    Prem Senthil, Mallika; Khadka, Jyoti; De Roach, John; Lamey, Tina; McLaren, Terri; Campbell, Isabella; Fenwick, Eva K; Lamoureux, Ecosse L; Pesudovs, Konrad

    2018-05-05

    Our understanding of the coping strategies used by people with visual impairment to manage stress related to visual loss is limited. This study aims to develop a sophisticated coping instrument in the form of an item bank implemented via computerised adaptive testing (CAT) for hereditary retinal diseases. Items on coping were extracted from qualitative interviews with patients and supplemented by items from a literature review. A systematic multi-stage process of item refinement was carried out, followed by expert panel discussion and cognitive interviews. The final coping item bank had 30 items. Rasch analysis was used to assess the psychometric properties. A CAT simulation was carried out to estimate the average number of items required to gain precise measurement of hereditary retinal disease-related coping. One hundred eighty-nine participants answered the coping item bank (median age = 58 years). The coping scale demonstrated good precision and targeting. The standardised residual loadings for items revealed six items grouped together. Removal of the six items reduced the precision of the main coping scale and worsened the variance explained by the measure. Therefore, the six items were retained within the main scale. Our CAT simulation indicated that, on average, fewer than 10 items are required to gain a precise measurement of coping. This is the first study to develop a psychometrically robust coping instrument for hereditary retinal diseases. The CAT simulation indicated that, on average, only four and nine items were required to gain measurement at moderate and high precision, respectively.

  20. Quantifying Systemic Risk by Solutions of the Mean-Variance Risk Model.

    PubMed

    Jurczyk, Jan; Eckrot, Alexander; Morgenstern, Ingo

    2016-01-01

    The world is still recovering from the financial crisis peaking in September 2008. The triggering event was the bankruptcy of Lehman Brothers. To detect such turmoils, one can investigate the time-dependent behaviour of correlations between assets or indices. These cross-correlations have been connected to the systemic risks within markets by several studies in the aftermath of this crisis. We study 37 different US indices which cover almost all aspects of the US economy and show that monitoring an average investor's behaviour can be used to quantify times of increased risk. In this paper the overall investing strategy is approximated by the ground-states of the mean-variance model along the efficient frontier bound to real world constraints. Changes in the behaviour of the average investor are utilized as an early warning sign.
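
    For reference, the unconstrained efficient frontier of the mean-variance model has a closed-form solution. The sketch below uses invented returns and covariances for three assets (not the 37 US indices), and omits the real-world constraints that, in the paper, make a numerical ground-state search necessary:

```python
import numpy as np

# Toy mean returns and covariance matrix for three hypothetical assets
# (illustrative numbers only).
mu = np.array([0.06, 0.10, 0.08])
Sigma = np.array([
    [0.040, 0.006, 0.010],
    [0.006, 0.090, 0.012],
    [0.010, 0.012, 0.060],
])

def frontier_weights(target):
    """Closed-form mean-variance weights for a target return with only the
    budget constraint w'1 = 1 (shorting allowed, no box constraints)."""
    ones = np.ones(len(mu))
    inv = np.linalg.inv(Sigma)
    A = ones @ inv @ ones
    B = ones @ inv @ mu
    C = mu @ inv @ mu
    D = A * C - B ** 2
    lam = (C - B * target) / D
    gam = (A * target - B) / D
    return inv @ (lam * ones + gam * mu)

w = frontier_weights(0.08)
print(np.round(w, 4), f"risk: {np.sqrt(w @ Sigma @ w):.4f}")
```

Tracking how such frontier weights drift over time is one simple proxy for the "average investor's behaviour" the abstract refers to.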

  1. Quantifying Systemic Risk by Solutions of the Mean-Variance Risk Model

    PubMed Central

    Morgenstern, Ingo

    2016-01-01

    The world is still recovering from the financial crisis peaking in September 2008. The triggering event was the bankruptcy of Lehman Brothers. To detect such turmoils, one can investigate the time-dependent behaviour of correlations between assets or indices. These cross-correlations have been connected to the systemic risks within markets by several studies in the aftermath of this crisis. We study 37 different US indices which cover almost all aspects of the US economy and show that monitoring an average investor’s behaviour can be used to quantify times of increased risk. In this paper the overall investing strategy is approximated by the ground-states of the mean-variance model along the efficient frontier bound to real world constraints. Changes in the behaviour of the average investor are utilized as an early warning sign. PMID:27351482

  2. Versatile Gaussian probes for squeezing estimation

    NASA Astrophysics Data System (ADS)

    Rigovacca, Luca; Farace, Alessandro; Souza, Leonardo A. M.; De Pasquale, Antonella; Giovannetti, Vittorio; Adesso, Gerardo

    2017-05-01

    We consider an instance of "black-box" quantum metrology in the Gaussian framework, where we aim to estimate the amount of squeezing applied on an input probe, without previous knowledge on the phase of the applied squeezing. By taking the quantum Fisher information (QFI) as the figure of merit, we evaluate its average and variance with respect to this phase in order to identify probe states that yield good precision for many different squeezing directions. We first consider the case of single-mode Gaussian probes with the same energy, and find that pure squeezed states maximize the average quantum Fisher information (AvQFI) at the cost of a performance that oscillates strongly as the squeezing direction is changed. Although the variance can be brought to zero by correlating the probing system with a reference mode, the maximum AvQFI cannot be increased in the same way. A different scenario opens if one takes into account the effects of photon losses: coherent states represent the optimal single-mode choice when losses exceed a certain threshold and, moreover, correlated probes can now yield larger AvQFI values than all single-mode states, on top of having zero variance.

  3. Meta-analysis of the relationships between Kerr and Jermier's substitutes for leadership and employee job attitudes, role perceptions, and performance.

    PubMed

    Podsakoff, P M; MacKenzie, S B; Bommer, W H

    1996-08-01

    A meta-analysis was conducted to estimate more accurately the bivariate relationships between leadership behaviors, substitutes for leadership, and subordinate attitudes, role perceptions, and performance, and to examine the relative strengths of the relationships between these variables. Estimates of 435 relationships were obtained from 22 studies containing 36 independent samples. The findings showed that the combination of the substitutes variables and leader behaviors accounts for a majority of the variance in employee attitudes and role perceptions and a substantial proportion of the variance in in-role and extra-role performance; on average, the substitutes for leadership uniquely accounted for more of the variance in the criterion variables than did leader behaviors.

  4. Examination of Variables That May Affect the Relationship Between Cognition and Functional Status in Individuals with Mild Cognitive Impairment: A Meta-Analysis.

    PubMed

    Mcalister, Courtney; Schmitter-Edgecombe, Maureen; Lamb, Richard

    2016-03-01

    The objective of this meta-analysis was to improve understanding of the heterogeneity in the relationship between cognition and functional status in individuals with mild cognitive impairment (MCI). Demographic, clinical, and methodological moderators were examined. Cognition explained an average of 23% of the variance in functional outcomes. Executive function measures explained the largest amount of variance (37%), whereas global cognitive status and processing speed measures explained the least (20%). Short- and long-delayed memory measures accounted for more variance (35% and 31%) than immediate memory measures (18%), and the relationship between cognition and functional outcomes was stronger when assessed with informant-report (28%) compared with self-report (21%). Demographics, sample characteristics, and type of everyday functioning measures (i.e., questionnaire, performance-based) explained relatively little variance compared with cognition. Executive functioning, particularly measured by Trails B, was a strong predictor of everyday functioning in individuals with MCI. A large proportion of variance remained unexplained by cognition.

  5. Errors in the estimation of the variance: implications for multiple-probability fluctuation analysis.

    PubMed

    Saviane, Chiara; Silver, R Angus

    2006-06-15

    Synapses play a crucial role in information processing in the brain. Amplitude fluctuations of synaptic responses can be used to extract information about the mechanisms underlying synaptic transmission and its modulation. In particular, multiple-probability fluctuation analysis can be used to estimate the number of functional release sites, the mean probability of release and the amplitude of the mean quantal response from fits of the relationship between the variance and mean amplitude of postsynaptic responses, recorded at different probabilities. To determine these quantal parameters, calculate their uncertainties and assess the goodness-of-fit of the model, it is important to weight the contribution of each data point in the fitting procedure. We therefore investigated the errors associated with measuring the variance by determining the best estimators of the variance of the variance, and used simulations of synaptic transmission to test their accuracy and reliability under different experimental conditions. For central synapses, which generally have a low number of release sites, the amplitude distribution of synaptic responses is not normal, thus the use of a theoretical variance of the variance based on the normal assumption is not a good approximation. However, appropriate estimators can be derived for the population and for limited sample sizes using a more general expression that involves higher moments and introduces unbiased estimators based on the h-statistics. Our results are likely to be relevant for various applications of fluctuation analysis when few channels or release sites are present.
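
    The central point above, that the variance of the sample variance depends on the fourth central moment and exceeds the normal-theory value for non-normal amplitude distributions, can be checked numerically; the exponential population below is an arbitrary illustrative choice, not a model of synaptic amplitudes:

```python
import numpy as np

rng = np.random.default_rng(2)

# For a sample of size n, the variance of the unbiased sample variance s^2 is
#   Var(s^2) = mu4/n - sigma^4 (n - 3) / (n (n - 1)),
# which reduces to 2 sigma^4 / (n - 1) only for normal data (mu4 = 3 sigma^4).
# Monte Carlo check on a skewed (exponential) population, where the normal
# approximation badly underestimates the true spread of s^2.
n, trials = 20, 200_000
samples = rng.exponential(1.0, size=(trials, n))   # sigma^2 = 1, mu4 = 9
s2 = samples.var(axis=1, ddof=1)

empirical = s2.var()
exact = 9.0 / n - 1.0 * (n - 3) / (n * (n - 1))
normal_approx = 2.0 / (n - 1)
print(f"empirical {empirical:.4f}  exact {exact:.4f}  normal approx {normal_approx:.4f}")
```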

  6. New method of extracting information on arterial oxygen saturation based on ∑|Δ|

    NASA Astrophysics Data System (ADS)

    Dai, Wenting; Lin, Ling; Li, Gang

    2017-04-01

    Noninvasive detection of oxygen saturation with near-infrared spectroscopy has been widely used in clinics. In order to further enhance its detection precision and reliability, this paper proposes a method of time-domain absolute difference summation (∑|Δ|) based on a dynamic spectrum. In this method, the ratio of the absolute differences between pairs of differential sampling points at the same moment on the logarithmic photoplethysmography signals of red and infrared light is computed in turn, yielding a ratio sequence that is screened with a statistical method. Finally, the summation of the screened ratio sequence is used as the oxygen saturation coefficient Q. We collected 120 reference samples of SpO2 and compared the results of two methods, ∑|Δ| and peak-peak. The average root-mean-square errors of the two methods were 3.02% and 6.80%, respectively, in 20 randomly selected cases. In addition, the average variance of Q for 10 samples obtained by the new method was reduced to 22.77% of that obtained by the peak-peak method. Compared with the commercial product, the new method produces more accurate results. Theoretical and experimental analysis indicates that the ∑|Δ| method can enhance the precision and reliability of real-time oxygen saturation detection.
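
    A rough sketch of the pipeline described above, applied to invented red/infrared waveforms: the waveform model, the percentile-based screen, and the use of a mean of the screened ratios (rather than the paper's summation) are all illustrative assumptions, not the authors' exact algorithm.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic red/infrared PPG-like waveforms sharing the same pulsatile shape,
# scaled so that the true ratio-of-ratios is 0.6 (illustrative values only).
t = np.linspace(0, 5, 500)
pulse = 0.02 * (np.sin(2 * np.pi * 1.2 * t) + 0.3 * np.sin(2 * np.pi * 2.4 * t))
red = np.exp(0.6 * pulse) * (1 + rng.normal(0, 1e-4, t.size))
ir = np.exp(1.0 * pulse) * (1 + rng.normal(0, 1e-4, t.size))

# Ratios of |differences| of the log signals at matching sample points,
# screened for outliers; here we average the screened ratios (the paper
# sums them; the relative scale is what carries the SpO2 information).
dr = np.abs(np.diff(np.log(red)))
di = np.abs(np.diff(np.log(ir)))
ratio = dr / np.maximum(di, 1e-12)
lo, hi = np.percentile(ratio, [10, 90])          # simple statistical screen
screened = ratio[(ratio >= lo) & (ratio <= hi)]
Q = screened.mean()
print(f"estimated ratio-of-ratios: {Q:.3f}")
```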

  7. New method of extracting information on arterial oxygen saturation based on ∑|Δ|

    NASA Astrophysics Data System (ADS)

    Wenting, Dai; Ling, Lin; Gang, Li

    2017-04-01

    Noninvasive detection of oxygen saturation with near-infrared spectroscopy has been widely used in clinics. In order to further enhance its detection precision and reliability, this paper proposes a method of time domain absolute difference summation (∑|Δ|) based on a dynamic spectrum. In this method, the ratio of absolute differences between intervals of two differential sampling points at the same moment on logarithm photoplethysmography signals of red and infrared light was obtained in turn, and then they obtained a ratio sequence which was screened with a statistical method. Finally, use the summation of the screened ratio sequence as the oxygen saturation coefficient Q. We collected 120 reference samples of SpO2 and then compared the result of two methods, which are ∑|Δ| and peak-peak. Average root-mean-square errors of the two methods were 3.02% and 6.80%, respectively, in the 20 cases which were selected randomly. In addition, the average variance of Q of the 10 samples, which were obtained by the new method, reduced to 22.77% of that obtained by the peak-peak method. Comparing with the commercial product, the new method makes the results more accurate. Theoretical and experimental analysis indicates that the application of the ∑|Δ| method could enhance the precision and reliability of oxygen saturation detection in real time.

  8. Box-Behnken design for investigation of microwave-assisted extraction of patchouli oil

    NASA Astrophysics Data System (ADS)

    Kusuma, Heri Septya; Mahfud, Mahfud

    2015-12-01

    The microwave-assisted extraction (MAE) technique was employed to extract the essential oil from patchouli (Pogostemon cablin), and the optimal conditions were determined by response surface methodology. A Box-Behnken design (BBD) was applied to evaluate the effects of three independent variables (microwave power (A: 400-800 W), plant material to solvent ratio (B: 0.10-0.20 g mL-1) and extraction time (C: 20-60 min)) on the extraction yield of patchouli oil. Correlation analysis of the mathematical regression model indicated that a quadratic polynomial model could be employed to optimize the microwave extraction of patchouli oil. The optimal extraction conditions were a microwave power of 634.024 W, a plant material to solvent ratio of 0.147648 g mL-1 and an extraction time of 51.6174 min, giving a maximum patchouli oil yield of 2.80516%. Under these conditions, the experimental values agreed with those predicted by analysis of variance, indicating the high fitness of the model and the success of response surface methodology in optimizing the extraction conditions.
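
The quadratic response-surface step can be illustrated with a one-factor sketch; the yield data below are hypothetical, constructed merely to peak near the reported 634 W (the paper fits a full three-factor Box-Behnken model):

```python
import numpy as np

def quadratic_optimum(x, y):
    """Fit y = b0 + b1*x + b2*x^2 and return the stationary point
    -b1/(2*b2) - a one-factor sketch of the response-surface step."""
    b2, b1, b0 = np.polyfit(x, y, 2)   # highest-degree coefficient first
    return -b1 / (2.0 * b2)

# Hypothetical yields peaking near 634 W (illustration only)
power = np.array([400.0, 500.0, 600.0, 700.0, 800.0])
yield_pct = 2.8 - 1e-5 * (power - 634.0) ** 2
```

For the full design, the same stationary-point idea extends to solving the gradient of the fitted three-variable quadratic surface.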

  9. Estimation of genetic connectedness diagnostics based on prediction errors without the prediction error variance-covariance matrix.

    PubMed

    Holmes, John B; Dodds, Ken G; Lee, Michael A

    2017-03-02

    An important issue in genetic evaluation is the comparability of random effects (breeding values), particularly between pairs of animals in different contemporary groups. This is usually referred to as genetic connectedness. While various measures of connectedness have been proposed in the literature, there is general agreement that the most appropriate measure is some function of the prediction error variance-covariance matrix. However, obtaining the prediction error variance-covariance matrix is computationally demanding for large-scale genetic evaluations. Many alternative statistics have been proposed that avoid the computational cost of obtaining the prediction error variance-covariance matrix, such as counts of genetic links between contemporary groups, gene flow matrices, and functions of the variance-covariance matrix of estimated contemporary group fixed effects. In this paper, we show that a correction to the variance-covariance matrix of estimated contemporary group fixed effects will produce the exact prediction error variance-covariance matrix averaged by contemporary group for univariate models in the presence of single or multiple fixed effects and one random effect. We demonstrate the correction for a series of models and show that approximations to the prediction error matrix based solely on the variance-covariance matrix of estimated contemporary group fixed effects are inappropriate in certain circumstances. Our method allows for the calculation of a connectedness measure based on the prediction error variance-covariance matrix by calculating only the variance-covariance matrix of estimated fixed effects. Since the number of fixed effects in genetic evaluation is usually orders of magnitude smaller than the number of random effect levels, the computational requirements for our method should be reduced.
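
For context, a small dense sketch of the quantity whose large-scale computation the paper avoids: the prediction error variance-covariance (PEV) matrix obtained directly from Henderson's mixed model equations, assuming a univariate model with independent random effects (A = I). This is the brute-force baseline, not the paper's correction.

```python
import numpy as np

def prediction_error_variance(X, Z, sigma2_e, sigma2_u):
    """PEV of the random effects: sigma_e^2 times the random-effect
    block of the inverted mixed-model coefficient matrix (toy, A = I)."""
    lam = sigma2_e / sigma2_u
    q = Z.shape[1]
    C = np.block([[X.T @ X, X.T @ Z],
                  [Z.T @ X, Z.T @ Z + lam * np.eye(q)]])
    return sigma2_e * np.linalg.inv(C)[-q:, -q:]

# Toy data: two contemporary groups (fixed), three animals (random)
X = np.array([[1., 0.], [1., 0.], [0., 1.], [0., 1.], [0., 1.], [1., 0.]])
Z = np.array([[1., 0., 0.], [0., 1., 0.], [0., 0., 1.],
              [1., 0., 0.], [0., 1., 0.], [0., 0., 1.]])
PEV = prediction_error_variance(X, Z, sigma2_e=1.0, sigma2_u=0.5)
```

Each diagonal element of PEV lies between 0 and the random-effect variance, shrinking as records accumulate.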

  10. Metabolomics fingerprint of coffee species determined by untargeted-profiling study using LC-HRMS.

    PubMed

    Souard, Florence; Delporte, Cédric; Stoffelen, Piet; Thévenot, Etienne A; Noret, Nausicaa; Dauvergne, Bastien; Kauffmann, Jean-Michel; Van Antwerpen, Pierre; Stévigny, Caroline

    2018-04-15

    Coffee bean extracts are consumed all over the world as a beverage, and there is growing interest in coffee leaf extracts as food supplements. The wild diversity in the Coffea (Rubiaceae) genus is large and could offer new opportunities and challenges. In the present work, a metabolomics approach was implemented to examine the leaf chemical composition of 9 Coffea species grown under the same environmental conditions. Leaves were analyzed by LC-HRMS and a comprehensive statistical workflow was designed. It served for univariate hypothesis testing and multivariate modeling by PCA and PLS-DA on the Workflow4Metabolomics infrastructure. The first two axes of the PCA and PLS-DA models described more than 40% of the variance, with good values of explained variance. This strategy made it possible to relate the metabolomics data to botanical and genetic information. Finally, several key metabolites discriminating between the species were identified and further characterized. Copyright © 2017 Elsevier Ltd. All rights reserved.
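
The explained-variance figures quoted above come from PCA; a minimal SVD-based sketch on a toy intensity matrix (hypothetical values, not the study's data):

```python
import numpy as np

def explained_variance_ratio(X):
    """Fraction of total variance captured by each principal component,
    via SVD of the column-centered data matrix (minimal PCA sketch)."""
    s = np.linalg.svd(X - X.mean(axis=0), compute_uv=False)
    return s ** 2 / np.sum(s ** 2)

# Toy 'metabolite intensity' matrix whose variation is essentially
# one-dimensional, so the first axis dominates (hypothetical data)
intensities = np.array([[0.0, 0.0, 0.0],
                        [1.0, 2.0, 3.0],
                        [2.0, 4.0, 6.0],
                        [3.0, 6.0, 9.1]])
ratios = explained_variance_ratio(intensities)
```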

  11. Intra-individual variation in urinary iodine concentration: effect of statistical correction on population distribution using seasonal three-consecutive-day spot urine in children

    PubMed Central

    Ji, Xiaohong; Liu, Peng; Sun, Zhenqi; Su, Xiaohui; Wang, Wei; Gao, Yanhui; Sun, Dianjun

    2016-01-01

    Objective To determine the effect of statistical correction for intra-individual variation on estimated urinary iodine concentration (UIC) by sampling on 3 consecutive days in four seasons in children. Setting School-aged children from urban and rural primary schools in Harbin, Heilongjiang, China. Participants 748 and 640 children aged 8–11 years were recruited from urban and rural schools, respectively, in Harbin. Primary and secondary outcome measures The spot urine samples were collected once a day for 3 consecutive days in each season over 1 year. The UIC of the first day was corrected by two statistical correction methods: the average correction method (average of days 1 and 2; average of days 1, 2 and 3) and the variance correction method (UIC of day 1 corrected by two replicates and by three replicates). The variance correction method determined the SD between subjects (Sb) and within subjects (Sw), and calculated the correction coefficient (Fi), Fi = Sb/(Sb + Sw/di), where di is the number of observations; the UIC of day 1 was then corrected using this coefficient. Results The variance correction method showed that the overall Fi was 0.742 for 2 days’ correction and 0.829 for 3 days’ correction; the values for spring, summer, autumn and winter were 0.730, 0.684, 0.706 and 0.703 for 2 days’ correction and 0.809, 0.742, 0.796 and 0.804 for 3 days’ correction, respectively. After removal of the individual effect, the correlation coefficient between consecutive days was 0.224, and between non-consecutive days 0.050. Conclusions The variance correction method is effective for correcting intra-individual variation in estimated UIC following sampling on 3 consecutive days in four seasons in children. The method varies little between ages, sexes and urban or rural setting, but does vary between seasons. PMID:26920442
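
The correction coefficient follows directly from the abstract's definition. The shrinkage equation itself is not reproduced in the record, so the standard form corrected = mean + Fi × (observed − mean) is assumed in the sketch below:

```python
def correction_coefficient(s_between, s_within, d_i):
    # Fi = Sb / (Sb + Sw / di), as defined in the abstract
    return s_between / (s_between + s_within / d_i)

def correct_uic(uic_day1, group_mean, fi):
    """Shrink the day-1 value toward the group mean. The exact equation
    is missing from the record; this standard form is an assumption."""
    return group_mean + fi * (uic_day1 - group_mean)
```

With more replicate days (larger di), Fi approaches 1 and the day-1 value is trusted more; with a single noisy day it is pulled toward the group mean.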

  12. Ozone and nitrogen dioxide above the northern Tien Shan

    NASA Technical Reports Server (NTRS)

    Arefev, Vladimir N.; Volkovitsky, Oleg A.; Kamenogradsky, Nikita E.; Semyonov, Vladimir K.; Sinyakov, Valery P.

    1994-01-01

    The results of systematic long-term measurements of total ozone (since 1979) and the nitrogen dioxide column (since 1983) in the atmosphere at the center of the European-Asian continent, above the mountain mass of the Tien Shan, are given. This region is distinguished by a great number of sunny days during the year. The observation station is on the northern shore of Issyk Kul Lake (42.56 N, 77.04 E, 1650 m above sea level). The measurement results are presented as monthly averaged atmospheric total ozone and NO2 stratospheric column abundances (morning and evening). Seasonal variations of the ozone and nitrogen dioxide atmospheric contents, their regular variations with a quasi-biennial cycle, and their trends are described. Irregular variations of the ozone and nitrogen dioxide atmospheric contents, i.e. positive and negative anomalies of the monthly averaged contents relative to the long-term monthly means, have been analyzed. Synchronous and antiphase anomalies in the variations of ozone and nitrogen dioxide atmospheric contents are explained by transport and zonal circulation in the stratosphere (Kamenogradsky et al., 1990).

  13. Reducing elective general surgery cancellations at a Canadian hospital

    PubMed Central

    Azari-Rad, Solmaz; Yontef, Alanna L.; Aleman, Dionne M.; Urbach, David R.

    2013-01-01

    Background In Canadian hospitals, which are typically financed by global annual budgets, overuse of operating rooms is a financial risk that is frequently managed by cancelling elective surgical procedures. It is uncertain how different scheduling rules affect the rate of elective surgery cancellations. Methods We used discrete event simulation modelling to represent perioperative processes at a hospital in Toronto, Canada. We tested the effects of the following 3 scenarios on the number of surgical cancellations: scheduling surgeons’ operating days based on their patients’ average length of stay in hospital, sequencing surgical procedures by average duration and variance, and increasing the number of post-surgical ward beds. Results The number of elective cancellations was reduced by scheduling surgeons whose patients had shorter average lengths of stay in hospital earlier in the week, sequencing shorter surgeries and those with less variance in duration earlier in the day, and by adding up to 2 additional beds to the postsurgical ward. Conclusion Discrete event simulation modelling can be used to develop strategies for improving efficiency in operating rooms. PMID:23351498

  14. Effects of P Element Insertions on Quantitative Traits in Drosophila Melanogaster

    PubMed Central

    Mackay, T. F. C.; Lyman, R. F.; Jackson, M. S.

    1992-01-01

    P element mutagenesis was used to construct 94 third chromosome lines of Drosophila melanogaster which contained on average 3.1 stable P element inserts, in an inbred host strain background previously free of P elements. The homozygous and heterozygous effects of the inserts on viability and abdominal and sternopleural bristle number were ascertained by comparing the chromosome lines with inserts to insert-free control lines of the inbred host strain. P elements reduced average homozygous viability by 12.2% per insert and average heterozygous viability by 5.5% per insert, and induced recessive lethal mutations at a rate of 3.8% per insert. Mutational variation for the bristle traits averaged over both sexes was 0.03 Ve per homozygous P insert and 0.003 Ve per heterozygous P insert, where Ve is the environmental variance. Mutational variation was greater for the sexes considered separately because inserts had large pleiotropic effects on sex dimorphism of bristle characters. The distributions of homozygous effects of inserts on the bristle traits were asymmetrical, with the largest effects in the direction of reducing bristle number; and highly leptokurtic, with most of the increase in variance contributed by a few lines with large effects. The inserts had partially recessive effects on the bristle traits. Insert lines with extreme bristle effects had on average greatly reduced viability. PMID:1311697

  15. Daily Fluctuation in Negative Affect for Family Caregivers of Individuals With Dementia

    PubMed Central

    Liu, Yin; Kim, Kyungmin; Almeida, David M.; Zarit, Steven H.

    2017-01-01

    Objective The study examined associations of intrinsic fluctuation in daily negative affect (i.e., depression and anger) with adult day service (ADS) use, daily experiences, and other caregiving characteristics. Methods This was an 8-day diary study of 173 family caregivers of individuals with dementia. Multilevel models with common within-person variance were fit first to show average associations between daily stressors and mean level of daily affect. Then multilevel models with heterogeneous within-person variance were fit to test the hypotheses on associations between ADS use, daily experiences, and intrinsic fluctuation in daily affect. Results The study showed that, when the sum of ADS days was greater than average, there was a stabilizing effect of ADS use on caregivers’ within-person fluctuation in negative affect. Moreover, fewer daily stressors and greater-than-average daily care-related stressors, more positive events, not being a spouse, greater-than-average duration of caregiving, and less-than-average dependency of individuals with dementia on activities of daily living were associated with less fluctuation. Better sleep quality was associated with less intrinsic fluctuation in anger; and younger age and more years of education were associated with less intrinsic fluctuation in daily depression. Conclusions Because emotional stability has been regarded as an aspect of emotional well-being in the general population, intrinsic fluctuation of emotional experience is suggested as an outcome for evidence-based interventions for family caregivers. PMID:25365414

  16. Observed spatiotemporal variability of boundary-layer turbulence over flat, heterogeneous terrain

    NASA Astrophysics Data System (ADS)

    Maurer, V.; Kalthoff, N.; Wieser, A.; Kohler, M.; Mauder, M.; Gantner, L.

    2016-02-01

    In the spring of 2013, extensive measurements with multiple Doppler lidar systems were performed. The instruments were arranged in a triangle with edge lengths of about 3 km in a moderately flat, agriculturally used terrain in northwestern Germany. For 6 mostly cloud-free convective days, vertical velocity variance profiles were calculated. Weighted-averaged surface fluxes proved to be more appropriate than data from individual sites for scaling the variance profiles; but even then, the scatter of profiles was mostly larger than the statistical error. The scatter could not be explained by mean wind speed or stability, whereas time periods with significantly increased variance contained broader thermals. Periods with an elevated maximum of the variance profiles could also be related to broad thermals. Moreover, statistically significant spatial differences of variance were found. They were not influenced by the existing surface heterogeneity. Instead, thermals were preserved between two sites when the travel time was shorter than the large-eddy turnover time. At the same time, no thermals passed for more than 2 h at a third site that was located perpendicular to the mean wind direction in relation to the first two sites. Organized structures of turbulence with subsidence prevailing in the surroundings of thermals can thus partly explain significant spatial variance differences existing for several hours. Therefore, the representativeness of individual variance profiles derived from measurements at a single site cannot be assumed.

  17. Psychometric properties of the Positive and Negative Affect Schedule (PANAS) in a heterogeneous sample of substance users.

    PubMed

    Serafini, Kelly; Malin-Mayor, Bo; Nich, Charla; Hunkele, Karen; Carroll, Kathleen M

    2016-03-01

    The Positive and Negative Affect Schedule (PANAS) is a widely used measure of affect; however, a comprehensive psychometric evaluation among substance users has not been published. Our objective was to examine the psychometric properties of the PANAS in a sample of substance users in outpatient treatment. We used pooled data from four randomized clinical trials (N = 416; 34% female, 48% African American). A confirmatory factor analysis indicated adequate support for a two-factor correlated model comprised of Positive Affect and Negative Affect with correlated item errors (Comparative Fit Index = 0.93, Root Mean Square Error of Approximation = 0.07, χ² = 478.93, df = 156). Cronbach's α indicated excellent internal consistency for both factors (0.90 and 0.91, respectively). The PANAS factors had good convergence and discriminability (Composite Reliability > 0.7; Maximum Shared Variance < Average Variance Extracted). A comparison from baseline to Week 1 indicated acceptable test-retest reliability (Positive Affect = 0.80, Negative Affect = 0.76). Concurrent and discriminant validity were demonstrated through correlations with the Brief Symptom Inventory and Addiction Severity Index. The PANAS scores were also significantly correlated with treatment outcomes (e.g. Positive Affect was associated with the maximum days of consecutive abstinence from the primary substance of abuse, r = 0.16, p = 0.001). Our data suggest that the psychometric properties of the PANAS are retained in substance-using populations. Although several studies have focused on the role of Negative Affect, our findings suggest that Positive Affect may also be an important factor in substance use treatment outcomes.
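
The convergent-validity statistics used above (average variance extracted and composite reliability) follow directly from standardized factor loadings, assuming uncorrelated errors. The loadings below are hypothetical, for illustration only:

```python
def ave_and_cr(loadings):
    """Average variance extracted (AVE) and composite reliability (CR)
    from standardized loadings, assuming uncorrelated item errors."""
    lam2 = [l * l for l in loadings]
    ave = sum(lam2) / len(loadings)          # mean squared loading
    num = sum(loadings) ** 2
    cr = num / (num + sum(1 - l2 for l2 in lam2))
    return ave, cr

ave, cr = ave_and_cr([0.8, 0.7, 0.75, 0.85])  # hypothetical loadings
```

Conventional cutoffs are AVE > 0.5 and CR > 0.7, which these hypothetical loadings satisfy.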

  19. Investigation of noise properties in grating-based x-ray phase tomography with reverse projection method

    NASA Astrophysics Data System (ADS)

    Bao, Yuan; Wang, Yan; Gao, Kun; Wang, Zhi-Li; Zhu, Pei-Ping; Wu, Zi-Yu

    2015-10-01

    The relationship between noise variance and spatial resolution in grating-based x-ray phase computed tomography (PCT) imaging is investigated with the reverse projection extraction method, and the noise variances of the reconstructed absorption coefficient and refractive index decrement are compared. For the differential phase contrast method, the noise variance in the differential projection images follows the same inverse-square law with spatial resolution as in conventional absorption-based x-ray imaging projections. However, both theoretical analysis and simulations demonstrate that in PCT the noise variance of the reconstructed refractive index decrement follows an inverse linear relationship with spatial resolution at fixed slice thickness, while the noise variance of the reconstructed absorption coefficient conforms to the inverse cubic law. The results indicate that, for the same noise variance level, PCT imaging may enable higher spatial resolution than conventional absorption computed tomography (ACT), while ACT benefits more from degraded spatial resolution. This could provide useful guidance for imaging the inner structure of a sample at higher spatial resolution. Project supported by the National Basic Research Program of China (Grant No. 2012CB825800), the Science Fund for Creative Research Groups, the Knowledge Innovation Program of the Chinese Academy of Sciences (Grant Nos. KJCX2-YW-N42 and Y4545320Y2), and the National Natural Science Foundation of China (Grant Nos. 11475170, 11205157, 11305173, 11205189, 11375225, 11321503, 11179004, and U1332109).

  20. Simulation Study Using a New Type of Sample Variance

    NASA Technical Reports Server (NTRS)

    Howe, D. A.; Lainson, K. J.

    1996-01-01

    We evaluate with simulated data a new type of sample variance for the characterization of frequency stability. The new statistic (referred to as TOTALVAR and its square root TOTALDEV) is a better predictor of long-term frequency variations than the present sample Allan deviation. The statistical model uses the assumption that a time series of phase or frequency differences is wrapped (periodic) with overall frequency difference removed. We find that the variability at long averaging times is reduced considerably for the five models of power-law noise commonly encountered with frequency standards and oscillators.
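
The baseline statistic that TOTALVAR improves upon is the sample Allan variance; a minimal non-overlapping sketch follows (the wrapping of the series that defines TOTALVAR is not implemented here):

```python
import numpy as np

def allan_variance(y, m=1):
    """Non-overlapping Allan variance of fractional-frequency data y at
    averaging factor m, i.e. tau = m * tau0. TOTALVAR would additionally
    wrap the (de-trended) series to improve long-tau confidence."""
    n = (len(y) // m) * m
    ybar = y[:n].reshape(-1, m).mean(axis=1)   # averages over each tau
    d = np.diff(ybar)
    return 0.5 * np.mean(d ** 2)               # AVAR(tau)
```

The square root at each averaging time gives the Allan deviation used to characterize frequency stability.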

  1. Ultrasound-assisted extraction of amino acids from grapes.

    PubMed

    Carrera, Ceferino; Ruiz-Rodríguez, Ana; Palma, Miguel; Barroso, Carmelo G

    2015-01-01

    Recent cultivar techniques on vineyards can have a marked influence on the final nitrogen content of grapes, specifically individual amino acid contents. Furthermore, individual amino acid contents in grapes are related to the final aromatic composition of wines. A new ultrasound-assisted method for the extraction of amino acids from grapes has been developed. Several extraction variables, including solvent (water/ethanol mixtures), solvent pH (2-7), temperature (10-70°C), ultrasonic power (20-70%) and ultrasonic frequency (0.2-1.0 s⁻¹), were optimized to guarantee full recovery of the amino acids from grapes. An experimental design was employed to optimize the extraction parameters. The surface response methodology was used to evaluate the effects of the extraction variables. The analytical properties of the new method were established, including limit of detection (average value 1.4 mmol kg⁻¹), limit of quantification (average value 2.6 mmol kg⁻¹), repeatability (average RSD=12.9%) and reproducibility (average RSD=15.7%). Finally, the new method was applied to three cultivars of white grape throughout the ripening period. Copyright © 2014 Elsevier B.V. All rights reserved.

  2. Codifference as a practical tool to measure interdependence

    NASA Astrophysics Data System (ADS)

    Wyłomańska, Agnieszka; Chechkin, Aleksei; Gajda, Janusz; Sokolov, Igor M.

    2015-03-01

    Correlation and spectral analysis represent the standard tools to study interdependence in statistical data. However, for stochastic processes with heavy-tailed distributions such that the variance diverges, these tools are inadequate. Heavy-tailed processes are ubiquitous in nature and finance. We here discuss the codifference as a convenient measure to study statistical interdependence, and we aim to give a short introductory review of its properties. Taking different known stochastic processes as generic examples, we present explicit formulas for their codifferences. We show that for Gaussian processes the codifference is equivalent to the covariance. For processes with finite variance these two measures behave similarly with time. For processes with infinite variance the covariance does not exist, but the codifference remains well defined. We demonstrate the practical importance of the codifference by extracting this function from simulated as well as real data taken from the turbulent plasma of a fusion device and from a financial market. We conclude that the codifference serves as a convenient practical tool to study interdependence for stochastic processes with either finite or infinite variance.
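
The codifference can be estimated empirically from characteristic functions. The sketch below checks the Gaussian case, where it should reduce to the covariance (synthetic data, not the plasma or market series used in the paper):

```python
import numpy as np

def codifference(x, y):
    """Empirical codifference tau(X, Y) = ln E[e^{i(X-Y)}]
    - ln E[e^{iX}] - ln E[e^{-iY}], estimated at unit argument."""
    def log_cf(z):
        return np.log(np.mean(np.exp(1j * z)))
    return float(np.real(log_cf(x - y) - log_cf(x) - log_cf(-y)))

# Jointly Gaussian sample with Cov(X, Y) = 0.5, where the codifference
# should coincide with the covariance
rng = np.random.default_rng(0)
g = rng.standard_normal((200_000, 2))
x = g[:, 0]
y = 0.5 * x + np.sqrt(0.75) * g[:, 1]
```

Unlike the covariance, the same estimator remains finite for heavy-tailed (infinite-variance) samples, which is the point of the measure.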

  3. New microwave-integrated Soxhlet extraction. An advantageous tool for the extraction of lipids from food products.

    PubMed

    Virot, Matthieu; Tomao, Valérie; Colnagui, Giulio; Visinoni, Franco; Chemat, Farid

    2007-12-07

    A new process of Soxhlet extraction assisted by microwave was designed and developed. The process is performed in four steps, which ensures complete, rapid and accurate extraction of the samples. A second-order central composite design (CCD) was used to investigate the performance of the new device. The results, provided by analysis of variance and a Pareto chart, indicated that the extraction time was the most important factor, followed by the leaching time. Response surface methodology allowed us to determine the optimal conditions for olive oil extraction: 13 min of extraction time, 17 min of leaching time, and 720 W of irradiation power. The proposed process is suitable for the determination of lipids in food. Microwave-integrated Soxhlet (MIS) extraction was compared with the conventional technique, Soxhlet extraction, for the extraction of oil from olives (Aglandau, Vaucluse, France). The oils extracted by MIS for 32 min were quantitatively (yield) and qualitatively (fatty acid composition) similar to those obtained by conventional Soxhlet extraction for 8 h. MIS is a green technology and appears to be a good alternative for the extraction of fats and oils from food products.

  4. Optimal design of minimum mean-square error noise reduction algorithms using the simulated annealing technique.

    PubMed

    Bai, Mingsian R; Hsieh, Ping-Ju; Hur, Kur-Nan

    2009-02-01

    The performance of the minimum mean-square error noise reduction (MMSE-NR) algorithm in conjunction with time-recursive averaging (TRA) for noise estimation is found to be very sensitive to the choice of two recursion parameters. To address this problem in a more systematic manner, this paper proposes an optimization method to efficiently search for the optimal parameters of the MMSE-TRA-NR algorithms. The objective function is based on a regression model, whereas the optimization process is carried out with the simulated annealing algorithm, which is well suited to problems with many local optima. Another NR algorithm proposed in the paper employs linear prediction coding as a preprocessor for extracting the correlated portion of human speech. Objective and subjective tests were undertaken to compare the optimized MMSE-TRA-NR algorithm with several conventional NR algorithms. The results of the subjective tests were processed using analysis of variance to assess statistical significance. A post hoc test, Tukey's Honestly Significant Difference, was conducted to further assess the pairwise differences between the NR algorithms.

  5. Confirmatory factor analysis of different versions of the Body Shape Questionnaire applied to Brazilian university students.

    PubMed

    da Silva, Wanderson Roberto; Dias, Juliana Chioda Ribeiro; Maroco, João; Campos, Juliana Alvares Duarte Bonini

    2014-09-01

    This study aimed at evaluating the validity, reliability, and factorial invariance of the complete (34-item) and shortened (8-item and 16-item) versions of the Body Shape Questionnaire (BSQ) when applied to Brazilian university students. A total of 739 female students with a mean age of 20.44 (standard deviation = 2.45) years participated. Confirmatory factor analysis was conducted to verify the degree to which the one-factor structure satisfies the proposal for the BSQ's expected structure. Two items of the 34-item version were excluded because they had factor weights (λ) < .40. All models had adequate convergent validity (average variance extracted = .43-.58; composite reliability = .85-.97) and internal consistency (α = .85-.97). The 8-item B version was considered the best shortened BSQ version (Akaike information criterion = 84.07, Bayes information criterion = 157.75, Browne-Cudeck criterion = 84.46), with strong invariance for independent samples (Δχ²λ(7) = 5.06, Δχ²Cov(8) = 5.11, Δχ²Res(16) = 19.30). Copyright © 2014 Elsevier Ltd. All rights reserved.

  6. High temporal resolution aberrometry in a 50-eye population and implications for adaptive optics error budget.

    PubMed

    Jarosz, Jessica; Mecê, Pedro; Conan, Jean-Marc; Petit, Cyril; Paques, Michel; Meimon, Serge

    2017-04-01

    We formed a database gathering the wavefront aberrations of 50 healthy eyes measured with an original custom-built Shack-Hartmann aberrometer at a temporal frequency of 236 Hz, with 22 lenslets across a 7-mm diameter pupil, for a duration of 20 s. With this database, we draw statistics on the spatial and temporal behavior of the dynamic aberrations of the eye. Dynamic aberrations were studied on a 5-mm diameter pupil and on a 3.4 s sequence between blinks. We noted that, on average, temporal wavefront variance exhibits an n^-2 power law with radial order n, and temporal spectra follow an f^-1.5 power law with temporal frequency f. From these statistics, we then extract guidelines for designing an adaptive optics system. For instance, we show the residual wavefront error evolution as a function of the number of corrected modes and of the adaptive optics loop frame rate. In particular, we infer that adaptive optics performance rapidly increases with the loop frequency up to 50 Hz, with gain being more limited at higher rates.
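
Exponents like these are typically read off as least-squares slopes in log-log space; a minimal sketch with synthetic f^-1.5 data (the constants below are arbitrary, for illustration only):

```python
import numpy as np

def power_law_exponent(freq, spectrum):
    """Least-squares slope in log-log space, the usual way to read off
    a power-law exponent from variance or spectral data."""
    slope, _ = np.polyfit(np.log(freq), np.log(spectrum), 1)
    return slope

f = np.array([1.0, 2.0, 5.0, 10.0, 30.0, 100.0])
psd = 4.2 * f ** -1.5                  # synthetic f^-1.5 spectrum
```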

  7. Tree Ring Chronology Indexes and Reconstructions of Precipitation in Central Iowa, USA (1984) (NDP-002)

    DOE Data Explorer

    Blasing, T. J. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Duvick, D. N. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Building Technologies Research and Integration Center (BTRIC)

    2012-01-01

    Tree core samples (4 mm in diameter) were extracted from the trunks of white oak (Quercus alba) at three sites in central Iowa (Duvick Back Woods, Ledges State Park, and Pammel). At least 60 trees were sampled at each site, and at least two cores were taken from each tree. The growth rings of each core were dated by calendar year and measured; the measurements were then transformed into dimensionless ring-width indices and correlated with annual precipitation. Data were collected for the years 1680 through 1979. Each tree ring was characterized by the site, year, tree-ring-width index, number of core samples, decade year, and the annual reconstructed precipitation estimate. These data have more than 50% of their variance in common with the known annual statewide average precipitation for Iowa and serve as useful indicators of the precipitation and drought history of the region for the past 300 years. The data are in two files: tree-ring-chronology data (8 kB) and the annual reconstructed precipitation data for central Iowa (2 kB).

  8. High temporal resolution aberrometry in a 50-eye population and implications for adaptive optics error budget

    PubMed Central

    Jarosz, Jessica; Mecê, Pedro; Conan, Jean-Marc; Petit, Cyril; Paques, Michel; Meimon, Serge

    2017-01-01

    We formed a database gathering the wavefront aberrations of 50 healthy eyes measured with an original custom-built Shack-Hartmann aberrometer at a temporal frequency of 236 Hz, with 22 lenslets across a 7-mm diameter pupil, for a duration of 20 s. With this database, we draw statistics on the spatial and temporal behavior of the dynamic aberrations of the eye. Dynamic aberrations were studied on a 5-mm diameter pupil and on a 3.4 s sequence between blinks. We noted that, on average, temporal wavefront variance exhibits a n−2 power-law with radial order n and temporal spectra follow a f−1.5 power-law with temporal frequency f. From these statistics, we then extract guidelines for designing an adaptive optics system. For instance, we show the residual wavefront error evolution as a function of the number of corrected modes and of the adaptive optics loop frame rate. In particular, we infer that adaptive optics performance rapidly increases with the loop frequency up to 50 Hz, with gain being more limited at higher rates. PMID:28736657

  9. Immunological variation in Taenia solium porcine cysticercosis: measurement on the variation of the antibody immune response of naturally infected pigs against antigens extracted from their own cysticerci and from those of different pigs.

    PubMed

    Ostoa-Saloma, Pedro; Esquivel-Velázquez, Marcela; Larralde, Carlos

    2013-10-18

    Although it is widely assumed that both antigenic and host immunological variability are involved in the variable intensity of natural porcine infections by Taenia solium (T. solium) cysticerci and in the success of immunodiagnostic tests and vaccines, the magnitude of such combined variability has not been studied or measured. In this paper we report statistical data on the variability of the antibody response of naturally infected pigs against the antigens extracted from the vesicular fluids of their own infecting cysts (variance within pigs) and against antigen samples extracted from cysts of other cysticercotic pigs (variance among pigs). The variation between pigs was greater than the variation within pigs, which suggests that a concomitant immunity process prevents the establishment of cysts coming from a subsequent challenge. We also found that no single antigenic band was recognized by all hosts and that antigens varied among the cysts within the same pig as well as among pigs. Our results may be valuable for the improvement of immunodiagnostic tests and of effective vaccines against naturally acquired porcine T. solium cysticercosis. Copyright © 2013 Elsevier B.V. All rights reserved.

  10. Toward privacy-preserving JPEG image retrieval

    NASA Astrophysics Data System (ADS)

    Cheng, Hang; Wang, Jingyue; Wang, Meiqing; Zhong, Shangping

    2017-07-01

    This paper proposes a privacy-preserving retrieval scheme for JPEG images based on local variance. Three parties are involved in the scheme: the content owner, the server, and the authorized user. The content owner encrypts JPEG images for privacy protection by jointly using permutation cipher and stream cipher, and then, the encrypted versions are uploaded to the server. With an encrypted query image provided by an authorized user, the server may extract blockwise local variances in different directions without knowing the plaintext content. After that, it can calculate the similarity between the encrypted query image and each encrypted database image by a local variance-based feature comparison mechanism. The authorized user with the encryption key can decrypt the returned encrypted images with plaintext content similar to the query image. The experimental results show that the proposed scheme not only provides effective privacy-preserving retrieval service but also ensures both format compliance and file size preservation for encrypted JPEG images.
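    A blockwise directional local-variance feature of the kind described above can be sketched as follows. This is a minimal illustration assuming 8×8 blocks and four pixel-difference directions; the paper's exact feature definition and the encryption layer are not reproduced.

```python
import numpy as np

def directional_local_variances(img, block=8):
    """For each block, compute the variance of pixel differences along
    four directions (horizontal, vertical, two diagonals). Block size
    and direction set are illustrative assumptions."""
    h, w = img.shape
    feats = []
    for i in range(0, h - block + 1, block):
        for j in range(0, w - block + 1, block):
            b = img[i:i + block, j:j + block].astype(float)
            dirs = [
                np.diff(b, axis=1),        # horizontal differences
                np.diff(b, axis=0),        # vertical differences
                b[1:, 1:] - b[:-1, :-1],   # main-diagonal differences
                b[1:, :-1] - b[:-1, 1:],   # anti-diagonal differences
            ]
            feats.append([d.var() for d in dirs])
    return np.asarray(feats)

rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(64, 64))   # stand-in for a decoded JPEG block grid
F = directional_local_variances(img)        # one 4-vector per 8x8 block
```

Similarity between a query and a database image could then be scored by comparing such per-block variance vectors, which is the spirit of the comparison mechanism the abstract describes.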

  11. Anti-inflammatory activity of aqueous and alkaline extracts from mushrooms (Agaricus blazei Murill).

    PubMed

    Padilha, Marina M; Avila, Ana A L; Sousa, Pergentino J C; Cardoso, Luis Gustavo V; Perazzo, Fábio F; Carvalho, José Carlos T

    2009-04-01

    The effects of aqueous and alkaline extracts from Agaricus blazei Murill, an edible mushroom used as folk medicine in Brazil, Japan, and China to treat several illnesses, were investigated in inflammatory processes induced by different agents. Oral administration of A. blazei extracts marginally inhibited the edema induced by nystatin. In contrast, when complete Freund's adjuvant was used as the inflammatory stimulus, both extracts were able to inhibit this process significantly (P < .05, analysis of variance followed by Tukey-Kramer multiple comparison post hoc test), although they inhibited granulomatous tissue induction only moderately. These extracts were able to decrease the ulcer wounds induced by stress. Also, administration of the extracts inhibited neutrophil migration to the exudates present in the peritoneal cavity after carrageenin injection. Therefore, it is possible that A. blazei extracts can be useful in inflammatory diseases because of activation of the immune system and its cells induced by the presence of polysaccharides such as beta-glucans.

  12. Estimating fluvial wood discharge from timelapse photography with varying sampling intervals

    NASA Astrophysics Data System (ADS)

    Anderson, N. K.

    2013-12-01

    There is recent focus on calculating wood budgets for streams and rivers to help inform management decisions, ecological studies and carbon/nutrient cycling models. Most work has measured in situ wood in temporary storage along stream banks or estimated wood inputs from banks. Little effort has been employed monitoring and quantifying wood in transport during high flows. This paper outlines a procedure for estimating total seasonal wood loads using non-continuous coarse interval sampling and examines differences in estimation between sampling at 1, 5, 10 and 15 minutes. Analysis is performed on wood transport for the Slave River in Northwest Territories, Canada. Relative to the 1 minute dataset, precision decreased by 23%, 46% and 60% for the 5, 10 and 15 minute datasets, respectively. Five and 10 minute sampling intervals provided unbiased, equal-variance estimates of 1 minute sampling, whereas 15 minute intervals were biased towards underestimation by 6%. Stratifying estimates by day and by discharge increased precision over non-stratification by 4% and 3%, respectively. Not including wood transported during ice break-up, the total minimum wood load estimated at this site is 3300 ± 800 m³ for the 2012 runoff season. The vast majority of the imprecision in total wood volumes came from variance in estimating average volume per log. Proportions and variance were compared across sample intervals using bootstrap sampling to achieve equal n: each trial sampled n = 100 and was repeated 10,000 times, and trial averages were then averaged to obtain an estimate for each sample interval.
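    The equal-n bootstrap comparison described above can be sketched as follows. The data here are synthetic stand-ins for per-interval wood-flux observations, and the trial count is reduced from 10,000 for brevity; dataset sizes and distributions are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical per-interval observations (synthetic; the study used
# counts from timelapse imagery at 1, 5, 10 and 15 minute intervals).
datasets = {
    1:  rng.lognormal(0.0, 1.0, 5000),
    5:  rng.lognormal(0.0, 1.0, 1000),
    10: rng.lognormal(0.0, 1.0, 500),
    15: rng.lognormal(0.0, 1.0, 300),
}

def bootstrap_mean(x, n=100, trials=2000):
    """Resample n observations per trial (equal n across intervals),
    average within each trial, then summarize across trials."""
    means = [rng.choice(x, size=n, replace=True).mean() for _ in range(trials)]
    return np.mean(means), np.std(means)

# Estimate and spread for each sampling interval
estimates = {k: bootstrap_mean(v) for k, v in datasets.items()}
```

The per-trial standard deviation gives a comparable precision measure for each interval despite the unequal raw sample sizes, which is the point of forcing equal n.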

  13. Enzymatic production of DFA III from fresh dahlia tubers as raw material

    NASA Astrophysics Data System (ADS)

    Budiwati, Thelma A.; Ratnaningrum, D.; Pudjiraharti, S.

    2017-01-01

    Dahlia is an annual ornamental plant whose tubers have not been widely used in Indonesia. Dahlia tubers contain nearly 70 percent starch in the form of inulin. Besides serving as a food ingredient, inulin can be used as a raw material for producing DFA III (a functional oligosaccharide) with inulin fructotransferase (IFTase) from Nonomuraea sp. In this study, DFA III was produced through enzymatic reactions and yeast fermentation, using inulin from fresh dahlia tubers and fresh dahlia tuber extract. The tubers, one source of inulin, were blanched before extraction. Part of the tuber extract was used directly for the enzymatic production of DFA III, while the remainder was processed into inulin by ethanol precipitation; that inulin was then used for the enzymatic reaction. The volume and viscosity of the DFA III syrup were measured, followed by decolorization and crystallization. Thin layer chromatography was used to confirm the formation of DFA III, and HPLC to assess product purity. The results showed that the inulin obtained by ethanol precipitation averaged 113.5 g over two batches, with an average water content of 7.41%, an average whiteness degree of 62.29%, and an average yield of 7.345% (w/w, wet basis of dahlia tuber). From an average of 480 mL of DFA III liquid with a density of 14.15%, the enzymatic reaction in two reactors using dahlia tuber inulin as substrate gave an average of 55.4 g of DFA III crystals, with an average whiteness degree of 93.8% and an average yield of 3.56% w/w (wet basis of dahlia tuber), or 48.89% w/w (dry basis of inulin). From an average of 475 mL with a density of 16.85%, the reaction in two reactors using fresh dahlia tuber extract as substrate gave an average of 29 g of DFA III crystals, with an average whiteness degree of 80.75% and an average yield of 1.86% w/w (wet basis of dahlia tuber).

  14. Evaluation of the influence of white grape seed extracts as copigment sources on the anthocyanin extraction from grape skins previously classified by near infrared hyperspectral tools.

    PubMed

    Nogales-Bueno, Julio; Baca-Bocanegra, Berta; Jara-Palacios, María José; Hernández-Hierro, José Miguel; Heredia, Francisco José

    2017-04-15

    Hyperspectral imaging has been used to classify red grapes (Vitis vinifera L.) according to their predicted extractable total anthocyanin content (i.e. extractable total anthocyanin content determined by a hyperspectral method). Low, medium and high levels of predicted extractable total anthocyanin content were established. Then, grape skins were split into three parts and each part was macerated into a different model wine solution for a three-day period. Wine model solutions were made up with different concentrations of copigments from white grape seeds. Aqueous supernatants were analyzed by HPLC-DAD and extractable anthocyanin contents were obtained. Principal component analyses and analyses of variance were carried out with the aim of studying trends related to the extractable anthocyanin contents. Significant differences were found among grapes with different levels of predicted extractable anthocyanin contents. Moreover, no significant differences were found in the extractable anthocyanin contents using different copigment concentrations in grape skin macerations. Copyright © 2016 Elsevier Ltd. All rights reserved.

  15. Comparison of solvent extraction and solid-phase extraction for the determination of polychlorinated biphenyls in transformer oil.

    PubMed

    Mahindrakar, A N; Chandra, S; Shinde, L P

    2014-01-01

    Solid-phase extraction (SPE) of nine polychlorinated biphenyls (PCBs) from transformer oil samples was evaluated using octadecyl (C18)-bonded porous silica. The efficiency of SPE of these PCBs was compared with that obtained by solvent extraction with DMSO and hexane. Average recoveries exceeding 95% for these PCBs were obtained via the SPE method using small cartridges containing 100 mg of 40 μm C18-bonded porous silica. The average recovery by solvent extraction with DMSO and hexane exceeded 83%. It was concluded that the recoveries and precision for the solvent extraction of PCBs were poorer than those for the SPE. Copyright © 2013 Elsevier Ltd. All rights reserved.

  16. Optimization of brain PET imaging for a multicentre trial: the French CATI experience.

    PubMed

    Habert, Marie-Odile; Marie, Sullivan; Bertin, Hugo; Reynal, Moana; Martini, Jean-Baptiste; Diallo, Mamadou; Kas, Aurélie; Trébossen, Régine

    2016-12-01

    CATI is a French initiative launched in 2010 to handle the neuroimaging of a large cohort of subjects recruited for an Alzheimer's research program called MEMENTO. This paper presents our test protocol and results obtained for the 22 PET centres (overall 13 different scanners) involved in the MEMENTO cohort. We determined acquisition parameters using phantom experiments prior to patient studies, with the aim of optimizing PET quantitative values to the highest possible per site, while reducing, if possible, variability across centres. Jaszczak's and 3D-Hoffman's phantom measurements were used to assess image spatial resolution (ISR), recovery coefficients (RC) in hot and cold spheres, and signal-to-noise ratio (SNR). For each centre, the optimal reconstruction parameters were chosen as those maximizing ISR and RC without a noticeable decrease in SNR. Point-spread-function (PSF) modelling reconstructions were discarded. The three figures of merit extracted from the images reconstructed with optimized parameters and routine schemes were compared, as were volumes of interest ratios extracted from Hoffman acquisitions. The net effect of the 3D-OSEM reconstruction parameter optimization was investigated on a subset of 18 scanners without PSF modelling reconstruction. Compared to the routine parameters of the 22 PET centres, average RC in the two smallest hot and cold spheres and average ISR remained stable or were improved with the optimized reconstruction, at the expense of slight SNR degradation, while the dispersion of values was reduced. For the subset of scanners without PSF modelling, the mean RC of the smallest hot sphere obtained with the optimized reconstruction was significantly higher than with routine reconstruction. The putamen and caudate-to-white matter ratios measured on 3D-Hoffman acquisitions of all centres were also significantly improved by the optimization, while the variance was reduced. This study provides guidelines for optimizing quantitative results for multicentric PET neuroimaging trials.

  17. Psychometric Properties of the Persian Language Version of Yang Internet Addiction Questionnaire: An Explanatory Factor Analysis.

    PubMed

    Mohammadsalehi, Narges; Mohammadbeigi, Abolfazl; Jadidi, Rahmatollah; Anbari, Zohreh; Ghaderi, Ebrahim; Akbari, Mojtaba

    2015-09-01

    Reliability and validity are the key concepts in measurement processes. The Young internet addiction test (YIAT) is regarded as a valid and reliable questionnaire in English-speaking countries for the diagnosis of Internet-related behavior disorders. This study aimed at validating the Persian version of the YIAT in Iranian society. A pilot and a cross-sectional study were conducted on 28 and 254 students of Qom University of Medical Sciences, respectively, in order to validate the Persian version of the YIAT. Forward and backward translations were conducted to develop a Persian version of the scale. Reliability was measured by test-retest, Cronbach's alpha and the intraclass correlation coefficient (ICC). Face, content and construct validity were established by the importance score index, content validity ratio (CVR), content validity index (CVI), correlation matrix and factor analysis. The SPSS software was used for data analysis. The Cronbach's alpha was 0.917 (CI 95%; 0.901 - 0.931). The average scale-level CVI was calculated to be 0.74; the CVI index for each item was higher than 0.83 and the average CVI index was equal to 0.89. Factor analysis extracted three factors, personal activities disorder (PAD), emotional and mood disorder (EMD) and social activities disorder (SAD), accounting for more than 55.8% of the total variance. The ICC for the factors of the Persian version of the Young questionnaire was r = 0.884 (CI 95%; 0.861 - 0.904) for PAD, r = 0.766 (CI 95%; 0.718 - 0.808) for EMD and r = 0.745 (CI 95%; 0.686 - 0.795) for SAD. Our study showed that the Persian version of the YIAT is suitable for use with Iranian people. The reliability of the instrument was very good, and the validity of the Persian translated version of the scale was sufficient. In addition, the reliability and validity of the three extracted factors of the YIAT were evaluated and found acceptable.
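    The reported Cronbach's alpha can be illustrated with a short sketch using the standard formula, alpha = k/(k−1) · (1 − Σσᵢ²/σ_total²). The responses below are synthetic Likert-style data driven by one latent trait, not the YIAT data.

```python
import numpy as np

def cronbach_alpha(items):
    """items: (n_respondents, k_items) score matrix.
    alpha = k/(k-1) * (1 - sum of item variances / variance of total score)."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

# Synthetic 1-5 responses: 300 respondents, 20 items sharing a latent trait
rng = np.random.default_rng(1)
trait = rng.normal(size=(300, 1))
scores = np.clip(np.rint(3 + trait + rng.normal(0, 0.7, (300, 20))), 1, 5)
alpha = cronbach_alpha(scores)
```

With items this strongly correlated, alpha lands in the high range typical of a reliable scale; with independent items it would fall toward zero.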

  18. Psychometric Properties of the Persian Language Version of Yang Internet Addiction Questionnaire: An Explanatory Factor Analysis

    PubMed Central

    Mohammadsalehi, Narges; Mohammadbeigi, Abolfazl; Jadidi, Rahmatollah; Anbari, Zohreh; Ghaderi, Ebrahim; Akbari, Mojtaba

    2015-01-01

    Background: Reliability and validity are the key concepts in measurement processes. The Young internet addiction test (YIAT) is regarded as a valid and reliable questionnaire in English-speaking countries for the diagnosis of Internet-related behavior disorders. Objectives: This study aimed at validating the Persian version of the YIAT in Iranian society. Patients and Methods: A pilot and a cross-sectional study were conducted on 28 and 254 students of Qom University of Medical Sciences, respectively, in order to validate the Persian version of the YIAT. Forward and backward translations were conducted to develop a Persian version of the scale. Reliability was measured by test-retest, Cronbach’s alpha and the intraclass correlation coefficient (ICC). Face, content and construct validity were established by the importance score index, content validity ratio (CVR), content validity index (CVI), correlation matrix and factor analysis. The SPSS software was used for data analysis. Results: The Cronbach’s alpha was 0.917 (CI 95%; 0.901 - 0.931). The average scale-level CVI was calculated to be 0.74; the CVI index for each item was higher than 0.83 and the average CVI index was equal to 0.89. Factor analysis extracted three factors, personal activities disorder (PAD), emotional and mood disorder (EMD) and social activities disorder (SAD), accounting for more than 55.8% of the total variance. The ICC for the factors of the Persian version of the Young questionnaire was r = 0.884 (CI 95%; 0.861 - 0.904) for PAD, r = 0.766 (CI 95%; 0.718 - 0.808) for EMD and r = 0.745 (CI 95%; 0.686 - 0.795) for SAD. Conclusions: Our study showed that the Persian version of the YIAT is suitable for use with Iranian people. The reliability of the instrument was very good, and the validity of the Persian translated version of the scale was sufficient. In addition, the reliability and validity of the three extracted factors of the YIAT were evaluated and found acceptable. PMID:26495253

  19. Some Questions Concerning the Standards of External Examinations.

    ERIC Educational Resources Information Center

    Kahn, Michael J.

    1990-01-01

    Variance as a function of time is described for the Cambridge Local Examinations Syndicate's examination standards, with emphasis on the performance of candidates from Botswana and Zimbabwe. Results demonstrate the value of simple linear modeling in extracting performance trends for a range of subjects over time across six countries. (TJH)

  20. A Large-Scale Analysis of Variance in Written Language

    ERIC Educational Resources Information Center

    Johns, Brendan T.; Jamieson, Randall K.

    2018-01-01

    The collection of very large text sources has revolutionized the study of natural language, leading to the development of several models of language learning and distributional semantics that extract sophisticated semantic representations of words based on the statistical redundancies contained within natural language (e.g., Griffiths, Steyvers,…

  1. A graph-Laplacian-based feature extraction algorithm for neural spike sorting.

    PubMed

    Ghanbari, Yasser; Spence, Larry; Papamichalis, Panos

    2009-01-01

    Analysis of extracellular neural spike recordings is highly dependent upon the accuracy of neural waveform classification, commonly referred to as spike sorting. Feature extraction is an important stage of this process because it can limit the quality of clustering which is performed in the feature space. This paper proposes a new feature extraction method (which we call Graph Laplacian Features, GLF) based on minimizing the graph Laplacian and maximizing the weighted variance. The algorithm is compared with Principal Components Analysis (PCA, the most commonly-used feature extraction method) using simulated neural data. The results show that the proposed algorithm produces more compact and well-separated clusters compared to PCA. As an added benefit, tentative cluster centers are output which can be used to initialize a subsequent clustering stage.
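    The PCA baseline against which the GLF method is compared can be sketched on simulated spike waveforms: center the waveforms, eigendecompose the covariance, and project onto the top components before clustering. The spike shapes and noise level below are illustrative assumptions, not the paper's simulated data.

```python
import numpy as np

def pca_features(waveforms, n_components=2):
    """Project spike waveforms (n_spikes x n_samples) onto the top
    principal components via plain eigendecomposition of the covariance."""
    X = waveforms - waveforms.mean(axis=0)          # center
    cov = np.cov(X, rowvar=False)                   # (n_samples x n_samples)
    eigvals, eigvecs = np.linalg.eigh(cov)          # ascending eigenvalues
    order = np.argsort(eigvals)[::-1][:n_components]
    return X @ eigvecs[:, order]                    # feature-space coordinates

# Two synthetic spike templates plus noise (100 spikes per unit)
rng = np.random.default_rng(0)
t = np.linspace(0, 1, 40)
shape_a = np.exp(-((t - 0.3) / 0.05) ** 2)
shape_b = -np.exp(-((t - 0.5) / 0.08) ** 2)
spikes = np.vstack([shape_a + 0.05 * rng.normal(size=(100, 40)),
                    shape_b + 0.05 * rng.normal(size=(100, 40))])
feats = pca_features(spikes)   # clusters separate along the first component
```

A subsequent clustering stage (e.g. k-means) would then operate on `feats`; the paper's contribution is replacing this projection with one that also minimizes the graph Laplacian.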

  2. An Algorithm Framework for Isolating Anomalous Signals in Electromagnetic Data

    NASA Astrophysics Data System (ADS)

    Kappler, K. N.; Schneider, D.; Bleier, T.; MacLean, L. S.

    2016-12-01

    QuakeFinder and its international collaborators have installed and currently maintain an array of 165 three-axis induction magnetometer instrument sites in California, Peru, Taiwan, Greece, Chile and Sumatra. Based on research by Bleier et al. (2009), Fraser-Smith et al. (1990), and Freund (2007), the electromagnetic data from these instruments are being analyzed for pre-earthquake signatures. This analysis consists of both private research by QuakeFinder and institutional collaborators (PUCP in Peru, NCU in Taiwan, NOA in Greece, LASP at University of Colorado, Stanford, UCLA, NASA-ESI, NASA-AMES and USC-CSEP). QuakeFinder has developed an algorithm framework aimed at isolating anomalous signals (pulses) in the time series. Results are presented from an application of this framework to induction-coil magnetometer data. Our data-driven approach starts with sliding windows applied to uniformly resampled array data with a variety of lengths and overlaps. Data variance (a proxy for energy) is calculated on each window and a short-term-average/long-term-average (STA/LTA) filter is applied to the variance time series. Pulse identification is done by flagging time intervals in the STA/LTA-filtered time series which exceed a threshold. Flagged time intervals are subsequently fed into a feature extraction program which computes statistical properties of the resampled data. These features are then filtered using a Principal Component Analysis (PCA) based method to cluster similar pulses. We explore the extent to which this approach categorizes pulses with known sources (e.g. cars, lightning, etc.), so that the remaining pulses of unknown origin can be analyzed with respect to their relationship with seismicity. We seek a correlation between these daily pulse counts (with known sources removed) and subsequent (days to weeks) seismic events greater than M5 within a 15 km radius. Thus we explore functions which map daily pulse counts to a time series representing the likelihood of a seismic event occurring at some future time. These "pseudo-probabilities" can in turn be represented as Molchan diagrams. The Molchan curve provides an effective cost function for optimization and allows for a rigorous statistical assessment of the validity of pre-earthquake signals in the electromagnetic data.
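    The windowed-variance STA/LTA detection step can be sketched as follows. Window lengths, the threshold, and the injected pulse are illustrative assumptions, not QuakeFinder's operational parameters.

```python
import numpy as np

def sta_lta_flags(series, sta=5, lta=50, threshold=3.0):
    """Short-term-average / long-term-average detector applied to a
    variance (energy-proxy) time series; returns indices where the
    STA/LTA ratio exceeds the threshold."""
    ratio = np.zeros(len(series))
    for i in range(lta, len(series)):
        s = series[i - sta:i].mean()    # short-term average
        l = series[i - lta:i].mean()    # long-term average
        ratio[i] = s / l if l > 0 else 0.0
    return np.where(ratio > threshold)[0]

# Synthetic windowed-variance series: quiet background plus one pulse
rng = np.random.default_rng(3)
var_series = rng.uniform(0.9, 1.1, 500)
var_series[300:305] += 10.0            # injected anomalous pulse
flags = sta_lta_flags(var_series)      # flags cluster around the pulse
```

The flagged intervals would then feed the feature-extraction and PCA clustering stages described in the abstract.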

  3. A Quantitative Evaluation of SCEC Community Velocity Model Version 3.0

    NASA Astrophysics Data System (ADS)

    Chen, P.; Zhao, L.; Jordan, T. H.

    2003-12-01

    We present a systematic methodology for evaluating and improving 3D seismic velocity models using broadband waveform data from regional earthquakes. The operator that maps a synthetic waveform into an observed waveform is expressed in the Rytov form D(ω) = exp[iω δτp(ω) − ω δτq(ω)]. We measure the phase delay time δτp(ω) and the amplitude reduction time δτq(ω) as a function of frequency ω using Gee & Jordan's [1992] isolation-filter technique, and we correct the data for frequency-dependent interference and frequency-independent source statics. We have applied this procedure to a set of small events in Southern California. Synthetic seismograms were computed using three types of velocity models: the 1D Standard Southern California Crustal Model (SoCaL) [Dreger & Helmberger, 1993], the 3D SCEC Community Velocity Model, Version 3.0 (CVM3.0) [Magistrale et al., 2000], and a set of path-averaged 1D models (A1D) extracted from CVM3.0 by horizontally averaging wave slownesses along source-receiver paths. The 3D synthetics were computed using K. Olsen's finite difference code. More than 1000 measurements were made on both P and S waveforms at frequencies ranging from 0.2 to 1 Hz. Overall, the 3D model provided a substantially better fit to the waveform data than either laterally homogeneous or path-dependent 1D models. Relative to SoCaL, CVM3.0 provided a variance reduction of about 64% in δτp, and 41% in δτq. Relative to A1D, the variance reduction is about 46% and 20%, respectively. The same set of measurements can be employed to invert for both seismic source properties and seismic velocity structures. Fully numerical methods are being developed to compute the Fréchet kernels for these measurements [L. Zhao et. al., this meeting]. This methodology thus provides a unified framework for regional studies of seismic sources and Earth structure in Southern California and elsewhere.

  4. Genomic estimation of additive and dominance effects and impact of accounting for dominance on accuracy of genomic evaluation in sheep populations.

    PubMed

    Moghaddar, N; van der Werf, J H J

    2017-12-01

    The objectives of this study were to estimate the additive and dominance variance component of several weight and ultrasound scanned body composition traits in purebred and combined cross-bred sheep populations based on single nucleotide polymorphism (SNP) marker genotypes and then to investigate the effect of fitting additive and dominance effects on accuracy of genomic evaluation. Additive and dominance variance components were estimated in a mixed model equation based on "average information restricted maximum likelihood" using additive and dominance (co)variances between animals calculated from 48,599 SNP marker genotypes. Genomic prediction was based on genomic best linear unbiased prediction (GBLUP), and the accuracy of prediction was assessed based on a random 10-fold cross-validation. Across different weight and scanned body composition traits, dominance variance ranged from 0.0% to 7.3% of the phenotypic variance in the purebred population and from 7.1% to 19.2% in the combined cross-bred population. In the combined cross-bred population, the range of dominance variance decreased to 3.1% and 9.9% after accounting for heterosis effects. Accounting for dominance effects significantly improved the likelihood of the fitting model in the combined cross-bred population. This study showed a substantial dominance genetic variance for weight and ultrasound scanned body composition traits particularly in cross-bred population; however, improvement in the accuracy of genomic breeding values was small and statistically not significant. Dominance variance estimates in combined cross-bred population could be overestimated if heterosis is not fitted in the model. © 2017 Blackwell Verlag GmbH.
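    The relationship matrices underlying such an additive-plus-dominance genomic analysis can be sketched from SNP genotypes. This uses the standard VanRaden additive construction and one common dominance parameterization (heterozygote deviations scaled by Σ(2pq)²); the study's exact model, with 48,599 markers and heterosis effects, is not reproduced, and the toy genotypes are random.

```python
import numpy as np

def grm_additive(M):
    """VanRaden additive genomic relationship matrix from a
    (n_animals x n_snps) genotype matrix coded 0/1/2."""
    p = M.mean(axis=0) / 2.0                  # allele frequencies
    Z = M - 2.0 * p                           # centered genotypes
    return (Z @ Z.T) / (2.0 * np.sum(p * (1 - p)))

def grm_dominance(M):
    """One common dominance relationship matrix: heterozygote indicator
    minus its expectation 2pq, scaled by sum of (2pq)^2. Other
    parameterizations exist in the literature."""
    p = M.mean(axis=0) / 2.0
    H = (M == 1).astype(float) - 2.0 * p * (1 - p)
    return (H @ H.T) / np.sum((2.0 * p * (1 - p)) ** 2)

rng = np.random.default_rng(7)
M = rng.integers(0, 3, size=(50, 1000))       # 50 animals, 1000 toy SNPs
G = grm_additive(M)                           # additive (co)variances
D = grm_dominance(M)                          # dominance (co)variances
```

In a GBLUP mixed model, G and D would parameterize the additive and dominance random effects whose variance components are then estimated by average-information REML.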

  5. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Krasnobaeva, L. A., E-mail: kla1983@mail.ru; Siberian State Medical University Moscowski Trakt 2, Tomsk, 634050; Shapovalov, A. V.

    Within the formalism of the Fokker–Planck equation, the influence of a nonstationary external force, a random force, and dissipation effects on the dynamics of local conformational perturbations (kinks) propagating along the DNA molecule is investigated. Such waves play an important role in the regulation of key biological processes in living systems at the molecular level. A modified sine-Gordon equation, simulating the rotational oscillations of bases in one of the DNA chains, was used as the dynamic model of DNA. The equation of evolution of the kink momentum is obtained in the form of a stochastic differential equation in the Stratonovich sense within the framework of the well-known McLaughlin and Scott energy approach. The corresponding Fokker–Planck equation for the momentum distribution function coincides with the equation describing an Ornstein–Uhlenbeck process with a regular nonstationary external force. The influence of nonlinear stochastic effects on the kink dynamics is considered with the help of the nonlinear Fokker–Planck equation with a shift coefficient dependent on the first moment of the kink momentum distribution function. Expressions are derived for the average value and variance of the momentum. Examples are considered which demonstrate the influence of the external regular and random forces on the evolution of the average value and variance of the kink momentum.

  6. Instantaneous variance scaling of AIRS thermodynamic profiles using a circular area Monte Carlo approach

    NASA Astrophysics Data System (ADS)

    Dorrestijn, Jesse; Kahn, Brian H.; Teixeira, João; Irion, Fredrick W.

    2018-05-01

    Satellite observations are used to obtain vertical profiles of variance scaling of temperature (T) and specific humidity (q) in the atmosphere. A higher spatial resolution nadir retrieval at 13.5 km complements previous Atmospheric Infrared Sounder (AIRS) investigations with 45 km resolution retrievals and enables the derivation of power law scaling exponents to length scales as small as 55 km. We introduce a variable-sized circular-area Monte Carlo methodology to compute exponents instantaneously within the swath of AIRS that yields additional insight into scaling behavior. While this method is approximate and some biases are likely to exist within non-Gaussian portions of the satellite observational swaths of T and q, this method enables the estimation of scale-dependent behavior within instantaneous swaths for individual tropical and extratropical systems of interest. Scaling exponents are shown to fluctuate between β = -1 and -3 at scales ≥ 500 km, while at scales ≤ 500 km they are typically near β ≈ -2, with q slightly lower than T at the smallest scales observed. In the extratropics, the large-scale β is near -3. Within the tropics, however, the large-scale β for T is closer to -1 as small-scale moist convective processes dominate. In the tropics, q exhibits large-scale β between -2 and -3. The values of β are generally consistent with previous works of either time-averaged spatial variance estimates, or aircraft observations that require averaging over numerous flight observational segments. The instantaneous variance scaling methodology is relevant for cloud parameterization development and the assessment of time variability of scaling exponents.
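    Estimating a scaling exponent β from variance versus length scale reduces to a least-squares fit in log-log space, since variance ∝ scaleᵝ implies log(variance) = β·log(scale) + const. The sketch below fits synthetic data generated with β = −2; the circular-area Monte Carlo sampling itself is not reproduced.

```python
import numpy as np

def scaling_exponent(scales, variances):
    """Least-squares slope of log(variance) vs log(scale), i.e. the
    power-law exponent beta in variance ~ scale**beta."""
    slope, _ = np.polyfit(np.log(scales), np.log(variances), 1)
    return slope

# Synthetic variances following a beta = -2 power law with mild noise
rng = np.random.default_rng(5)
scales = np.array([55.0, 110.0, 220.0, 440.0, 880.0])   # length scales, km
variances = scales ** -2.0 * np.exp(rng.normal(0, 0.05, scales.size))
beta = scaling_exponent(scales, variances)               # recovers ~ -2
```

Applied within an instantaneous swath, the same fit over variable-sized circular areas yields the scale-dependent β profiles the abstract describes.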

  7. Variance in prey abundance influences time budgets of breeding seabirds: Evidence from pigeon guillemots Cepphus columba

    USGS Publications Warehouse

    Litzow, Michael A.; Piatt, John F.

    2003-01-01

    We use data on pigeon guillemots Cepphus columba to test the hypothesis that discretionary time in breeding seabirds is correlated with variance in prey abundance. We measured the amount of time that guillemots spent at the colony before delivering fish to chicks ("resting time") in relation to fish abundance as measured by beach seines and bottom trawls. Radio telemetry showed that resting time was inversely correlated with time spent diving for fish during foraging trips (r = -0.95). Pigeon guillemots fed their chicks either Pacific sand lance Ammodytes hexapterus, a schooling midwater fish, which exhibited high interannual variance in abundance (CV = 181%), or a variety of non-schooling demersal fishes, which were less variable in abundance (average CV = 111%). Average resting times were 46% higher at colonies where schooling prey dominated the diet. Individuals at these colonies reduced resting times 32% during years of low food abundance, but did not reduce meal delivery rates. In contrast, individuals feeding on non-schooling fishes did not reduce resting times during low food years, but did reduce meal delivery rates by 27%. Interannual variance in resting times was greater for the schooling group than for the non-schooling group. We conclude from these differences that time allocation in pigeon guillemots is more flexible when variable schooling prey dominate diets. Resting times were also 27% lower for individuals feeding two-chick rather than one-chick broods. The combined effects of diet and brood size on adult time budgets may help to explain higher rates of brood reduction for pigeon guillemot chicks fed non-schooling fishes.

  8. Multi range spectral feature fitting for hyperspectral imagery in extracting oilseed rape planting area

    NASA Astrophysics Data System (ADS)

    Pan, Zhuokun; Huang, Jingfeng; Wang, Fumin

    2013-12-01

Spectral feature fitting (SFF) is a commonly used strategy in hyperspectral imagery analysis for discriminating ground targets. Compared to other image analysis techniques, SFF does not guarantee higher accuracy in extracting image information in all circumstances. Multi range spectral feature fitting (MRSFF) from ENVI software allows the user to focus on the spectral features of interest to yield better performance; the spectral wavelength ranges and their corresponding weights must therefore be determined. The purpose of this article is to demonstrate the performance of MRSFF in oilseed rape planting area extraction. A practical method for defining the weighted values, the variance coefficient weight method, was proposed to set up the criterion. Oilseed rape field canopy spectra from the whole growth stage were collected prior to investigating their phenological variation; oilseed rape endmember spectra were extracted from the Hyperion image as identifying samples to be used in analyzing the oilseed rape field. Wavelength range divisions were determined by the difference between field-measured spectra and image spectra, and image spectral variance coefficient weights for each wavelength range were calculated corresponding to field-measured spectra from the closest date. By using MRSFF, wavelength ranges were classified to characterize the target's spectral features without compromising the entirety of the spectral profile. The analysis was substantially successful in extracting oilseed rape planting areas (RMSE ≤ 0.06), and the RMSE histogram indicated a superior result compared to conventional SFF. Accuracy assessment was based on comparing the mapping result with spectral angle mapping (SAM) and the normalized difference vegetation index (NDVI). MRSFF yielded a robust, convincing result and, therefore, may further the use of hyperspectral imagery in precision agriculture.
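A least-squares spectral feature fit of this kind can be sketched in a few lines. This is a generic weighted SFF, not ENVI's MRSFF implementation; the spectra and weights below are hypothetical.

```python
import numpy as np

def spectral_feature_fit(image_spec, ref_spec, weights=None):
    """Scale a reference (endmember) spectrum to an image spectrum by
    weighted least squares and report the weighted RMSE of the fit."""
    w = np.ones_like(ref_spec) if weights is None else weights
    scale = np.sum(w * image_spec * ref_spec) / np.sum(w * ref_spec ** 2)
    rmse = np.sqrt(np.sum(w * (image_spec - scale * ref_spec) ** 2) / np.sum(w))
    return scale, rmse

ref = np.array([0.2, 0.5, 0.9, 0.4])  # hypothetical endmember spectrum
img = 2.0 * ref                       # perfectly matching pixel spectrum
scale, rmse = spectral_feature_fit(img, ref)
print(scale, round(rmse, 6))  # 2.0 0.0
```

A lower RMSE indicates a better match between the image spectrum and the scaled reference feature.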

  9. Predicting sugar-sweetened behaviours with theory of planned behaviour constructs: Outcome and process results from the SIPsmartER behavioural intervention.

    PubMed

    Zoellner, Jamie M; Porter, Kathleen J; Chen, Yvonnes; Hedrick, Valisa E; You, Wen; Hickman, Maja; Estabrooks, Paul A

    2017-05-01

Guided by the theory of planned behaviour (TPB) and health literacy concepts, SIPsmartER is a six-month multicomponent intervention effective at improving sugar-sweetened beverage (SSB) behaviours. Using SIPsmartER data, this study explores prediction of SSB behavioural intention (BI) and behaviour from TPB constructs using: (1) cross-sectional and prospective models and (2) 11 single-item assessments from interactive voice response (IVR) technology. Quasi-experimental design, including pre- and post-outcome data and repeated-measures process data from 155 intervention participants. Validated multi-item TPB measures, single-item TPB measures, and self-reported SSB behaviours. Hypothesised relationships were investigated using correlation and multiple regression models. TPB constructs explained 32% of the variance in BI cross-sectionally and 20% prospectively, and explained 13-20% of the variance in behaviour cross-sectionally and 6% prospectively. Single-item scale models were significant, yet explained less variance. All IVR models predicting BI (average 21%, range 6-38%) and behaviour (average 30%, range 6-55%) were significant. Findings are interpreted in the context of other cross-sectional, prospective and experimental TPB health and dietary studies. Findings advance the experimental application of the TPB, including understanding constructs at outcome and process time points and applying theory in all intervention development, implementation and evaluation phases.

  10. Hydraulic geometry of river cross sections; theory of minimum variance

    USGS Publications Warehouse

    Williams, Garnett P.

    1978-01-01

    This study deals with the rates at which mean velocity, mean depth, and water-surface width increase with water discharge at a cross section on an alluvial stream. Such relations often follow power laws, the exponents in which are called hydraulic exponents. The Langbein (1964) minimum-variance theory is examined in regard to its validity and its ability to predict observed hydraulic exponents. The variables used with the theory were velocity, depth, width, bed shear stress, friction factor, slope (energy gradient), and stream power. Slope is often constant, in which case only velocity, depth, width, shear and friction factor need be considered. The theory was tested against a wide range of field data from various geographic areas of the United States. The original theory was intended to produce only the average hydraulic exponents for a group of cross sections in a similar type of geologic or hydraulic environment. The theory does predict these average exponents with a reasonable degree of accuracy. An attempt to forecast the exponents at any selected cross section was moderately successful. Empirical equations are more accurate than the minimum variance, Gauckler-Manning, or Chezy methods. Predictions of the exponent of width are most reliable, the exponent of depth fair, and the exponent of mean velocity poor. (Woodard-USGS)

  11. [Path analysis of lifestyle habits to the metabolic syndrome].

    PubMed

    Zhu, Zhen-xin; Zhang, Cheng-qi; Tang, Fang; Song, Xin-hong; Xue, Fu-zhong

    2013-04-01

To evaluate the relationship between lifestyle habits and the components of metabolic syndrome (MS). Based on the routine health check-up system of a Center for Health Management in Shandong Province, a longitudinal health check-up surveillance cohort from 2005 to 2010 was set up, with 13 225 urban workers in Jinan included in the analysis. The survey covered demographic information, medical history, lifestyle habits, body mass index (BMI), and levels of blood pressure, fasting blood glucose and blood lipids. The distributions of BMI, blood pressure, fasting blood glucose, blood lipids and lifestyle habits were compared between MS patients and the non-MS population; latent variables were extracted by exploratory factor analysis to determine the structural model, and a partial least squares path model was then constructed between lifestyle habits and the components of MS. Participants' age was (46.62 ± 12.16) years. The overall prevalence of MS was 22.43% (2967/13 225): 26.49% (2535/9570) in males and 11.82% (432/3655) in females, a statistically significant difference (χ² = 327.08, P < 0.01). The difference in dietary habits between MS patients and the non-MS population was statistically significant (χ² = 166.31, P < 0.01): in MS patients, the rates of vegetarian, mixed and animal-based diets were 23.39% (694/2967), 42.50% (1261/2967) and 34.11% (1012/2967) respectively, versus 30.80% (3159/10 258), 46.37% (4757/10 258) and 22.83% (2342/10 258) in the non-MS population. The difference in alcohol consumption was also statistically significant (χ² = 374.22, P < 0.01): in MS patients, the rates of never or past, occasional and regular drinking were 27.37% (812/2967), 24.71% (733/2967) and 47.93% (1422/2967) respectively, versus 39.60% (4062/10 258), 31.36% (3217/10 258) and 29.04% (2979/10 258) in the non-MS population.
The difference in smoking status was likewise statistically significant (χ² = 115.86, P < 0.01): in MS patients, the rates of never or past, occasional and regular smoking were 59.72% (1772/2967), 6.24% (185/2967) and 34.04% (1010/2967) respectively, versus 70.03% (7184/10 258), 5.35% (549/10 258) and 24.61% (2525/10 258) in the non-MS population. Both lifestyle habits and the components of MS were each attributable to a single latent variable. After adjustment for age and gender, the path coefficient between the latent lifestyle component and the latent MS component was 0.22 and statistically significant (t = 6.46, P < 0.01) by bootstrap test. Reliability and validity of the model: for the lifestyle latent variable, average variance extracted was 0.53, composite reliability was 0.77 and Cronbach's α was 0.57; for the MS latent variable, average variance extracted was 0.45, composite reliability was 0.76 and Cronbach's α was 0.59. Unhealthy lifestyle habits are closely related to MS; an animal-based diet, excessive drinking and smoking are risk factors for MS.
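For reference, the two reliability statistics quoted here, average variance extracted (AVE) and composite reliability (CR), are standard functions of standardized factor loadings. The sketch below uses hypothetical loadings, not the paper's data.

```python
def average_variance_extracted(loadings):
    """AVE: mean of the squared standardized loadings of a latent
    variable's indicators (values >= 0.5 are conventionally acceptable)."""
    return sum(l ** 2 for l in loadings) / len(loadings)

def composite_reliability(loadings):
    """CR = (sum of loadings)^2 / ((sum of loadings)^2 + sum of error
    variances), with error variance 1 - l^2 for standardized loadings."""
    s = sum(loadings) ** 2
    e = sum(1 - l ** 2 for l in loadings)
    return s / (s + e)

loadings = [0.7, 0.75, 0.72]  # hypothetical standardized loadings
print(round(average_variance_extracted(loadings), 3))  # 0.524
print(round(composite_reliability(loadings), 3))       # 0.767
```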

  12. An Empirical Study of Design Parameters for Assessing Differential Impacts for Students in Group Randomized Trials.

    PubMed

    Jaciw, Andrew P; Lin, Li; Ma, Boya

    2016-10-18

Prior research has investigated design parameters for assessing average program impacts on achievement outcomes with cluster randomized trials (CRTs). Less is known about parameters important for assessing differential impacts. This article develops a statistical framework for designing CRTs to assess differences in impact among student subgroups and presents initial estimates of critical parameters. Effect sizes and minimum detectable effect sizes for average and differential impacts are calculated before and after conditioning on effects of covariates using results from several CRTs. Relative sensitivities to detect average and differential impacts are also examined. Student outcomes from six CRTs are analyzed: achievement in math, science, reading, and writing. The ratio of between-cluster variation in the slope of the moderator to the total variance, the "moderator gap variance ratio," is important for designing studies to detect differences in impact between student subgroups. This quantity is the analogue of the intraclass correlation coefficient. Typical values were .02 for gender and .04 for socioeconomic status. For the studies considered, estimates of differential impact were in many cases larger than those of average impact, and after conditioning on effects of covariates, similar power was achieved for detecting average and differential impacts of the same size. Measuring differential impacts is important for addressing questions of equity and generalizability and for guiding interpretation of subgroup impact findings. Adequate power for doing this is in some cases achievable with CRTs designed to measure average impacts. Continuing collection of parameters for assessing differential impacts is the next step. © The Author(s) 2016.
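For average impacts, the minimum detectable effect size in a balanced two-arm CRT follows a standard textbook approximation (after Bloom); the sketch below uses that formula with illustrative parameters and does not reproduce the article's differential-impact framework.

```python
import math

def mdes_cluster_rct(rho, n_clusters, cluster_size, multiplier=2.8):
    """Approximate minimum detectable effect size for a balanced two-arm
    cluster randomized trial: MDES = M * sqrt(4*rho/J + 4*(1-rho)/(J*n)),
    where rho is the intraclass correlation, J the total number of
    clusters, n the cluster size, and M ~ 2.8 for 80% power at alpha=.05."""
    j, n = n_clusters, cluster_size
    return multiplier * math.sqrt(4 * rho / j + 4 * (1 - rho) / (j * n))

# e.g. 40 schools of 25 students each with an ICC of 0.20
print(round(mdes_cluster_rct(0.20, 40, 25), 3))  # 0.426
```

Larger intraclass correlations (or, analogously, larger moderator gap variance ratios) drive the detectable effect size up for a fixed number of clusters.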

  13. Variance components estimation for continuous and discrete data, with emphasis on cross-classified sampling designs

    USGS Publications Warehouse

    Gray, Brian R.; Gitzen, Robert A.; Millspaugh, Joshua J.; Cooper, Andrew B.; Licht, Daniel S.

    2012-01-01

    Variance components may play multiple roles (cf. Cox and Solomon 2003). First, magnitudes and relative magnitudes of the variances of random factors may have important scientific and management value in their own right. For example, variation in levels of invasive vegetation among and within lakes may suggest causal agents that operate at both spatial scales – a finding that may be important for scientific and management reasons. Second, variance components may also be of interest when they affect precision of means and covariate coefficients. For example, variation in the effect of water depth on the probability of aquatic plant presence in a study of multiple lakes may vary by lake. This variation will affect the precision of the average depth-presence association. Third, variance component estimates may be used when designing studies, including monitoring programs. For example, to estimate the numbers of years and of samples per year required to meet long-term monitoring goals, investigators need estimates of within and among-year variances. Other chapters in this volume (Chapters 7, 8, and 10) as well as extensive external literature outline a framework for applying estimates of variance components to the design of monitoring efforts. For example, a series of papers with an ecological monitoring theme examined the relative importance of multiple sources of variation, including variation in means among sites, years, and site-years, for the purposes of temporal trend detection and estimation (Larsen et al. 2004, and references therein).
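For a balanced one-way design (e.g. several samples per year), the within- and among-group variance components can be estimated with the classical method-of-moments ANOVA estimators. This is a generic sketch with toy data, not tied to any particular monitoring dataset.

```python
import numpy as np

def variance_components(groups):
    """Method-of-moments estimators for a balanced one-way random-effects
    design: returns (among-group variance, within-group variance), where
    sigma2_among = (MSB - MSW) / n, truncated at zero."""
    k, n = len(groups), len(groups[0])
    grand = np.mean([x for g in groups for x in g])
    msb = n * sum((np.mean(g) - grand) ** 2 for g in groups) / (k - 1)
    msw = sum((x - np.mean(g)) ** 2 for g in groups for x in g) / (k * (n - 1))
    return max((msb - msw) / n, 0.0), msw

# toy data: two groups (e.g. years) with three samples each
among, within = variance_components([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]])
print(round(among, 3), round(within, 3))  # 4.167 1.0
```

Estimates like these feed directly into the sample-size trade-offs (years vs. samples per year) described above.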

  14. A Method of Time-Intensity Curve Calculation for Vascular Perfusion of Uterine Fibroids Based on Subtraction Imaging with Motion Correction

    NASA Astrophysics Data System (ADS)

    Zhu, Xinjian; Wu, Ruoyu; Li, Tao; Zhao, Dawei; Shan, Xin; Wang, Puling; Peng, Song; Li, Faqi; Wu, Baoming

    2016-12-01

The time-intensity curve (TIC) from a contrast-enhanced ultrasound (CEUS) image sequence of uterine fibroids provides important parameter information for qualitative and quantitative evaluation of the efficacy of treatments such as high-intensity focused ultrasound surgery. However, respiration and other physiological movements inevitably affect the process of CEUS imaging and reduce the accuracy of TIC calculation. In this study, a method of TIC calculation for vascular perfusion of uterine fibroids based on subtraction imaging with motion correction is proposed. First, the fibroid CEUS recording was decoded into frame images based on the recording frame rate. Next, the Brox optical flow algorithm was used to estimate the displacement field and correct the motion between two frames using a warp technique. Then, subtraction imaging was performed to extract the positional distribution of vascular perfusion (PDOVP). Finally, the average gray level of all pixels in the PDOVP of each image was determined and taken as the TIC of the CEUS image sequence. Both the correlation coefficient and the mutual information of the results with the proposed method were larger than those obtained with the original method. PDOVP extraction results improved significantly after motion correction. The variance reduction rates were all positive, indicating that the fluctuations of the TIC became less pronounced and that calculation accuracy improved after motion correction. The proposed method can effectively overcome the influence of motion, mainly caused by respiration, and allows precise calculation of the TIC.
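The final step, reducing each frame to the mean gray level inside the extracted perfusion region, can be sketched as follows. The mask and frames are toy data; the motion-correction and subtraction steps are assumed to have been done already.

```python
import numpy as np

def time_intensity_curve(frames, mask):
    """Mean gray level inside a vascular-perfusion mask, per frame."""
    return [float(f[mask].mean()) for f in frames]

# toy 4x4 frames with uniformly rising intensity inside a 2x2 mask
mask = np.zeros((4, 4), dtype=bool)
mask[1:3, 1:3] = True
frames = [np.full((4, 4), v, dtype=float) for v in (10, 20, 30)]
print(time_intensity_curve(frames, mask))  # [10.0, 20.0, 30.0]
```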

  15. Three Averaging Techniques for Reduction of Antenna Temperature Variance Measured by a Dicke Mode, C-Band Radiometer

    NASA Technical Reports Server (NTRS)

    Mackenzie, Anne I.; Lawrence, Roland W.

    2000-01-01

    As new radiometer technologies provide the possibility of greatly improved spatial resolution, their performance must also be evaluated in terms of expected sensitivity and absolute accuracy. As aperture size increases, the sensitivity of a Dicke mode radiometer can be maintained or improved by application of any or all of three digital averaging techniques: antenna data averaging with a greater than 50% antenna duty cycle, reference data averaging, and gain averaging. An experimental, noise-injection, benchtop radiometer at C-band showed a 68.5% reduction in Delta-T after all three averaging methods had been applied simultaneously. For any one antenna integration time, the optimum 34.8% reduction in Delta-T was realized by using an 83.3% antenna/reference duty cycle.
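All three techniques rest on the same statistics: averaging N independent noise samples shrinks the standard deviation (and hence Delta-T) by roughly the square root of N. A minimal numerical check, unrelated to any particular radiometer hardware:

```python
import numpy as np

# Averaging 16 independent samples should reduce the noise standard
# deviation by about sqrt(16) = 4 relative to a single sample.
rng = np.random.default_rng(0)
noise = rng.normal(0.0, 1.0, size=(100_000, 16))
single = noise[:, 0].std()
averaged = noise.mean(axis=1).std()
print(3.5 < single / averaged < 4.5)  # True
```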

  16. Micro-Droplet Detection Method for Measuring the Concentration of Alkaline Phosphatase-Labeled Nanoparticles in Fluorescence Microscopy

    PubMed Central

    Li, Rufeng; Wang, Yibei; Xu, Hong; Fei, Baowei; Qin, Binjie

    2017-01-01

This paper developed and evaluated a quantitative image analysis method to measure the concentration of nanoparticles on which alkaline phosphatase (AP) was immobilized. These AP-labeled nanoparticles are widely used as signal markers for tagging biomolecules at nanometer and sub-nanometer scales. The AP-labeled nanoparticle concentration measurement can then be used directly to quantitatively analyze the biomolecular concentration. Micro-droplets are mono-dispersed micro-reactors that can be used to encapsulate and detect AP-labeled nanoparticles. Micro-droplets comprise both empty micro-droplets and fluorescent micro-droplets, where fluorescent micro-droplets are generated by the fluorescence reaction between the APs adhering to a single nanoparticle and the corresponding fluorogenic substrates within droplets. By detecting micro-droplets and calculating the proportion of fluorescent micro-droplets among the overall micro-droplets, we can calculate the AP-labeled nanoparticle concentration. The proposed micro-droplet detection method includes the following steps: (1) Gaussian filtering to remove the noise of the overall fluorescent targets, (2) contrast-limited adaptive histogram equalization to enhance the contrast of weakly luminescent micro-droplets, (3) Otsu's thresholding method (maximizing the inter-class variance) to segment the enhanced image and obtain a binary map of the overall micro-droplets, (4) a circular Hough transform (CHT) method to detect the overall micro-droplets and (5) an intensity-mean-based thresholding segmentation method to extract the fluorescent micro-droplets. The experimental results on fluorescent micro-droplet images show that the average accuracy of our micro-droplet detection method is 0.9586, the average true positive rate is 0.9502, and the average false positive rate is 0.0073. The detection method can be successfully applied to measure AP-labeled nanoparticle concentration in fluorescence microscopy. PMID:29160812
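Step (3), Otsu's inter-class-variance-maximizing threshold, can be implemented compactly from a histogram. This is a generic NumPy sketch on a synthetic bimodal image, not the authors' code.

```python
import numpy as np

def otsu_threshold(gray):
    """Otsu's method: choose the threshold that maximizes the
    between-class (inter-class) variance of the two pixel groups."""
    hist, _ = np.histogram(gray, bins=256, range=(0, 256))
    p = hist / hist.sum()
    omega = np.cumsum(p)                # probability of class 0
    mu = np.cumsum(p * np.arange(256))  # cumulative mean
    mu_t = mu[-1]
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b2 = (mu_t * omega - mu) ** 2 / (omega * (1 - omega))
    sigma_b2[~np.isfinite(sigma_b2)] = 0.0  # zero out empty classes
    return int(np.argmax(sigma_b2))

# bimodal toy image: dark background (30) and bright droplets (200)
img = np.concatenate([np.full(900, 30), np.full(100, 200)]).astype(np.uint8)
t = otsu_threshold(img)
print(30 <= t < 200)  # True
```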

  17. Micro-Droplet Detection Method for Measuring the Concentration of Alkaline Phosphatase-Labeled Nanoparticles in Fluorescence Microscopy.

    PubMed

    Li, Rufeng; Wang, Yibei; Xu, Hong; Fei, Baowei; Qin, Binjie

    2017-11-21

This paper developed and evaluated a quantitative image analysis method to measure the concentration of nanoparticles on which alkaline phosphatase (AP) was immobilized. These AP-labeled nanoparticles are widely used as signal markers for tagging biomolecules at nanometer and sub-nanometer scales. The AP-labeled nanoparticle concentration measurement can then be used directly to quantitatively analyze the biomolecular concentration. Micro-droplets are mono-dispersed micro-reactors that can be used to encapsulate and detect AP-labeled nanoparticles. Micro-droplets comprise both empty micro-droplets and fluorescent micro-droplets, where fluorescent micro-droplets are generated by the fluorescence reaction between the APs adhering to a single nanoparticle and the corresponding fluorogenic substrates within droplets. By detecting micro-droplets and calculating the proportion of fluorescent micro-droplets among the overall micro-droplets, we can calculate the AP-labeled nanoparticle concentration. The proposed micro-droplet detection method includes the following steps: (1) Gaussian filtering to remove the noise of the overall fluorescent targets, (2) contrast-limited adaptive histogram equalization to enhance the contrast of weakly luminescent micro-droplets, (3) Otsu's thresholding method (maximizing the inter-class variance) to segment the enhanced image and obtain a binary map of the overall micro-droplets, (4) a circular Hough transform (CHT) method to detect the overall micro-droplets and (5) an intensity-mean-based thresholding segmentation method to extract the fluorescent micro-droplets. The experimental results on fluorescent micro-droplet images show that the average accuracy of our micro-droplet detection method is 0.9586, the average true positive rate is 0.9502, and the average false positive rate is 0.0073. The detection method can be successfully applied to measure AP-labeled nanoparticle concentration in fluorescence microscopy.

  18. Faculty and resident evaluations of medical students on a surgery clerkship correlate poorly with standardized exam scores.

    PubMed

    Goldstein, Seth D; Lindeman, Brenessa; Colbert-Getz, Jorie; Arbella, Trisha; Dudas, Robert; Lidor, Anne; Sacks, Bethany

    2014-02-01

The clinical knowledge of medical students on a surgery clerkship is routinely assessed via subjective evaluations from faculty members and residents. Interpretation of these ratings should ideally be valid and reliable. However, prior literature has questioned the correlation between subjective and objective components when assessing students' clinical knowledge. Retrospective cross-sectional data were collected from medical student records at The Johns Hopkins University School of Medicine from July 2009 through June 2011. Surgical faculty members and residents rated students' clinical knowledge on a 5-point, Likert-type scale. Interrater reliability was assessed using intraclass correlation coefficients for students with ≥4 attending surgeon evaluations (n = 216) and ≥4 resident evaluations (n = 207). Convergent validity was assessed by correlating average evaluation ratings with scores on the National Board of Medical Examiners (NBME) clinical subject examination for surgery. Average resident and attending surgeon ratings were also compared by NBME quartile using analysis of variance. There were high degrees of reliability for resident ratings (intraclass correlation coefficient, .81) and attending surgeon ratings (intraclass correlation coefficient, .76). Resident and attending surgeon ratings shared a moderate degree of variance (19%). However, average resident ratings and average attending surgeon ratings shared only a small degree of variance with NBME surgery examination scores (ρ² ≤ .09). When ratings were compared among NBME quartile groups, the only significant difference was in residents' ratings of students in the bottom 25th percentile of scores compared with those in the top 25th percentile (P = .007). Although high interrater reliability suggests that attending surgeons and residents rate students with consistency, the lack of convergent validity suggests that these ratings may not reflect actual clinical knowledge.
Both faculty members and residents may benefit from training in knowledge assessment, which will likely increase opportunities to recognize deficiencies and make student evaluation a more valuable tool. Copyright © 2014 Elsevier Inc. All rights reserved.

  19. Classification of spatially unresolved objects

    NASA Technical Reports Server (NTRS)

    Nalepka, R. F.; Horwitz, H. M.; Hyde, P. D.; Morgenstern, J. P.

    1972-01-01

A proportion estimation technique for the classification of multispectral scanner images is reported that uses data-point averaging: estimated proportions are extracted and computed for a single averaged data point in order to classify spatially unresolved areas. Example extraction calculations of spectral signatures for bare soil, weeds, alfalfa, and barley prove quite accurate.

  20. Central composite rotatable design for investigation of microwave-assisted extraction of okra pod hydrocolloid.

    PubMed

    Samavati, Vahid

    2013-10-01

The microwave-assisted extraction (MAE) technique was employed to extract hydrocolloid from okra pods (OPH). The optimal conditions for microwave-assisted extraction of OPH were determined by response surface methodology. A central composite rotatable design (CCRD) was applied to evaluate the effects of three independent variables (microwave power (X1: 100-500 W), extraction time (X2: 30-90 min), and extraction temperature (X3: 40-90 °C)) on the extraction yield of OPH. Correlation analysis of the mathematical regression model indicated that a quadratic polynomial model could be employed to optimize the microwave extraction of OPH. The optimal conditions to obtain the highest recovery of OPH (14.911 ± 0.27%) were as follows: microwave power, 395.56 W; extraction time, 67.11 min; and extraction temperature, 73.33 °C. Under these optimal conditions, the experimental values agreed with those predicted by analysis of variance, indicating the high fitness of the model used and the success of response surface methodology in optimizing OPH extraction. After method development, the DPPH radical scavenging activity of the OPH was evaluated. MAE showed obvious advantages in terms of high extraction efficiency and radical scavenging activity of the extract within a shorter extraction time. Copyright © 2013 Elsevier B.V. All rights reserved.
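The response-surface idea can be illustrated in a single factor: fit a second-order polynomial to observed yields and take its stationary point as the optimum. The data below are hypothetical, not the paper's (the actual study fits a three-factor quadratic model).

```python
import numpy as np

# hypothetical single-factor yields (%) at three microwave power levels (W)
power = np.array([100.0, 300.0, 500.0])
yield_pct = np.array([9.0, 14.0, 11.0])

# fit y = b2*x^2 + b1*x + b0 and locate the stationary (optimal) point
b2, b1, b0 = np.polyfit(power, yield_pct, 2)
x_opt = -b1 / (2 * b2)
print(round(x_opt, 1))  # 325.0
```

With three or more factors, the same least-squares fit yields a full quadratic surface whose stationary point gives the jointly optimal settings.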

  1. Effects of the coupling strength of a voltage probe on the conductance coefficients in a three-lead microstructure

    NASA Astrophysics Data System (ADS)

    Iida, S.

    1991-03-01

    Using statistical scattering theory, we calculate the average and the variance of the conductance coefficients at zero temperature for a small disordered metallic wire composed of three arms. Each arm is coupled at the end to a perfectly conducting lead. The disorder is modeled by a microscopic random Hamiltonian belonging to the Gaussian orthogonal ensemble. As the coupling strength of the third arm (voltage probe) is increased, the variance of the conductance coefficient of the main track changes from the universal value of the two-lead geometry to that of the three-lead geometry. The variance of the resistance coefficient is strongly affected by the coupling strength of the arm whose resistance is being measured and has a relatively weak dependence on those of the other two arms.

  2. Complementary nonparametric analysis of covariance for logistic regression in a randomized clinical trial setting.

    PubMed

    Tangen, C M; Koch, G G

    1999-03-01

In the randomized clinical trial setting, controlling for covariates is expected to produce variance reduction for the treatment parameter estimate and to adjust for random imbalances of covariates between the treatment groups. However, for the logistic regression model, variance reduction is not obviously obtained. This can lead to concerns about the assumptions of the logistic model. We introduce a complementary nonparametric method for covariate adjustment. It provides results that are usually compatible with expectations for analysis of covariance. The only assumptions required are based on randomization and sampling arguments. The resulting treatment parameter is an (unconditional) population average log-odds ratio that has been adjusted for random imbalance of covariates. Data from a randomized clinical trial are used to compare results from the traditional maximum likelihood logistic method with those from the nonparametric logistic method. We examine treatment parameter estimates, corresponding standard errors, and significance levels in models with and without covariate adjustment. In addition, we discuss differences between unconditional population average treatment parameters and conditional subpopulation average treatment parameters. Additional features of the nonparametric method, including stratified (multicenter) and multivariate (multivisit) analyses, are illustrated. Extensions of this methodology to the proportional odds model are also made.

  3. Evaluation and optimization of sampling errors for the Monte Carlo Independent Column Approximation

    NASA Astrophysics Data System (ADS)

    Räisänen, Petri; Barker, Howard W.

    2004-07-01

    The Monte Carlo Independent Column Approximation (McICA) method for computing domain-average broadband radiative fluxes is unbiased with respect to the full ICA, but its flux estimates contain conditional random noise. McICA's sampling errors are evaluated here using a global climate model (GCM) dataset and a correlated-k distribution (CKD) radiation scheme. Two approaches to reduce McICA's sampling variance are discussed. The first is to simply restrict all of McICA's samples to cloudy regions. This avoids wasting precious few samples on essentially homogeneous clear skies. Clear-sky fluxes need to be computed separately for this approach, but this is usually done in GCMs for diagnostic purposes anyway. Second, accuracy can be improved by repeatedly sampling and averaging those CKD terms with large cloud radiative effects. Although this naturally increases computational costs over the standard CKD model, random errors for fluxes and heating rates are reduced by typically 50% to 60%, for the present radiation code, when the total number of samples is increased by 50%. When both variance reduction techniques are applied simultaneously, globally averaged flux and heating rate random errors are reduced by a factor of about 3.

  4. A research of road centerline extraction algorithm from high resolution remote sensing images

    NASA Astrophysics Data System (ADS)

    Zhang, Yushan; Xu, Tingfa

    2017-09-01

Satellite remote sensing technology has become one of the most effective methods for land surface monitoring in recent years, owing to advantages such as short revisit period, large scale and rich information. Road extraction is an important field in the application of high resolution remote sensing images, and an intelligent, automatic road extraction algorithm with high precision has great significance for transportation, road network updating and urban planning. Fuzzy c-means (FCM) clustering segmentation algorithms have been used for road extraction, but the traditional algorithms do not consider spatial information. An improved fuzzy c-means clustering algorithm combined with spatial information (SFCM) is proposed in this paper, which proves effective for noisy image segmentation. First, the image is segmented using the SFCM. Second, the segmentation result is processed by mathematical morphology to remove the joint regions. Third, the road centerlines are extracted by morphological thinning and burr trimming. The average integrity (completeness) of the centerline extraction algorithm is 97.98%, the average accuracy is 95.36% and the average quality is 93.59%. Experimental results show that the proposed method is effective for road centerline extraction.
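The three evaluation figures quoted (integrity/completeness, accuracy/correctness, quality) are conventionally computed from matched centerline lengths. A generic sketch assuming the standard definitions; the paper's exact definitions and the toy numbers below are assumptions.

```python
def completeness(matched_ref, total_ref):
    """Fraction of the reference road network that was extracted."""
    return matched_ref / total_ref

def correctness(matched_ext, total_ext):
    """Fraction of the extracted centerlines that match the reference."""
    return matched_ext / total_ext

def quality(tp, fp, fn):
    """Overall quality = TP / (TP + FP + FN), penalizing both error types."""
    return tp / (tp + fp + fn)

# toy centerline lengths (in pixels) for an extracted road network
print(round(completeness(980, 1000), 2),
      round(correctness(970, 1017), 2),
      round(quality(970, 47, 20), 3))  # 0.98 0.95 0.935
```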

  5. Spectral analysis of the Earth's topographic potential via 2D-DFT: a new data-based degree variance model to degree 90,000

    NASA Astrophysics Data System (ADS)

    Rexer, Moritz; Hirt, Christian

    2015-09-01

    Classical degree variance models (such as Kaula's rule or the Tscherning-Rapp model) often rely on low-resolution gravity data and so are subject to extrapolation when used to describe the decay of the gravity field at short spatial scales. This paper presents a new degree variance model based on the recently published GGMplus near-global land areas 220 m resolution gravity maps (Geophys Res Lett 40(16):4279-4283, 2013). We investigate and use a 2D-DFT (discrete Fourier transform) approach to transform GGMplus gravity grids into degree variances. The method is described in detail and its approximation errors are studied using closed-loop experiments. Focus is placed on tiling, azimuth averaging, and windowing effects in the 2D-DFT method and on analytical fitting of degree variances. Approximation errors of the 2D-DFT procedure on the (spherical harmonic) degree variance are found to be at the 10-20 % level. The importance of the reference surface (sphere, ellipsoid or topography) of the gravity data for correct interpretation of degree variance spectra is highlighted. The effect of the underlying mass arrangement (spherical or ellipsoidal approximation) on the degree variances is found to be crucial at short spatial scales. A rule-of-thumb for transformation of spectra between spherical and ellipsoidal approximation is derived. Application of the 2D-DFT on GGMplus gravity maps yields a new degree variance model to degree 90,000. The model is supported by GRACE, GOCE, EGM2008 and forward-modelled gravity at 3 billion land points over all land areas within the SRTM data coverage and provides gravity signal variances at the surface of the topography. The model yields omission errors of 9 mGal for gravity (1.5 cm for geoid effects) at scales of 10 km, 4 mGal (1 mm) at 2-km scales, and 2 mGal (0.2 mm) at 1-km scales.

  6. Including Both Time and Accuracy in Defining Text Search Efficiency.

    ERIC Educational Resources Information Center

    Symons, Sonya; Specht, Jacqueline A.

    1994-01-01

    Examines factors related to efficiency in a textbook search task. Finds that time and accuracy involved distinct processes and that accuracy was related to verbal competence. Finds further that measures of planning and extracting information accounted for 59% of the variance in search efficiency. Suggests that both accuracy and rate need to be…

  7. Multivariate analysis of variance of designed chromatographic data. A case study involving fermentation of rooibos tea.

    PubMed

    Marini, Federico; de Beer, Dalene; Walters, Nico A; de Villiers, André; Joubert, Elizabeth; Walczak, Beata

    2017-03-17

    An ultimate goal of investigations of rooibos plant material subjected to different stages of fermentation is to identify the chemical changes taking place in the phenolic composition, using an untargeted approach and chromatographic fingerprints. Realization of this goal requires, among others, identification of the main components of the plant material involved in chemical reactions during the fermentation process. Quantitative chromatographic data for the compounds in extracts of green, semi-fermented and fermented rooibos form the basis of a preliminary study following a targeted approach. The aim is to assess whether treatment has a significant effect based on all quantified compounds and to identify the compounds that contribute significantly to it. Analysis of variance is performed using modern multivariate methods such as ANOVA-Simultaneous Component Analysis, ANOVA - Target Projection and regularized MANOVA. This study is the first in which all three approaches are compared and evaluated. For the data studied, all three methods reveal the same significance of the fermentation effect on the extract compositions, but they lead to different interpretations of it. Copyright © 2017 Elsevier B.V. All rights reserved.

  8. Applications of physics to economics and finance: Money, income, wealth, and the stock market

    NASA Astrophysics Data System (ADS)

    Dragulescu, Adrian Antoniu

    Several problems arising in Economics and Finance are analyzed using concepts and quantitative methods from Physics. The dissertation is organized as follows: In the first chapter it is argued that in a closed economic system, money is conserved. Thus, by analogy with energy, the equilibrium probability distribution of money must follow the exponential Boltzmann-Gibbs law characterized by an effective temperature equal to the average amount of money per economic agent. The emergence of Boltzmann-Gibbs distribution is demonstrated through computer simulations of economic models. A thermal machine which extracts a monetary profit can be constructed between two economic systems with different temperatures. The role of debt and models with broken time-reversal symmetry for which the Boltzmann-Gibbs law does not hold, are discussed. In the second chapter, using data from several sources, it is found that the distribution of income is described for the great majority of population by an exponential distribution, whereas the high-end tail follows a power law. From the individual income distribution, the probability distribution of income for families with two earners is derived and it is shown that it also agrees well with the data. Data on wealth is presented and it is found that the distribution of wealth has a structure similar to the distribution of income. The Lorenz curve and Gini coefficient were calculated and are shown to be in good agreement with both income and wealth data sets. In the third chapter, the stock-market fluctuations at different time scales are investigated. A model where stock-price dynamics is governed by a geometrical (multiplicative) Brownian motion with stochastic variance is proposed. The corresponding Fokker-Planck equation can be solved exactly. Integrating out the variance, an analytic formula for the time-dependent probability distribution of stock price changes (returns) is found. 
The formula is in excellent agreement with the Dow-Jones index for the time lags from 1 to 250 trading days. For time lags longer than the relaxation time of variance, the probability distribution can be expressed in a scaling form using a Bessel function. The Dow-Jones data follow the scaling function for seven orders of magnitude.
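
    The conserved-money argument of the first chapter is easy to reproduce in simulation. The sketch below uses a generic random-transfer rule (the dissertation's own exchange models differ in detail): total money is conserved exactly, while repeated pairwise exchanges drive the distribution toward the exponential Boltzmann-Gibbs form:

```python
import random

def money_exchange(n_agents=1000, steps=200_000, m0=100.0, seed=42):
    """Random pairwise money transfers in a closed system (total conserved).
    The transfer rule (a random share of the payer's money) is one generic
    choice; other conserving rules relax to the same exponential form."""
    rng = random.Random(seed)
    money = [m0] * n_agents
    for _ in range(steps):
        i, j = rng.randrange(n_agents), rng.randrange(n_agents)
        if i == j:
            continue
        dm = rng.random() * money[i]  # agent i pays a random share to agent j
        money[i] -= dm
        money[j] += dm
    return money
```

    After roughly 200 exchanges per agent the histogram of money is close to exponential with mean m0; about 63% of agents (1 - 1/e) end up below the mean, as the Boltzmann-Gibbs law predicts.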

  9. Inactivation disinfection property of Moringa Oleifera seed extract: optimization and kinetic studies

    NASA Astrophysics Data System (ADS)

    Idris, M. A.; Jami, M. S.; Hammed, A. M.

    2017-05-01

    This paper presents the statistical optimization study of disinfection inactivation parameters of defatted Moringa oleifera seed extract on Pseudomonas aeruginosa bacterial cells. A three-level factorial design was used to estimate the optimum range, and the kinetics of the inactivation process were also studied. The inactivation process was modeled by comparing the Chick-Watson, Collins-Selleck and Hom disinfection models. The results from analysis of variance (ANOVA) of the statistical optimization process revealed that only contact time was significant. The optimum disinfection conditions for the seed extract were 125 mg/L, 30 minutes and 120 rpm agitation. At the optimum dose, the inactivation kinetics followed the Collins-Selleck model with a coefficient of determination (R2) of 0.6320. This study is the first of its kind to determine the inactivation kinetics of Pseudomonas aeruginosa using the defatted seed extract.
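
    As an illustration of the kind of kinetic fitting involved (not the authors' code): at a fixed disinfectant dose the Chick-Watson law reduces to first-order decay, ln(N/N0) = -k t, so the rate constant k can be estimated by least squares on the log-survival data:

```python
import math

def chick_watson_k(times, survivors, n0):
    """Least-squares estimate of the Chick-Watson rate constant k from
    ln(N/N0) = -k * t at a fixed disinfectant dose."""
    ys = [math.log(n / n0) for n in survivors]
    n = len(times)
    mx = sum(times) / n
    my = sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(times, ys))
             / sum((x - mx) ** 2 for x in times))
    return -slope  # decay slope, sign-flipped to give k > 0
```

    Fitting the Collins-Selleck or Hom models proceeds the same way after their respective linearizations.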

  10. Tool Wear Feature Extraction Based on Hilbert Marginal Spectrum

    NASA Astrophysics Data System (ADS)

    Guan, Shan; Song, Weijie; Pang, Hongyang

    2017-09-01

    In the metal cutting process, the signal contains a wealth of tool wear state information. An analysis and feature extraction method for tool wear signals based on the Hilbert marginal spectrum is proposed. Firstly, the tool wear signal was decomposed by the empirical mode decomposition algorithm, and the intrinsic mode functions containing the main information were selected using the correlation coefficient and the variance contribution rate. Secondly, the Hilbert transform was performed on the main intrinsic mode functions, yielding the Hilbert time-frequency spectrum and the Hilbert marginal spectrum. Finally, amplitude-domain indexes were extracted from the Hilbert marginal spectrum and used to construct the recognition feature vector of the tool wear state. The research results show that the extracted features can effectively characterize the different wear states of the tool, which provides a basis for monitoring tool wear condition.
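
    The screening step can be sketched as follows (the thresholds r_min and v_min are illustrative; the paper does not report specific values): an IMF is retained only if both its correlation with the raw signal and its variance contribution rate are high enough.

```python
import statistics

def pearson(a, b):
    """Pearson correlation coefficient of two equal-length sequences."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    den = (sum((x - ma) ** 2 for x in a) * sum((y - mb) ** 2 for y in b)) ** 0.5
    return num / den

def screen_imfs(signal, imfs, r_min=0.3, v_min=0.1):
    """Select IMFs by correlation with the raw signal and by variance
    contribution rate (each IMF's variance over the summed IMF variance)."""
    total_var = sum(statistics.pvariance(c) for c in imfs)
    kept = []
    for c in imfs:
        r = abs(pearson(signal, c))
        v = statistics.pvariance(c) / total_var
        if r >= r_min and v >= v_min:
            kept.append(c)
    return kept
```

    The retained IMFs would then go to the Hilbert transform stage to build the marginal spectrum.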

  11. The influence of microtopography on soil nutrients in created mitigation wetlands

    USGS Publications Warehouse

    Moser, K.F.; Ahn, C.; Noe, G.B.

    2009-01-01

    This study explores the relationship between microtopography and soil nutrients (and trace elements), comparing results for created and reference wetlands in Virginia, and examining the effects of disking during wetland creation. Replicate multiscale tangentially conjoined circular transects were used to quantify microtopography both in terms of elevation and by two microtopographic indices. Corresponding soil samples were analyzed for moisture content, total C and N, KCl-extractable NH4-N and NO3-N, and Mehlich-3 extractable P, Ca, Mg, K, Al, Fe, and Mn. Means and variances of soil nutrient/element concentrations were compared between created and natural wetlands and between disked and nondisked created wetlands. Natural sites had higher and more variable soil moisture, higher extractable P and Fe, lower Mn than created wetlands, and comparatively high variability in nutrient concentrations. Disked sites had higher soil moisture, NH4-N, Fe, and Mn than did nondisked sites. Consistently low variances (Levene test for inequality) suggested that nondisked sites had minimal nutrient heterogeneity. Across sites, low P availability was inferred by the molar ratio (Mehlich-3 [P/(Al + Fe)] < 0.06); strong intercorrelations among total C, total N, and extractable Fe, Al, and P suggested that humic-metal-P complexes may be important for P retention and availability. Correlations between nutrient/element concentrations and microtopographic indices suggested increased Mn and decreased K and Al availability with increased surface roughness. Disking appears to enhance water and nutrient retention, as well as nutrient heterogeneity otherwise absent from created wetlands, thus potentially promoting ecosystem development. © 2008 Society for Ecological Restoration International.
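
    The variance comparison here rests on Levene's test, whose statistic is simply a one-way ANOVA F computed on absolute deviations from each group's center. A minimal sketch of the mean-centered variant on synthetic data (the study may have used the median-centered Brown-Forsythe form):

```python
import statistics

def levene_w(*groups):
    """Levene's statistic: one-way ANOVA F on z_ij = |x_ij - mean of group i|."""
    k = len(groups)
    zs = [[abs(x - statistics.mean(g)) for x in g] for g in groups]
    n = sum(len(g) for g in groups)
    zbar = sum(sum(z) for z in zs) / n
    zbars = [statistics.mean(z) for z in zs]
    between = sum(len(z) * (zb - zbar) ** 2 for z, zb in zip(zs, zbars))
    within = sum(sum((x - zb) ** 2 for x in z) for z, zb in zip(zs, zbars))
    return (n - k) * between / ((k - 1) * within)
```

    A large W flags unequal variances (the kind of nutrient heterogeneity contrasted between disked and nondisked sites); it is referred to an F(k-1, n-k) distribution, while equal-variance groups give W near zero.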

  12. Determination of the organic aerosol mass to organic carbon ratio in IMPROVE samples.

    PubMed

    El-Zanan, Hazem S; Lowenthal, Douglas H; Zielinska, Barbara; Chow, Judith C; Kumar, Naresh

    2005-07-01

    The ratio of organic mass (OM) to organic carbon (OC) in PM(2.5) aerosols at US national parks in the IMPROVE network was estimated experimentally from solvent extraction of sample filters and from the difference between PM(2.5) mass and chemical constituents other than OC (mass balance) in IMPROVE samples from 1988 to 2003. Archived filters from five IMPROVE sites were extracted with dichloromethane (DCM), acetone and water. The extract residues were weighed to determine OM and analyzed for OC by thermal optical reflectance (TOR). On average, successive extracts of DCM, acetone, and water contained 64%, 21%, and 15% of the extractable OC, respectively. On average, the non-blank-corrected recovery of the OC initially measured in these samples by TOR was 115 ± 42%. OM/OC ratios from the combined DCM and acetone extracts averaged 1.92 and ranged from 1.58 at Indian Gardens, AZ in the Grand Canyon to 2.58 at Mount Rainier, WA. The average OM/OC ratio determined by mass balance was 2.07 across the IMPROVE network. The sensitivity of this ratio to assumptions concerning sulfate neutralization, water uptake by hygroscopic species, soil mass, and nitrate volatilization was evaluated. These results suggest that the value of 1.4 for the OM/OC ratio commonly used for mass and light extinction reconstruction in IMPROVE is too low.
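
    The mass-balance estimate works by subtracting the non-organic constituents from the gravimetric PM2.5 mass. A simplified sketch using the conventional 1.375 and 1.29 multipliers for ammonium sulfate and ammonium nitrate (the paper's full balance also treats soil composition, sulfate neutralization, and water uptake; the numbers in the usage note are made up):

```python
def om_by_mass_balance(pm25, sulfate, nitrate, ec, soil, oc):
    """OM inferred as PM2.5 mass minus non-OC constituents; returns (OM, OM/OC).
    All inputs in consistent mass-concentration units (e.g. ug/m3)."""
    amm_sulfate = 1.375 * sulfate  # (NH4)2SO4 / SO4 molar-mass ratio
    amm_nitrate = 1.29 * nitrate   # NH4NO3 / NO3 molar-mass ratio
    om = pm25 - (amm_sulfate + amm_nitrate + ec + soil)
    return om, om / oc
```

    For example, hypothetical inputs of PM2.5 = 10, sulfate = 2, nitrate = 1, EC = 0.5, soil = 1 and OC = 2.23 give OM = 4.46 and OM/OC = 2.0, a value in line with the network-average 2.07 reported above.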

  13. Methodological challenges in assessment of current use of warfarin among patients with atrial fibrillation using dispensation data from administrative health care databases.

    PubMed

    Sinyavskaya, Liliya; Matteau, Alexis; Johnson, Sarasa; Durand, Madeleine

    2018-06-05

    Algorithms to define current exposure to warfarin using administrative data may be imprecise. The study objectives were to characterize dispensation patterns and to measure gaps between expected and observed refill dates for warfarin and direct oral anticoagulants (DOACs). Retrospective cohort study using administrative health care databases of the Régie de l'assurance-maladie du Québec. We identified every dispensation of warfarin, dabigatran, rivaroxaban, or apixaban for patients with AF initiating oral anticoagulants between 2010 and 2015. For each dispensation, we extracted the date and duration. Refill gaps were calculated as the difference between expected and observed dates of successive dispensations. Refill gaps were summarized using descriptive statistics. To account for repeated observations nested within patients and to assess the components of variance of refill gaps, we used unconditional multilevel linear models. We identified 61 516 new users. The majority were prescribed warfarin (60.3%), followed by rivaroxaban (16.4%), dabigatran (14.5%), and apixaban (8.8%). The most frequent recorded duration of dispensation was 7 days, suggesting use of pharmacist-prepared weekly pillboxes. The average refill gap from the multilevel model was higher for warfarin (9.28 days, 95%CI: 8.97-9.59) compared with DOACs (apixaban 3.08 days, 95%CI: 2.96-3.20; dabigatran 3.70 days, 95%CI: 3.56-3.84; rivaroxaban 3.15 days, 95%CI: 3.03-3.27). The variance of refill gaps was greater among warfarin users than among DOAC users. Greater refill gaps for warfarin may reflect inadequate capture of the period covered by the number of dispensed pills recorded in administrative data. A time-dependent definition of exposure using dispensation data would lead to greater misclassification of warfarin use than of DOAC use. Copyright © 2018 John Wiley & Sons, Ltd.
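
    The refill-gap calculation itself is simple: the expected refill date is the previous fill date plus its days supplied, and the gap is the observed date minus the expected date. A minimal sketch on hypothetical records (the study worked from RAMQ dispensation claims):

```python
from datetime import date

def refill_gaps(dispensations):
    """Refill gaps in days; each dispensation is (fill_date, days_supplied)."""
    gaps = []
    for (d1, supply), (d2, _) in zip(dispensations, dispensations[1:]):
        expected = d1.toordinal() + supply      # expected next fill date
        gaps.append(d2.toordinal() - expected)  # positive = late refill
    return gaps
```

    For weekly-pillbox dispensations on Jan 1, Jan 10 and Jan 17, each for 7 days, the gaps are [2, 0] days.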

  14. Assessing Multivariate Constraints to Evolution across Ten Long-Term Avian Studies

    PubMed Central

    Teplitsky, Celine; Tarka, Maja; Møller, Anders P.; Nakagawa, Shinichi; Balbontín, Javier; Burke, Terry A.; Doutrelant, Claire; Gregoire, Arnaud; Hansson, Bengt; Hasselquist, Dennis; Gustafsson, Lars; de Lope, Florentino; Marzal, Alfonso; Mills, James A.; Wheelwright, Nathaniel T.; Yarrall, John W.; Charmantier, Anne

    2014-01-01

    Background In a rapidly changing world, it is of fundamental importance to understand processes constraining or facilitating adaptation through microevolution. As different traits of an organism covary, genetic correlations are expected to affect evolutionary trajectories. However, only limited empirical data are available. Methodology/Principal Findings We investigate the extent to which multivariate constraints affect the rate of adaptation, focusing on four morphological traits often shown to harbour large amounts of genetic variance and considered to be subject to limited evolutionary constraints. Our data set includes unique long-term data for seven bird species and a total of 10 populations. We estimate population-specific matrices of genetic correlations and multivariate selection coefficients to predict evolutionary responses to selection. Using Bayesian methods that facilitate the propagation of errors in estimates, we compare (1) the rate of adaptation based on predicted response to selection when including genetic correlations with predictions from models where these genetic correlations were set to zero and (2) the multivariate evolvability in the direction of current selection to the average evolvability in random directions of the phenotypic space. We show that genetic correlations on average decrease the predicted rate of adaptation by 28%. Multivariate evolvability in the direction of current selection was systematically lower than average evolvability in random directions of space. These significant reductions in the rate of adaptation and reduced evolvability were due to a general nonalignment of selection and genetic variance, notably orthogonality of directional selection with the size axis along which most (60%) of the genetic variance is found. Conclusions These results suggest that genetic correlations can impose significant constraints on the evolution of avian morphology in wild populations. 
This could have important impacts on evolutionary dynamics and hence population persistence in the face of rapid environmental change. PMID:24608111
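
    The comparison of predicted responses with and without genetic correlations follows the multivariate breeder's equation, R = Gβ. A toy sketch with a hypothetical two-trait G matrix (not the paper's estimates) shows how a negative genetic correlation shrinks the response to uniform directional selection:

```python
def predicted_response(G, beta):
    """Multivariate breeder's equation R = G @ beta, with plain nested lists."""
    return [sum(g * b for g, b in zip(row, beta)) for row in G]

def zero_correlations(G):
    """The same G with all off-diagonal (covariance) entries set to zero."""
    n = len(G)
    return [[G[i][j] if i == j else 0.0 for j in range(n)] for i in range(n)]
```

    With G = [[1, -0.8], [-0.8, 1]] and β = (1, 1), the response is (0.2, 0.2) with correlations versus (1, 1) without: an 80% reduction, the same kind of constraint as the 28% average reduction reported above.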

  15. Comparing Real-time Versus Delayed Video Assessments for Evaluating ACGME Sub-competency Milestones in Simulated Patient Care Environments

    PubMed Central

    Stiegler, Marjorie; Hobbs, Gene; Martinelli, Susan M; Zvara, David; Arora, Harendra; Chen, Fei

    2018-01-01

    Background Simulation is an effective method for creating objective summative assessments of resident trainees. Real-time assessment (RTA) in simulated patient care environments is logistically challenging, especially when evaluating a large group of residents in multiple simulation scenarios. To date, there is very little data comparing RTA with delayed (hours, days, or weeks later) video-based assessment (DA) for simulation-based assessments of Accreditation Council for Graduate Medical Education (ACGME) sub-competency milestones. We hypothesized that sub-competency milestone evaluation scores obtained from DA, via audio-video recordings, are equivalent to the scores obtained from RTA. Methods Forty-one anesthesiology residents were evaluated in three separate simulated scenarios, representing different ACGME sub-competency milestones. All scenarios had one faculty member perform RTA and two additional faculty members perform DA. Subsequently, the scores generated by RTA were compared with the average scores generated by DA. Variance component analysis was conducted to assess the amount of variation in scores attributable to residents and raters. Results Paired t-tests showed no significant difference in scores between RTA and averaged DA for all cases. Cases 1, 2, and 3 showed an intraclass correlation coefficient (ICC) of 0.67, 0.85, and 0.50 for agreement between RTA scores and averaged DA scores, respectively. Analysis of variance of the scores assigned by the three raters showed a small proportion of variance attributable to raters (4% to 15%). Conclusions The results demonstrate that video-based delayed assessment is as reliable as real-time assessment, as both assessment methods yielded comparable scores. Based on a department’s needs or logistical constraints, our findings support the use of either real-time or delayed video evaluation for assessing milestones in a simulated patient care environment. PMID:29736352

  16. Contribution of precipitation and reference evapotranspiration to drought indices under different climates

    NASA Astrophysics Data System (ADS)

    Vicente-Serrano, Sergio M.; Van der Schrier, Gerard; Beguería, Santiago; Azorin-Molina, Cesar; Lopez-Moreno, Juan-I.

    2015-07-01

    In this study we analyzed the sensitivity of four drought indices to precipitation (P) and reference evapotranspiration (ETo) inputs. The four drought indices are the Palmer Drought Severity Index (PDSI), the Reconnaissance Drought Index (RDI), the Standardized Precipitation Evapotranspiration Index (SPEI) and the Standardized Palmer Drought Index (SPDI). The analysis uses long-term simulated series with varying averages and variances, as well as global observational data, to assess the sensitivity to real climatic conditions in different regions of the world. The results show differences in the sensitivity to ETo and P among the four drought indices. The PDSI shows the lowest sensitivity to variation in its climate inputs, probably as a consequence of the standardization procedure of soil water budget anomalies. The RDI is sensitive only to the variance, not to the average, of P and ETo. The SPEI shows the largest sensitivity to ETo variation, with clear geographic patterns mainly controlled by aridity. The low sensitivity of the PDSI to ETo makes it less suitable in applications in which changes in ETo are most relevant. By contrast, the SPEI shows equal sensitivity to P and ETo: it works as a supply-and-demand system modulated by the average and standard deviation of each series, and combines sensitivity to changes in both magnitude and variance. Our results are a robust assessment of the sensitivity of drought indices to P and ETo variation, and provide advice on the use of drought indices to detect climate change impacts on drought severity under a wide variety of climatic conditions.
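
    The symmetric P/ETo sensitivity of the SPEI can be seen in its construction: it standardizes the climatic water balance D = P - ETo, so the two inputs enter with equal and opposite weight. A deliberately simplified sketch using a plain z-score (the actual SPEI fits a log-logistic distribution to D before standardizing):

```python
import statistics

def simple_spei(precip, eto):
    """z-score of the water balance D = P - ETo (simplified SPEI stand-in)."""
    d = [p - e for p, e in zip(precip, eto)]
    mu, sd = statistics.mean(d), statistics.stdev(d)
    return [(x - mu) / sd for x in d]
```

    A unit increase in ETo shifts D exactly as much as a unit decrease in P, which is why the SPEI responds equally to both inputs.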

  17. The Anesthetic Efficacy of Articaine and Lidocaine in Equivalent Doses as Buccal and Non-Palatal Infiltration for Maxillary Molar Extraction: A Randomized, Double-Blinded, Placebo-Controlled Clinical Trial.

    PubMed

    Majid, Omer Waleed; Ahmed, Aws Mahmood

    2018-04-01

    The purpose of the present study was to evaluate the anesthetic adequacy of 4% articaine 1.8 mL versus 2% lidocaine 3.6 mL without palatal injection compared with the standard technique for the extraction of maxillary molar teeth. This randomized, double-blinded, placebo-controlled clinical trial included patients requiring extraction of 1 maxillary molar under local anesthesia. Patients were randomly distributed into 1 of 3 groups: group A received 4% articaine 1.8 mL as a buccal injection and 0.2 mL as a palatal injection, group B received 4% articaine 1.8 mL plus normal saline 0.2 mL as a palatal injection, and group C received 2% lidocaine 3.6 mL plus normal saline 0.2 mL as a palatal injection. Pain was measured during injection, 8 minutes afterward, and during extraction using a visual analog scale. Initial palatal anesthesia and patients' satisfaction were measured using a 5-score verbal rating scale. Statistical analyses included descriptive statistics, analysis of variance, and the Pearson χ2 test. Differences with a P value less than .05 were considered significant. Eighty-four patients were included in the study. The average pain of injection was comparable among all study groups (P = .933). Pain during extraction in the articaine group was significantly less than that experienced in the placebo groups (P < .001), although the differences between the placebo groups were not significant. Satisfaction scores were significantly higher in the articaine group compared with the placebo groups (P < .001), with comparable results between the placebo groups. Although the anesthetic effects of the single placebo-controlled buccal injections of 4% articaine and 2% lidocaine were comparable, the level of anesthetic adequacy was significantly lower than that achieved by 4% articaine given by the standard technique.
These results do not justify the buccal and non-palatal infiltration of articaine or lidocaine as an effective alternative to the standard technique in the extraction of maxillary molar teeth. Copyright © 2017 American Association of Oral and Maxillofacial Surgeons. Published by Elsevier Inc. All rights reserved.

  18. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lentine, Anthony L.; Cox, Jonathan Albert

    Methods and systems for stabilizing a resonant modulator include receiving pre-modulation and post-modulation portions of a carrier signal, determining the average power from these portions, comparing the average input power to the average output power, and operating a heater coupled to the modulator based on the comparison. One system includes a pair of input structures, one or more processing elements, a comparator, and a control element. The input structures are configured to extract pre-modulation and post-modulation portions of a carrier signal. The processing elements are configured to determine average powers from the extracted portions. The comparator is configured to compare the average input power and the average output power. The control element operates a heater coupled to the modulator based on the comparison.

  19. Combining forecast weights: Why and how?

    NASA Astrophysics Data System (ADS)

    Yin, Yip Chee; Kok-Haur, Ng; Hock-Eam, Lim

    2012-09-01

    This paper proposes a procedure called forecast weight averaging, a specific combination of the forecast weights obtained from different methods of constructing forecast weights, for the purpose of improving the accuracy of pseudo out-of-sample forecasting. It is found that, under certain specified conditions, forecast weight averaging can lower the mean squared forecast error obtained from model averaging. In addition, we show that in a linear and homoskedastic environment, this superior predictive ability of forecast weight averaging holds true irrespective of whether the coefficients are tested by the t statistic or the z statistic, provided the significance level is within the 10% range. By theoretical proofs and simulation study, we show that model averaging methods such as variance model averaging, simple model averaging and standard error model averaging each produce a mean squared forecast error larger than that of forecast weight averaging. Finally, this result also holds, marginally, when applied to business and economic empirical data sets: the Gross Domestic Product (GDP) growth rate, Consumer Price Index (CPI) and Average Lending Rate (ALR) of Malaysia.
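
    The procedure can be sketched as follows (with illustrative weights; the paper derives its weighting schemes from t- and z-statistics of the candidate models): average the model weights produced by the different schemes, renormalize them, and combine the model forecasts with the averaged weights.

```python
def average_weights(weight_sets):
    """Forecast weight averaging: mean weight per model across schemes,
    renormalized to sum to 1."""
    n_models = len(weight_sets[0])
    avg = [sum(ws[i] for ws in weight_sets) / len(weight_sets)
           for i in range(n_models)]
    total = sum(avg)
    return [w / total for w in avg]

def combined_forecast(weights, forecasts):
    """Weighted combination of the individual model forecasts."""
    return sum(w * f for w, f in zip(weights, forecasts))
```

    Two schemes weighting two models as (0.7, 0.3) and (0.5, 0.5), for instance, average to (0.6, 0.4); individual forecasts of 10 and 20 then combine to 14.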

  20. Replication of a gene-environment interaction via multimodel inference: additive-genetic variance in adolescents' general cognitive ability increases with family-of-origin socioeconomic status.

    PubMed

    Kirkpatrick, Robert M; McGue, Matt; Iacono, William G

    2015-03-01

    The present study of general cognitive ability attempts to replicate and extend previous investigations of a biometric moderator, family-of-origin socioeconomic status (SES), in a sample of 2,494 pairs of adolescent twins, non-twin biological siblings, and adoptive siblings assessed with individually administered IQ tests. We hypothesized that SES would covary positively with additive-genetic variance and negatively with shared-environmental variance. Important potential confounds unaddressed in some past studies, such as twin-specific effects, assortative mating, and differential heritability by trait level, were found to be negligible. In our main analysis, we compared models by their sample-size corrected AIC, and base our statistical inference on model-averaged point estimates and standard errors. Additive-genetic variance increased with SES-an effect that was statistically significant and robust to model specification. We found no evidence that SES moderated shared-environmental influence. We attempt to explain the inconsistent replication record of these effects, and provide suggestions for future research.

  1. Replication of a Gene-Environment Interaction via Multimodel Inference: Additive-Genetic Variance in Adolescents’ General Cognitive Ability Increases with Family-of-Origin Socioeconomic Status

    PubMed Central

    Kirkpatrick, Robert M.; McGue, Matt; Iacono, William G.

    2015-01-01

    The present study of general cognitive ability attempts to replicate and extend previous investigations of a biometric moderator, family-of-origin socioeconomic status (SES), in a sample of 2,494 pairs of adolescent twins, non-twin biological siblings, and adoptive siblings assessed with individually administered IQ tests. We hypothesized that SES would covary positively with additive-genetic variance and negatively with shared-environmental variance. Important potential confounds unaddressed in some past studies, such as twin-specific effects, assortative mating, and differential heritability by trait level, were found to be negligible. In our main analysis, we compared models by their sample-size corrected AIC, and base our statistical inference on model-averaged point estimates and standard errors. Additive-genetic variance increased with SES—an effect that was statistically significant and robust to model specification. We found no evidence that SES moderated shared-environmental influence. We attempt to explain the inconsistent replication record of these effects, and provide suggestions for future research. PMID:25539975

  2. Selenium speciation and extractability in Dutch agricultural soils.

    PubMed

    Supriatin, Supriatin; Weng, Liping; Comans, Rob N J

    2015-11-01

    The study aimed to understand selenium (Se) speciation and extractability in Dutch agricultural soils. Top soil samples were taken from 42 grassland fields and 41 arable land fields in the Netherlands. Total Se contents measured in aqua regia were between 0.12 and 1.97 mg kg(-1) (on average 0.58 mg kg(-1)). Organic Se after NaOCl oxidation-extraction accounted for on average 82% of total Se, whereas inorganic selenite (selenate was not measurable) measured in ammonium oxalate extraction using HPLC-ICP-MS accounted for on average 5% of total Se. The predominance of organic Se in the soils is supported by the positive correlations between total Se (aqua regia) and total soil organic matter content, and between Se and organic C content in all the other extractions performed in this study. The amount of Se extracted followed the order aqua regia > 1 M NaOCl (pH 8) > 0.1 M NaOH > ammonium oxalate (pH 3) > hot water > 0.43 M HNO3 > 0.01 M CaCl2. None of these extractions selectively extracts only inorganic Se; relative to the other extractions, the 0.43 M HNO3 extract contains the lowest fraction of organic Se, followed by the ammonium oxalate extract. In the 0.1 M NaOH extraction, the hydrophobic neutral (HON) fraction of soil organic matter is richer in Se than the hydrophilic (Hy) and humic acid (HA) fractions. The organic matter extracted in 0.01 M CaCl2 and hot water is in general richer in Se compared to the organic matter extracted in 0.1 M NaOH and the other extractions (HNO3, ammonium oxalate, NaOCl, and aqua regia). Although the extractability of Se largely follows the extractability of soil organic carbon, there are severalfold variations in the Se to organic C ratios, reflecting changes in the composition of the organic matter extracted. Copyright © 2015 Elsevier B.V. All rights reserved.

  3. REGIONAL SEISMIC CHEMICAL AND NUCLEAR EXPLOSION DISCRIMINATION: WESTERN U.S. EXAMPLES

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Walter, W R; Taylor, S R; Matzel, E

    2006-07-07

    We continue exploring methodologies to improve regional explosion discrimination using the western U.S. as a natural laboratory. The western U.S. has abundant natural seismicity, historic nuclear explosion data, and widespread mine blasts, making it a good testing ground to study the performance of regional explosion discrimination techniques. We have assembled and measured a large set of these events to systematically explore how to best optimize discrimination performance. Nuclear explosions can be discriminated from a background of earthquakes using regional phase (Pn, Pg, Sn, Lg) amplitude measures such as high frequency P/S ratios. The discrimination performance is improved if the amplitudes can be corrected for source size and path length effects. We show good results are achieved using earthquakes alone to calibrate for these effects with the MDAC technique (Walter and Taylor, 2001). We show significant further improvement is then possible by combining multiple MDAC amplitude ratios using an optimized weighting technique such as Linear Discriminant Analysis (LDA). However, this requires data or models for both earthquakes and explosions. In many areas of the world regional distance nuclear explosion data is lacking, but mine blast data is available. Mine explosions are often designed to fracture and/or move rock, giving them different frequency and amplitude behavior than contained chemical shots, which seismically look like nuclear tests. Here we explore discrimination performance differences between explosion types, the possible disparity in the optimization parameters that would be chosen if only chemical explosions were available, and the corresponding effect of that disparity on nuclear explosion discrimination. Even after correcting for average path and site effects, regional phase ratios contain a large amount of scatter. This scatter appears to be due to variations in source properties such as depth, focal mechanism, and stress drop, in the near-source material properties (including emplacement conditions in the case of explosions), and in deviations from the average path and site correction. Here we look at several kinds of averaging as a means to reduce variance in earthquake and explosion populations and to better understand the factors setting a minimum variance level as a function of epicenter (see Anderson et al., this volume). We focus on the performance of P/S ratios over the frequency range from 1 to 16 Hz, finding some improvements in discrimination as frequency increases. We also explore averaging and optimally combining P/S ratios in multiple frequency bands as a means to reduce variance. Similarly we explore the effects of azimuthally averaging both regional amplitudes and amplitude ratios over multiple stations to reduce variance. Finally we look at optimal performance as a function of magnitude and path length, as these limit the availability of good high frequency discrimination measures.
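
    In the two-class case (earthquakes versus explosions), the LDA weighting of multiple amplitude ratios reduces to Fisher's discriminant, w ∝ Sw^-1 (mean_a - mean_b). A minimal 2-D sketch on synthetic points (not the paper's measurements; real use would feed MDAC-corrected P/S ratios in several bands):

```python
def fisher_lda_2d(class_a, class_b):
    """Two-class Fisher discriminant direction in 2-D: w = Sw^-1 (ma - mb)."""
    def mean(pts):
        n = len(pts)
        return [sum(p[0] for p in pts) / n, sum(p[1] for p in pts) / n]
    def scatter(pts, m):
        s = [[0.0, 0.0], [0.0, 0.0]]
        for p in pts:
            d = [p[0] - m[0], p[1] - m[1]]
            for i in range(2):
                for j in range(2):
                    s[i][j] += d[i] * d[j]
        return s
    ma, mb = mean(class_a), mean(class_b)
    # pooled within-class scatter matrix
    sw = [[a + b for a, b in zip(ra, rb)]
          for ra, rb in zip(scatter(class_a, ma), scatter(class_b, mb))]
    det = sw[0][0] * sw[1][1] - sw[0][1] * sw[1][0]
    inv = [[sw[1][1] / det, -sw[0][1] / det],
           [-sw[1][0] / det, sw[0][0] / det]]
    dm = [ma[0] - mb[0], ma[1] - mb[1]]
    return [inv[0][0] * dm[0] + inv[0][1] * dm[1],
            inv[1][0] * dm[0] + inv[1][1] * dm[1]]
```

    Projecting events onto w gives the single discriminant score along which the two populations separate best under the within-class scatter.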

  4. An Empirical Temperature Variance Source Model in Heated Jets

    NASA Technical Reports Server (NTRS)

    Khavaran, Abbas; Bridges, James

    2012-01-01

    An acoustic analogy approach is implemented that models the sources of jet noise in heated jets. The equivalent sources of turbulent mixing noise are recognized as the differences between the fluctuating and Favre-averaged Reynolds stresses and enthalpy fluxes. While in a conventional acoustic analogy only Reynolds stress components are scrutinized for their noise generation properties, it is now accepted that a comprehensive source model should include the additional entropy source term. Following Goldstein's generalized acoustic analogy, the set of Euler equations is divided into two sets of equations that govern a non-radiating base flow plus its residual components. When the base flow is considered as a locally parallel mean flow, the residual equations may be rearranged to form an inhomogeneous third-order wave equation. A general solution is subsequently written using a Green's function method, while all non-linear terms are treated as the equivalent sources of aerodynamic sound and are modeled accordingly. In a previous study, a specialized Reynolds-averaged Navier-Stokes (RANS) solver was implemented to compute the variance of thermal fluctuations that determines the enthalpy flux source strength. The main objective here is to present an empirical model capable of providing a reasonable estimate of the stagnation temperature variance in a jet. Such a model is parameterized as a function of the mean stagnation temperature gradient in the jet, and is evaluated using commonly available RANS solvers. The ensuing thermal source distribution is compared with measurements as well as computational results from a dedicated RANS solver that employs an enthalpy variance and dissipation rate model. Turbulent mixing noise predictions are presented for a wide range of jet temperature ratios from 1.0 to 3.2.

  5. Amplified fragment length polymorphism mapping of quantitative trait loci for malaria parasite susceptibility in the yellow fever mosquito Aedes aegypti.

    PubMed

    Zhong, Daibin; Menge, David M; Temu, Emmanuel A; Chen, Hong; Yan, Guiyun

    2006-07-01

    The yellow fever mosquito Aedes aegypti has been the subject of extensive genetic research due to its medical importance and the ease with which it can be manipulated in the laboratory. A molecular genetic linkage map was constructed using 148 amplified fragment length polymorphism (AFLP) and six single-strand conformation polymorphism (SSCP) markers. Eighteen AFLP primer combinations were used to genotype two reciprocal F2 segregating populations. Each primer combination generated an average of 8.2 AFLP markers eligible for linkage mapping. The length of the integrated map was 180.9 cM, giving an average marker resolution of 1.2 cM. Composite interval mapping revealed a total of six QTL significantly affecting Plasmodium susceptibility in the two reciprocal crosses of Ae. aegypti. Two common QTL on linkage group 2 were identified in both crosses that had similar effects on the phenotype, and four QTL were unique to each cross. In one cross, the four main QTL accounted for 64% of the total phenotypic variance, and digenic epistasis explained 11.8% of the variance. In the second cross, the four main QTL explained 66% of the variance, and digenic epistasis accounted for 16% of the variance. The actions of these QTL were either dominance or underdominance. Our results indicated that at least three new QTL were mapped on chromosomes 1 and 3. The polygenic nature of susceptibility to P. gallinaceum and epistasis are important factors for significant variation within or among mosquito strains. The new map provides additional information useful for further genetic investigation, such as identification of new genes and positional cloning.

  6. Combining statistics from two national complex surveys to estimate injury rates per hour exposed and variance by activity in the USA.

    PubMed

    Lin, Tin-Chi; Marucci-Wellman, Helen R; Willetts, Joanna L; Brennan, Melanye J; Verma, Santosh K

    2016-12-01

    A common issue in descriptive injury epidemiology is that, in order to calculate injury rates that account for the time spent in an activity, both injury cases and the exposure time of specific activities need to be collected. In reality, few national surveys have this capacity. To address this issue, we combined statistics from two different national complex surveys as inputs for the numerator and denominator of an injury rate that accounts for the time spent in specific activities, and we include a procedure to estimate variance from the combined surveys. The 2010 National Health Interview Survey (NHIS) was used to quantify injuries, and the 2010 American Time Use Survey (ATUS) was used to quantify time of exposure to specific activities. The injury rate was estimated by dividing the average number of injuries (from NHIS) by average exposure hours (from ATUS), both measured for specific activities. The variance was calculated using the 'delta method', a general method for variance estimation with complex surveys. Among the five types of injuries examined, 'sport and exercise' had the highest rate (12.64 injuries per 100 000 h), followed by 'working around house/yard' (6.14), 'driving/riding a motor vehicle' (2.98), 'working' (1.45) and 'sleeping/resting/eating/drinking' (0.23). The results show a ranking of injury rate by activity quite different from estimates using population as the denominator. Our approach produces an estimate of injury risk that includes activity exposure time and may more reliably reflect the underlying injury risks, offering an alternative method for injury surveillance and research. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/.
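    The rate-and-variance construction described above can be sketched directly. This is a minimal first-order (delta-method) illustration for a ratio of two survey estimates; independence of numerator and denominator is assumed here (plausible since they come from different surveys), so the covariance term drops out, and the numbers in the test are hypothetical.

```python
def ratio_rate_and_variance(mean_injuries, var_injuries, mean_hours, var_hours):
    """Delta-method variance of a rate R = x_bar / y_bar.

    Assumes the numerator estimate (injuries, one survey) and the
    denominator estimate (exposure hours, another survey) are independent.
    """
    rate = mean_injuries / mean_hours
    # Var(R) ~ R^2 * (Var(x)/x^2 + Var(y)/y^2) under independence
    var = rate ** 2 * (var_injuries / mean_injuries ** 2
                       + var_hours / mean_hours ** 2)
    return rate, var
```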

  7. Parametric correlation functions to model the structure of permanent environmental (co)variances in milk yield random regression models.

    PubMed

    Bignardi, A B; El Faro, L; Cardoso, V L; Machado, P F; Albuquerque, L G

    2009-09-01

    The objective of the present study was to estimate milk yield genetic parameters applying random regression models and parametric correlation functions combined with a variance function to model animal permanent environmental effects. A total of 152,145 test-day milk yields from 7,317 first lactations of Holstein cows belonging to herds located in the southeastern region of Brazil were analyzed. Test-day milk yields were divided into 44 weekly classes of days in milk. Contemporary groups were defined by herd-test-day comprising a total of 2,539 classes. The model included direct additive genetic, permanent environmental, and residual random effects. The following fixed effects were considered: contemporary group, age of cow at calving (linear and quadratic regressions), and the population average lactation curve modeled by fourth-order orthogonal Legendre polynomial. Additive genetic effects were modeled by random regression on orthogonal Legendre polynomials of days in milk, whereas permanent environmental effects were estimated using a stationary or nonstationary parametric correlation function combined with a variance function of different orders. The structure of residual variances was modeled using a step function containing 6 variance classes. The genetic parameter estimates obtained with the model using a stationary correlation function associated with a variance function to model permanent environmental effects were similar to those obtained with models employing orthogonal Legendre polynomials for the same effect. A model using a sixth-order polynomial for additive effects and a stationary parametric correlation function associated with a seventh-order variance function to model permanent environmental effects would be sufficient for data fitting.

  8. Accountability and Grade Inflation in a Rural School.

    ERIC Educational Resources Information Center

    Goodwin, Deborah Hayes; Holman, David M.

    In an effort to hold schools accountable, Arkansas added grade inflation into the accountability system. The Arkansas Legislature mandated that the Arkansas Department of Education identify high schools with "statistically significant variance" between students' grade point averages (GPAs) and ACT performances. A grade inflation index…

  9. Triple collocation based merging of satellite soil moisture retrievals

    USDA-ARS?s Scientific Manuscript database

    We propose a method for merging soil moisture retrievals from space borne active and passive microwave instruments based on weighted averaging taking into account the error characteristics of the individual data sets. The merging scheme is parameterized using error variance estimates obtained from u...
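    The weighted-averaging scheme described above can be sketched as a least-squares merge with inverse-error-variance weights. This is a generic illustration, not the authors' implementation; in a triple collocation setting the error variances themselves would first be estimated from three collocated data sets.

```python
def merge_inverse_variance(values, error_variances):
    """Merge collocated retrievals by weighting each data set with the
    inverse of its error variance (weights normalized to sum to 1)."""
    inv = [1.0 / v for v in error_variances]
    total = sum(inv)
    merged = sum(x * w for x, w in zip(values, inv)) / total
    merged_var = 1.0 / total  # error variance of the merged estimate
    return merged, merged_var
```

    Note that the merged error variance is always smaller than the smallest input error variance, which is the point of the merge.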

  10. Statistical properties of the anomalous scaling exponent estimator based on time-averaged mean-square displacement

    NASA Astrophysics Data System (ADS)

    Sikora, Grzegorz; Teuerle, Marek; Wyłomańska, Agnieszka; Grebenkov, Denis

    2017-08-01

    The most common way of estimating the anomalous scaling exponent from single-particle trajectories consists of a linear fit of the dependence of the time-averaged mean-square displacement on the lag time at the log-log scale. We investigate the statistical properties of this estimator in the case of fractional Brownian motion (FBM). We determine the mean value, the variance, and the distribution of the estimator. Our theoretical results are confirmed by Monte Carlo simulations. In the limit of long trajectories, the estimator is shown to be asymptotically unbiased, consistent, and with vanishing variance. These properties ensure an accurate estimation of the scaling exponent even from a single (long enough) trajectory. As a consequence, we prove that the usual way to estimate the diffusion exponent of FBM is correct from the statistical point of view. Moreover, the knowledge of the estimator distribution is the first step toward new statistical tests of FBM and toward a more reliable interpretation of the experimental histograms of scaling exponents in microbiology.
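    The estimator analyzed above, a linear fit of log TA-MSD against log lag time, can be sketched in a few lines for a 1-D trajectory. This is a generic illustration, not the authors' code.

```python
import math

def tamsd(traj, lag):
    """Time-averaged mean-square displacement at a given lag."""
    n = len(traj)
    return sum((traj[i + lag] - traj[i]) ** 2 for i in range(n - lag)) / (n - lag)

def anomalous_exponent(traj, lags):
    """Least-squares slope of log TA-MSD versus log lag (the usual estimator)."""
    xs = [math.log(l) for l in lags]
    ys = [math.log(tamsd(traj, l)) for l in lags]
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den
```

    For a purely ballistic trajectory the TA-MSD grows as the lag squared, so the fitted exponent is exactly 2.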

  11. Adolescent Characters and Alcohol Use Scenes in Brazilian Movies, 2000-2008.

    PubMed

    Castaldelli-Maia, João Mauricio; de Andrade, Arthur Guerra; Lotufo-Neto, Francisco; Bhugra, Dinesh

    2016-04-01

    Quantitative structured assessment of 193 scenes depicting substance use from a convenience sample of 50 Brazilian movies was performed. Logistic regression and analysis of variance or multivariate analysis of variance models were employed to test for two different types of outcome regarding alcohol appearance: the mean length of alcohol scenes in seconds and the prevalence of alcohol use scenes. The presence of adolescent characters was associated with a higher prevalence of alcohol use scenes compared to non-alcohol use scenes. The presence of adolescents was also associated with a higher than average length of alcohol use scenes compared to non-alcohol use scenes. Alcohol use was negatively associated with cannabis, cocaine, and other drug use. However, when the use of cannabis, cocaine, or other drugs was present in alcohol use scenes, a higher average length was found. This may mean that the most vulnerable group sees drinking as a more attractive option, leading to higher alcohol use. © The Author(s) 2016.

  12. Design with limited anthropometric data: A method of interpreting sums of percentiles in anthropometric design.

    PubMed

    Albin, Thomas J

    2017-07-01

    Occasionally practitioners must work with single dimensions defined as combinations (sums or differences) of percentile values, but lack the information (e.g. variances) needed to estimate the accommodation achieved. This paper describes methods to predict accommodation proportions for such combinations of percentile values, e.g. two 90th percentile values. Kreifeldt and Nah z-score multipliers were used to estimate the proportions accommodated by combinations of percentile values of 2-15 variables; two simplified versions required less information about variance and/or correlation. The estimates were compared to actual observed proportions; for combinations of 2-15 percentile values the average absolute differences ranged between 0.5 and 1.5 percentage points. The multipliers were also used to estimate adjusted percentile values that, when combined, accommodate a desired proportion of the combined measurements. For combinations of two and three adjusted variables, the average absolute difference between predicted and observed proportions ranged between 0.5 and 3.0 percentage points. Copyright © 2017 Elsevier Ltd. All rights reserved.
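    The underlying normal-theory idea (not the Kreifeldt and Nah multipliers themselves) can be sketched: under joint normality, the proportion accommodated by the sum of two p-th percentile values depends on the two standard deviations and their correlation. The function below is an illustrative assumption-laden sketch, not the paper's method.

```python
from statistics import NormalDist

def accommodation_of_summed_percentiles(p, sd_x, sd_y, rho):
    """Proportion accommodated by summing two p-th percentile values,
    assuming the dimensions are jointly normal with correlation rho."""
    z = NormalDist().inv_cdf(p)
    # standard deviation of X + Y
    sd_sum = (sd_x ** 2 + sd_y ** 2 + 2 * rho * sd_x * sd_y) ** 0.5
    return NormalDist().cdf(z * (sd_x + sd_y) / sd_sum)
```

    With perfectly correlated dimensions the sum of two 90th percentiles accommodates exactly 90%; with lower correlation it accommodates more, which is why naively summing percentiles over-accommodates.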

  13. Dry column-thermal energy analyzer method for determining N-nitrosopyrrolidine in fried bacon: collaborative study.

    PubMed

    Fiddler, W; Pensabene, J W; Gates, R A; Phillips, J G

    1984-01-01

    A dry column method for isolating N-nitrosopyrrolidine (NPYR) from fried, cure-pumped bacon and detection by gas chromatography-thermal energy analyzer (TEA) was studied collaboratively. Testing the results obtained from 11 collaborators for homogeneous variances among samples resulted in splitting the nonzero samples into 2 groups of sample levels, each with similar variances. Outlying results were identified by AOAC-recommended procedures, and laboratories having outliers within a group were excluded. Results from the 9 collaborators remaining in the low group yielded coefficients of variation (CV) of 6.00% and 7.47% for repeatability and reproducibility, respectively, and the 8 collaborators remaining in the high group yielded CV values of 5.64% and 13.72%, respectively. An 85.2% overall average recovery of the N-nitrosoazetidine internal standard was obtained with an average laboratory CV of 10.5%. The method has been adopted official first action as an alternative to the mineral oil distillation-TEA screening procedure.

  14. Evolutionary Fuzzy Block-Matching-Based Camera Raw Image Denoising.

    PubMed

    Yang, Chin-Chang; Guo, Shu-Mei; Tsai, Jason Sheng-Hong

    2017-09-01

    An evolutionary fuzzy block-matching-based image denoising algorithm is proposed to remove noise from a camera raw image. Recently, variance stabilization transforms have been widely used to stabilize the noise variance, so that a Gaussian denoising algorithm can be used to remove the signal-dependent noise in camera sensors. However, in the stabilized domain, existing denoising algorithms may blur too much detail. To provide a better estimate of the noise-free signal, a new block-matching approach is proposed to find similar blocks by the use of a type-2 fuzzy logic system (FLS). Then, these similar blocks are averaged with weightings determined by the FLS. Finally, an efficient differential evolution is used to further improve the performance of the proposed denoising algorithm. The experimental results show that the proposed denoising algorithm effectively improves the performance of image denoising. Furthermore, the average performance of the proposed method is better than those of two state-of-the-art image denoising algorithms in both subjective and objective measures.

  15. A Quantitative Microscopy Technique for Determining the Number of Specific Proteins in Cellular Compartments

    PubMed Central

    Mutch, Sarah A.; Gadd, Jennifer C.; Fujimoto, Bryant S.; Kensel-Hammes, Patricia; Schiro, Perry G.; Bajjalieh, Sandra M.; Chiu, Daniel T.

    2013-01-01

    This protocol describes a method to determine both the average number and variance of proteins in the few to tens of copies in isolated cellular compartments, such as organelles and protein complexes. Other currently available protein quantification techniques either provide an average number but lack information on the variance or are not suitable for reliably counting proteins present in the few to tens of copies. This protocol entails labeling the cellular compartment with fluorescent primary-secondary antibody complexes, TIRF (total internal reflection fluorescence) microscopy imaging of the cellular compartment, digital image analysis, and deconvolution of the fluorescence intensity data. A minimum of 2.5 days is required to complete the labeling, imaging, and analysis of a set of samples. As an illustrative example, we describe in detail the procedure used to determine the copy number of proteins in synaptic vesicles. The same procedure can be applied to other organelles or signaling complexes. PMID:22094731

  16. The Brazilian version of the 20-item rapid estimate of adult literacy in medicine and dentistry.

    PubMed

    Cruvinel, Agnes Fátima P; Méndez, Daniela Alejandra C; Oliveira, Juliana G; Gutierres, Eliézer; Lotto, Matheus; Machado, Maria Aparecida A M; Oliveira, Thaís M; Cruvinel, Thiago

    2017-01-01

    The misunderstanding of specific vocabulary may hamper patient-health provider communication. The 20-item Rapid Estimate of Adult Literacy in Medicine and Dentistry (REALMD-20) was constructed to screen patients by their ability to read medical/dental terminology in a simple and rapid way. This study aimed to perform the cross-cultural adaptation and validation of this instrument for application in Brazilian dental patients. The cross-cultural adaptation was performed through conceptual equivalence, verbatim translation, semantic, item, and operational equivalence, and back-translation. After that, 200 participants responded to the adapted version of the REALMD-20, the Brazilian version of the Rapid Estimate of Adult Literacy in Dentistry (BREALD-30), ten questions of the Brazilian National Functional Literacy Index (BNFLI), and a questionnaire with socio-demographic and oral health-related questions. Statistical analysis was conducted to assess the reliability and validity of the REALMD-20 (P < 0.05). The sample was composed predominantly of women (55.5%) and white/brown (76%) individuals, with an average age of 39.02 years (±15.28). The average REALMD-20 score was 17.48 (±2.59, range 8-20). It displayed good internal consistency (Cronbach's alpha = 0.789) and test-retest reliability (ICC = 0.73; 95% CI [0.66-0.79]). In the exploratory factor analysis, six factors were extracted according to Kaiser's criterion. Factor I (eigenvalue = 4.53) comprised four terms ("jaundice", "amalgam", "periodontitis", and "abscess") and accounted for 25.18% of the total variance, while factor II (eigenvalue = 1.88) comprised four other terms ("gingivitis", "instruction", "osteoporosis", and "constipation") and accounted for 10.46% of the total variance. The first four factors accounted for 52.1% of the total variance. The REALMD-20 was positively correlated with the BREALD-30 (Rs = 0.73, P < 0.001) and the BNFLI (Rs = 0.60, P < 0.001).
The scores were significantly higher among health professionals, more educated people, and individuals who reported good/excellent oral health conditions, and who sought preventive dental services. Distinctly, REALMD-20 scores were similar between both participants who visited a dentist <1 year ago and ≥1 year. Also, REALMD-20 was a significant predictor of self-reported oral health status in a multivariate logistic regression model, considering socio-demographic and oral health-related confounding variables. The Brazilian version of the REALMD-20 demonstrated adequate psychometric properties for screening dental patients in relation to their recognition of health specific terms. This instrument can contribute to identify individuals with important dental/medical vocabulary limitations in order to improve the health education and outcomes in a person-centered care model.
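    The internal-consistency statistic reported above (Cronbach's alpha) can be computed from raw item scores with a short sketch; the data in the test are hypothetical.

```python
from statistics import pvariance

def cronbach_alpha(item_scores):
    """Cronbach's alpha from rows of per-respondent item scores.

    alpha = k/(k-1) * (1 - sum of item variances / variance of total score)
    """
    k = len(item_scores[0])
    items = list(zip(*item_scores))           # one tuple per item
    item_var = sum(pvariance(col) for col in items)
    total_var = pvariance([sum(row) for row in item_scores])
    return k / (k - 1) * (1 - item_var / total_var)
```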

  17. A Simple Approach for Monitoring Business Service Time Variation

    PubMed Central

    2014-01-01

    Control charts are effective tools for signal detection in both manufacturing processes and service processes. Much of the data in service industries comes from processes having nonnormal or unknown distributions. The commonly used Shewhart variable control charts, which depend heavily on the normality assumption, are not appropriately used here. In this paper, we propose a new asymmetric EWMA variance chart (EWMA-AV chart) and an asymmetric EWMA mean chart (EWMA-AM chart) based on two simple statistics to monitor process variance and mean shifts simultaneously. Further, we explore the sampling properties of the new monitoring statistics and calculate the average run lengths when using both the EWMA-AV chart and the EWMA-AM chart. The performance of the EWMA-AV and EWMA-AM charts and that of some existing variance and mean charts are compared. A numerical example involving nonnormal service times from the service system of a bank branch in Taiwan is used to illustrate the applications of the EWMA-AV and EWMA-AM charts and to compare them with the existing variance (or standard deviation) and mean charts. The proposed EWMA-AV chart and EWMA-AM charts show superior detection performance compared to the existing variance and mean charts. The EWMA-AV chart and EWMA-AM chart are thus recommended. PMID:24895647

  18. A simple approach for monitoring business service time variation.

    PubMed

    Yang, Su-Fen; Arnold, Barry C

    2014-01-01

    Control charts are effective tools for signal detection in both manufacturing processes and service processes. Much of the data in service industries comes from processes having nonnormal or unknown distributions. The commonly used Shewhart variable control charts, which depend heavily on the normality assumption, are not appropriately used here. In this paper, we propose a new asymmetric EWMA variance chart (EWMA-AV chart) and an asymmetric EWMA mean chart (EWMA-AM chart) based on two simple statistics to monitor process variance and mean shifts simultaneously. Further, we explore the sampling properties of the new monitoring statistics and calculate the average run lengths when using both the EWMA-AV chart and the EWMA-AM chart. The performance of the EWMA-AV and EWMA-AM charts and that of some existing variance and mean charts are compared. A numerical example involving nonnormal service times from the service system of a bank branch in Taiwan is used to illustrate the applications of the EWMA-AV and EWMA-AM charts and to compare them with the existing variance (or standard deviation) and mean charts. The proposed EWMA-AV chart and EWMA-AM charts show superior detection performance compared to the existing variance and mean charts. The EWMA-AV chart and EWMA-AM chart are thus recommended.
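    A minimal, generic EWMA mean chart can be sketched as follows; this is a standard textbook construction, not the asymmetric EWMA-AM chart proposed in the paper, and the smoothing constant and control-limit width below are conventional defaults.

```python
def ewma_chart(data, target, sd, lam=0.2, L=3.0):
    """Generic EWMA mean chart.

    Returns the EWMA series and the index of the first out-of-control
    signal (or None). Control limits use the exact time-varying EWMA
    standard deviation.
    """
    z = target
    series, signal = [], None
    for t, x in enumerate(data, start=1):
        z = lam * x + (1 - lam) * z
        # Var(Z_t) = sd^2 * lam/(2-lam) * (1 - (1-lam)^(2t))
        var = sd ** 2 * lam / (2 - lam) * (1 - (1 - lam) ** (2 * t))
        series.append(z)
        if signal is None and abs(z - target) > L * var ** 0.5:
            signal = t - 1
    return series, signal
```

    Service-time data with nonnormal distributions, as in the paper, would first be reduced to suitable monitoring statistics before charting.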

  19. Alveolar ridge preservation of an extraction socket using autogenous tooth bone graft material for implant site development: prospective case series

    PubMed Central

    Yun, Pil-Young; Um, In-Woong; Lee, Hyo-Jung; Yi, Yang-Jin; Bae, Ji-Hyun; Lee, Junho

    2014-01-01

    This case series evaluated the clinical efficacy of autogenous tooth bone graft material (AutoBT) in alveolar ridge preservation of an extraction socket. Thirteen patients who received extraction socket graft using AutoBT followed by delayed implant placements from Nov. 2008 to Aug. 2010 were evaluated. A total of fifteen implants were placed. The primary and secondary stability of the placed implants were an average of 58 ISQ and 77.9 ISQ, respectively. The average amount of crestal bone loss around the implant was 0.05 mm during an average of 22.5 months (from 12 to 34 months) of functional loading. Newly formed tissues were evident from the 3-month specimen. Within the limitations of this case, autogenous tooth bone graft material can be a favorable bone substitute for extraction socket graft due to its good bone remodeling and osteoconductivity. PMID:25551013

  20. Genetic interactions contribute less than additive effects to quantitative trait variation in yeast

    PubMed Central

    Bloom, Joshua S.; Kotenko, Iulia; Sadhu, Meru J.; Treusch, Sebastian; Albert, Frank W.; Kruglyak, Leonid

    2015-01-01

    Genetic mapping studies of quantitative traits typically focus on detecting loci that contribute additively to trait variation. Genetic interactions are often proposed as a contributing factor to trait variation, but the relative contribution of interactions to trait variation is a subject of debate. Here we use a very large cross between two yeast strains to accurately estimate the fraction of phenotypic variance due to pairwise QTL–QTL interactions for 20 quantitative traits. We find that this fraction is 9% on average, substantially less than the contribution of additive QTL (43%). Statistically significant QTL–QTL pairs typically have small individual effect sizes, but collectively explain 40% of the pairwise interaction variance. We show that pairwise interaction variance is largely explained by pairs of loci at least one of which has a significant additive effect. These results refine our understanding of the genetic architecture of quantitative traits and help guide future mapping studies. PMID:26537231

  1. A stochastic model for stationary dynamics of prices in real estate markets. A case of random intensity for Poisson moments of prices changes

    NASA Astrophysics Data System (ADS)

    Rusakov, Oleg; Laskin, Michael

    2017-06-01

    We consider a stochastic model of price changes in real estate markets. We suppose that in a book of prices the changes occur at the jump points of a Poisson process with random intensity, i.e. the moments of change follow a random process of the Cox type. We calculate cumulative mathematical expectations and variances for the random intensity of this point process. In the case where the random intensity process is a martingale, the cumulative variance grows linearly. We statistically process a number of observations of real estate prices and accept the hypothesis of linear growth for the estimates of both the cumulative average and the cumulative variance, for both the input and output prices written in the book of prices.

  2. Predicting negative drinking consequences: examining descriptive norm perception.

    PubMed

    Benton, Stephen L; Downey, Ronald G; Glider, Peggy S; Benton, Sherry A; Shin, Kanghyun; Newton, Douglas W; Arck, William; Price, Amy

    2006-05-01

    This study explored how much variance in college student negative drinking consequences is explained by descriptive norm perception, beyond that accounted for by student gender and self-reported alcohol use. A derivation sample (N=7565; 54% women) and a replication sample (N=8924; 55.5% women) of undergraduate students completed the Campus Alcohol Survey in classroom settings. Hierarchical regression analyses revealed that student gender and average number of drinks when "partying" were significantly related to harmful consequences resulting from drinking. Men reported more consequences than did women, and drinking amounts were positively correlated with consequences. However, descriptive norm perception did not explain any additional variance beyond that attributed to gender and alcohol use. Furthermore, there was no significant three-way interaction among student gender, alcohol use, and descriptive norm perception. Norm perception contributed no significant variance in explaining harmful consequences beyond that explained by college student gender and alcohol use.

  3. A benchmark for statistical microarray data analysis that preserves actual biological and technical variance.

    PubMed

    De Hertogh, Benoît; De Meulder, Bertrand; Berger, Fabrice; Pierre, Michael; Bareke, Eric; Gaigneaux, Anthoula; Depiereux, Eric

    2010-01-11

    Recent reanalysis of spike-in datasets underscored the need for new and more accurate benchmark datasets for statistical microarray analysis. We present here a fresh method using biologically relevant data to evaluate the performance of statistical methods. Our novel method ranks the probesets from a dataset composed of publicly available biological microarray data and extracts subset matrices with precise information/noise ratios. Our method can be used to determine the capability of different methods to better estimate variance for a given number of replicates. The mean-variance and mean-fold change relationships of the matrices revealed a closer approximation of biological reality. Performance analysis refined the results from benchmarks published previously. We show that the Shrinkage t test (close to Limma) was the best of the methods tested, except when two replicates were examined, where the Regularized t test and the Window t test performed slightly better. The R scripts used for the analysis are available at http://urbm-cluster.urbm.fundp.ac.be/~bdemeulder/.

  4. Tide Gauge Records Reveal Improved Processing of Gravity Recovery and Climate Experiment Time-Variable Mass Solutions over the Coastal Ocean

    NASA Astrophysics Data System (ADS)

    Piecuch, Christopher G.; Landerer, Felix W.; Ponte, Rui M.

    2018-05-01

    Monthly ocean bottom pressure solutions from the Gravity Recovery and Climate Experiment (GRACE), derived using surface spherical cap mass concentration (MC) blocks and spherical harmonics (SH) basis functions, are compared to tide gauge (TG) monthly averaged sea level data over 2003-2015 to evaluate improved gravimetric data processing methods near the coast. MC solutions can explain ≳42% of the monthly variance in TG time series over broad shelf regions and in semi-enclosed marginal seas. MC solutions also generally explain ~5-32% more TG data variance than SH estimates. Applying a coastline resolution improvement algorithm in the GRACE data processing leads to ~31% more variance in TG records explained by the MC solution on average compared to not using this algorithm. Synthetic observations sampled from an ocean general circulation model exhibit similar patterns of correspondence between modeled TG and MC time series, and differences between MC and SH time series in terms of their relationship with TG time series, suggesting that the observational results here are generally consistent with expectations from ocean dynamics. This work demonstrates the improved quality of recent MC solutions compared to earlier SH estimates over the coastal ocean, and suggests that the MC solutions could be a useful tool for understanding contemporary coastal sea level variability and change.

  5. Female scarcity reduces women's marital ages and increases variance in men's marital ages.

    PubMed

    Kruger, Daniel J; Fitzgerald, Carey J; Peterson, Tom

    2010-08-05

    When women are scarce in a population relative to men, they have greater bargaining power in romantic relationships and thus may be able to secure male commitment at earlier ages. Male motivation for long-term relationship commitment may also be higher, in conjunction with the motivation to secure a prospective partner before another male retains her. However, men may also need to acquire greater social status and resources to be considered marriageable. This could increase the variance in male marital age, as well as the average male marital age. We calculated the Operational Sex Ratio, and means, medians, and standard deviations in marital ages for women and men for the 50 largest Metropolitan Statistical Areas in the United States with 2000 U.S. Census data. As predicted, where women are scarce they marry earlier on average. However, there was no significant relationship with mean male marital ages. The variance in male marital age increased with higher female scarcity, contrasting with a non-significant inverse trend for female marital age variation. These findings advance the understanding of the relationship between the OSR and marital patterns. We believe that these results are best accounted for by sex-specific attributes of reproductive value and associated mate selection criteria, demonstrating the power of an evolutionary framework for understanding human relationships and demographic patterns.

  6. Predicting sugar-sweetened behaviours with theory of planned behaviour constructs: Outcome and process results from the SIPsmartER behavioural intervention

    PubMed Central

    Zoellner, Jamie M.; Porter, Kathleen J.; Chen, Yvonnes; Hedrick, Valisa E.; You, Wen; Hickman, Maja; Estabrooks, Paul A.

    2017-01-01

    Objective Guided by the theory of planned behaviour (TPB) and health literacy concepts, SIPsmartER is a six-month multicomponent intervention effective at improving SSB behaviours. Using SIPsmartER data, this study explores prediction of SSB behavioural intention (BI) and behaviour from TPB constructs using: (1) cross-sectional and prospective models and (2) 11 single-item assessments from interactive voice response (IVR) technology. Design Quasi-experimental design, including pre- and post-outcome data and repeated-measures process data of 155 intervention participants. Main Outcome Measures Validated multi-item TPB measures, single-item TPB measures, and self-reported SSB behaviours. Hypothesised relationships were investigated using correlation and multiple regression models. Results TPB constructs explained 32% of the variance cross-sectionally and 20% prospectively in BI; and explained 13–20% of the variance cross-sectionally and 6% prospectively in behaviour. Single-item scale models were significant, yet explained less variance. All IVR models predicting BI (average 21%, range 6–38%) and behaviour (average 30%, range 6–55%) were significant. Conclusion Findings are interpreted in the context of other cross-sectional, prospective and experimental TPB health and dietary studies. Findings advance experimental application of the TPB, including understanding constructs at outcome and process time points and applying theory in all intervention development, implementation and evaluation phases. PMID:28165771

  7. Decadal climate prediction in the large ensemble limit

    NASA Astrophysics Data System (ADS)

    Yeager, S. G.; Rosenbloom, N. A.; Strand, G.; Lindsay, K. T.; Danabasoglu, G.; Karspeck, A. R.; Bates, S. C.; Meehl, G. A.

    2017-12-01

    In order to quantify the benefits of initialization for climate prediction on decadal timescales, two parallel sets of historical simulations are required: one "initialized" ensemble that incorporates observations of past climate states and one "uninitialized" ensemble whose internal climate variations evolve freely and without synchronicity. In the large ensemble limit, ensemble averaging isolates potentially predictable forced and internal variance components in the "initialized" set, but only the forced variance remains after averaging the "uninitialized" set. The ensemble size needed to achieve this variance decomposition, and to robustly distinguish initialized from uninitialized decadal predictions, remains poorly constrained. We examine a large ensemble (LE) of initialized decadal prediction (DP) experiments carried out using the Community Earth System Model (CESM). This 40-member CESM-DP-LE set of experiments represents the "initialized" complement to the CESM large ensemble of 20th century runs (CESM-LE) documented in Kay et al. (2015). Both simulation sets share the same model configuration, historical radiative forcings, and large ensemble sizes. The twin experiments afford an unprecedented opportunity to explore the sensitivity of DP skill assessment, and in particular the skill enhancement associated with initialization, to ensemble size. This talk will highlight the benefits of a large ensemble size for initialized predictions of seasonal climate over land in the Atlantic sector as well as predictions of shifts in the likelihood of climate extremes that have large societal impact.
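
    The variance decomposition described above — ensemble averaging isolating the forced component, with departures from the ensemble mean left as internal variability — can be sketched in a few lines. A minimal stdlib-only illustration on toy data (not CESM output):

```python
import statistics

def variance_decomposition(ensemble):
    """Split variance in time into a forced component (variance of the
    ensemble mean) and an internal component (mean variance of members
    about the ensemble mean), in the large-ensemble-limit sense.

    `ensemble` is a list of member time series of equal length.
    """
    n_members = len(ensemble)
    n_times = len(ensemble[0])
    # The ensemble mean at each time step isolates the forced signal.
    ens_mean = [sum(m[t] for m in ensemble) / n_members for t in range(n_times)]
    forced_var = statistics.pvariance(ens_mean)
    # Departures from the ensemble mean are internal variability.
    internal_var = statistics.mean(
        statistics.pvariance([m[t] - ens_mean[t] for m in ensemble])
        for t in range(n_times)
    )
    return forced_var, internal_var
```

    With an "uninitialized" ensemble the members are unsynchronized, so only the forced component survives the averaging; with an "initialized" ensemble part of the internal component is potentially predictable as well.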

  8. Hydraulic Conductivity Estimation using Bayesian Model Averaging and Generalized Parameterization

    NASA Astrophysics Data System (ADS)

    Tsai, F. T.; Li, X.

    2006-12-01

    Non-uniqueness in the parameterization scheme is an inherent problem in groundwater inverse modeling due to limited data. To cope with the non-uniqueness problem of parameterization, we introduce a Bayesian Model Averaging (BMA) method to integrate a set of selected parameterization methods. The estimation uncertainty in BMA includes the uncertainty in individual parameterization methods as the within-parameterization variance and the uncertainty from using different parameterization methods as the between-parameterization variance. Moreover, the generalized parameterization (GP) method is considered in the geostatistical framework in this study. The GP method aims at increasing the flexibility of parameterization through the combination of a zonation structure and an interpolation method. The use of BMA with GP avoids over-confidence in a single parameterization method. A normalized least-squares estimation (NLSE) is adopted to calculate the posterior probability for each GP. We employ the adjoint state method for the sensitivity analysis on the weighting coefficients in the GP method. The adjoint state method is also applied to the NLSE problem. The proposed methodology is applied to the Alamitos Barrier Project (ABP) in California, where the spatially distributed hydraulic conductivity is estimated. The optimal weighting coefficients embedded in GP are identified through the maximum likelihood estimation (MLE) where the misfits between the observed and calculated groundwater heads are minimized. The conditional mean and conditional variance of the estimated hydraulic conductivity distribution using BMA are obtained to assess the estimation uncertainty.
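
    The BMA uncertainty decomposition above (within-parameterization plus between-parameterization variance) has a compact closed form. A hedged sketch on generic per-model estimates, not the ABP hydraulic-conductivity model:

```python
def bma_mean_and_variance(weights, means, variances):
    """Bayesian Model Averaging of per-model estimates.

    Each model contributes an estimate (mean, variance) weighted by its
    posterior model probability.  The total BMA variance is the sum of the
    within-model variance and the between-model variance.
    """
    total_w = sum(weights)
    w = [wi / total_w for wi in weights]          # normalize the posteriors
    bma_mean = sum(wi * mi for wi, mi in zip(w, means))
    within = sum(wi * vi for wi, vi in zip(w, variances))
    between = sum(wi * (mi - bma_mean) ** 2 for wi, mi in zip(w, means))
    return bma_mean, within + between
```

    Note that even when every individual model reports the same variance, the total BMA variance exceeds it whenever the model means disagree — that spread is exactly the between-parameterization term.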

  9. A study of Solar-Enso correlation with southern Brazil tree ring index (1955- 1991)

    NASA Astrophysics Data System (ADS)

    Rigozo, N.; Nordemann, D.; Vieira, L.; Echer, E.

    The effects of solar activity and El Niño-Southern Oscillation on tree growth in Southern Brazil were studied by correlation analysis. Trees for this study were native Araucaria (Araucaria angustifolia) from four locations in Rio Grande do Sul State, in Southern Brazil: Canela (29°18'S, 50°51'W, 790 m asl), Nova Petropolis (29°2'S, 51°10'W, 579 m asl), Sao Francisco de Paula (29°25'S, 50°24'W, 930 m asl) and Sao Martinho da Serra (29°30'S, 53°53'W, 484 m asl). From these four sites, an average tree ring index for this region was derived, for the period 1955-1991. Linear correlations were made on annual and 10-year running averages of this tree ring index, of sunspot number Rz and of SOI. For annual averages, the correlation coefficients were low, and the multiple regression between tree rings, SOI and Rz indicates that 20% of the variance in tree rings was explained by solar activity and ENSO variability. However, when the correlations were made on 10-year running averages, the correlation coefficients were much higher. A clear anticorrelation is observed between SOI and the index (r=-0.81) whereas Rz and the index show a positive correlation (r=0.67). The multiple regression of 10-year running averages indicates that 76% of the variance in the tree ring index was explained by solar activity and ENSO. These results indicate that the effects of solar activity and ENSO on tree rings are better seen on long timescales.
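
    Smoothing two series with a running average before correlating them, as done above with 10-year windows, can be sketched as follows (toy series, not the tree ring data; the variance explained by a single predictor is r²):

```python
import statistics

def running_mean(series, window):
    """Simple trailing running average of length `window`."""
    return [sum(series[i:i + window]) / window
            for i in range(len(series) - window + 1)]

def pearson_r(x, y):
    """Pearson correlation coefficient of two equal-length series."""
    mx, my = statistics.mean(x), statistics.mean(y)
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / (sxx * syy) ** 0.5
```

    Correlating `running_mean(index, 10)` against `running_mean(rz, 10)` rather than the annual values suppresses short-period noise, which is why the smoothed coefficients in the study come out much higher — at the cost of fewer effectively independent samples.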

  10. Cross National Study on Pre-Service Elementary and Science Teachers' Opinions on Science Teaching

    ERIC Educational Resources Information Center

    Šorgo, Andrej; Pipenbaher, Nataša; Šašic, Slavica Šimic; Prokop, Pavol; Kubiatko, Milan; Golob, Nika; Erdogan, Mehmet; Tomažic, Iztok; Bilek, Martin; Fancovicova, Jana; Lamanauskas, Vincentas; Usak, Muhammet

    2015-01-01

    A cross national study of opinions on science teaching was conducted on a sample of 1799 (596 males, 1203 females) pre-service elementary and science teachers enrolled in various departments at selected universities in Croatia, Czech Republic, Lithuania, Slovakia, Slovenia and Turkey. Three factors explaining 43.4% of variance were extracted from a…

  11. On the Fallibility of Principal Components in Research

    ERIC Educational Resources Information Center

    Raykov, Tenko; Marcoulides, George A.; Li, Tenglong

    2017-01-01

    The measurement error in principal components extracted from a set of fallible measures is discussed and evaluated. It is shown that as long as one or more measures in a given set of observed variables contains error of measurement, so also does any principal component obtained from the set. The error variance in any principal component is shown…

  12. Extracting Intrinsic Functional Networks with Feature-Based Group Independent Component Analysis

    ERIC Educational Resources Information Center

    Calhoun, Vince D.; Allen, Elena

    2013-01-01

    There is increasing use of functional imaging data to understand the macro-connectome of the human brain. Of particular interest is the structure and function of intrinsic networks (regions exhibiting temporally coherent activity both at rest and while a task is being performed), which account for a significant portion of the variance in…

  13. Method and system for modulation of gain suppression in high average power laser systems

    DOEpatents

    Bayramian, Andrew James [Manteca, CA

    2012-07-31

    A high average power laser system with modulated gain suppression includes an input aperture associated with a first laser beam extraction path and an output aperture associated with the first laser beam extraction path. The system also includes a pinhole creation laser having an optical output directed along a pinhole creation path and an absorbing material positioned along both the first laser beam extraction path and the pinhole creation path. The system further includes a mechanism operable to translate the absorbing material in a direction crossing the first laser beam extraction path and a controller operable to modulate the second laser beam.

  14. Are U.S. Military Interventions Contagious over Time? Intervention Timing and Its Implications for Force Planning

    DTIC Science & Technology

    2013-01-01

    [Report table of contents, excerpted: 3.5. ARIMA Models, Temporal Clustering of Conflicts; 3.9. ARIMA Models.] …variance across a distribution. Autoregressive integrated moving average (ARIMA) models are used with time-series data sets and are designed to capture…

  15. Obtaining the variance of gametic diversity with genomic models

    USDA-ARS?s Scientific Manuscript database

    It may be possible to use information about the variability among gametes (spermatozoa and ova) to select parents that are more likely than average to produce offspring with extremely high or low breeding values. In this study, statistical formulae were developed to calculate variability among gamet...

  16. Toward Joint Hypothesis-Tests Seismic Event Screening Analysis: Ms|mb and Event Depth

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Anderson, Dale; Selby, Neil

    2012-08-14

    Well-established theory can be used to combine single-phenomenology hypothesis tests into a multi-phenomenology event screening hypothesis test (Fisher's and Tippett's tests). The standard error commonly used in the Ms:mb event screening hypothesis test is not fully consistent with its physical basis. An improved standard error agrees better with the physical basis: it correctly partitions error to include model error as a component of variance, and correctly reduces station noise variance through network averaging. For the 2009 DPRK test, the commonly used standard error 'rejects' H0 even with a better scaling slope ({beta} = 1, Selby et al.), while the improved standard error 'fails to reject' H0.
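
    Fisher's combined probability test mentioned above has a simple closed form when the per-phenomenology p-values are independent. A minimal stdlib-only sketch, using the fact that the chi-square survival function has a finite-sum form for even degrees of freedom:

```python
import math

def fisher_combined_pvalue(pvalues):
    """Fisher's method: combine k independent p-values into one test.

    The statistic -2 * sum(ln p_i) is chi-square distributed with 2k
    degrees of freedom under the joint null.  For even df = 2k the
    survival function is exp(-x/2) * sum_{j=0}^{k-1} (x/2)^j / j!,
    so no external stats library is needed.
    """
    k = len(pvalues)
    x = -2.0 * sum(math.log(p) for p in pvalues)
    half = x / 2.0
    return math.exp(-half) * sum(half ** j / math.factorial(j) for j in range(k))
```

    With a single input the method returns the p-value unchanged; with several weak pieces of evidence it can yield a combined p-value smaller than any individual one, which is the point of multi-phenomenology screening.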

  17. Machine protection system for rotating equipment and method

    DOEpatents

    Lakshminarasimha, Arkalgud N.; Rucigay, Richard J.; Ozgur, Dincer

    2003-01-01

    A machine protection system and method for rotating equipment introduces new alarming features and makes use of full proximity probe sensor information, including amplitude and phase. Baseline vibration amplitude and phase data is estimated and tracked according to operating modes of the rotating equipment. Baseline vibration and phase data can be determined using a rolling average and variance and stored in a unit circle or tracked using short term average and long term average baselines. The sensed vibration amplitude and phase is compared with the baseline vibration amplitude and phase data. Operation of the rotating equipment can be controlled based on the vibration amplitude and phase.
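
    A rolling baseline of vibration amplitude such as the one described above can be tracked online with Welford's algorithm. A minimal sketch of the idea, not the patented system; the `n_sigma` alarm threshold is an illustrative assumption:

```python
class RollingBaseline:
    """Online (Welford) estimate of mean and variance, usable to track a
    vibration-amplitude baseline as samples arrive one at a time."""

    def __init__(self):
        self.n = 0
        self.mean = 0.0
        self._m2 = 0.0   # running sum of squared deviations from the mean

    def update(self, x):
        """Fold one new sample into the baseline."""
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self._m2 += delta * (x - self.mean)

    @property
    def variance(self):
        return self._m2 / self.n if self.n else 0.0

    def is_anomalous(self, x, n_sigma=3.0):
        """Flag a sample deviating more than n_sigma from the baseline."""
        return self.n > 1 and abs(x - self.mean) > n_sigma * self.variance ** 0.5
```

    Splitting the stream into a short-term and a long-term instance of this tracker gives the two-baseline comparison the abstract alludes to, without storing the raw sample history.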

  18. Phytochemical Constituents and Antimicrobial Activity of the Ethanol and Chloroform Crude Leaf Extracts of Spathiphyllum cannifolium (Dryand. ex Sims) Schott.

    PubMed

    Dhayalan, Arunachalam; Gracilla, Daniel E; Dela Peña, Renato A; Malison, Marilyn T; Pangilinan, Christian R

    2018-01-01

    The study investigated the medicinal properties of Spathiphyllum cannifolium (Dryand. ex Sims) Schott as a possible source of antimicrobial compounds. The phytochemical constituents were screened using qualitative methods and the antibacterial and antifungal activities were determined using the agar well diffusion method. One-way analysis of variance and Fisher's least significant difference test were used. The phytochemical screening showed the presence of sterols, flavonoids, alkaloids, saponins, glycosides, and tannins in both ethanol and chloroform leaf extracts, but triterpenes were detected only in the ethanol leaf extract. The antimicrobial assay revealed that the chloroform leaf extract inhibited Candida albicans, Escherichia coli, Staphylococcus aureus, Bacillus subtilis, and Pseudomonas aeruginosa, whereas the ethanol leaf extract inhibited E. coli, S. aureus, and B. subtilis only. The ethanol and chloroform leaf extracts exhibited the highest zone of inhibition against B. subtilis. The antifungal assay showed that both the leaf extracts have no bioactivity against Aspergillus niger and C. albicans. Results suggest that chloroform is the better solvent for the extraction of antimicrobial compounds against the test organisms used in this study. Findings of this research will add new knowledge in advancing drug discovery and development in the Philippines.

  19. Chemical characteristic and functional properties of arenga starch-taro (Colocasia esculanta L.) flour noodle with turmeric extracts addition

    NASA Astrophysics Data System (ADS)

    Ervika Rahayu N., H.; Ariani, Dini; Miftakhussolikhah, E., Maharani P.; Yudi, P.

    2017-01-01

    Arenga starch-taro (Colocasia esculanta L.) flour noodle is an alternative carbohydrate source made from 75% arenga starch and 25% taro flour, but it has a different color from commercial noodle products. The addition of natural color from turmeric may change consumer preference and affect the chemical characteristics and functional properties of the noodle. This research aims to identify the chemical characteristics and functional properties of arenga starch-taro flour noodle with turmeric extract addition. Extraction was performed using five levels of turmeric rhizome (0.06, 0.12, 0.18, 0.24, and 0.30 g fresh weight/ml water). Noodles were then made, and chemical characteristics (proximate analysis) as well as functional properties (amylose, resistant starch, dietary fiber, antioxidant activity) were evaluated. The results showed that the addition of turmeric extract did not significantly change protein, fat, carbohydrate, amylose, or resistant starch content, while antioxidant activity increased (23.41%) with the addition of turmeric extract.

  20. Genetic analysis of Holstein cattle populations in Brazil and the United States.

    PubMed

    Costa, C N; Blake, R W; Pollak, E J; Oltenacu, P A; Quaas, R L; Searle, S R

    2000-12-01

    Genetic relationships between Brazilian and US Holstein cattle populations were studied using first-lactation records of 305-d mature equivalent (ME) yields of milk and fat of daughters of 705 sires in Brazil and 701 sires in the United States, 358 of which had progeny in both countries. Components of (co)variance and genetic parameters were estimated from all data and from within herd-year standard deviation for milk (HYSD) data files using bivariate and multivariate sire models and DFREML procedures distinguishing the two countries. Sire (residual) variances from all data for milk yield were 51 to 59% (58 to 101%) as large in Brazil as those obtained from half-sisters in the average US herd. Corresponding proportions of the US variance in fat yield that were found in Brazil were 30 to 41% for the sire component of variance and 48 to 80% for the residual. Heritabilities for milk and fat yields from multivariate analysis of all the data were 0.25 and 0.22 in Brazil, and 0.34 and 0.35 in the United States. Genetic correlations between milk and fat were 0.79 in Brazil and 0.62 in the United States. Genetic correlations between countries were 0.85 for milk, 0.88 for fat, 0.55 for milk in Brazil and fat in the US, and 0.67 for fat in Brazil and milk in the United States. Correlated responses in Brazil from sire selection based on the US information increased with average HYSD in Brazil. Largest daughter yield response was predicted from information from half-sisters in low HYSD US herds (0.75 kg/kg for milk; 0.63 kg/kg for fat), which was 14% to 17% greater than estimates from all US herds because the scaling effects were less severe from heterogeneous variances. Unequal daughter response from unequal genetic (co)variances under restrictive Brazilian conditions is evidence for the interaction of genotype and environment.
The smaller and variable yield expectations of daughters of US sires in Brazilian environments suggest the need for specific genetic improvement strategies in Brazilian Holstein herds. A US data file restricting daughter information to low HYSD US environments would be a wise choice for across-country evaluation. Procedures to incorporate such foreign evaluations should be explored to improve the accuracy of genetic evaluations for the Brazilian Holstein population.

  1. On weak lensing shape noise

    NASA Astrophysics Data System (ADS)

    Niemi, Sami-Matias; Kitching, Thomas D.; Cropper, Mark

    2015-12-01

    One of the most powerful techniques to study the dark sector of the Universe is weak gravitational lensing. In practice, to infer the reduced shear, weak lensing measures galaxy shapes, which are the consequence of both the intrinsic ellipticity of the sources and of the integrated gravitational lensing effect along the line of sight. Hence, a very large number of galaxies is required in order to average over their individual properties and to isolate the weak lensing cosmic shear signal. If this `shape noise' can be reduced, significant advances in the power of a weak lensing surveys can be expected. This paper describes a general method for extracting the probability distributions of parameters from catalogues of data using Voronoi cells, which has several applications, and has synergies with Bayesian hierarchical modelling approaches. This allows us to construct a probability distribution for the variance of the intrinsic ellipticity as a function of galaxy property using only photometric data, allowing a reduction of shape noise. As a proof of concept the method is applied to the CFHTLenS survey data. We use this approach to investigate trends of galaxy properties in the data and apply this to the case of weak lensing power spectra.

  2. New insights into the crowd characteristics in Mina

    NASA Astrophysics Data System (ADS)

    Wang, J. Y.; Weng, W. G.; Zhang, X. L.

    2014-11-01

    The significance of studying the characteristics of crowd behavior for safely organizing mass activities is indubitable, yet material for conducting such research is scarce. In this paper, the Mina crowd disaster is quantitatively re-investigated. Its instantaneous velocity field is extracted from video material based on the cross-correlation algorithm. The properties of the stop-and-go waves, including fluctuation frequencies, wave propagation speeds, characteristic speeds, and time- and space-averaged velocity variances, are analyzed in detail. Thus, the database of stop-and-go wave features is enriched, which is very important to crowd studies. The 'turbulent' flows are investigated with the proper orthogonal decomposition (POD) method, which is widely used in fluid mechanics, and time series and spatial analyses are conducted to investigate their characteristics. In this paper, the coherent structures and movement process are described by the POD method. The relationship between the jamming point and crowd path is analyzed, and the pressure buffer recognized in this paper is consistent with Helbing's high-pressure region. The results revealed here may be helpful for facilities design, modeling crowded scenarios and the organization of large-scale mass activities.

  3. Dose coverage calculation using a statistical shape model—applied to cervical cancer radiotherapy

    NASA Astrophysics Data System (ADS)

    Tilly, David; van de Schoot, Agustinus J. A. J.; Grusell, Erik; Bel, Arjan; Ahnesjö, Anders

    2017-05-01

    A comprehensive methodology for treatment simulation and evaluation of dose coverage probabilities is presented where a population-based statistical shape model (SSM) provides samples of fraction-specific patient geometry deformations. The learning data consist of vector fields from deformable image registration of repeated imaging giving intra-patient deformations which are mapped to an average patient serving as a common frame of reference. The SSM is created by extracting the most dominating eigenmodes through principal component analysis of the deformations from all patients. The sampling of a deformation is thus reduced to sampling weights for enough of the most dominating eigenmodes that describe the deformations. For the cervical cancer patient datasets in this work, we found seven eigenmodes to be sufficient to capture 90% of the variance in the deformations, and only three eigenmodes were needed for stability in the simulated dose coverage probabilities. The normality assumption of the eigenmode weights was tested and found relevant for the 20 most dominating eigenmodes except for the first. Individualization of the SSM is demonstrated to be improved using two deformation samples from a new patient. The probabilistic evaluation provided additional information about the trade-offs compared to the conventional single dataset treatment planning.
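
    Choosing how many eigenmodes capture a target fraction of variance, as in the 90% criterion above, reduces to a cumulative sum over the sorted eigenvalues of the PCA. A small sketch with hypothetical eigenvalues:

```python
def modes_for_variance(eigenvalues, target=0.90):
    """Number of leading eigenmodes whose eigenvalues (i.e. per-mode
    variances) together cover at least `target` of the total variance."""
    vals = sorted(eigenvalues, reverse=True)   # dominating modes first
    total = sum(vals)
    cumulative = 0.0
    for k, v in enumerate(vals, start=1):
        cumulative += v
        if cumulative / total >= target:
            return k
    return len(vals)
```

    The two thresholds reported in the study correspond to two calls with different targets: a stricter one for describing the deformations themselves and a looser one for stability of the simulated coverage probabilities.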

  4. Statistical and sampling issues when using multiple particle tracking

    NASA Astrophysics Data System (ADS)

    Savin, Thierry; Doyle, Patrick S.

    2007-08-01

    Video microscopy can be used to simultaneously track several microparticles embedded in a complex material. The trajectories are used to extract a sample of displacements at random locations in the material. From this sample, averaged quantities characterizing the dynamics of the probes are calculated to evaluate structural and/or mechanical properties of the assessed material. However, the sampling of measured displacements in heterogeneous systems is singular because the volume of observation with video microscopy is finite. By carefully characterizing the sampling design in the experimental output of the multiple particle tracking technique, we derive estimators for the mean and variance of the probes’ dynamics that are independent of the peculiar statistical characteristics. We expose stringent tests of these estimators using simulated and experimental complex systems with a known heterogeneous structure. Up to a certain fundamental limitation, which we characterize through a material degree of sampling by the embedded probe tracking, these estimators can be applied to quantify the heterogeneity of a material, providing an original and intelligible kind of information on complex fluid properties. More generally, we show that the precise assessment of the statistics in the multiple particle tracking output sample of observations is essential in order to provide accurate unbiased measurements.

  5. Atomic-scale phase composition through multivariate statistical analysis of atom probe tomography data.

    PubMed

    Keenan, Michael R; Smentkowski, Vincent S; Ulfig, Robert M; Oltman, Edward; Larson, David J; Kelly, Thomas F

    2011-06-01

    We demonstrate for the first time that multivariate statistical analysis techniques can be applied to atom probe tomography data to estimate the chemical composition of a sample at the full spatial resolution of the atom probe in three dimensions. Whereas the raw atom probe data provide the specific identity of an atom at a precise location, the multivariate results can be interpreted in terms of the probabilities that an atom representing a particular chemical phase is situated there. When aggregated to the size scale of a single atom (∼0.2 nm), atom probe spectral-image datasets are huge and extremely sparse. In fact, the average spectrum will have somewhat less than one total count per spectrum due to imperfect detection efficiency. These conditions, under which the variance in the data is completely dominated by counting noise, test the limits of multivariate analysis, and an extensive discussion of how to extract the chemical information is presented. Efficient numerical approaches to performing principal component analysis (PCA) on these datasets, which may number hundreds of millions of individual spectra, are put forward, and it is shown that PCA can be computed in a few seconds on a typical laptop computer.

  6. Validation of the Malay version of the Inventory of Functional Status after Childbirth questionnaire.

    PubMed

    Noor, Norhayati Mohd; Aziz, Aniza Abd; Mostapa, Mohd Rosmizaki; Awang, Zainudin

    2015-01-01

    This study was designed to examine the psychometric properties of Malay version of the Inventory of Functional Status after Childbirth (IFSAC). A cross-sectional study. A total of 108 postpartum mothers attending Obstetrics and Gynaecology Clinic, in a tertiary teaching hospital in Malaysia, were involved. Construct validity and internal consistency were performed after the translation, content validity, and face validity process. The data were analyzed using Analysis of Moment Structure version 18 and Statistical Packages for the Social Sciences version 20. The final model consists of four constructs, namely, infant care, personal care, household activities, and social and community activities, with 18 items demonstrating acceptable factor loadings, domain to domain correlation, and best fit (Chi-squared/degree of freedom = 1.678; Tucker-Lewis index = 0.923; comparative fit index = 0.936; and root mean square error of approximation = 0.080). Composite reliability and average variance extracted of the domains ranged from 0.659 to 0.921 and from 0.499 to 0.628, respectively. The study suggested that the four-factor model with 18 items of the Malay version of IFSAC was acceptable to be used to measure functional status after childbirth because it is valid, reliable, and simple.
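
    The composite reliability and average variance extracted ranges reported above are standard functions of standardized factor loadings. A hedged sketch with hypothetical loadings, assuming an error variance of 1 − λ² per indicator (not the IFSAC data):

```python
def ave_and_composite_reliability(loadings):
    """Average variance extracted (AVE) and composite reliability (CR)
    from standardized factor loadings of one construct.

    AVE = mean of the squared loadings
    CR  = (sum of loadings)**2 / ((sum of loadings)**2 + sum of error variances)
    with each indicator's error variance taken as 1 - loading**2.
    """
    squared = [l * l for l in loadings]
    ave = sum(squared) / len(loadings)
    s = sum(loadings)
    cr = s * s / (s * s + sum(1.0 - q for q in squared))
    return ave, cr
```

    The usual rules of thumb are CR ≥ 0.7 and AVE ≥ 0.5, which is why the domain with AVE 0.499 above sits right at the conventional cutoff.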

  7. A complete passive blind image copy-move forensics scheme based on compound statistics features.

    PubMed

    Peng, Fei; Nie, Yun-ying; Long, Min

    2011-10-10

    Since most sensor pattern noise based image copy-move forensics methods require a known reference sensor pattern noise, they generally result in non-blind passive forensics, which significantly confines the application circumstances. In view of this, a novel passive-blind image copy-move forensics scheme is proposed in this paper. Firstly, a color image is transformed into a grayscale one, and a wavelet-transform-based de-noising filter is used to extract the sensor pattern noise. The variance of the pattern noise, the signal-to-noise ratio between the de-noised image and the pattern noise, the information entropy, and the average energy gradient of the original grayscale image are then chosen as features, and non-overlapping sliding window operations divide the images into sub-blocks. Finally, the tampered areas are detected by analyzing the correlation of the features between the sub-blocks and the whole image. Experimental results and analysis show that the proposed scheme is completely passive-blind, has a good detection rate, and is robust against JPEG compression, noise, rotation, scaling and blurring. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.

  8. Reliability and Construct Validity of the Portuguese Version of the Psychological Capital Questionnaire.

    PubMed

    Antunes, Ana Cristina; Caetano, António; Pina E Cunha, Miguel

    2017-06-01

    The Psychological Capital Questionnaire (PCQ) is the most commonly used measure for assessing psychological capital in work settings. Although several studies confirmed its factorial validity, most validation studies only examined the four-factor structure preconized by Luthans, Youssef, and Avolio, not attending to empirical evidence on alternative factorial structures. The present study aimed to test the psychometric properties of the Portuguese version of the PCQ, by using two independent samples (NS1 = 542; NS2 = 115) of Portuguese employees. We conducted a series of confirmatory factor analyses and found that, unlike previous findings, a five-factor solution of the PCQ best fitted the data. The evidence obtained also supported the existence of a second-order factor, psychological capital. The coefficients of internal consistency, as measured by Cronbach's alpha, were adequate and test-retest reliability suggested that the PCQ presented a lower stability than personality factors. Convergent validity, assessed with average variance extracted, revealed problems in the optimism subscale. The discriminant validity of the PCQ was confirmed by its correlations with Positive and Negative Affect and Big Five personality factors. Hierarchical regression analyses showed that this measure has incremental validity over personality and affect when predicting job performance.

  9. [Genetic diversity of wild Cynodon dactylon germplasm detected by SRAP markers].

    PubMed

    Yi, Yang-Jie; Zhang, Xin-Quan; Huang, Lin-Kai; Ling, Yao; Ma, Xiao; Liu, Wei

    2008-01-01

    Sequence-related amplified polymorphism (SRAP) molecular markers were used to detect the genetic diversity of 32 wild accessions of Cynodon dactylon collected from Sichuan, Chongqing, Guizhou and Tibet, China. The following results were obtained. (1) Fourteen primer pairs produced 132 polymorphic bands, averaging 9.4 bands per primer pair. The percentage of polymorphic bands averaged 79.8%. The Nei's genetic similarity coefficient of the tested accessions ranged from 0.591 to 0.957, with an average of 0.759. These results suggested that there was rich genetic diversity among the wild resources of Cynodon dactylon tested. (2) The 32 wild accessions were clustered into four groups, and accessions from the same origin frequently clustered into one group, implying a correlation between the wild resources and their geographical and ecological environments. (3) Genetic differentiation between and within six eco-geographical groups of C. dactylon was estimated by Shannon's diversity index, which showed that 65.56% of the genetic variance existed within groups, and 34.44% among groups. (4) Based on Nei's unbiased measures of genetic identity, UPGMA cluster analysis of the six eco-geographical groups of Cynodon dactylon indicated that there was a correlation between genetic differentiation and eco-geographical habits among the groups.

  10. Globally efficient non-parametric inference of average treatment effects by empirical balancing calibration weighting

    PubMed Central

    Chan, Kwun Chuen Gary; Yam, Sheung Chi Phillip; Zhang, Zheng

    2015-01-01

    The estimation of average treatment effects based on observational data is extremely important in practice and has been studied by generations of statisticians under different frameworks. Existing globally efficient estimators require non-parametric estimation of a propensity score function, an outcome regression function or both, but their performance can be poor in practical sample sizes. Without explicitly estimating either function, we consider a wide class of calibration weights constructed to attain an exact three-way balance of the moments of observed covariates among the treated, the control, and the combined group. The wide class includes exponential tilting, empirical likelihood and generalized regression as important special cases, and extends survey calibration estimators to different statistical problems and with important distinctions. Global semiparametric efficiency for the estimation of average treatment effects is established for this general class of calibration estimators. The results show that efficiency can be achieved by solely balancing the covariate distributions without resorting to direct estimation of the propensity score or outcome regression function. We also propose a consistent estimator for the efficient asymptotic variance, which does not involve additional functional estimation of either the propensity score or the outcome regression functions. The proposed variance estimator outperforms existing estimators that require a direct approximation of the efficient influence function. PMID:27346982

  11. Globally efficient non-parametric inference of average treatment effects by empirical balancing calibration weighting.

    PubMed

    Chan, Kwun Chuen Gary; Yam, Sheung Chi Phillip; Zhang, Zheng

    2016-06-01

    The estimation of average treatment effects based on observational data is extremely important in practice and has been studied by generations of statisticians under different frameworks. Existing globally efficient estimators require non-parametric estimation of a propensity score function, an outcome regression function or both, but their performance can be poor in practical sample sizes. Without explicitly estimating either function, we consider a wide class of calibration weights constructed to attain an exact three-way balance of the moments of observed covariates among the treated, the control, and the combined group. The wide class includes exponential tilting, empirical likelihood and generalized regression as important special cases, and extends survey calibration estimators to different statistical problems and with important distinctions. Global semiparametric efficiency for the estimation of average treatment effects is established for this general class of calibration estimators. The results show that efficiency can be achieved by solely balancing the covariate distributions without resorting to direct estimation of the propensity score or outcome regression function. We also propose a consistent estimator for the efficient asymptotic variance, which does not involve additional functional estimation of either the propensity score or the outcome regression functions. The proposed variance estimator outperforms existing estimators that require a direct approximation of the efficient influence function.
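The exponential-tilting member of this calibration class corresponds to entropy-balancing weights: control-group weights of the form w_i ∝ exp(λᵀx_i), with λ chosen so the weighted control covariate means match a target (e.g. the treated-group means). A toy sketch of that one special case on synthetic data; the Newton solver and the data are illustrative, not the authors' implementation:

```python
import numpy as np

def entropy_balance(X_control, target_mean, n_iter=50):
    """Exponential-tilting (entropy balancing) weights for the control
    group: w_i proportional to exp(lam . x_i), with lam chosen so the
    weighted control covariate means equal target_mean. Solves the
    convex dual f(lam) = log sum_i exp(lam . (x_i - target)) by Newton."""
    Z = X_control - target_mean
    lam = np.zeros(Z.shape[1])
    for _ in range(n_iter):
        a = Z @ lam
        a -= a.max()                      # stabilise the exponentials
        w = np.exp(a)
        w /= w.sum()
        grad = w @ Z                      # weighted mean of Z (zero at optimum)
        hess = (Z * w[:, None]).T @ Z - np.outer(grad, grad)
        lam -= np.linalg.solve(hess + 1e-10 * np.eye(len(lam)), grad)
    a = Z @ lam
    a -= a.max()
    w = np.exp(a)
    return w / w.sum()

# Synthetic control covariates and a hypothetical treated-group mean
rng = np.random.default_rng(0)
X_control = rng.normal(size=(500, 2))
treated_mean = np.array([0.3, -0.2])
w = entropy_balance(X_control, treated_mean)
balanced_mean = w @ X_control             # matches treated_mean after balancing
```

The point of the paper is that weighting of this kind, which balances moments exactly, can attain the semiparametric efficiency bound without ever fitting a propensity score model.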

  12. Estimation of time averages from irregularly spaced observations - With application to coastal zone color scanner estimates of chlorophyll concentration

    NASA Technical Reports Server (NTRS)

    Chelton, Dudley B.; Schlax, Michael G.

    1991-01-01

    The sampling error of an arbitrary linear estimate of a time-averaged quantity constructed from a time series of irregularly spaced observations at a fixed location is quantified using a general formalism. The method is applied to satellite observations of chlorophyll from the coastal zone color scanner. The two specific linear estimates under consideration are the composite average, formed from the simple average of all observations within the averaging period, and the optimal estimate, formed by minimizing the mean squared error of the temporal average based on all the observations in the time series. Suboptimal estimates, i.e. optimal estimates computed with approximate signal and measurement error statistics, are shown to be more accurate than composite averages. Suboptimal estimates are also found to be nearly as accurate as optimal estimates using the correct signal and measurement error variances and correlation functions for realistic ranges of these parameters, which makes suboptimal estimation a viable practical alternative to the composite averaging generally employed at present.

  13. Interpreting consumer preferences: physicohedonic and psychohedonic models yield different information in a coffee-flavored dairy beverage.

    PubMed

    Li, Bangde; Hayes, John E; Ziegler, Gregory R

    2014-09-01

    Designed experiments provide product developers feedback on the relationship between formulation and consumer acceptability. While actionable, this approach typically assumes a simple psychophysical relationship between ingredient concentration and perceived intensity. This assumption may not be valid, especially in cases where perceptual interactions occur. Additional information can be gained by considering the liking-intensity function, as single ingredients can influence more than one perceptual attribute. Here, 20 coffee-flavored dairy beverages were formulated using a fractional mixture design that varied the amount of coffee extract, fluid milk, sucrose, and water. Overall liking (liking) was assessed by 388 consumers using an incomplete block design (4 out of 20 prototypes) to limit fatigue; all participants also rated the samples for intensity of coffee flavor (coffee), milk flavor (milk), sweetness (sweetness) and thickness (thickness). Across product means, the concentration variables explained 52% of the variance in liking in main effects multiple regression. The amount of sucrose (β = 0.46) and milk (β = 0.46) contributed significantly to the model (p's <0.02) while coffee extract (β = -0.17; p = 0.35) did not. A comparable model based on the perceived intensities explained 63% of the variance in mean liking; sweetness (β = 0.53) and milk (β = 0.69) contributed significantly to the model (p's <0.04), while the influence of coffee flavor (β = 0.48) was positive but only marginally significant (p = 0.09). Since a strong linear relationship existed between coffee extract concentration and coffee flavor, this discrepancy between the two models was unexpected, and probably indicates that adding more coffee extract also adds a negative attribute, e.g. too much bitterness. In summary, modeling liking as a function of both perceived intensity and physical concentration provides a richer interpretation of consumer data.

  14. Interpreting consumer preferences: physicohedonic and psychohedonic models yield different information in a coffee-flavored dairy beverage

    PubMed Central

    Li, Bangde; Hayes, John E.; Ziegler, Gregory R.

    2014-01-01

    Designed experiments provide product developers feedback on the relationship between formulation and consumer acceptability. While actionable, this approach typically assumes a simple psychophysical relationship between ingredient concentration and perceived intensity. This assumption may not be valid, especially in cases where perceptual interactions occur. Additional information can be gained by considering the liking-intensity function, as single ingredients can influence more than one perceptual attribute. Here, 20 coffee-flavored dairy beverages were formulated using a fractional mixture design that varied the amount of coffee extract, fluid milk, sucrose, and water. Overall liking (liking) was assessed by 388 consumers using an incomplete block design (4 out of 20 prototypes) to limit fatigue; all participants also rated the samples for intensity of coffee flavor (coffee), milk flavor (milk), sweetness (sweetness) and thickness (thickness). Across product means, the concentration variables explained 52% of the variance in liking in main effects multiple regression. The amount of sucrose (β = 0.46) and milk (β = 0.46) contributed significantly to the model (p’s <0.02) while coffee extract (β = −0.17; p = 0.35) did not. A comparable model based on the perceived intensities explained 63% of the variance in mean liking; sweetness (β = 0.53) and milk (β = 0.69) contributed significantly to the model (p’s <0.04), while the influence of coffee flavor (β = 0.48) was positive but only marginally significant (p = 0.09). Since a strong linear relationship existed between coffee extract concentration and coffee flavor, this discrepancy between the two models was unexpected, and probably indicates that adding more coffee extract also adds a negative attribute, e.g. too much bitterness. In summary, modeling liking as a function of both perceived intensity and physical concentration provides a richer interpretation of consumer data. PMID:25024507

  15. "Transfer Shock" or "Transfer Ecstasy?"

    ERIC Educational Resources Information Center

    Nickens, John M.

    The alleged characteristic drop in grade point average (GPA) of transfer students and the subsequent rise in GPA was investigated in this study. No statistically significant difference was found in first term junior year GPA between junior college transfers and native Florida State University students after the variance accounted for by the…

  16. The Effect of Alkaline Earth Metal on the Cesium Loading of Ionsiv(R) IE-910 and IE-911

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fondeur, F.F.

    2001-01-16

    This study investigated the effect of variances in alkaline earth metal concentrations on cesium loading of IONSIV(R) IE-911. The study focused on Savannah River Site (SRS) ''average'' solution with varying amounts of calcium, barium and magnesium.

  17. A Review on Sensor, Signal, and Information Processing Algorithms (PREPRINT)

    DTIC Science & Technology

    2010-01-01

    processing [214], ambiguity surface averaging [215], optimum uncertain field tracking, and optimal minimum variance track-before-detect [216]. In [217, 218...2) (2001) 739–746. [216] S. L. Tantum, L. W. Nolte, J. L. Krolik, K. Harmanci, The performance of matched-field track-before-detect methods using

  18. The Effectiveness of the SSHA in Improving Prediction of Academic Achievement.

    ERIC Educational Resources Information Center

    Wikoff, Richard L.; Kafka, Gene F.

    1981-01-01

    Investigated the effectiveness of the Survey of Study Habits and Attitudes (SSHA) in improving prediction of achievement. The American College Testing Program English and mathematics subtests were good predictors of grade point average. The SSHA subtests accounted for an additional 3 percent of the variance. Sex differences were noted. (Author)

  19. Average combination difference morphological filters for fault feature extraction of bearing

    NASA Astrophysics Data System (ADS)

    Lv, Jingxiang; Yu, Jianbo

    2018-02-01

    In order to extract impulse components from vibration signals with much noise and many harmonics, a new morphological filter called the average combination difference morphological filter (ACDIF) is proposed in this paper. ACDIF first constructs several new combination difference (CDIF) operators, and then integrates the best two CDIFs as the final morphological filter. This design scheme enables ACDIF to extract the positive and negative impulses existing in vibration signals, enhancing the accuracy of bearing fault diagnosis. The length of the structure element (SE), which affects the performance of ACDIF, is determined adaptively by a new indicator called Teager energy kurtosis (TEK). TEK further improves the effectiveness of ACDIF for fault feature extraction. Experimental results on simulated and bearing vibration signals demonstrate that ACDIF can effectively suppress noise and extract periodic impulses from bearing vibration signals.
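ACDIF builds on difference-type morphological operators. The simplest member of that family, a closing-minus-opening difference with a flat structuring element, already extracts impulses of both signs while suppressing the smooth harmonic part. A minimal sketch of that basic operator on a synthetic signal (not the full ACDIF, which combines several CDIFs and selects the SE length via Teager energy kurtosis):

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def dilate(x, m):
    """Running maximum over a centred flat structuring element of length m (odd)."""
    xp = np.pad(x, m // 2, mode="edge")
    return sliding_window_view(xp, m).max(axis=1)

def erode(x, m):
    """Running minimum over a centred flat structuring element of length m (odd)."""
    xp = np.pad(x, m // 2, mode="edge")
    return sliding_window_view(xp, m).min(axis=1)

def closing_minus_opening(x, m):
    """Basic difference filter: closing fills narrow valleys, opening removes
    narrow peaks, so their difference keeps impulses of either sign while
    nearly cancelling on the smooth part of the signal."""
    closing = erode(dilate(x, m), m)
    opening = dilate(erode(x, m), m)
    return closing - opening

# Smooth harmonic with one positive and one negative impulse buried in it
t = np.linspace(0.0, 1.0, 1000)
sig = np.sin(2 * np.pi * 5 * t)
sig[200] += 3.0   # positive impulse
sig[600] -= 3.0   # negative impulse
feat = closing_minus_opening(sig, 5)   # large at both impulse locations
```

Both impulses appear as large positive values in the filter output, which is the property ACDIF exploits before averaging over operator combinations.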

  20. Upper limb prosthesis use and abandonment: a survey of the last 25 years.

    PubMed

    Biddiss, Elaine A; Chau, Tom T

    2007-09-01

    This review presents an analytical and comparative survey of upper limb prosthesis acceptance and abandonment as documented over the past 25 years, detailing areas of consumer dissatisfaction and ongoing technological advancements. English-language articles were identified in a search of Ovid, PubMed, and ISI Web of Science (1980 until February 2006) for the key words upper limb and prosthesis. Articles focused on upper limb prostheses and addressing (i) factors associated with abandonment, (ii) rejection rates, (iii) functional analyses and patterns of wear, and (iv) consumer satisfaction were extracted, excluding those detailing tools for outcome measurement, case studies, and medical procedures. Approximately 200 articles were included in the review process, with 40 providing rates of prosthesis rejection. Quantitative measures of population characteristics, study methodology, and prostheses in use were extracted from each article. Mean rejection rates of 45% and 35% were observed in the literature for body-powered and electric prostheses, respectively, in pediatric populations. Significantly lower rates of rejection for both body-powered (26%) and electric (23%) devices were observed in adult populations, while the average incidence of non-wear was similar for pediatric (16%) and adult (20%) populations. Documented rates of rejection exhibit a wide range of variance, possibly due to the heterogeneous samples involved and methodological differences between studies. Future research should consist of controlled, multifactor studies adopting standardized outcome measures in order to promote comprehensive understanding of the factors affecting prosthesis use and abandonment. An enhanced understanding of these factors is needed to optimize prescription practices, guide design efforts, and satiate demand for evidence-based measures of intervention.

  1. Understanding adherence to therapeutic guidelines: a multilevel analysis of statin prescription in the Skaraborg Primary Care Database.

    PubMed

    Hjerpe, Per; Ohlsson, Henrik; Lindblad, Ulf; Boström, Kristina Bengtsson; Merlo, Juan

    2011-04-01

    In Skaraborg, Sweden, the economic responsibility for tax-financed prescription drug costs was transferred from the regional administrative level to the local level (health care centre; HCC) in 2003. The aim of this study was to investigate the impact of this decentralization of economic responsibility on adherence to guidelines for prescribing lipid-lowering drugs. Data from all 24 public HCCs in Skaraborg on prescriptions for lipid-lowering drugs during 2003 and 2005 were extracted from the Skaraborg Primary Care Database (SPCD). Multilevel regression analysis (MLRA) was used to disentangle the variances at different levels of data (patient, physician, HCC). The outcome variable on the patient level was the prescription of the recommended statin (yes/no). Sex and age of the patients and sex, age and occupational status of the physician were included as fixed effects. The variance was expressed as the median odds ratio (MOR). The prevalence of adherence to guidelines for the prescription of statins increased from 77% in 2003 to 84% in 2005. The MLRA showed that in 2003 the variance was equally distributed between the HCC and physician levels (MOR(HCC2003)=1.89 vs. MOR(PHYSICIAN2003)=1.88). The variance between physicians and between HCCs decreased considerably between 2003 and 2005. The inclusion of individual and physician characteristics did not explain any of the remaining variance. The decentralized budget appears to have increased adherence to guidelines and reduced inefficient variation in prescribing.
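The median odds ratio (MOR) reported in this record translates a cluster-level variance σ² on the log-odds scale into an odds-ratio scale: MOR = exp(√(2σ²)·Φ⁻¹(0.75)), so MOR = 1 means no between-cluster (HCC or physician) variation. A short sketch of the conversion and its inverse, using only the Python standard library:

```python
from math import exp, log, sqrt
from statistics import NormalDist

Z75 = NormalDist().inv_cdf(0.75)   # third quartile of the standard normal, ~0.6745

def median_odds_ratio(cluster_variance):
    """MOR = exp(sqrt(2 * var) * z_0.75); equals 1 when clusters are identical."""
    return exp(sqrt(2.0 * cluster_variance) * Z75)

def variance_from_mor(mor):
    """Invert the MOR formula back to the cluster-level variance."""
    return (log(mor) / Z75) ** 2 / 2.0

# The reported MOR of 1.89 implies a cluster-level variance of roughly 0.45
implied_var = variance_from_mor(1.89)
```

The inverse makes the abstract's numbers interpretable: the 2003 MOR of about 1.89 at both levels corresponds to a random-intercept variance near 0.45 on the log-odds scale.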

  2. Characterisation of pectins extracted from banana peels (Musa AAA) under different conditions using an experimental design.

    PubMed

    Happi Emaga, Thomas; Ronkart, Sébastien N; Robert, Christelle; Wathelet, Bernard; Paquot, Michel

    2008-05-15

    An experimental design was used to study the influence of pH (1.5 and 2.0), temperature (80 and 90°C) and time (1 and 4 h) on the extraction of pectin from banana peels (Musa AAA). The yield of extracted pectins, their composition (neutral sugars, galacturonic acid, and degree of esterification) and some macromolecular characteristics (average molecular weight, intrinsic viscosity) were determined. It was found that extraction pH was the most important parameter influencing yield and pectin chemical composition. Lower pH values negatively affected the galacturonic acid content of pectin but increased the pectin yield. The degree of methylation decreased significantly with increasing temperature and time of extraction. The average molecular weight ranged widely, from 87 to 248 kDa, and was mainly influenced by pH and extraction time.

  3. Extraction of liver volumetry based on blood vessel from the portal phase CT dataset

    NASA Astrophysics Data System (ADS)

    Maklad, Ahmed S.; Matsuhiro, Mikio; Suzuki, Hidenobu; Kawata, Yoshiki; Niki, Noboru; Utsunomiya, Tohru; Shimada, Mitsuo

    2012-02-01

    At the liver surgery planning stage, liver volumetry is essential for surgeons. The main problem in liver extraction is the wide variability of livers in shape and size. Since hepatic blood vessel structure varies from person to person and covers the liver region, the present method uses that information for extraction of the liver in two stages. The first stage is to extract abdominal blood vessels in the form of hepatic and non-hepatic blood vessels. At the second stage, the extracted vessels are used to control extraction of the liver region automatically. Contrast-enhanced CT datasets at only the portal phase of 50 cases are used. These data include 30 abnormal livers. A reference for all cases is established through a comparison of two experts' labeling results and correction of their inter-reader variability. Results of the proposed method agree with the reference at an average rate of 97.8%. Through application of the different metrics mentioned at the MICCAI workshop for liver segmentation, it is found that volume overlap error is 4.4%, volume difference is 0.3%, average symmetric distance is 0.7 mm, root mean square symmetric distance is 0.8 mm, and maximum distance is 15.8 mm. These results represent the average over all data and show improved accuracy compared with current liver segmentation methods. It appears to be a promising method for extraction of liver volumetry across various shapes and sizes.

  4. Extraction of human gait signatures: an inverse kinematic approach using Groebner basis theory applied to gait cycle analysis

    NASA Astrophysics Data System (ADS)

    Barki, Anum; Kendricks, Kimberly; Tuttle, Ronald F.; Bunker, David J.; Borel, Christoph C.

    2013-05-01

    This research highlights the results obtained from applying the method of inverse kinematics, using Groebner basis theory, to the human gait cycle to extract and identify lower extremity gait signatures. The increased threat from suicide bombers and today's force protection issues have motivated a team at the Air Force Institute of Technology (AFIT) to research pattern recognition in the human gait cycle. The purpose of this research is to identify gait signatures of human subjects and to distinguish subjects carrying a load from those without a load. These signatures were investigated via a model of the lower extremities based on motion capture observations, in particular, foot placement and the joint angles for subjects affected by carrying extra load on the body. The human gait cycle was captured and analyzed using a developed toolkit consisting of an inverse kinematic motion model of the lower extremity and a graphical user interface. Hip, knee, and ankle angles were analyzed to identify gait angle variance and range of motion. Female subjects exhibited the most knee angle variance and produced a proportional correlation between knee flexion and load carriage.

  5. VO2 and VCO2 variabilities through indirect calorimetry instrumentation.

    PubMed

    Cadena-Méndez, Miguel; Escalante-Ramírez, Boris; Azpiroz-Leehan, Joaquín; Infante-Vázquez, Oscar

    2013-01-01

    The aim of this paper is to understand how to measure VO2 and VCO2 variabilities in indirect calorimetry (IC), since we believe they can explain the high variation in resting energy expenditure (REE) estimation. We propose that variabilities should be measured separately from the VO2 and VCO2 averages to understand technological differences among metabolic monitors when they estimate the REE. To test this hypothesis, the mixing chamber (MC) and breath-by-breath (BbB) techniques measured the VO2 and VCO2 averages and their variabilities. Variances and power spectrum energies in the 0-0.5 Hertz band were measured to establish differences between techniques in steady and non-steady state. A hybrid calorimeter with both IC techniques studied a population of 15 volunteers who underwent the clino-orthostatic maneuver in order to produce the two physiological stages. The results showed that inter-individual VO2 and VCO2 variabilities measured as variances were negligible using the MC, while variabilities measured as spectral energies using the BbB increased by 71% and 56% (p < 0.05), respectively. Additionally, the energy analysis showed an unexpected cyclic rhythm at 0.025 Hertz only during the orthostatic stage, which is new physiological information not reported previously. The VO2 and VCO2 inter-individual averages increased by 63% and 39% with the MC (p < 0.05) and by 32% and 40% with the BbB (p < 0.1), respectively, without noticeable statistical differences between techniques.
The conclusions are: (a) metabolic monitors should simultaneously include the MC and BbB techniques to correctly interpret the effect of steady- or non-steady-state variabilities on the REE estimation, (b) the MC is the appropriate technique to compute averages since it behaves as a low-pass filter that minimizes variances, (c) the BbB is the ideal technique to measure the variabilities since it can work as a high-pass filter to generate discrete time series suitable for spectral analysis, and (d) the new physiological information in the VO2 and VCO2 variabilities can help to explain why metabolic monitors with dissimilar IC techniques give different results in REE estimation.

  6. Temperature variation effects on stochastic characteristics for low-cost MEMS-based inertial sensor error

    NASA Astrophysics Data System (ADS)

    El-Diasty, M.; El-Rabbany, A.; Pagiatakis, S.

    2007-11-01

    We examine the effect of varying the temperature points on MEMS inertial sensors' noise models using Allan variance and least-squares spectral analysis (LSSA). Allan variance is a method of representing root-mean-square random drift error as a function of averaging times. LSSA is an alternative to the classical Fourier methods and has been applied successfully by a number of researchers in the study of the noise characteristics of experimental series. Static data sets are collected at different temperature points using two MEMS-based IMUs, namely MotionPakII and Crossbow AHRS300CC. The performance of the two MEMS inertial sensors is predicted from the Allan variance estimation results at different temperature points and the LSSA is used to study the noise characteristics and define the sensors' stochastic model parameters. It is shown that the stochastic characteristics of MEMS-based inertial sensors can be identified using Allan variance estimation and LSSA and the sensors' stochastic model parameters are temperature dependent. Also, the Kaiser window FIR low-pass filter is used to investigate the effect of de-noising stage on the stochastic model. It is shown that the stochastic model is also dependent on the chosen cut-off frequency.
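The non-overlapping Allan variance used here bins the static sensor record at each averaging time τ and takes half the mean squared difference of successive bin means; for white noise it falls as 1/τ, while flattening or rising branches reveal slower error processes such as bias instability and rate random walk. A minimal sketch on synthetic white noise (not the MotionPakII or AHRS300CC data):

```python
import numpy as np

def allan_variance(x, fs, taus):
    """Non-overlapping Allan variance of a signal sampled at fs Hz.
    For each averaging time tau the series is cut into bins of
    m = tau * fs samples; the Allan variance is half the mean squared
    difference of successive bin means."""
    x = np.asarray(x, dtype=float)
    result = []
    for tau in taus:
        m = int(round(tau * fs))
        n_bins = len(x) // m
        bin_means = x[: n_bins * m].reshape(n_bins, m).mean(axis=1)
        result.append(0.5 * np.mean(np.diff(bin_means) ** 2))
    return np.array(result)

# Unit-variance white noise: the Allan variance should fall as 1/(tau * fs)
rng = np.random.default_rng(0)
noise = rng.standard_normal(100_000)
av = allan_variance(noise, fs=100.0, taus=[0.1, 1.0, 10.0])
```

Plotting the Allan deviation against τ on log-log axes and reading off the slopes is the standard way the stochastic model parameters mentioned in the abstract are identified.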

  7. The magnitude and colour of noise in genetic negative feedback systems.

    PubMed

    Voliotis, Margaritis; Bowsher, Clive G

    2012-08-01

    The comparative ability of transcriptional and small RNA-mediated negative feedback to control fluctuations or 'noise' in gene expression remains unexplored. Both autoregulatory mechanisms usually suppress the average (mean) of the protein level and its variability across cells. The variance of the number of proteins per molecule of mean expression is also typically reduced compared with the unregulated system, but is almost never below the value of one. This relative variance often substantially exceeds a recently obtained, theoretical lower limit for biochemical feedback systems. Adding transcriptional or small RNA-mediated control has different effects. Transcriptional autorepression robustly reduces both the relative variance and persistence (lifetime) of fluctuations. Both benefits combine to reduce noise in downstream gene expression. Autorepression via small RNA can achieve more extreme noise reduction and typically has less effect on the mean expression level. However, it is often more costly to implement and is more sensitive to rate parameters. Theoretical lower limits on the relative variance are known to decrease slowly as a measure of the cost per molecule of mean expression increases. However, the proportional increase in cost to achieve substantial noise suppression can be different away from the optimal frontier: for transcriptional autorepression, it is frequently negligible.

  8. The human as a detector of changes in variance and bandwidth

    NASA Technical Reports Server (NTRS)

    Curry, R. E.; Govindaraj, T.

    1977-01-01

    The detection of changes in random process variance and bandwidth was studied. Psychophysical thresholds for these two parameters were determined using an adaptive staircase technique for second order random processes at two nominal periods (1 and 3 seconds) and damping ratios (0.2 and 0.707). Thresholds for bandwidth changes were approximately 9% of nominal except for the (3sec,0.2) process which yielded thresholds of 12%. Variance thresholds averaged 17% of nominal except for the (3sec,0.2) process in which they were 32%. Detection times for suprathreshold changes in the parameters may be roughly described by the changes in RMS velocity of the process. A more complex model is presented which consists of a Kalman filter designed for the nominal process using velocity as the input, and a modified Wald sequential test for changes in the variance of the residual. The model predictions agree moderately well with the experimental data. Models using heuristics, e.g. level crossing counters, were also examined and are found to be descriptive but do not afford the unification of the Kalman filter/sequential test model used for changes in mean.

  9. Turbulent variance characteristics of temperature and humidity over a non-uniform land surface for an agricultural ecosystem in China

    NASA Astrophysics Data System (ADS)

    Gao, Z. Q.; Bian, L. G.; Chen, Z. G.; Sparrow, M.; Zhang, J. H.

    2006-05-01

    This paper describes the application of the variance method for flux estimation over a mixed agricultural region in China. Eddy covariance and flux variance measurements were conducted in the near-surface layer over a non-uniform land surface in the central plain of China from 7 June to 20 July 2002. During this period, the mean canopy height was about 0.50 m. The study site consisted of grass (10% of area), beans (15%), corn (15%) and rice (60%). Under unstable conditions, the standard deviations of temperature and water vapor density (normalized by appropriate scaling parameters), observed by a single instrument, followed Monin-Obukhov similarity theory. The similarity constants for heat (C-T) and water vapor (C-q) were 1.09 and 1.49, respectively. In comparison with direct measurements using eddy covariance techniques, the flux variance method on average underestimated sensible heat flux by 21% and latent heat flux by 24%, which may be attributed to the observed slight deviations (20-30% at most) of the similarity "constants" from the generally valid relations, within the expected range of variation for a single instrument.
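Under free-convective Monin-Obukhov scaling, the flux variance method recovers sensible heat flux from the temperature standard deviation alone via H = ρ c_p (σ_T/C_T)^{3/2} (k g z / T̄)^{1/2}. A sketch using the abstract's C_T = 1.09 with otherwise illustrative values (σ_T, the measurement height z, and the air properties are assumptions, not data from the study):

```python
from math import sqrt

def flux_variance_heat_flux(sigma_T, C_T, z, T_mean,
                            rho=1.2, cp=1004.0, k=0.4, g=9.81):
    """Free-convection flux-variance estimate of sensible heat flux (W/m^2):
    H = rho * cp * (sigma_T / C_T)**1.5 * sqrt(k * g * z / T_mean).
    sigma_T: temperature standard deviation (K); z: measurement height (m)."""
    return rho * cp * (sigma_T / C_T) ** 1.5 * sqrt(k * g * z / T_mean)

# C_T = 1.09 from the study; sigma_T, z and T_mean are assumed mid-day values
H = flux_variance_heat_flux(sigma_T=0.5, C_T=1.09, z=2.5, T_mean=300.0)
```

Because H scales with σ_T^{3/2}, a 20-30% bias in the similarity "constant" propagates into an underestimate of the order reported in the abstract.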

  10. Further study of terrain effects on the mesoscale spectrum of atmospheric motions

    NASA Technical Reports Server (NTRS)

    Jasperson, W. H.; Nastrom, G. D.; Fritts, D. C.

    1990-01-01

    Wind and temperature data collected on commercial airliners are used to investigate the effects of underlying terrain on mesoscale variability. These results expand upon those of Nastrom et al. by including all available data from the Global Atmospheric Sampling Program (GASP) and by more closely focusing on the coupling of variance with the roughness of the underlying terrain over mountainous regions. The earlier results, showing that variances are larger over mountains than over oceans or plains, with greatest increases at wavelengths below about 80 km, are confirmed. Statistical tests are used to confirm that these differences are highly significant. Over mountainous regions the roughness of the underlying terrain was parameterized from topographic data and it was found that variances are highly correlated with roughness and, in the troposphere, with background windspeed. Average variances over the roughest terrain areas range up to about ten times larger than those over the oceans. These results are found to follow the scaling with stability predicted in the framework of linear gravity wave theory. The implications of these results for vertical transports of momentum and energy, assuming they are due to gravity waves and considering the effects of intermittency and anisotropy, are also discussed.

  11. Deflation as a method of variance reduction for estimating the trace of a matrix inverse

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gambhir, Arjun Singh; Stathopoulos, Andreas; Orginos, Kostas

    Many fields require computing the trace of the inverse of a large, sparse matrix. The typical method used for such computations is the Hutchinson method, which is a Monte Carlo (MC) averaging over matrix quadratures. To improve its convergence, several variance reduction techniques have been proposed. In this paper, we study the effects of deflating the near-null singular value space. We make two main contributions. First, we analyze the variance of the Hutchinson method as a function of the deflated singular values and vectors. Although this provides good intuition in general, by assuming additionally that the singular vectors are random unitary matrices, we arrive at concise formulas for the deflated variance that include only the variance and mean of the singular values. We make the remarkable observation that deflation may increase variance for Hermitian matrices but not for non-Hermitian ones. This is a rare, if not unique, property where non-Hermitian matrices outperform Hermitian ones. The theory can be used as a model for predicting the benefits of deflation. Second, we use deflation in the context of a large-scale application of "disconnected diagrams" in Lattice QCD. On lattices, Hierarchical Probing (HP) has previously provided an order of magnitude of variance reduction over MC by removing "error" from neighboring nodes of increasing distance in the lattice. Although deflation used directly on MC yields a limited improvement of 30% in our problem, when combined with HP they reduce variance by a factor of over 150 compared to MC. For this, we precomputed the 1000 smallest singular values of an ill-conditioned matrix of size 25 million. Furthermore, using PRIMME and a domain-specific Algebraic Multigrid preconditioner, we perform one of the largest eigenvalue computations in Lattice QCD at a fraction of the cost of our trace computation.
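The baseline Hutchinson estimator that deflation improves draws random probe vectors z with E[zzᵀ] = I (e.g. Rademacher probes) and averages zᵀA⁻¹z. A toy dense-matrix sketch for illustration only; the Lattice QCD application works matrix-free with iterative solvers rather than the dense solve used here:

```python
import numpy as np

def hutchinson_trace_inv(A, n_probes, rng):
    """Hutchinson estimator of trace(A^{-1}): average z^T A^{-1} z over
    Rademacher probes z, which satisfy E[z z^T] = I."""
    n = A.shape[0]
    total = 0.0
    for _ in range(n_probes):
        z = rng.choice([-1.0, 1.0], size=n)
        # Dense solve for clarity; large sparse problems use iterative solvers
        total += z @ np.linalg.solve(A, z)
    return total / n_probes

rng = np.random.default_rng(1)
n = 50
B = rng.standard_normal((n, n))
A = B @ B.T + n * np.eye(n)            # well-conditioned SPD test matrix
exact = np.trace(np.linalg.inv(A))
estimate = hutchinson_trace_inv(A, 500, rng)
```

The estimator's variance is driven by the off-diagonal mass of A⁻¹, which is why deflating the near-null singular space (where that mass concentrates for ill-conditioned matrices) can reduce it so sharply.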

  12. Deflation as a method of variance reduction for estimating the trace of a matrix inverse

    DOE PAGES

    Gambhir, Arjun Singh; Stathopoulos, Andreas; Orginos, Kostas

    2017-04-06

    Many fields require computing the trace of the inverse of a large, sparse matrix. The typical method used for such computations is the Hutchinson method, a Monte Carlo (MC) averaging over matrix quadratures. To improve its convergence, several variance reduction techniques have been proposed. In this paper, we study the effects of deflating the near-null singular value space. We make two main contributions. First, we analyze the variance of the Hutchinson method as a function of the deflated singular values and vectors. Although this provides good intuition in general, by assuming additionally that the singular vectors are random unitary matrices, we arrive at concise formulas for the deflated variance that include only the variance and mean of the singular values. We make the remarkable observation that deflation may increase variance for Hermitian matrices but not for non-Hermitian ones. This is a rare, if not unique, property where non-Hermitian matrices outperform Hermitian ones. The theory can be used as a model for predicting the benefits of deflation. Second, we use deflation in the context of a large-scale application of "disconnected diagrams" in Lattice QCD. On lattices, Hierarchical Probing (HP) has previously provided an order of magnitude of variance reduction over MC by removing "error" from neighboring nodes of increasing distance in the lattice. Although deflation used directly on MC yields a limited improvement of 30% in our problem, when combined with HP it reduces variance by a factor of over 150 compared to MC. For this, we precomputed the 1000 smallest singular values of an ill-conditioned matrix of size 25 million. Furthermore, using PRIMME and a domain-specific Algebraic Multigrid preconditioner, we perform one of the largest eigenvalue computations in Lattice QCD at a fraction of the cost of our trace computation.
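
    The plain and deflated Hutchinson estimators described above can be sketched for a small symmetric matrix as follows; the test matrix, sample counts, and the dense eigendecomposition-based deflation are illustrative assumptions (the paper deflates singular triplets of far larger operators with iterative solvers such as PRIMME):

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative SPD test matrix with a known spectrum (an assumption for the demo).
n = 100
lam = np.linspace(0.5, 10.0, n)
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
A = (Q * lam) @ Q.T
Ainv = np.linalg.inv(A)

def hutchinson_trace(Ainv, num_samples, rng):
    """Plain MC estimate: E[z^T A^{-1} z] = tr(A^{-1}) for Rademacher z."""
    total = 0.0
    for _ in range(num_samples):
        z = rng.choice([-1.0, 1.0], size=Ainv.shape[0])
        total += z @ Ainv @ z
    return total / num_samples

def deflated_hutchinson_trace(A, k, num_samples, rng):
    """Exact trace over the k smallest eigenpairs plus MC on the deflated rest."""
    w, V = np.linalg.eigh(A)                         # ascending eigenvalues
    exact = np.sum(1.0 / w[:k])                      # deflated-space contribution
    P = np.eye(A.shape[0]) - V[:, :k] @ V[:, :k].T   # projector onto the complement
    Ainv = np.linalg.inv(A)
    total = 0.0
    for _ in range(num_samples):
        z = P @ rng.choice([-1.0, 1.0], size=A.shape[0])
        total += z @ Ainv @ z                        # unbiased for tr(P A^{-1} P)
    return exact + total / num_samples

true_trace = np.trace(Ainv)
mc = hutchinson_trace(Ainv, 500, rng)
defl = deflated_hutchinson_trace(A, 10, 500, rng)
```

    Deflating the smallest eigenpairs removes the largest contributions to the estimator's variance, which is the mechanism the abstract analyzes.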

  13. More controlling child-feeding practices are found among parents of boys with an average body mass index compared with parents of boys with a high body mass index.

    PubMed

    Brann, Lynn S; Skinner, Jean D

    2005-09-01

    To determine if differences existed in mothers' and fathers' perceptions of their sons' weight, controlling child-feeding practices (ie, restriction, monitoring, and pressure to eat), and parenting styles (ie, authoritarian, authoritative, and permissive) by their sons' body mass index (BMI). One person (L.S.B.) interviewed mothers and boys using validated questionnaires and measured boys' weight and height; fathers completed questionnaires independently. Subjects were white, preadolescent boys and their parents. Boys were grouped by their BMI into an average BMI group (n=25; BMI percentile between 33rd and 68th) and a high BMI group (n=24; BMI percentile > or = 85th). Multivariate analyses of variance and analyses of variance. Mothers and fathers of boys with a high BMI saw their sons as more overweight (mothers P=.03, fathers P=.01), were more concerned about their sons' weight (P<.0001, P=.004), and used pressure to eat with their sons less often than mothers and fathers of boys with an average BMI (P<.0001, P<.0001). In addition, fathers of boys with a high BMI monitored their sons' eating less often than fathers of boys with an average BMI (P=.006). No differences were found in parenting by boys' BMI groups for either mothers or fathers. More controlling child-feeding practices were found among mothers (pressure to eat) and fathers (pressure to eat and monitoring) of boys with an average BMI compared with parents of boys with a high BMI. A better understanding of the relationships between feeding practices and boys' weight is necessary. However, longitudinal research is needed to provide evidence of causal association.

  14. Fine-grained information extraction from German transthoracic echocardiography reports.

    PubMed

    Toepfer, Martin; Corovic, Hamo; Fette, Georg; Klügl, Peter; Störk, Stefan; Puppe, Frank

    2015-11-12

    Information extraction techniques that produce structured representations from unstructured data make a large amount of clinically relevant information about patients accessible to semantic applications. These methods typically rely on standardized terminologies that guide the process. Many languages and clinical domains, however, lack appropriate resources and tools, as well as evaluations of their applications, especially when detailed conceptualizations of the domain are required. For instance, German transthoracic echocardiography reports have not been targeted sufficiently before, despite their importance for clinical trials. This work therefore aimed at the development and evaluation of an information extraction component with a fine-grained terminology that enables recognition of almost all relevant information stated in German transthoracic echocardiography reports at the University Hospital of Würzburg. A domain expert validated and iteratively refined an automatically inferred base terminology. The terminology was used by an ontology-driven information extraction system that outputs attribute-value pairs. The final component was mapped to the central elements of a standardized terminology, and it was evaluated on documents with different layouts. The final system achieved state-of-the-art precision (micro average .996) and recall (micro average .961) on 100 test documents that represent more than 90% of all reports. In particular, principal aspects as defined in a standardized external terminology were recognized with f1 = .989 (micro average) and f1 = .963 (macro average). As a result of keyword matching and restrained concept extraction, the system also obtained high precision on unstructured or exceptionally short documents and on documents with uncommon layouts. The developed terminology and the proposed information extraction system make it possible to extract fine-grained information from German semi-structured transthoracic echocardiography reports with very high precision and high recall on the majority of documents at the University Hospital of Würzburg. Extracted results populate a clinical data warehouse which supports clinical research.

  15. Shifting patterns of variance in adolescent alcohol use: Testing consumption as a developing trait-state.

    PubMed

    Nealis, Logan J; Thompson, Kara D; Krank, Marvin D; Stewart, Sherry H

    2016-04-01

    While average rates of change in adolescent alcohol consumption are frequently studied, variability arising from situational and dispositional influences on alcohol use has been comparatively neglected. We used variance decomposition to test differences in variability resulting from year-to-year fluctuations in use (i.e., state-like) and from stable individual differences (i.e., trait-like) using data from the Project on Adolescent Trajectories and Health (PATH), a cohort-sequential study spanning grades 7 to 11 using three cohorts starting in grades seven, eight, and nine, respectively. We tested variance components for alcohol volume, frequency, and quantity in the overall sample, and changes in components over time within each cohort. Sex differences were tested. Most variability in alcohol use reflected state-like variation (47-76%), with a relatively smaller proportion of trait-like variation (19-36%). These proportions shifted across cohorts as youth got older, with increases in trait-like variance from early adolescence (14-30%) to later adolescence (30-50%). Trends were similar for males and females, although females showed higher trait-like variance in alcohol frequency than males throughout development (26-43% vs. 11-25%). For alcohol volume and frequency, males showed the greatest increase in trait-like variance earlier in development (i.e., grades 8-10) compared to females (i.e., grades 9-11). The relative strength of situational and dispositional influences on adolescent alcohol use has important implications for preventative interventions. Interventions should ideally target problematic alcohol use before it becomes more ingrained and trait-like. Copyright © 2015 Elsevier Ltd. All rights reserved.
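
    The state/trait split the abstract describes can be illustrated with a one-way random-effects variance decomposition on simulated panel data; the sample sizes and variance values below are arbitrary assumptions, not PATH data:

```python
import numpy as np

rng = np.random.default_rng(3)
n_subj, n_years = 500, 5

# Simulated yearly consumption scores: a stable trait component per person
# plus independent year-to-year (state) fluctuation. Variances are arbitrary.
trait = rng.normal(0.0, 1.0, size=(n_subj, 1))        # trait variance = 1
state = rng.normal(0.0, 2.0, size=(n_subj, n_years))  # state variance = 4
scores = trait + state

# Within-person variance estimates the state-like component; the variance of
# person means, bias-corrected for averaging, estimates the trait-like one.
within = scores.var(axis=1, ddof=1).mean()
between = scores.mean(axis=1).var(ddof=1) - within / n_years
trait_share = between / (between + within)   # true share = 1 / (1 + 4) = 0.2
```

    The recovered trait share is close to the simulated 20%, mirroring the kind of proportions reported in the abstract.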

  16. Empirical single sample quantification of bias and variance in Q-ball imaging.

    PubMed

    Hainline, Allison E; Nath, Vishwesh; Parvathaneni, Prasanna; Blaber, Justin A; Schilling, Kurt G; Anderson, Adam W; Kang, Hakmook; Landman, Bennett A

    2018-02-06

    The bias and variance of high angular resolution diffusion imaging (HARDI) methods have not been thoroughly explored in the literature; the simulation extrapolation (SIMEX) and bootstrap techniques can be used to estimate the bias and variance of HARDI metrics. The SIMEX approach is well established in the statistics literature and uses simulation of increasingly noisy data to extrapolate back to a hypothetical case with no noise. The bias of calculated metrics can then be computed by subtracting the SIMEX estimate from the original pointwise measurement. The SIMEX technique has been studied in the context of diffusion imaging to accurately capture the bias in fractional anisotropy measurements in DTI. Herein, we extend the application of the SIMEX and bootstrap approaches to characterize bias and variance in metrics obtained from a Q-ball imaging reconstruction of HARDI data. The results demonstrate that the SIMEX and bootstrap approaches provide consistent estimates of the bias and variance of generalized fractional anisotropy, respectively. The RMSE for the generalized fractional anisotropy estimates shows a 7% decrease in white matter and an 8% decrease in gray matter when compared with the observed generalized fractional anisotropy estimates. On average, the bootstrap technique results in SD estimates that are approximately 97% of the true variation in white matter and 86% in gray matter. Both SIMEX and bootstrap methods are flexible, estimate population characteristics based on single scans, and may be extended for bias and variance estimation on a variety of HARDI metrics. © 2018 International Society for Magnetic Resonance in Medicine.
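
    As a minimal illustration of the bootstrap half of the approach, the nonparametric bootstrap below estimates the SD of a statistic from a single sample; the Gaussian stand-in data and the choice of the mean as the statistic are assumptions for the sketch, not the paper's Q-ball pipeline:

```python
import numpy as np

rng = np.random.default_rng(1)

def bootstrap_sd(sample, statistic, n_boot, rng):
    """Nonparametric bootstrap: resample with replacement and recompute the
    statistic; the SD of the replicates estimates the statistic's variability."""
    n = len(sample)
    reps = np.array([statistic(sample[rng.integers(0, n, n)])
                     for _ in range(n_boot)])
    return reps.std(ddof=1)

# Stand-in for a per-scan metric sample (e.g., GFA-like values); the analytic
# SD of the mean here is 0.1 / sqrt(200), about 0.0071.
data = rng.normal(loc=0.5, scale=0.1, size=200)
sd_hat = bootstrap_sd(data, np.mean, 1000, rng)
```

    The same resampling scheme extends to any scalar HARDI metric computed from a single scan.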

  17. Experimental study on an FBG strain sensor

    NASA Astrophysics Data System (ADS)

    Liu, Hong-lin; Zhu, Zheng-wei; Zheng, Yong; Liu, Bang; Xiao, Feng

    2018-01-01

    Landslides and other geological disasters occur frequently and often cause high financial and humanitarian cost. The real-time, early-warning monitoring of landslides has important significance in reducing casualties and property losses. In this paper, by taking the high initial precision and high sensitivity advantage of FBG, an FBG strain sensor is designed combining FBGs with inclinometer. The sensor was regarded as a cantilever beam with one end fixed. According to the anisotropic material properties of the inclinometer, a theoretical formula between the FBG wavelength and the deflection of the sensor was established using the elastic mechanics principle. Accuracy of the formula established had been verified through laboratory calibration testing and model slope monitoring experiments. The displacement of landslide could be calculated by the established theoretical formula using the changing values of FBG central wavelength obtained by the demodulation instrument remotely. Results showed that the maximum error at different heights was 9.09%; the average of the maximum error was 6.35%, and its corresponding variance was 2.12; the minimum error was 4.18%; the average of the minimum error was 5.99%, and its corresponding variance was 0.50. The maximum error of the theoretical and the measured displacement decrease gradually, and the variance of the error also decreases gradually. This indicates that the theoretical results are more and more reliable. It also shows that the sensor and the theoretical formula established in this paper can be used for remote, real-time, high precision and early warning monitoring of the slope.

  18. Dental Students' Perceptions of Risk Factors for Musculoskeletal Disorders: Adapting the Job Factors Questionnaire for Dentistry.

    PubMed

    Presoto, Cristina D; Wajngarten, Danielle; Domingos, Patrícia A S; Campos, Juliana A D B; Garcia, Patrícia P N S

    2018-01-01

    The aims of this study were to adapt the Job Factors Questionnaire to the field of dentistry, evaluate its psychometric properties, evaluate dental students' perceptions of work/study risk factors for musculoskeletal disorders, and determine the influence of gender and academic level on those perceptions. All 580 students enrolled in two Brazilian dental schools in 2015 were invited to participate in the study. A three-factor structure (Repetitiveness, Work Posture, and External Factors) was tested through confirmatory factor analysis. Convergent validity was estimated using the average variance extracted (AVE), discriminant validity was based on the correlational analysis of the factors, and reliability was assessed. A causal model was created using structural equation modeling to evaluate the influence of gender and academic level on students' perceptions. A total of 480 students completed the questionnaire, for an 83% response rate. The responding students' average age was 21.6 years (SD=2.98), and 74.8% were women. Higher scores were observed on the Work Posture factor items. The refined model presented a proper fit to the studied sample. Convergent validity was compromised only for External Factors (AVE=0.47), and discriminant validity was compromised for Work Posture and External Factors (r²=0.69). Reliability was adequate. Academic level did not have a significant impact on the factors, but gender significantly influenced all three, with women students showing greater perception of the risk factors. Overall, the adaptation resulted in a useful instrument for assessing perceptions of risk factors for musculoskeletal disorders.
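
    The AVE statistic used above for convergent validity is, in the usual Fornell-Larcker convention, the mean of the squared standardized loadings of a factor's items; the three loadings below are hypothetical, chosen to land just under the 0.50 threshold the study applies:

```python
def average_variance_extracted(loadings):
    """AVE for one factor: mean of squared standardized loadings
    (Fornell-Larcker convention)."""
    return sum(l ** 2 for l in loadings) / len(loadings)

# Hypothetical standardized loadings for a three-item factor.
ave = average_variance_extracted([0.7, 0.8, 0.6])
convergent_ok = ave >= 0.50   # the usual convergent-validity threshold
```

    An AVE of about 0.497 here fails the threshold, analogous to the External Factors result (AVE=0.47) in the abstract.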

  19. A novel power spectrum calculation method using phase-compensation and weighted averaging for the estimation of ultrasound attenuation.

    PubMed

    Heo, Seo Weon; Kim, Hyungsuk

    2010-05-01

    An estimation of ultrasound attenuation in soft tissues is critical in quantitative ultrasound analysis, since it is not only related to the estimation of other ultrasound parameters, such as the speed of sound, integrated scatterers, or scatterer size, but also provides pathological information about the scanned tissue. However, the estimation performance for ultrasound attenuation is intimately tied to the accurate extraction of spectral information from the backscattered radiofrequency (RF) signals. In this paper, we propose two novel techniques for calculating a block power spectrum from the backscattered ultrasound signals. These are based on phase compensation of each RF segment using the normalized cross-correlation to minimize estimation errors due to phase variations, and on a weighted averaging technique to maximize the signal-to-noise ratio (SNR). The simulation results with uniform numerical phantoms demonstrate that the proposed method estimates local attenuation coefficients within 1.57% of the actual values, while the conventional methods estimate them within 2.96%. The proposed method is especially effective when dealing with signals reflected from deeper depths, where the SNR is lower, or when the gated window contains a small number of signal samples. Experimental results, performed at 5 MHz with a one-dimensional 128-element array and tissue-mimicking phantoms, also show that the proposed method provides better estimates (within 3.04% of the actual value) with smaller estimation variances than the conventional methods (within 5.93%) for all cases considered. Copyright 2009 Elsevier B.V. All rights reserved.
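
    The two ingredients (cross-correlation alignment of segments, then an SNR-weighted average of their power spectra) can be sketched as below; the energy-based weights and the synthetic sine-burst demo are assumptions for illustration, not the paper's exact weighting scheme:

```python
import numpy as np

def phase_compensated_spectrum(segments):
    """Average per-segment power spectra after aligning each segment to the
    first via cross-correlation (phase compensation), weighting each
    periodogram by segment energy as a simple SNR proxy."""
    ref = np.asarray(segments[0], dtype=float)
    n = len(ref)
    spectra, weights = [], []
    for seg in segments:
        seg = np.asarray(seg, dtype=float)
        xc = np.correlate(seg, ref, mode="full")
        lag = int(np.argmax(xc)) - (n - 1)       # best-alignment lag
        aligned = np.roll(seg, -lag)             # compensate the phase shift
        spectra.append(np.abs(np.fft.rfft(aligned)) ** 2)
        weights.append(float(np.sum(seg ** 2)))  # energy-based weight
    w = np.asarray(weights) / np.sum(weights)
    return np.average(np.asarray(spectra), axis=0, weights=w)

# Demo: three circularly shifted copies of a bin-10 sine over 128 samples.
base = np.sin(2 * np.pi * 10 * np.arange(128) / 128)
spec = phase_compensated_spectrum([np.roll(base, s) for s in (0, 3, 7)])
```

    Aligning before averaging keeps the spectral peak sharp instead of smearing it across the phase-shifted segments.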

  20. Errors in radial velocity variance from Doppler wind lidar

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, H.; Barthelmie, R. J.; Doubrawa, P.

    A high-fidelity lidar turbulence measurement technique relies on accurate estimates of radial velocity variance that are subject to both systematic and random errors determined by the autocorrelation function of radial velocity, the sampling rate, and the sampling duration. Our paper quantifies the effect of the volumetric averaging in lidar radial velocity measurements on the autocorrelation function and the dependence of the systematic and random errors on the sampling duration, using both statistically simulated and observed data. For current-generation scanning lidars and sampling durations of about 30 min and longer, during which the stationarity assumption is valid for atmospheric flows, the systematic error is negligible but the random error exceeds about 10%.

  1. Errors in radial velocity variance from Doppler wind lidar

    DOE PAGES

    Wang, H.; Barthelmie, R. J.; Doubrawa, P.; ...

    2016-08-29

    A high-fidelity lidar turbulence measurement technique relies on accurate estimates of radial velocity variance that are subject to both systematic and random errors determined by the autocorrelation function of radial velocity, the sampling rate, and the sampling duration. Our paper quantifies the effect of the volumetric averaging in lidar radial velocity measurements on the autocorrelation function and the dependence of the systematic and random errors on the sampling duration, using both statistically simulated and observed data. For current-generation scanning lidars and sampling durations of about 30 min and longer, during which the stationarity assumption is valid for atmospheric flows, the systematic error is negligible but the random error exceeds about 10%.

  2. Do stochastic inhomogeneities affect dark-energy precision measurements?

    PubMed

    Ben-Dayan, I; Gasperini, M; Marozzi, G; Nugier, F; Veneziano, G

    2013-01-11

    The effect of a stochastic background of cosmological perturbations on the luminosity-redshift relation is computed to second order through a recently proposed covariant and gauge-invariant light-cone averaging procedure. The resulting expressions are free from both ultraviolet and infrared divergences, implying that such perturbations cannot mimic a sizable fraction of dark energy. Different averages are estimated and depend on the particular function of the luminosity distance being averaged. The energy flux, being minimally affected by perturbations at large z, is proposed as the best choice for precision estimates of dark-energy parameters. Nonetheless, its irreducible (stochastic) variance induces statistical errors on Ω_Λ(z) typically lying in the few-percent range.

  3. New software for raw data mask processing increases diagnostic ability of myocardial SPECT imaging.

    PubMed

    Tanaka, Ryo; Yoshioka, Katsunori; Seino, Kazue; Ohba, Muneo; Nakamura, Tomoharu; Shimada, Katsuhiko

    2011-05-01

    Increased activity of the myocardial perfusion tracer technetium-99m in the liver and hepatobiliary system causes streak artifacts, which may affect clinical diagnosis. We developed a mask-processing tool for raw data generated using technetium-99m as a myocardial perfusion tracer. Here, we describe improvements in image quality under the influence of artifacts caused by high accumulation in other organs. A heart phantom (RH-2) containing 15 MBq of pertechnetate was defined as model A. Model B used the same phantom containing ten times the cardiac radioactivity, overlapping with other organs. Variances in the vertical profile count in the lower part of the myocardial inferior wall and in the myocardial circumferential profile curve were investigated in a phantom and in clinical cases using our raw data masking (RDM) software. The profile variances at the lower part of the myocardial inferior wall were 965.43 in model A, 1390.11 in model B, and 815.85 in model B-RDM. The mean ± SD of the myocardial circumferential profile curves was 83.91 ± 7.39 in model A, 69.61 ± 11.45 in model B, and 82.68 ± 9.71 in model B-RDM. For 11 clinical images with streak artifacts, the average variance differed significantly between images with and without RDM (3.95 vs. 21.05; P < 0.05). For 50 clinical images with hepatic accumulation artifacts, the average variance of the vertical profiles on images with and without RDM also differed significantly (5.99 vs. 15.59; P < 0.01). Furthermore, when a segment with <60% uptake in polar maps was defined as abnormal, the average extent scores at 1 h (Tc-1h), at 5 min with RDM (Tc-0h-RDM), and at 5 min without RDM (Tc-0h-non-RDM) were 2.25 ± 3.12, 2.35 ± 3.16, and 1.37 ± 2.41, respectively. Differences were significant between Tc-1h and Tc-0h-non-RDM (P < 0.005) but not between Tc-1h and Tc-0h-RDM. Batch processing was enabled in all frames by shifting the myocardium to the center of rotation using this software. The waiting time between infusion and image acquisition can thus be decreased, reducing patient burden and improving the diagnostic ability of the procedure.

  4. SU-E-QI-17: Dependence of 3D/4D PET Quantitative Image Features On Noise

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Oliver, J; Budzevich, M; Zhang, G

    2014-06-15

    Purpose: Quantitative imaging is a fast-evolving discipline in which a large number of features are extracted from images, i.e., radiomics. Some features have been shown to have diagnostic, prognostic, and predictive value. However, they are sensitive to acquisition and processing factors, e.g., noise. In this study, noise was added to positron emission tomography (PET) images to determine how features were affected. Methods: Three levels of Gaussian noise were added to 8 lung cancer patients' PET images acquired in 3D mode (static) and using respiratory tracking (4D); for the latter, images from one of 10 phases were used. A total of 62 features: 14 shape, 19 intensity (1stO), 18 GLCM textures (2ndO; from grey-level co-occurrence matrices), and 11 RLM textures (2ndO; from run-length matrices) were extracted from segmented tumors. Dimensions of the GLCM were 256×256, calculated using 3D images with a step size of 1 voxel in 13 directions. Grey levels were binned into 256 levels for the RLM, and features were calculated in all 13 directions. Results: Feature variation generally increased with noise. Shape features were the most stable, while RLM features were the most unstable. Intensity and GLCM features performed well, the latter being more robust. The most stable 1stO features were compactness, maximum and minimum length, standard deviation, root-mean-squared, I30, V10-V90, and entropy. The most stable 2ndO features were entropy, sum-average, sum-entropy, difference-average, difference-variance, difference-entropy, information-correlation-2, short-run-emphasis, long-run-emphasis, and run-percentage. In general, features computed from images from one of the phases of 4D scans were more stable than those from 3D scans. Conclusion: This study shows the need to characterize image features carefully before they are used in research and medical applications. It also shows that the performance of features, and thereby feature selection, may be assessed in part by noise analysis.

  5. Development of a Sonar Oil Tanker Cargo Measurement System.

    DTIC Science & Technology

    1980-08-01

    used at any one time is dependent upon the capacity of the stripping system. The operation is conducted to strip at least as fast, if not faster, than...operational mode, measure the average value and variance of the thickness of an oil layer on an intermittent sampling basis. Laboratory testing

  6. Sex Differences in Arithmetical Performance Scores: Central Tendency and Variability

    ERIC Educational Resources Information Center

    Martens, R.; Hurks, P. P. M.; Meijs, C.; Wassenberg, R.; Jolles, J.

    2011-01-01

    The present study aimed to analyze sex differences in arithmetical performance in a large-scale sample of 390 children (193 boys) frequenting grades 1-9. Past research in this field has focused primarily on average performance, implicitly assuming homogeneity of variance, for which support is scarce. This article examined sex differences in…

  7. The Role of Behavioral and Cognitive Cultural Orientation on Mexican American College Students' Life Satisfaction

    ERIC Educational Resources Information Center

    Ojeda, Lizette; Edwards, Lisa M.; Hardin, Erin E.; Piña-Watson, Brandy

    2014-01-01

    We examined the role of behavioral (acculturation and enculturation) and cognitive cultural orientation (independent and interdependent self-construal) on Mexican American college students' life satisfaction. Analyses explained 28% of the variance in life satisfaction, with social class, grade point average, and independent self-construal being…

  8. Academic Self-Perception and Its Relationship to Academic Performance

    ERIC Educational Resources Information Center

    Stringer, Ronald W.; Heath, Nancy

    2008-01-01

    One hundred and fifty-five students (average age, 10 years 7 months) were initially tested on reading, arithmetic, and academic self-perception. One year later they were tested again. Initial academic scores accounted for a large proportion of the variance in later academic scores. The children's self-perceptions of academic competence accounted…

  9. Post-Modeling Histogram Matching of Maps Produced Using Regression Trees

    Treesearch

    Andrew J. Lister; Tonya W. Lister

    2006-01-01

    Spatial predictive models often use statistical techniques that in some way rely on averaging of values. Estimates from linear modeling are known to be susceptible to truncation of variance when the independent (predictor) variables are measured with error. A straightforward post-processing technique (histogram matching) for attempting to mitigate this effect is...

  10. Utilization of point soil moisture measurements for field scale soil moisture averages and variances in agricultural landscapes

    USDA-ARS?s Scientific Manuscript database

    Soil moisture is a key variable in understanding the hydrologic processes and energy fluxes at the land surface. In spite of new technologies for in-situ soil moisture measurements and increased availability of remotely sensed soil moisture data, scaling issues between soil moisture observations and...

  11. Origins and Consequences of Schools' Organizational Culture for Student Achievement

    ERIC Educational Resources Information Center

    Dumay, Xavier

    2009-01-01

    Purpose: Most studies on the impact of school culture focus only on teachers' average perceptions and neglect the possibility that a meaningful increment to the prediction of school effectiveness might be provided by the variance in teachers' culture perceptions. The objectives of this article are to (a) better understand how teachers' collective…

  12. Mapped Plot Patch Size Estimates

    Treesearch

    Paul C. Van Deusen

    2005-01-01

    This paper demonstrates that the mapped plot design is relatively easy to analyze and describes existing formulas for mean and variance estimators. New methods are developed for using mapped plots to estimate average patch size of condition classes. The patch size estimators require assumptions about the shape of the condition class, limiting their utility. They may...

  13. 40 CFR 142.60 - Variances from the maximum contaminant level for total trihalomethanes.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... PROTECTION AGENCY (CONTINUED) WATER PROGRAMS (CONTINUED) NATIONAL PRIMARY DRINKING WATER REGULATIONS... average basis. (b) The Administrator in a state that does not have primary enforcement responsibility or a... community water system to install and/or use any treatment method identified in § 142.60(a) as a condition...

  14. 40 CFR 142.60 - Variances from the maximum contaminant level for total trihalomethanes.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... PROTECTION AGENCY (CONTINUED) WATER PROGRAMS (CONTINUED) NATIONAL PRIMARY DRINKING WATER REGULATIONS... average basis. (b) The Administrator in a state that does not have primary enforcement responsibility or a... community water system to install and/or use any treatment method identified in § 142.60(a) as a condition...

  15. STOCK Market Differences in Correlation-Based Weighted Network

    NASA Astrophysics Data System (ADS)

    Youn, Janghyuk; Lee, Junghoon; Chang, Woojin

    We examined the sector dynamics of the Korean stock market in relation to market volatility. Daily price data for 360 stocks over 5019 trading days (from January 1990 to August 2008) in the Korean stock market were used. We performed a weighted network analysis employing four measures: the average, the variance, the intensity, and the coherence of network weights (absolute values of stock return correlations) to investigate the network structure of the Korean stock market. We performed regression analysis using the four measures in the seven major industry sectors and in the market (the seven sectors combined). We found that the average, the intensity, and the coherence of sector (subnetwork) weights increase as the market becomes volatile. Except for the "Financials" sector, the variance of sector weights also grows as market volatility increases. Based on the four measures, we can categorize the "Financials," "Information Technology," and "Industrials" sectors into one group, and the "Materials" and "Consumer Discretionary" sectors into another. We investigated the distributions of intrasector and intersector weights for each sector and found that the differences in the "Financials" sector are the most distinct.
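
    The four weight measures can be computed from a correlation-based weighted network as sketched below. The stand-in returns are synthetic, and the intensity/coherence definitions used (geometric mean of the weights, and its ratio to the arithmetic mean, a common weighted-network convention) are assumptions, since the abstract does not spell them out:

```python
import numpy as np

rng = np.random.default_rng(4)

# Stand-in daily returns for 8 stocks; the study uses 360 Korean stocks.
returns = rng.standard_normal((250, 8))
W = np.abs(np.corrcoef(returns, rowvar=False))    # weights = |return correlations|
off = W[~np.eye(W.shape[0], dtype=bool)]          # off-diagonal weights only

avg = off.mean()                        # average weight
var = off.var()                         # variance of weights
intensity = np.exp(np.log(off).mean())  # geometric mean (assumed definition)
coherence = intensity / avg             # geometric/arithmetic ratio, <= 1 by AM-GM
```

    With these conventions, coherence close to 1 means the weights are nearly uniform, while a low value signals a heterogeneous subnetwork.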

  16. An analysis of scatter decomposition

    NASA Technical Reports Server (NTRS)

    Nicol, David M.; Saltz, Joel H.

    1990-01-01

    A formal analysis of a powerful mapping technique known as scatter decomposition is presented. Scatter decomposition divides an irregular computational domain into a large number of equal sized pieces, and distributes them modularly among processors. A probabilistic model of workload in one dimension is used to formally explain why, and when, scatter decomposition works. The first result is that if correlation in workload is a convex function of distance, then scattering a more finely decomposed domain yields a lower average processor workload variance. The second result shows that if the workload process is stationary Gaussian and the correlation function decreases linearly in distance until becoming zero and then remains zero, scattering a more finely decomposed domain yields a lower expected maximum processor workload. Finally it is shown that if the correlation function decreases linearly across the entire domain, then among all mappings that assign an equal number of domain pieces to each processor, scatter decomposition minimizes the average processor workload variance. The dependence of these results on the assumption of decreasing correlation is illustrated with situations where a coarser granularity actually achieves better load balance.
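
    The first result, that scattering lowers per-processor load variance when workload correlation decays with distance, can be checked with a small simulation; the domain size, processor count, and the smoothing kernel that induces the decaying correlation are arbitrary assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)
n_domain, n_proc, trials = 64, 4, 2000
block_vars, scatter_vars = [], []
for _ in range(trials):
    # Workload with positive, distance-decaying correlation: smoothed noise.
    w = np.convolve(rng.standard_normal(n_domain + 8), np.ones(9) / 9,
                    mode="valid")                 # length n_domain
    block = w.reshape(n_proc, -1).sum(axis=1)     # contiguous block mapping
    scatter = w.reshape(-1, n_proc).sum(axis=0)   # modular (scatter) mapping
    block_vars.append(block.var())
    scatter_vars.append(scatter.var())

mean_block_var = float(np.mean(block_vars))
mean_scatter_var = float(np.mean(scatter_vars))
```

    Because the scattered pieces assigned to each processor sample the whole domain, the processor loads end up nearly equal, while contiguous blocks inherit the local workload level and fluctuate more.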

  17. Human preference for individual colors

    NASA Astrophysics Data System (ADS)

    Palmer, Stephen E.; Schloss, Karen B.

    2010-02-01

    Color preference is an important aspect of human behavior, but little is known about why people like some colors more than others. Recent results from the Berkeley Color Project (BCP) provide detailed measurements of preferences among 32 chromatic colors as well as other relevant aspects of color perception. We describe the fit of several color preference models, including ones based on cone outputs, color-emotion associations, and Palmer and Schloss's ecological valence theory. The ecological valence theory postulates that color serves an adaptive "steering" function, analogous to taste preferences, biasing organisms to approach advantageous objects and avoid disadvantageous ones. It predicts that people will tend to like colors to the extent that they like the objects that are characteristically that color, averaged over all such objects. The ecological valence theory predicts 80% of the variance in average color preference ratings from the Weighted Affective Valence Estimates (WAVEs) of correspondingly colored objects, much more variance than any of the other models. We also describe how hue preferences for single colors differ as a function of gender, expertise, culture, social institutions, and perceptual experience.

  18. Understanding the gap between cognitive abilities and daily living skills in adolescents with autism spectrum disorders with average intelligence.

    PubMed

    Duncan, Amie W; Bishop, Somer L

    2015-01-01

    Daily living skills standard scores on the Vineland Adaptive Behavior Scales-2nd edition were examined in 417 adolescents from the Simons Simplex Collection. All participants had at least average intelligence and a diagnosis of autism spectrum disorder. Descriptive statistics and binary logistic regressions were used to examine the prevalence and predictors of a "daily living skills deficit," defined as below average daily living skills in the context of average intelligence quotient. Approximately half of the adolescents were identified as having a daily living skills deficit. Autism symptomatology, intelligence quotient, maternal education, age, and sex accounted for only 10% of the variance in predicting a daily living skills deficit. Identifying factors associated with better or worse daily living skills may help shed light on the variability in adult outcome in individuals with autism spectrum disorder with average intelligence. © The Author(s) 2013.

  19. Exceptional motifs in different Markov chain models for a statistical analysis of DNA sequences.

    PubMed

    Schbath, S; Prum, B; de Turckheim, E

    1995-01-01

    Identifying exceptional motifs is often used for extracting information from long DNA sequences. The two difficulties of the method are the choice of the model that defines the expected frequencies of words and the approximation of the variance of the difference T(W) between the number of occurrences of a word W and its estimation. We consider here different Markov chain models, either with stationary or periodic transition probabilities. We estimate the variance of the difference T(W) by the conditional variance of the number of occurrences of W given the oligonucleotide counts that define the model. Two applications show how to use asymptotically standard normal statistics associated with the counts to describe a given sequence in terms of its outlying words. Sequences of Escherichia coli and of Bacillus subtilis are compared with respect to their exceptional tri- and tetranucleotides. For both bacteria, exceptional 3-words are mainly found in the coding frame. E. coli palindrome counts are analyzed in different models, showing that many overabundant words are one-letter mutations of avoided palindromes.
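The expected word count under a first-order Markov model can be estimated from the sequence's own mono- and dinucleotide counts. A sketch, assuming the standard maximum-likelihood plug-in estimator and a toy sequence (edge effects ignored); the observed-minus-expected difference is the T(W) of the abstract:

```python
from collections import Counter

def expected_count_m1(seq, word):
    """MLE of the expected count of `word` under a first-order Markov model
    fitted to the sequence: the product of dinucleotide counts N(w_i w_{i+1})
    divided by the counts N(w_i) of the word's interior letters."""
    di = Counter(seq[i:i + 2] for i in range(len(seq) - 1))
    mono = Counter(seq)
    exp = float(di[word[:2]])
    for i in range(1, len(word) - 1):
        exp *= di[word[i:i + 2]] / mono[word[i]]
    return exp

seq = "ACGTACGTAAGCTAGCTAACGGTACGT"        # toy sequence
observed = sum(seq[i:i + 3] == "ACG" for i in range(len(seq) - 2))
print(observed, round(expected_count_m1(seq, "ACG"), 2))
```

Here "ACG" occurs more often than the dinucleotide composition predicts; standardizing such differences by an estimated variance gives the outlying-word statistics the paper studies.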

  20. Extracting the Evaluations of Stereotypes: Bi-factor Model of the Stereotype Content Structure

    PubMed Central

    Sayans-Jiménez, Pablo; Cuadrado, Isabel; Rojas, Antonio J.; Barrada, Juan R.

    2017-01-01

    Stereotype dimensions—competence, morality and sociability—are fundamental to studying the perception of other groups. These dimensions have shown moderate/high positive correlations with each other that do not reflect the theoretical expectations. The explanation for this (e.g., halo effect) undervalues the utility of the shared variance identified. In contrast, in this work we propose that this common variance could represent the global evaluation of the perceived group. Bi-factor models are proposed to improve the internal structure and to take advantage of the information representing the shared variance among dimensions. Bi-factor models were compared with first order models and other alternative models in three large samples (300–309 participants). The relationships among the global and specific bi-factor dimensions with a global evaluation dimension (measured through a semantic differential) were estimated. The results support the use of bi-factor models rather than first order models (and other alternative models). Bi-factor models also show a greater utility to directly and more easily explore the stereotype content including its evaluative content. PMID:29085313

  1. Using the theory of planned behavior to predict two types of snack food consumption among Midwestern upper elementary children: implications for practice.

    PubMed

    Branscum, Paul; Sharma, Manoj

    This study examined the extent to which constructs of the theory of planned behavior (TPB) can predict the consumption of two types of snack foods among elementary school children. A 15-item instrument tested for validity and reliability measuring TPB constructs was developed and administered to 167 children. Snack foods were evaluated using a modified 24-hour recall method. On average, children consumed 302 calories from snack foods per day. Stepwise multiple regression found that attitudes, subjective norms, and perceived control accounted for 44.7% of the variance for intentions. Concurrently, intentions accounted for 11.3% of the variance for calorically-dense snack food consumption and 8.9% of the variance for fruit and vegetable snack consumption. Results suggest that the theory of planned behavior is an efficacious theory for these two behaviors. Future interventions should consider using this theoretical framework and aim to enhance children's attitudes, perceived control, and subjective norms towards snack food consumption.

  2. Adolescent suicidal ideation.

    PubMed

    Field, T; Diego, M; Sanders, C E

    2001-01-01

    Adolescent suicidal ideation and its relationship to other variables was tapped by a self-report questionnaire administered to 88 high school seniors. Eighteen percent responded positively to the statement "sometimes I feel suicidal." Those who reported suicidal ideation were found to differ from those who did not on a number of variables, including family relationships (quality of relationship with mother, intimacy with parents, and closeness to siblings), family history of depression (maternal depression), peer relations (quality of peer relationships, popularity, and number of friends), emotional well-being (happiness, anger, and depression), drug use (cigarettes, marijuana, and cocaine), and grade point average. Stepwise regression indicated that happiness explained 46% of the variance in suicidal ideation, and number of friends, anger, and marijuana use explained an additional 20%, for a total of 66% of the variance. While 34% of the variance remained unexplained, it is suggested that the questions used to measure these four variables be included in global screenings to identify adolescents at risk for suicidal ideation.

  3. Large amplitude MHD waves upstream of the Jovian bow shock

    NASA Technical Reports Server (NTRS)

    Goldstein, M. L.; Smith, C. W.; Matthaeus, W. H.

    1983-01-01

    Observations of large amplitude magnetohydrodynamics (MHD) waves upstream of Jupiter's bow shock are analyzed. The waves are found to be right circularly polarized in the solar wind frame which suggests that they are propagating in the fast magnetosonic mode. A complete spectral and minimum variance eigenvalue analysis of the data was performed. The power spectrum of the magnetic fluctuations contains several peaks. The fluctuations at 2.3 mHz have a direction of minimum variance along the direction of the average magnetic field. The direction of minimum variance of these fluctuations lies at approximately 40 deg. to the magnetic field and is parallel to the radial direction. We argue that these fluctuations are waves excited by protons reflected off the Jovian bow shock. The inferred speed of the reflected protons is about two times the solar wind speed in the plasma rest frame. A linear instability analysis is presented which suggests an explanation for many features of the observations.
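Minimum variance analysis of this kind finds the eigenvector of the fluctuation covariance matrix with the smallest eigenvalue. A dependency-free sketch on synthetic data (a hypothetical transverse wave, not the Voyager data), using power iteration on a shifted matrix to pick out the smallest-eigenvalue direction:

```python
import math
import random

def covariance(samples):
    """Sample covariance matrix of 3-component field measurements."""
    n = len(samples)
    mean = [sum(c) / n for c in zip(*samples)]
    C = [[0.0] * 3 for _ in range(3)]
    for s in samples:
        d = [s[k] - mean[k] for k in range(3)]
        for i in range(3):
            for j in range(3):
                C[i][j] += d[i] * d[j] / n
    return C

def min_variance_direction(C, iters=500):
    """Eigenvector of C with the SMALLEST eigenvalue, via power iteration on
    tr(C)*I - C, whose dominant eigenvector is exactly that vector."""
    t = C[0][0] + C[1][1] + C[2][2]
    M = [[(t if i == j else 0.0) - C[i][j] for j in range(3)] for i in range(3)]
    v = [1.0, 1.0, 1.0]
    for _ in range(iters):
        v = [sum(M[i][j] * v[j] for j in range(3)) for i in range(3)]
        norm = math.sqrt(sum(x * x for x in v))
        v = [x / norm for x in v]
    return v

rng = random.Random(0)
# Synthetic transverse fluctuations: large in x and y, small along z,
# so the minimum variance direction should come out close to the z axis.
samples = [(rng.gauss(0, 1), rng.gauss(0, 1), rng.gauss(0, 0.1))
           for _ in range(2000)]
v = min_variance_direction(covariance(samples))
print(round(abs(v[2]), 2))
```

In practice the ratio of intermediate to minimum eigenvalue is also checked, since a small ratio means the minimum variance direction is poorly determined.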

  4. Extraction of prostatic lumina and automated recognition for prostatic calculus image using PCA-SVM.

    PubMed

    Wang, Zhuocai; Xu, Xiangmin; Ding, Xiaojun; Xiao, Hui; Huang, Yusheng; Liu, Jian; Xing, Xiaofen; Wang, Hua; Liao, D Joshua

    2011-01-01

    Identification of prostatic calculi is an important basis for determining the tissue origin. Computation-assisted diagnosis of prostatic calculi may have promising potential but is currently still less studied. We studied the extraction of prostatic lumina and automated recognition for calculus images. Extraction of lumina from prostate histology images was based on local entropy and Otsu thresholding; recognition used PCA-SVM based on the texture features of prostatic calculi. The SVM classifier showed an average time of 0.1432 seconds, an average training accuracy of 100%, an average test accuracy of 93.12%, a sensitivity of 87.74%, and a specificity of 94.82%. We concluded that the algorithm, based on texture features and PCA-SVM, can recognize the concentric structure and visualized features easily. Therefore, this method is effective for the automated recognition of prostatic calculi.
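Sensitivity and specificity figures like these come straight from a binary confusion matrix. A sketch with hypothetical counts chosen to be consistent with the rates quoted above (the study's actual counts are not given):

```python
def sens_spec(tp, fn, tn, fp):
    """Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP)."""
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical confusion-matrix counts for a calculus / no-calculus classifier
sens, spec = sens_spec(tp=93, fn=13, tn=165, fp=9)
print(round(sens, 3), round(spec, 3))
```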

  5. Comparison of various extraction techniques for the determination of polycyclic aromatic hydrocarbons in worms.

    PubMed

    Mooibroek, D; Hoogerbrugge, R; Stoffelsen, B H G; Dijkman, E; Berkhoff, C J; Hogendoorn, E A

    2002-10-25

    Two less laborious extraction methods, viz. (i) a simplified liquid extraction using light petroleum or (ii) microwave-assisted solvent extraction (MASE), for the analysis of polycyclic aromatic hydrocarbons (PAHs) in samples of the compost worm Eisenia andrei, were compared with a reference method. After extraction and concentration, analytical methodology consisted of a cleanup of (part of) the extract with high-performance gel permeation chromatography (HPGPC) and instrumental analysis of 15 PAHs with reversed-phase liquid chromatography with fluorescence detection (RPLC-FLD). Comparison of the methods was done by analysing samples with incurred residues (n=15, each method) originating from an experiment in which worms were exposed to a soil contaminated with PAHs. Simultaneously, the performance of the total lipid determination of each method was established. Evaluation of the data by means of principal component analysis (PCA) and analysis of variance (ANOVA) revealed that the performance of the light petroleum method for both the extraction of PAHs (concentration range 1-30 ng/g) and lipid content corresponds very well with the reference method. Compared to the reference method, the MASE method yielded somewhat lower concentrations for the less volatile PAHs, e.g., dibenzo[ah]anthracene and benzo[ghi]perylene, and provided a significantly higher amount of co-extracted material.

  6. Antinociceptive and anticonvulsant activities of hydroalcoholic extract of Jasminum grandiflorum (jasmine) leaves in experimental animals.

    PubMed

    Gupta, Rajesh K; Reddy, Pooja S

    2013-10-01

    Jasminum grandiflorum belongs to the family Oleaceae and is known to have anti-inflammatory, antimicrobial, antioxidant, and antiulcer activities. The present study was undertaken to study its analgesic and anticonvulsant effects in rats and mice. The antinociceptive activity of the hydroalcoholic extract of J. grandiflorum leaves (HEJGL) was studied using tail flick and acetic acid - induced writhing method. Similarly, its anticonvulsant activity was observed by maximal electroshock (MES) method and pentylenetetrazol (PTZ) method. Statistical analysis was performed using one-way analysis of variance (ANOVA) followed by Dunnett's test. At doses of 50, 100, and 200 mg/kg, HEJGL showed significant analgesic and anticonvulsant effects in experimental animals. In view of its analgesic and anticonvulsant activity, the JGL extract can be used in painful conditions as well as in seizure disorders.

  8. Effect of improper scan alignment on retinal nerve fiber layer thickness measurements using Stratus optical coherence tomograph.

    PubMed

    Vizzeri, Gianmarco; Bowd, Christopher; Medeiros, Felipe A; Weinreb, Robert N; Zangwill, Linda M

    2008-08-01

    Misalignment of the Stratus optical coherence tomograph scan circle placed by the operator around the optic nerve head (ONH) during each retinal nerve fiber layer (RNFL) examination can affect the instrument reproducibility and its theoretical ability to detect true structural changes in the RNFL thickness over time. We evaluated the effect of scan circle placement on RNFL measurements. Observational clinical study. Sixteen eyes of 8 normal participants were examined using the Stratus optical coherence tomograph Fast RNFL thickness acquisition protocol (software version 4.0.7; Carl Zeiss Meditec, Dublin, CA). Four consecutive images were taken by the same operator with the circular scan centered on the optic nerve head. Four images each with the scan displaced superiorly, inferiorly, temporally, and nasally were also acquired. Differences in average and sectoral RNFL thicknesses were determined. For the centered scans, the coefficients of variation (CV) and the intraclass correlation coefficient for the average RNFL thickness measured were calculated. When the average RNFL thickness of the centered scans was compared with the average RNFL thickness of the displaced scans individually using analysis of variance with post-hoc analysis, no difference was found between the average RNFL thickness of the nasally (105.2 microm), superiorly (106.2 microm), or inferiorly (104.1 microm) displaced scans and the centered scans (106.4 microm). However, a significant difference (analysis of variance with Dunnett's test: F=8.82, P<0.0001) was found between temporally displaced scans (115.8 microm) and centered scans. Significant differences in sectoral RNFL thickness measurements were found between centered and each displaced scan. The coefficient of variation for average RNFL thickness was 1.75% and intraclass correlation coefficient was 0.95. 
In normal eyes, average RNFL thickness measurements are robust and similar with significant superior, inferior, and nasal scan displacement, but average RNFL thickness is greater when scans are displaced temporally. Parapapillary scan misalignment produces significant changes in RNFL assessment characterized by an increase in measured RNFL thickness in the quadrant in which the scan is closer to the disc, and a significant decrease in RNFL thickness in the quadrant in which the scan is displaced further from the optic disc.
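The reproducibility statistics quoted here, the coefficient of variation and the one-way intraclass correlation coefficient, are straightforward to compute from repeated scans. A sketch on hypothetical repeated average-RNFL measurements (not the study data):

```python
import statistics

def cv_percent(values):
    """Coefficient of variation: SD as a percentage of the mean."""
    return 100.0 * statistics.stdev(values) / statistics.mean(values)

def icc_oneway(table):
    """One-way random-effects ICC(1,1); rows = eyes, columns = repeated scans."""
    n, k = len(table), len(table[0])
    grand = sum(map(sum, table)) / (n * k)
    row_means = [sum(row) / k for row in table]
    msb = k * sum((m - grand) ** 2 for m in row_means) / (n - 1)
    msw = sum((x - m) ** 2
              for row, m in zip(table, row_means) for x in row) / (n * (k - 1))
    return (msb - msw) / (msb + (k - 1) * msw)

# Hypothetical average RNFL thickness (microns), 3 eyes x 4 repeated scans
scans = [[106, 107, 105, 106],
         [ 98,  99,  97,  98],
         [112, 113, 111, 112]]
print(round(cv_percent(scans[0]), 2), round(icc_oneway(scans), 2))
```

A high ICC with a sub-2% CV, as reported in the abstract, means between-eye differences dwarf scan-to-scan noise for centered scans.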

  9. Differentiating defects in red oak lumber by discriminant analysis using color, shape, and density

    Treesearch

    B. H. Bond; D. Earl Kline; Philip A. Araman

    2002-01-01

    Defect color, shape, and density measures aid in the differentiation of knots, bark pockets, stain/mineral streak, and clearwood in red oak, (Quercus rubra). Various color, shape, and density measures were extracted for defects present in color and X-ray images captured using a color line scan camera and an X-ray line scan detector. Analysis of variance was used to...

  10. DSM-III Diagnoses Compared with Factor Structure of the Psychopathology Instrument for Mentally Retarded Adults (PIMRA), in an Institutionalized, Mostly Severely Retarded Population.

    ERIC Educational Resources Information Center

    Linaker, Olav

    1991-01-01

    The Psychopathology Instrument for Mentally Retarded Adults was used to diagnose 163 mentally retarded institutionalized adults according to the Diagnostic and Statistical Manual-III axis 1 categories. Nine factors were extracted which contained 49.3 percent of the data variance and categorized correctly 69.3 percent of the cases. Factors included…

  11. The research of road and vehicle information extraction algorithm based on high resolution remote sensing image

    NASA Astrophysics Data System (ADS)

    Zhou, Tingting; Gu, Lingjia; Ren, Ruizhi; Cao, Qiong

    2016-09-01

    With the rapid development of remote sensing technology, the spatial and temporal resolution of satellite imagery has increased enormously. Meanwhile, high-spatial-resolution images are becoming increasingly popular for commercial applications. Remote sensing image technology has broad application prospects in intelligent traffic. Compared with traditional traffic information collection methods, vehicle information extraction using high-resolution remote sensing images has the advantages of high resolution and wide coverage. This has great guiding significance for urban planning, transportation management, travel route choice, and so on. Firstly, this paper preprocessed the acquired high-resolution multi-spectral and panchromatic remote sensing images. After that, on the one hand, in order to find the optimal threshold for image segmentation, histogram equalization and linear enhancement technologies were applied to the preprocessing results. On the other hand, considering the distribution characteristics of roads, the normalized difference vegetation index (NDVI) and normalized difference water index (NDWI) were used to suppress water and vegetation information in the preprocessing results. Then, the above two processing results were combined. Finally, geometric characteristics were used to complete road information extraction. The extracted road vector was used to limit the target vehicle area. Target vehicle extraction was divided into bright-vehicle extraction and dark-vehicle extraction. Eventually, the extraction results for the two kinds of vehicles were combined to get the final results. The experiment results demonstrated that the proposed algorithm has high precision for vehicle information extraction from different high-resolution remote sensing images. Among these results, the average fault detection rate was about 5.36%, the average residual rate was about 13.60% and the average accuracy was approximately 91.26%.
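The NDVI and NDWI masks used here are simple normalized band ratios. A sketch with hypothetical surface reflectance values (per-pixel; in practice these are applied to whole band arrays):

```python
def ndvi(nir, red):
    """Normalized difference vegetation index: high for vegetation."""
    return (nir - red) / (nir + red)

def ndwi(green, nir):
    """Normalized difference water index (McFeeters form): high for water."""
    return (green - nir) / (green + nir)

# Hypothetical reflectances: a vegetated pixel and a water pixel
print(round(ndvi(nir=0.6, red=0.2), 2))
print(round(ndwi(green=0.3, nir=0.1), 2))
```

Thresholding these indices suppresses vegetation and water before the road-geometry step described in the abstract.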

  12. SU-F-T-18: The Importance of Immobilization Devices in Brachytherapy Treatments of Vaginal Cuff

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shojaei, M; Dumitru, N; Pella, S

    2016-06-15

    Purpose: High dose rate brachytherapy is a highly localized radiation therapy with a very high dose gradient. Thus one of the most important parts of the treatment is immobilization. The smallest movement of the patient or applicator can result in dose variation to the surrounding tissues as well as to the tumor to be treated. We review ML Cylinder treatments and their localization challenges. Methods: A retrospective study of 25 patients with 5 treatments each, looking into the applicator's placement in regard to the organs at risk. Motion possibilities for each applicator, intra- and inter-fraction, with their dosimetric implications were covered and measured in regard to their dose variance. The localization and immobilization devices used were assessed for their capability to prevent motion before and during treatment delivery. Results: We focused on the 100% isodose on the central axis and a 15 degree displacement due to possible rotation, analyzing the dose variations to the bladder and rectum walls. The average dose variation for the bladder was 15% of the accepted tolerance, with a minimum variance of 11.1% and a maximum of 23.14% on the central axis. For the off-axis measurements we found an average variation of 16.84% of the accepted tolerance, with a minimum variance of 11.47% and a maximum of 27.69%. For the rectum we focused on the rectum wall closest to the 120% isodose line. The average dose variation was 19.4%, minimum 11.3%, and maximum 34.02% of the accepted tolerance values. Conclusion: Improved immobilization devices are recommended. For inter-fraction motion, localization devices are recommended, in place with consistent planning in regard to the initial fraction. Many of the present immobilization devices produced for external radiotherapy can be used to improve the localization of HDR applicators during transportation of the patient and during treatment.

  13. Decision tree analysis of factors influencing rainfall-related building damage

    NASA Astrophysics Data System (ADS)

    Spekkers, M. H.; Kok, M.; Clemens, F. H. L. R.; ten Veldhuis, J. A. E.

    2014-04-01

    Flood damage prediction models are essential building blocks in flood risk assessments. Little research has been dedicated so far to damage from small-scale urban floods caused by heavy rainfall, while there is a need for reliable damage models for this flood type among insurers and water authorities. The aim of this paper is to investigate a wide range of damage-influencing factors and their relationships with rainfall-related damage, using decision tree analysis. For this, district-aggregated claim data from private property insurance companies in the Netherlands were analysed, for the period of 1998-2011. The databases include claims of water-related damage, for example, damages related to rainwater intrusion through roofs and pluvial flood water entering buildings at ground floor. Response variables being modelled are average claim size and claim frequency, per district per day. The set of predictors include rainfall-related variables derived from weather radar images, topographic variables from a digital terrain model, building-related variables and socioeconomic indicators of households. Analyses were made separately for property and content damage claim data. Results of decision tree analysis show that claim frequency is most strongly associated with maximum hourly rainfall intensity, followed by real estate value, ground floor area, household income, season (property data only), building age (property data only), ownership structure (content data only) and fraction of low-rise buildings (content data only). It was not possible to develop statistically acceptable trees for average claim size, which suggests that variability in average claim size is related to explanatory variables that cannot be defined at the district scale. Cross-validation results show that decision trees were able to predict 22-26% of variance in claim frequency, which is considerably better compared to results from global multiple regression models (11-18% of variance explained). 
Still, a large part of the variance in claim frequency is left unexplained, which is likely to be caused by variations in data at subdistrict scale and missing explanatory variables.
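A regression tree of the kind used here splits a predictor at the threshold that most reduces the within-node variance of the response. A one-split ("stump") sketch with hypothetical rainfall and claim-frequency values:

```python
import statistics

def best_split(xs, ys):
    """One-level regression tree: pick the threshold on x that minimizes the
    summed within-node variance of y (the CART splitting criterion)."""
    best_t, best_score = None, float("inf")
    for t in sorted(set(xs))[1:]:
        left = [y for x, y in zip(xs, ys) if x < t]
        right = [y for x, y in zip(xs, ys) if x >= t]
        score = (len(left) * statistics.pvariance(left)
                 + len(right) * statistics.pvariance(right))
        if score < best_score:
            best_t, best_score = t, score
    return best_t

# Hypothetical max hourly rainfall (mm/h) vs. claim frequency per district-day
rain = [1, 2, 3, 10, 11, 12]
claims = [0, 0, 1, 5, 6, 5]
print(best_split(rain, claims))
```

A full tree repeats this search recursively on each resulting node and over all candidate predictors.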

  14. Optimization of microwave-assisted extraction conditions for preparing lignan-rich extract from Saraca asoca bark using Box-Behnken design.

    PubMed

    Mishra, Shikha; Aeri, Vidhu

    2016-07-01

    Lyoniside is the major constituent of Saraca asoca Linn. (Caesalpiniaceae) bark. There is an immediate need to develop an efficient method to isolate its chemical constituents, since it is a therapeutically important plant. A rapid extraction method for lyoniside based on microwave-assisted extraction of S. asoca bark was developed and optimized using response surface methodology (RSM). Lyoniside was analyzed and quantified by high-performance liquid chromatography coupled with ultraviolet detection (HPLC-UV). The extraction solvent ratio (%), material solvent ratio (g/ml) and extraction time (min) were optimized using Box-Behnken design (BBD) to obtain the highest extraction efficiency. The optimal conditions were the use of 1:30 material solvent ratio with 70:30 mixture of methanol:water for 10 min duration. The optimized microwave-assisted extraction yielded 9.4 mg/g of lyoniside content in comparison to reflux extraction under identical conditions which yielded 4.2 mg/g of lyoniside content. Under optimum conditions, the experimental values agreed closely with the predicted values. The analysis of variance (ANOVA) indicated a high goodness-of-fit model and the success of the RSM method for optimizing lyoniside extraction from the bark of S. asoca. All the three variables significantly affected the lyoniside content. Increased polarity of solvent medium enhances the lyoniside yield. The present study shows the applicability of microwave-assisted extraction in extraction of lyoniside from S. asoca bark.
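A Box-Behnken design for k factors places runs at the midpoints of the design-cube edges: each pair of factors at ±1 with the remaining factors at the center, plus replicated center points. A sketch in coded units (3 center runs assumed; the paper does not state its number):

```python
from itertools import combinations

def box_behnken(k, center_runs=3):
    """Box-Behnken design in coded units (-1, 0, +1): a 2^2 factorial on
    every pair of factors with the rest at 0, plus centre-point replicates."""
    runs = []
    for i, j in combinations(range(k), 2):
        for a in (-1, 1):
            for b in (-1, 1):
                run = [0] * k
                run[i], run[j] = a, b
                runs.append(run)
    runs += [[0] * k for _ in range(center_runs)]
    return runs

# Three factors, as in the abstract: solvent ratio, material:solvent ratio, time
design = box_behnken(3)
print(len(design))
```

A quadratic response surface is then fitted to the yields measured at these 15 runs, which is what the ANOVA in the abstract evaluates.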

  15. Effects of lipid extraction on nutritive composition of winged bean (Psophocarpus tetragonolobus), rubber seed (Hevea brasiliensis), and tropical almond (Terminalia catappa).

    PubMed

    Jayanegara, Anuraga; Harahap, Rakhmad P; Rozi, Richard F; Nahrowi

    2018-04-01

    This experiment aimed to evaluate the nutritive composition and in vitro rumen fermentability and digestibility of intact and lipid-extracted winged bean, rubber seed, and tropical almond. Soybean, winged bean, rubber seed, and tropical almond were subjected to lipid extraction and chemical composition determination. Lipid extraction was performed through solvent extraction by Soxhlet procedure. Non-extracted and extracted samples of these materials were evaluated for in vitro rumen fermentation and digestibility assay using a rumen:buffer mixture. Parameters measured were gas production kinetics, total volatile fatty acid (VFA) concentration, ammonia, in vitro dry matter digestibility (IVDMD) and in vitro organic matter digestibility (IVOMD). Data were analyzed by analysis of variance and Duncan's multiple range test. Soybean, winged bean, rubber seed, and tropical almond contained high amounts of ether extract, i.e., above 20% DM. Crude protein contents of soybean, winged bean, rubber seed, and tropical almond increased by 17.7, 4.7, 55.2, and 126.5% after lipid extraction, respectively. In vitro gas production of intact winged bean was the highest among other materials at various time point intervals (p<0.05), followed by soybean > rubber seed > tropical almond. Extraction of lipid increased in vitro gas production, total VFA concentration, IVDMD, and IVOMD of soybean, winged bean, rubber seed, and tropical almond (p<0.05). After lipid extraction, all feed materials had similar IVDMD and IVOMD values. Lipid extraction improved the nutritional quality of winged bean, rubber seed, and tropical almond.
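The analysis of variance used here reduces, for a single factor, to the one-way F statistic: between-group mean square over within-group mean square. A sketch with hypothetical digestibility values for two treatment groups (not the study's data):

```python
import statistics

def one_way_anova_f(groups):
    """F statistic for a one-way ANOVA across the given groups."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand = sum(map(sum, groups)) / n
    ssb = sum(len(g) * (statistics.mean(g) - grand) ** 2 for g in groups)
    ssw = sum((x - statistics.mean(g)) ** 2 for g in groups for x in g)
    return (ssb / (k - 1)) / (ssw / (n - k))

# Hypothetical IVDMD (%) for intact vs. lipid-extracted samples
intact, extracted = [52, 54, 53], [61, 63, 62]
f = one_way_anova_f([intact, extracted])
print(round(f, 1))
```

A large F indicates the group means differ far more than within-group scatter would explain; Duncan's multiple range test then locates which pairs of means differ.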

  16. Semi-Automatic Extraction Algorithm for Images of the Ciliary Muscle

    PubMed Central

    Kao, Chiu-Yen; Richdale, Kathryn; Sinnott, Loraine T.; Ernst, Lauren E.; Bailey, Melissa D.

    2011-01-01

    Purpose To develop and evaluate a semi-automatic algorithm for segmentation and morphological assessment of the dimensions of the ciliary muscle in Visante™ Anterior Segment Optical Coherence Tomography images. Methods Geometric distortions in Visante images analyzed as binary files were assessed by imaging an optical flat and human donor tissue. The appropriate pixel/mm conversion factor to use for air (n = 1) was estimated by imaging calibration spheres. A semi-automatic algorithm was developed to extract the dimensions of the ciliary muscle from Visante images. Measurements were also made manually using Visante software calipers. Intraclass correlation coefficients (ICC) and Bland-Altman analyses were used to compare the methods. A multilevel model was fitted to estimate the variance of algorithm measurements that was due to within- and between-examiner differences in scleral spur selection versus biological variability. Results The optical flat and the human donor tissue were imaged and appeared without geometric distortions in binary file format. Bland-Altman analyses revealed that caliper measurements tended to underestimate ciliary muscle thickness at 3 mm posterior to the scleral spur in subjects with the thickest ciliary muscles (t = 3.6, p < 0.001). The percent variance due to within- or between-examiner differences in scleral spur selection was found to be small (6%) when compared to the variance due to biological difference across subjects (80%). Using the mean of measurements from three images achieved an estimated ICC of 0.85. Conclusions The semi-automatic algorithm successfully segmented the ciliary muscle for further measurement. Using the algorithm to follow the scleral curvature to locate more posterior measurements is critical to avoid underestimating thickness measurements. This semi-automatic algorithm will allow for repeatable, efficient, and masked ciliary muscle measurements in large datasets. PMID:21169877
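A Bland-Altman comparison of two measurement methods reports the mean difference (bias) and 95% limits of agreement. A sketch on hypothetical paired caliper/algorithm measurements (not the study data):

```python
import statistics

def bland_altman(method_a, method_b):
    """Bias (mean difference) and 95% limits of agreement between two
    measurement methods applied to the same subjects."""
    diffs = [a - b for a, b in zip(method_a, method_b)]
    bias = statistics.mean(diffs)
    sd = statistics.stdev(diffs)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# Hypothetical paired ciliary-muscle thickness measurements (arbitrary units)
calipers = [10, 11, 12, 13]
algorithm = [9, 11, 11, 13]
bias, (lo, hi) = bland_altman(calipers, algorithm)
print(round(bias, 2), round(lo, 2), round(hi, 2))
```

Plotting each pair's difference against its mean, as the study did, additionally reveals whether the bias grows with muscle thickness.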

  17. The comparison of various approach to evaluation erosion risks and design control erosion measures

    NASA Astrophysics Data System (ADS)

    Kapicka, Jiri

    2015-04-01

    At present, there is a single methodology in the Czech Republic for computing and comparing erosion risks. This methodology also contains a method for designing erosion control measures. Its basis is the Universal Soil Loss Equation (USLE) and its result, the long-term average annual rate of erosion (G). The methodology is used by landscape planners. Data and statistics from the database of erosion events in the Czech Republic show that many troubles and damages arise from local episodic erosion events. The extent of these events and their impact depend on local precipitation, the current plant growth phase, and soil conditions. Such erosion events can cause damage to agricultural land, municipal property, and hydraulic structures even where a location is, from the point of view of the long-term average annual erosion rate, in good condition. Another way to compute and compare erosion risks is an episode-based approach. This paper presents a comparison of various approaches to computing erosion risks. The comparison was carried out for a locality from the database of erosion events on agricultural land in the Czech Republic where two erosion events have been recorded. The study area is a simple agricultural parcel without any barriers that could strongly influence water flow and sediment transport. The computation of erosion risks (for all methodologies) was based on laboratory analysis of soil samples taken in the study area. The results of the USLE and MUSLE methodologies and of the mathematical model Erosion 3D were compared. Variances in the spatial distribution of the places with the highest soil erosion were compared and discussed. Another part presents the variance in designed erosion control measures when the design is based on different methodologies. The results show the variance in erosion risks computed by the different methodologies. 
These variances can open a discussion about different approaches to computing and evaluating erosion risks in areas of different importance.

  18. Clinical excellence: evidence on the assessment of senior doctors' applications to the UK Advisory Committee on Clinical Excellence Awards. Analysis of complete national data set

    PubMed Central

    Campbell, John L; Abel, Gary

    2016-01-01

    Objectives To inform the rational deployment of assessor resource in the evaluation of applications to the UK Advisory Committee on Clinical Excellence Awards (ACCEA). Setting ACCEA are responsible for a scheme to financially reward senior doctors in England and Wales who are assessed to be working over and above the standard expected of their role. Participants Anonymised applications of consultants and senior academic GPs for awards were considered by members of 14 regional subcommittees and 2 national assessing committees during the 2014–2015 round of applications. Design It involved secondary analysis of complete anonymised national data set. Primary and secondary outcome measures We analysed scores for each of 1916 applications for a clinical excellence award across 4 levels of award. Scores were provided by members of 16 subcommittees. We assessed the reliability of assessments and described the variance in the assessment of scores. Results Members of regional subcommittees assessed 1529 new applications and 387 renewal applications. Average scores increased with the level of application being made. On average, applications were assessed by 9.5 assessors. The highest contributions to the variance in individual assessors' assessments of applications were attributable to assessors or to residual variance. The applicant accounted for around a quarter of the variance in scores for new bronze applications, with this proportion decreasing for higher award levels. Reliability in excess of 0.7 can be attained where 4 assessors score bronze applications, with twice as many assessors being required for higher levels of application. Conclusions Assessment processes pertaining in the competitive allocation of public funds need to be credible and efficient. The present arrangements for assessing and scoring applications are defensible, depending on the level of reliability judged to be required in the assessment process. 
Some relatively minor reconfiguration in approaches to scoring might usefully be considered in future rounds of assessment. PMID:27256095
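
    The quoted relationship between panel size and reliability (0.7 from 4 assessors, roughly twice as many for higher award levels) behaves like the classical Spearman-Brown prophecy formula. A minimal sketch, assuming Spearman-Brown applies and back-solving a single-assessor reliability from the quoted figures (an illustration, not a value reported in the paper):

```python
def spearman_brown(r_single, n):
    """Reliability of the mean of n assessors' scores, given the
    reliability r_single of a single assessor."""
    return n * r_single / (1 + (n - 1) * r_single)

def assessors_needed(r_single, target):
    """Smallest panel size whose pooled reliability reaches target."""
    n = 1
    while spearman_brown(r_single, n) < target:
        n += 1
    return n

# Single-assessor reliability implied by reliability 0.7 at 4 assessors:
r1 = 0.7 / (4 - 0.7 * 3)                 # ~0.37
print(round(spearman_brown(r1, 4), 3))   # 0.7 by construction
print(assessors_needed(r1, 0.8))         # panel size needed for 0.8
```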

  19. SU-E-T-56: A Novel Approach to Computing Expected Value and Variance of Point Dose From Non-Gated Radiotherapy Delivery

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhou, S; Zhu, X; Zhang, M

    Purpose: Randomness in patient internal organ motion phase at the beginning of a non-gated radiotherapy delivery may introduce uncertainty into the dose received by the patient. Concern that this dose may deviate from the planned one has motivated many researchers to study the phenomenon, although a unified theoretical framework for computing it is still missing. This study was conducted to develop such a framework for analyzing the effect. Methods: Two reasonable assumptions were made: a) patient internal organ motion is stationary and periodic; b) no special arrangement is made to start a non-gated radiotherapy delivery at any specific phase of patient internal organ motion. A statistical ensemble was formed consisting of the patient’s non-gated radiotherapy deliveries at all equally possible initial organ motion phases. To characterize the patient received dose, the statistical ensemble average method is employed to derive formulae for two variables: the expected value and variance of the dose received by a patient internal point from a non-gated radiotherapy delivery. Fourier series were utilized to facilitate our analysis. Results: According to our formulae, the two variables can be computed from non-gated radiotherapy generated dose rate time sequences at the point's corresponding locations on fixed-phase 3D CT images sampled evenly in time over one patient internal organ motion period. The expected value of the point dose is simply the average of the doses to the point's corresponding locations on the fixed-phase CT images. The variance can be determined by time integration in terms of the Fourier series coefficients of the dose rate time sequences on the same fixed-phase 3D CT images. Conclusion: Given a non-gated radiotherapy delivery plan and the patient's 4D CT study, our novel approach can predict the expected value and variance of the patient radiation dose. We expect it to play a significant role in determining both the quality and robustness of a patient's non-gated radiotherapy plan.
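
    The ensemble construction described above can be sketched directly, without the Fourier machinery: for each equally likely starting phase, accumulate dose along the shifted phase trajectory, then take the mean and variance over the ensemble. All quantities below (phase count, dose-rate values) are illustrative stand-ins, not data from the study:

```python
import numpy as np

rng = np.random.default_rng(0)
K = 10     # motion phases sampled over one period
T = 200    # machine-time samples spanning the delivery
dt = 0.1   # seconds per sample (illustrative)

# Hypothetical dose-rate sequences: rate[p, t] is the delivery's dose rate
# at the point's corresponding location on the fixed-phase CT for phase p,
# evaluated at machine time step t.
rate = rng.uniform(0.0, 2.0, size=(K, T))

def dose_for_start_phase(s):
    """Total dose if the delivery begins while the organ is at phase s."""
    phases = (s + np.arange(T)) % K        # the point's phase at each step
    return rate[phases, np.arange(T)].sum() * dt

# Statistical ensemble: every initial phase equally likely.
doses = np.array([dose_for_start_phase(s) for s in range(K)])
expected_dose = doses.mean()
dose_variance = doses.var()
```

    Note that the ensemble mean reduces to the phase-averaged dose on the fixed-phase images, matching the abstract's statement about the expected value.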

  20. Efficiencies for the statistics of size discrimination.

    PubMed

    Solomon, Joshua A; Morgan, Michael; Chubb, Charles

    2011-10-19

    Different laboratories have achieved a consensus regarding how well human observers can estimate the average orientation in a set of N objects. Such estimates are not only limited by visual noise, which perturbs the visual signal of each object's orientation, they are also inefficient: Observers effectively use only √N objects in their estimates (e.g., S. C. Dakin, 2001; J. A. Solomon, 2010). More controversial is the efficiency with which observers can estimate the average size in an array of circles (e.g., D. Ariely, 2001, 2008; S. C. Chong, S. J. Joo, T.-A. Emmanouil, & A. Treisman, 2008; K. Myczek & D. J. Simons, 2008). Of course, there are some important differences between orientation and size; nonetheless, it seemed sensible to compare the two types of estimate against the same ideal observer. Indeed, quantitative evaluation of statistical efficiency requires this sort of comparison (R. A. Fisher, 1925). Our first step was to measure the noise that limits size estimates when only two circles are compared. Our results (Weber fractions between 0.07 and 0.14 were necessary for 84% correct 2AFC performance) are consistent with the visual system adding the same amount of Gaussian noise to all logarithmically transduced circle diameters. We exaggerated this visual noise by randomly varying the diameters in (uncrowded) arrays of 1, 2, 4, and 8 circles and measured its effect on discrimination between mean sizes. Efficiencies inferred from all four observers significantly exceed 25% and, in two cases, approach 100%. More consistent are our measurements of just-noticeable differences in size variance. These latter results suggest between 62 and 75% efficiency for variance discriminations. Although our observers were no more efficient comparing size variances than they were at comparing mean sizes, they were significantly more precise. 
In other words, our results contain evidence for a non-negligible source of late noise that limits mean discriminations but not variance discriminations.
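
    The notion of efficiency used here (an observer who effectively pools only k of the N samples attains estimate variance sigma^2/k instead of the ideal sigma^2/N, so efficiency is about k/N) is easy to verify by simulation. A toy Monte Carlo sketch with invented noise parameters, not the paper's data:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 8            # items in the array
k = 3            # items the inefficient observer effectively pools (~sqrt(N))
sigma = 0.1      # per-item "early" noise on log size (illustrative)
trials = 20000

true = rng.normal(0.0, 0.2, size=N)   # one fixed array of log diameters
noisy = true + rng.normal(0.0, sigma, size=(trials, N))

ideal_est = noisy.mean(axis=1)            # ideal: average all N samples
partial_est = noisy[:, :k].mean(axis=1)   # effectively uses only k of them

# Statistical efficiency: variance of the ideal estimate divided by the
# variance of the observer's estimate, which should be close to k/N.
efficiency = ideal_est.var() / partial_est.var()
print(efficiency)
```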

  1. Second-order statistics of colour codes modulate transformations that effectuate varying degrees of scene invariance and illumination invariance.

    PubMed

    Mausfeld, Rainer; Andres, Johannes

    2002-01-01

    We argue, from an ethology-inspired perspective, that the internal concepts 'surface colours' and 'illumination colours' are part of the data format of two different representational primitives. Thus, the internal concept of 'colour' is not a unitary one but rather refers to two different types of 'data structure', each with its own proprietary types of parameters and relations. The relation of these representational structures is modulated by a class of parameterised transformations whose effects are mirrored in the idealised computational achievements of illumination invariance of colour codes, on the one hand, and scene invariance, on the other hand. Because the same characteristics of a light array reaching the eye can be physically produced in many different ways, the visual system, then, has to make an 'inference' whether a chromatic deviation of the space-averaged colour codes from the neutral point is due to a 'non-normal', ie chromatic, illumination or due to an imbalanced spectral reflectance composition. We provide evidence that the visual system uses second-order statistics of chromatic codes of a single view of a scene in order to modulate corresponding transformations. In our experiments we used centre surround configurations with inhomogeneous surrounds given by a random structure of overlapping circles, referred to as Seurat configurations. Each family of surrounds has a fixed space-average of colour codes, but differs with respect to the covariance matrix of colour codes of pixels that defines the chromatic variance along some chromatic axis and the covariance between luminance and chromatic channels. We found that dominant wavelengths of red-green equilibrium settings of the infield exhibited a stable and strong dependence on the chromatic variance of the surround. High variances resulted in a tendency towards 'scene invariance', low variances in a tendency towards 'illumination invariance' of the infield.

  2. Extraction of fault component from abnormal sound in diesel engines using acoustic signals

    NASA Astrophysics Data System (ADS)

    Dayong, Ning; Changle, Sun; Yongjun, Gong; Zengmeng, Zhang; Jiaoyi, Hou

    2016-06-01

    In this paper a method for extracting fault components from abnormal acoustic signals and automatically diagnosing diesel engine faults is presented. The method, named the dislocation superimposed method (DSM), is based on the improved random decrement technique (IRDT), differential function (DF) and correlation analysis (CA). The aim of DSM is to linearly superpose multiple segments of the abnormal acoustic signal, exploiting the waveform similarity of the faulty components. The method uses the sample points at which the abnormal sound first appears as the starting position of each segment. In this study, the abnormal sound belonged to the shock-type fault category; thus, a starting-position search based on gradient variance was adopted. A similarity coefficient between two same-sized signals is defined; by thresholding this coefficient, the extracted fault component can be judged automatically. The results show that this method is capable of accurately extracting the fault component from abnormal acoustic signals induced by shock-type faults, and that the extracted component can be used to identify the fault type.
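
    The superposition-and-similarity idea can be illustrated on a toy signal. This sketch (an invented impact waveform and onset positions, not the paper's IRDT/DF pipeline) averages equal-length segments cut at each onset and scores the result against a reference with a normalized correlation coefficient:

```python
import numpy as np

def superpose(signal, onsets, length):
    """Linearly superpose (average) equal-length segments cut at each
    detected onset of the abnormal sound."""
    segs = np.stack([signal[i:i + length] for i in onsets])
    return segs.mean(axis=0)

def similarity(a, b):
    """Normalized correlation coefficient between two same-sized signals."""
    a = a - a.mean()
    b = b - b.mean()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy data: a decaying impact waveform recurring in background noise.
rng = np.random.default_rng(2)
t = np.arange(100)
template = 2.0 * np.exp(-t / 15.0) * np.sin(2 * np.pi * t / 10.0)
signal = rng.normal(0.0, 0.3, size=1200)
onsets = [100, 400, 700, 1000]
for i in onsets:
    signal[i:i + 100] += template

fault = superpose(signal, onsets, 100)   # noise shrinks, waveform remains
print(similarity(fault, template))
```

    Averaging the four aligned segments attenuates the incoherent noise while the repeated fault waveform survives, so the similarity coefficient stays close to 1.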

  3. A Stochastic Kinematic Model of Class Averaging in Single-Particle Electron Microscopy

    PubMed Central

    Park, Wooram; Midgett, Charles R.; Madden, Dean R.; Chirikjian, Gregory S.

    2011-01-01

    Single-particle electron microscopy is an experimental technique that is used to determine the 3D structure of biological macromolecules and the complexes that they form. In general, image processing techniques and reconstruction algorithms are applied to micrographs, which are two-dimensional (2D) images taken by electron microscopes. Each of these planar images can be thought of as a projection of the macromolecular structure of interest from an a priori unknown direction. A class is defined as a collection of projection images with a high degree of similarity, presumably resulting from taking projections along similar directions. In practice, micrographs are very noisy and those in each class are aligned and averaged in order to reduce the background noise. Errors in the alignment process are inevitable due to noise in the electron micrographs. This error results in blurry averaged images. In this paper, we investigate how blurring parameters are related to the properties of the background noise in the case when the alignment is achieved by matching the mass centers and the principal axes of the experimental images. We observe that the background noise in micrographs can be treated as Gaussian. Using the mean and variance of the background Gaussian noise, we derive equations for the mean and variance of translational and rotational misalignments in the class averaging process. This defines a Gaussian probability density on the Euclidean motion group of the plane. Our formulation is validated by convolving the derived blurring function representing the stochasticity of the image alignments with the underlying noiseless projection and comparing with the original blurry image. PMID:21660125

  4. Physical Activity and Variation in Momentary Behavioral Cognitions: An Ecological Momentary Assessment Study.

    PubMed

    Pickering, Trevor A; Huh, Jimi; Intille, Stephen; Liao, Yue; Pentz, Mary Ann; Dunton, Genevieve F

    2016-03-01

    Decisions to perform moderate-to-vigorous physical activity (MVPA) involve behavioral cognitive processes that may differ within individuals depending on the situation. Ecological momentary assessment (EMA) was used to examine the relationships of momentary behavioral cognitions (ie, self-efficacy, outcome expectancy, intentions) with MVPA (measured by accelerometer). A sample of 116 adults (mean age, 40.3 years; 72.4% female) provided real-time EMA responses via mobile phones across 4 days. Multilevel models were used to test whether momentary behavioral cognitions differed across contexts and were associated with subsequent MVPA. Mixed-effects location scale models were used to examine whether subject-level means and within-subjects variances in behavioral cognitions were associated with average daily MVPA. Momentary behavioral cognitions differed across contexts for self-efficacy (P = .007) but not for outcome expectancy (P = .53) or intentions (P = .16). Momentary self-efficacy, intentions, and their interaction predicted MVPA within the subsequent 2 hours (Ps < .01). Average daily MVPA was positively associated with within-subjects variance in momentary self-efficacy and intentions for physical activity (Ps < .05). Although momentary behavioral cognitions are related to subsequent MVPA, adults with higher average MVPA have more variation in physical activity self-efficacy and intentions. Performing MVPA may depend more on how much behavioral cognitions vary across the day than whether they are generally high or low.

  5. Optimization of ultrasonic-assisted extraction of bioactive alkaloid compounds from rhizoma coptidis (Coptis chinensis Franch.) using response surface methodology.

    PubMed

    Teng, Hui; Choi, Yong Hee

    2014-01-01

    The optimum extraction conditions for the maximum recovery of total alkaloid content (TAC), berberine content (BC), palmatine content (PC), and the highest antioxidant capacity (AC) from rhizoma coptidis subjected to ultrasonic-assisted extraction (UAE) were determined using response surface methodology (RSM). A central composite design (CCD) with three variables and five levels was employed, and response surface plots were constructed in accordance with a second-order polynomial model. Analysis of variance (ANOVA) showed that the quadratic model was well fitted and significant for the responses TAC, BC, PC, and AC. The optimum conditions obtained through the overlapped contour plot were as follows: ethanol concentration of 59%, extraction time of 46.57 min, and temperature of 66.22°C. A verification experiment was carried out, and no significant difference was found between observed and estimated values for each response, suggesting that the estimated models were reliable and valid for UAE of alkaloids. Crown Copyright © 2013. Published by Elsevier Ltd. All rights reserved.
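
    The second-order polynomial model underlying an RSM/CCD analysis, and the location of its fitted optimum, can be sketched with ordinary least squares. Everything below is synthetic (coded factor levels and a made-up optimum), not the rhizoma coptidis data:

```python
import numpy as np
from itertools import combinations

def quad_design(X):
    """Design matrix for a full second-order (RSM) polynomial model:
    intercept, linear, squared, and two-way interaction terms."""
    n, k = X.shape
    cols = [np.ones(n)]
    cols += [X[:, i] for i in range(k)]
    cols += [X[:, i] ** 2 for i in range(k)]
    cols += [X[:, i] * X[:, j] for i, j in combinations(range(k), 2)]
    return np.column_stack(cols)

# Synthetic demo: 3 coded factors (think ethanol %, time, temperature)
# with a known quadratic optimum at x = (0.2, -0.1, 0.4).
rng = np.random.default_rng(3)
X = rng.uniform(-1, 1, size=(60, 3))
opt = np.array([0.2, -0.1, 0.4])
y = 5.0 - ((X - opt) ** 2).sum(axis=1) + rng.normal(0, 0.01, size=60)

beta, *_ = np.linalg.lstsq(quad_design(X), y, rcond=None)

# Stationary point of the fitted surface: solve grad = 0, where
# grad_i = b_i + 2*b_ii*x_i + sum_j b_ij*x_j.
k = 3
b_lin = beta[1:1 + k]
A = np.diag(2 * beta[1 + k:1 + 2 * k])
idx = 1 + 2 * k
for (i, j) in combinations(range(k), 2):
    A[i, j] += beta[idx]
    A[j, i] += beta[idx]
    idx += 1
x_opt = np.linalg.solve(A, -b_lin)
print(x_opt)   # recovers the planted optimum
```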

  6. Antimicrobial Effect of Jasminum grandiflorum L. and Hibiscus rosa-sinensis L. Extracts Against Pathogenic Oral Microorganisms--An In Vitro Comparative Study.

    PubMed

    Nagarajappa, Ramesh; Batra, Mehak; Sharda, Archana J; Asawa, Kailash; Sanadhya, Sudhanshu; Daryani, Hemasha; Ramesh, Gayathri

    2015-01-01

    To assess and compare the antimicrobial potential and determine the minimum inhibitory concentration (MIC) of Jasminum grandiflorum and Hibiscus rosa-sinensis extracts as potential anti-pathogenic agents in dental caries. Aqueous and ethanol (cold and hot) extracts prepared from leaves of Jasminum grandiflorum and Hibiscus rosa-sinensis were screened for in vitro antimicrobial activity against Streptococcus mutans and Lactobacillus acidophilus using the agar well diffusion method. The lowest concentration of every extract considered as the minimum inhibitory concentration (MIC) was determined for both test organisms. Statistical analysis was performed with one-way analysis of variance (ANOVA). At lower concentrations, hot ethanol Jasminum grandiflorum (10 μg/ml) and Hibiscus rosa-sinensis (25 μg/ml) extracts were found to have statistically significant (P≤0.05) antimicrobial activity against S. mutans and L. acidophilus with MIC values of 6.25 μg/ml and 25 μg/ml, respectively. A proportional increase in their antimicrobial activity (zone of inhibition) was observed. Both extracts were found to be antimicrobially active and contain compounds with therapeutic potential. Nevertheless, clinical trials on the effect of these plants are essential before advocating large-scale therapy.

  7. BMAA extraction of cyanobacteria samples: which method to choose?

    PubMed

    Lage, Sandra; Burian, Alfred; Rasmussen, Ulla; Costa, Pedro Reis; Annadotter, Heléne; Godhe, Anna; Rydberg, Sara

    2016-01-01

    β-N-Methylamino-L-alanine (BMAA), a neurotoxin reportedly produced by cyanobacteria, diatoms and dinoflagellates, is proposed to be linked to the development of neurological diseases. BMAA has been found in aquatic and terrestrial ecosystems worldwide, both in its phytoplankton producers and in several invertebrate and vertebrate organisms that bioaccumulate it. LC-MS/MS is the most frequently used analytical technique in BMAA research due to its high selectivity, though consensus is lacking as to the best extraction method to apply. This study accordingly surveys the efficiency of three extraction methods regularly used in BMAA research to extract BMAA from cyanobacteria samples. The results obtained provide insights into possible reasons for the BMAA concentration discrepancies in previous publications. In addition, in accordance with the method validation guidelines for analysing cyanotoxins, the TCA protein precipitation method, followed by AQC derivatization and LC-MS/MS analysis, is now validated for extracting protein-bound (after protein hydrolysis) and free BMAA from the cyanobacteria matrix. BMAA biological variability was also tested through the extraction of diatom and cyanobacteria species, revealing a high variance in BMAA levels (0.0080-2.5797 μg g(-1) DW).

  8. The Brazilian version of the 20-item rapid estimate of adult literacy in medicine and dentistry

    PubMed Central

    Cruvinel, Agnes Fátima P.; Méndez, Daniela Alejandra C.; Oliveira, Juliana G.; Gutierres, Eliézer; Lotto, Matheus; Machado, Maria Aparecida A.M.; Oliveira, Thaís M.

    2017-01-01

    Background The misunderstanding of specific vocabulary may hamper patient-health provider communication. The 20-item Rapid Estimate of Adult Literacy in Medicine and Dentistry (REALMD-20) was constructed to screen patients by their ability to read medical/dental terminology in a simple and rapid way. This study aimed to perform the cross-cultural adaptation and validation of this instrument for application to Brazilian dental patients. Methods The cross-cultural adaptation was performed through conceptual equivalence, verbatim translation, semantic, item and operational equivalence, and back-translation. After that, 200 participants responded to the adapted version of the REALMD-20, the Brazilian version of the Rapid Estimate of Adult Literacy in Dentistry (BREALD-30), ten questions of the Brazilian National Functional Literacy Index (BNFLI), and a questionnaire with socio-demographic and oral health-related questions. Statistical analysis was conducted to assess the reliability and validity of the REALMD-20 (P < 0.05). Results The sample was composed predominantly of women (55.5%) and white/brown (76%) individuals, with an average age of 39.02 years (±15.28). The average REALMD-20 score was 17.48 (±2.59, range 8–20). It displayed good internal consistency (Cronbach’s alpha = 0.789) and test-retest reliability (ICC = 0.73; 95% CI [0.66 − 0.79]). In the exploratory factor analysis, six factors were extracted according to Kaiser’s criterion. Factor I (eigenvalue = 4.53) comprised four terms (“Jaundice”, “Amalgam”, “Periodontitis” and “Abscess”) and accounted for 25.18% of total variance, while factor II (eigenvalue = 1.88) comprised another four terms (“Gingivitis”, “Instruction”, “Osteoporosis” and “Constipation”) and accounted for 10.46% of total variance. The first four factors together accounted for 52.1% of total variance. The REALMD-20 was positively correlated with the BREALD-30 (Rs = 0.73, P < 0.001) and the BNFLI (Rs = 0.60, P < 0.001).
    The scores were significantly higher among health professionals, more educated people, and individuals who reported good/excellent oral health conditions and who sought preventive dental services. In contrast, REALMD-20 scores were similar between participants who had visited a dentist <1 year ago and those who had not for ≥1 year. REALMD-20 was also a significant predictor of self-reported oral health status in a multivariate logistic regression model that considered socio-demographic and oral health-related confounding variables. Conclusion The Brazilian version of the REALMD-20 demonstrated adequate psychometric properties for screening dental patients with respect to their recognition of health-specific terms. This instrument can help identify individuals with important dental/medical vocabulary limitations in order to improve health education and outcomes in a person-centered care model. PMID:28875082

  9. Utilizing topobathy LIDAR datasets to identify shoreline variations and to direct charting updates in the northern Gulf of Mexico

    NASA Astrophysics Data System (ADS)

    Gremillion, S. L.; Wright, S. L.

    2017-12-01

    Topographic and bathymetric light detection and ranging (LIDAR), remote sensing tools used to measure vertical elevations, are commonly employed to monitor shoreline fluctuations. Many of these publicly available datasets provide wide-swath, nearshore topobathy which can be used to extract shoreline positions and analyze coastlines experiencing the greatest temporal and spatial variability. This study focused on the shorelines of Mississippi's Jackson County to determine the minimum time for significant positional changes to occur, relative to currently published NOAA navigational charts. Many of these dynamic shorelines are vulnerable to relative sea level rise, storm surge, and coastal erosion. Utilizing LIDAR datasets from 1998-2015, shoreline positions were derived and analyzed against NOAA's Continually Updated Shoreline Product (CUSP) to recommend the frequency at which future surveys should be conducted. Recommended charting updates were based upon the resolution of the published charts and the magnitude of the observed variances. Jackson County shorelines were divided into four areas for analysis: the mainland, Horn Island, Petit Bois Island (PBI), and a dredge spoil area west of PBI. The mainland shoreline experienced an average change rate of +0.57 m/yr during the study period. This stability was due to engineering structures implemented in the early 1920s to protect against tropical storms. Horn Island, the most stable barrier island, changed an average of -1.34 m/yr, while PBI had an average change of -2.70 m/yr throughout. Lastly, the dredge spoil area changed by +9.06 m/yr. Based on these results, it is recommended that LIDAR surveys for Jackson County's mainland be conducted at least every two years, while surveys of the offshore barrier islands be conducted annually. Furthermore, insufficient LIDAR data for Round Island and the Round Island Marsh Restoration Project highlight these two areas as priority targets for future surveys.

  10. Distributional fold change test – a statistical approach for detecting differential expression in microarray experiments

    PubMed Central

    2012-01-01

    Background Because of the large volume of data and the intrinsic variation of data intensity observed in microarray experiments, different statistical methods have been used to systematically extract biological information and to quantify the associated uncertainty. The simplest method to identify differentially expressed genes is to evaluate the ratio of average intensities in two different conditions and consider all genes that differ by more than an arbitrary cut-off value to be differentially expressed. This filtering approach is not a statistical test, and there is no associated value that can indicate the level of confidence in the designation of genes as differentially expressed or not differentially expressed. At the same time, the fold change by itself provides valuable information, and it is important to find unambiguous ways of using this information in the treatment of expression data. Results A new method of finding differentially expressed genes, called the distributional fold change (DFC) test, is introduced. The method is based on an analysis of the intensity distribution of all microarray probe sets mapped to a three-dimensional feature space composed of average expression level, average difference of gene expression and total variance. The proposed method allows one to rank each feature based on the signal-to-noise ratio and to ascertain for each feature the confidence level and power for being differentially expressed. The performance of the new method was evaluated using the total and partial area under receiver operating curves, tested on 11 data sets from the Gene Omnibus Database with independently verified differentially expressed genes, and compared with the t-test and shrinkage t-test. Overall the DFC test performed the best: on average it had higher sensitivity and partial AUC, and its advantage was most prominent in the low range of differentially expressed features, typical of formalin-fixed paraffin-embedded sample sets.
Conclusions The distributional fold change test is an effective method for finding and ranking differentially expressed probesets on microarrays. The application of this test is advantageous to data sets using formalin-fixed paraffin-embedded samples or other systems where degradation effects diminish the applicability of correlation adjusted methods to the whole feature set. PMID:23122055

  11. Detection of nuclear resonance signals: modification of the receiver operating characteristics using feedback.

    PubMed

    Blauch, A J; Schiano, J L; Ginsberg, M D

    2000-06-01

    The performance of a nuclear resonance detection system can be quantified using binary detection theory. Within this framework, signal averaging increases the probability of correct detection and decreases the probability of false alarm by reducing the variance of the noise in the averaged signal. In conjunction with signal averaging, we propose another method, based on feedback control concepts, that further improves detection performance. By maximizing the nuclear resonance signal amplitude, feedback raises the probability of correct detection. Furthermore, information generated by the feedback algorithm can be used to reduce the probability of false alarm. We discuss the advantages afforded by feedback that cannot be obtained using signal averaging. As an example, we show how this method is applicable to the detection of explosives using nuclear quadrupole resonance. Copyright 2000 Academic Press.
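
    The core claim (averaging n sweeps shrinks the noise standard deviation by sqrt(n), raising the detection probability and lowering the false-alarm probability at a fixed threshold) can be sketched for Gaussian noise with the standard Q-function. The amplitude, noise level, and threshold below are arbitrary illustrations:

```python
import math

def detection_probs(amplitude, sigma, n_avg, threshold):
    """P(detection) and P(false alarm) for a simple threshold test on the
    average of n_avg sweeps contaminated by additive Gaussian noise."""
    s = sigma / math.sqrt(n_avg)                      # noise std after averaging
    q = lambda x: 0.5 * math.erfc(x / math.sqrt(2))   # Gaussian tail probability
    pd = q((threshold - amplitude) / s)               # signal present
    pfa = q(threshold / s)                            # signal absent
    return pd, pfa

pd1, pfa1 = detection_probs(1.0, 2.0, 1, 0.5)
pd64, pfa64 = detection_probs(1.0, 2.0, 64, 0.5)
print(pd1, pfa1)    # modest performance from a single sweep
print(pd64, pfa64)  # 64-sweep averaging sharpens both probabilities
```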

  12. Combined keratoplasty and cataract extraction.

    PubMed

    Demeler, U; Hinzpeter, E N

    1977-04-01

    A short film showing our technique of combined penetrating keratoplasty and intracapsular cataract extraction was shown, and the postoperative results in 72 eyes after an average of 3 years were reported.

  13. A general factor of personality from multitrait-multimethod data and cross-national twins.

    PubMed

    Rushton, J Philippe; Bons, Trudy Ann; Ando, Juko; Hur, Yoon-Mi; Irwing, Paul; Vernon, Philip A; Petrides, K V; Barbaranelli, Claudio

    2009-08-01

    In three studies, a General Factor of Personality (GFP) was found to occupy the apex of the hierarchical structure. In Study 1, a GFP emerged independent of method variance and accounted for 54% of the reliable variance in a multitrait-multimethod assessment of 391 Italian high school students that used self-, teacher-, and parent-ratings on the Big Five Questionnaire - Children. In Study 2, a GFP was found in the seven dimensions of Cloninger's Temperament and Character Inventory as well as the Big Five of the NEO PI-R, with the GFPtci correlating r = .72 with the GFPneo. These results indicate that the GFP is practically the same in both test batteries, and its existence does not depend on being extracted using the Big Five model. The GFP accounted for 22% of the total variance in these trait measures, which were assessed in 651 pairs of 14- to 30-year-old Japanese twins. In Study 3, a GFP accounted for 32% of the total variance in nine scales derived from the NEO PI-R, the Humor Styles Questionnaire, and the Trait Emotional Intelligence Questionnaire assessed in 386 pairs of 18- to 74-year-old Canadian and U.S. twins. The GFP was found to be 50% heritable with high scores indicating openness, conscientiousness, sociability, agreeableness, emotional stability, good humor and emotional intelligence. The possible evolutionary origins of the GFP are discussed.

  14. Identification of regional activation by factorization of high-density surface EMG signals: A comparison of Principal Component Analysis and Non-negative Matrix factorization.

    PubMed

    Gallina, Alessio; Garland, S Jayne; Wakeling, James M

    2018-05-22

    In this study, we investigated whether principal component analysis (PCA) and non-negative matrix factorization (NMF) perform similarly for the identification of regional activation within the human vastus medialis (VM). EMG signals from 64 locations over the VM were collected from twelve participants while performing a low-force isometric knee extension. The envelope of the EMG signal of each channel was calculated by low-pass filtering (8 Hz) the monopolar EMG signal after rectification. The data matrix was factorized using PCA and NMF, and up to 5 factors were considered for each algorithm. Agreement between the two algorithms in explained variance, spatial weights, and temporal scores was assessed using Pearson correlation. For both PCA and NMF, a single factor explained approximately 70% of the variance of the signal, while two and three factors explained just over 85% and 90%, respectively. The variance explained by PCA and NMF was highly comparable (R > 0.99). Spatial weights and temporal scores extracted with non-negative reconstruction of PCA and NMF were highly associated (all p < 0.001, mean R > 0.97). Regional VM activation can be identified using high-density surface EMG and factorization algorithms. Regional activation explains up to 30% of the variance of the signal, as identified through both PCA and NMF. Copyright © 2018 Elsevier Ltd. All rights reserved.
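
    The PCA/NMF comparison can be reproduced in miniature with plain NumPy: a rank-1 truncated SVD stands in for PCA and Lee-Seung multiplicative updates for NMF, applied to a synthetic nonnegative "envelope" matrix (the channel count and signals are invented, not the study's EMG data). "Explained variance" is computed here as the explained fraction of total uncentered signal energy:

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic stand-in for a channels x time HD-EMG envelope matrix:
# nonnegative and dominated by a single spatial activation pattern.
n_ch, n_t = 64, 500
spatial = rng.uniform(0.2, 1.0, size=(n_ch, 1))
temporal = np.abs(np.sin(np.linspace(0, 6 * np.pi, n_t)))[None, :]
V = spatial * temporal + rng.uniform(0.0, 0.05, size=(n_ch, n_t))

def explained_fraction(V, Vhat):
    """Fraction of total (uncentered) signal energy captured by Vhat."""
    return 1.0 - np.sum((V - Vhat) ** 2) / np.sum(V ** 2)

# One-factor "PCA": rank-1 truncated SVD (uncentered, so the energy
# fractions of the two methods are directly comparable).
U, s, Wt = np.linalg.svd(V, full_matrices=False)
pca_hat = s[0] * np.outer(U[:, 0], Wt[0])

# One-factor NMF via Lee-Seung multiplicative updates.
W = rng.uniform(0.1, 1.0, size=(n_ch, 1))
H = rng.uniform(0.1, 1.0, size=(1, n_t))
for _ in range(200):
    H *= (W.T @ V) / (W.T @ W @ H + 1e-12)
    W *= (V @ H.T) / (W @ H @ H.T + 1e-12)
nmf_hat = W @ H

ev_pca = explained_fraction(V, pca_hat)
ev_nmf = explained_fraction(V, nmf_hat)
print(ev_pca, ev_nmf)   # nearly identical for this one-pattern matrix
```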

  15. Extraction of Prostatic Lumina and Automated Recognition for Prostatic Calculus Image Using PCA-SVM

    PubMed Central

    Wang, Zhuocai; Xu, Xiangmin; Ding, Xiaojun; Xiao, Hui; Huang, Yusheng; Liu, Jian; Xing, Xiaofen; Wang, Hua; Liao, D. Joshua

    2011-01-01

    Identification of prostatic calculi is an important basis for determining the tissue origin. Computation-assisted diagnosis of prostatic calculi may have promising potential but is currently still little studied. We studied the extraction of prostatic lumina and the automated recognition of calculus images. Extraction of lumina from prostate histology images was based on local entropy and Otsu thresholding; recognition used PCA-SVM based on the texture features of prostatic calculi. The SVM classifier showed an average running time of 0.1432 s, an average training accuracy of 100%, an average test accuracy of 93.12%, a sensitivity of 87.74%, and a specificity of 94.82%. We concluded that the algorithm, based on texture features and PCA-SVM, can easily recognize the concentric structure and visual features of calculi. Therefore, this method is effective for the automated recognition of prostatic calculi. PMID:21461364

  16. [Evoked potentials extraction based on cross-talk resistant adaptive noise cancellation].

    PubMed

    Zeng, Qingning; Li, Ling; Liu, Qinghua; Yao, Dezhong

    2004-06-01

    As evoked potentials (EPs) are much lower in amplitude than the ongoing EEG, many trigger-locked responses are needed by the conventional averaging technique to enable the extraction of single-trial evoked potentials. How to acquire EPs from fewer evocations is an important research question. This paper proposes a cross-talk resistant adaptive noise cancellation method to extract EPs. Together with filtering and the conventional averaging technique, the present method needs far fewer evocations to acquire EP signals. According to the simulation experiments, it needs only several evocations, or even a single evocation, to obtain EP signals of good quality.
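
    A plain LMS adaptive noise canceller, the building block such methods extend, can be written in a few lines (the paper's cross-talk-resistant variant additionally handles EP leakage into the reference channel, which this sketch does not model). The signals below are synthetic stand-ins:

```python
import numpy as np

def lms_cancel(primary, reference, n_taps=8, mu=0.01):
    """Plain LMS adaptive noise cancellation: subtract from the primary
    channel whatever is linearly predictable from the noise reference."""
    w = np.zeros(n_taps)
    out = np.zeros_like(primary)
    for n in range(n_taps, len(primary)):
        x = reference[n - n_taps:n][::-1]  # most recent reference samples
        e = primary[n] - w @ x             # error = cleaned output sample
        w += 2 * mu * e * x                # LMS weight update
        out[n] = e
    return out

rng = np.random.default_rng(5)
t = np.arange(4000)
ep = 0.5 * np.sin(2 * np.pi * t / 40)     # stand-in evoked potential
noise = rng.normal(0.0, 1.0, size=t.size)
primary = ep + 0.9 * np.roll(noise, 1)    # EP buried in correlated noise
out = lms_cancel(primary, noise)

# After the weights adapt, the residual tracks the EP far better than
# the raw primary channel does.
tail = slice(2000, 4000)
err_raw = np.mean((primary[tail] - ep[tail]) ** 2)
err_anc = np.mean((out[tail] - ep[tail]) ** 2)
print(err_raw, err_anc)
```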

  17. Dependability of Data Derived from Time Sampling Methods with Multiple Observation Targets

    ERIC Educational Resources Information Center

    Johnson, Austin H.; Chafouleas, Sandra M.; Briesch, Amy M.

    2017-01-01

    In this study, generalizability theory was used to examine the extent to which (a) time-sampling methodology, (b) number of simultaneous behavior targets, and (c) individual raters influenced variance in ratings of academic engagement for an elementary-aged student. Ten graduate-student raters, with an average of 7.20 hr of previous training in…

  18. The Validity of the Academic Rigor Index (ARI) for Predicting FYGPA. Research Report 2012-5

    ERIC Educational Resources Information Center

    Mattern, Krista D.; Wyatt, Jeffrey N.

    2012-01-01

    A recurrent trend in higher education research has been to identify additional predictors of college success beyond the traditional measures of high school grade point average (HSGPA) and standardized test scores, given that a large percentage of unaccounted variance in college performance remains. A recent study by Wyatt, Wiley, Camara, and…

  19. The Effect of Perfectionism and Acculturative Stress on Levels of Depression Experienced by East Asian International Students

    ERIC Educational Resources Information Center

    Hamamura, Toshitaka; Laird, Philip G.

    2014-01-01

    This study examined relationships among acculturative stress, grade point average satisfaction, maladaptive perfectionism, and depression in 52 East Asian international students and 126 North American students. Results indicated that a combined effect of perfectionism and acculturative stress accounted for more than 30% of the variance related to…

  20. Trajectories of Global Self-Esteem Development during Adolescence

    ERIC Educational Resources Information Center

    Birkeland, Marianne Skogbrott; Melkevik, Ole; Holsen, Ingrid; Wold, Bente

    2012-01-01

    Based on data from a 17-year longitudinal study of 1083 adolescents, from the ages of 13 to 30 years, the average development of self-reported global self-esteem was found to be high and stable during adolescence. However, there is considerable inter-individual variance in baseline and development of global self-esteem. This study used latent…

  1. The Effect of Density on the Height-Diameter Relationship

    Treesearch

    Boris Zeide; Curtis Vanderschaaf

    2002-01-01

    Using stand density along with mean diameter to predict average height increases the proportion of explained variance. This result, obtained from permanent plots established in a loblolly pine plantation thinned to different levels, makes sense. We know that due to competition, trees with the same diameter are taller in denser stands. Diameter and density are not only...

  2. Genetic variance and covariance components for feed intake, average daily gain, and postweaning gain in growing beef cattle

    USDA-ARS?s Scientific Manuscript database

    Feed is the single most expensive cost related to a beef cattle production enterprise. Data collection to determine feed efficient animals is also costly. Currently a 70 d performance test is recommended for accurate calculation of efficiency. Previous research has suggested intake tests can be l...

  3. Genetic variance and covariance and breed differences for feed intake and average daily gain to improve feed efficiency in growing cattle

    USDA-ARS?s Scientific Manuscript database

    Feed costs are a major economic expense in finishing and developing cattle; however, collection of feed intake data is costly. Examining relationships among measures of growth and intake, including breed differences, could facilitate selection for efficient cattle. Objectives of this study were to e...

  4. Motivational Correlates of Academic Success in an Educational Psychology Course

    ERIC Educational Resources Information Center

    Herman, William E.

    2011-01-01

    The variables of class attendance and the institution-wide Early Alert Grading System were employed to predict academic success at the end of the semester. Classroom attendance was found to be statistically and significantly related to final average and accounted for 14-16% of the variance in academic performance. Class attendance was found to…

  5. Black-White Differences on IQ and Grades: The Mediating Role of Elementary Cognitive Tasks

    ERIC Educational Resources Information Center

    Pesta, Bryan J.; Poznanski, Peter J.

    2008-01-01

    The relationship between IQ scores and elementary cognitive task (ECT) performance is well established, with variance on each largely reflecting the general factor of intelligence, or g. Also ubiquitous are Black-White mean differences on IQ and measures of academic success, like grade point average (GPA). Given C. Spearman's (Spearman, C. (1927).…

  6. Using Performance Data Gathered at Several Stages of Achievement in Predicting Subsequent Performance.

    ERIC Educational Resources Information Center

    Owen, Steven V.; Feldhusen, John F.

    This study compares the effectiveness of three models of multivariate prediction for academic success in identifying the criterion variance of achievement in nursing education. The first model involves the use of an optimum set of predictors and one equation derived from a regression analysis on first semester grade average in predicting the…

  7. Longitudinal Invariance of the Wechsler Intelligence Scale for Children--Fourth Edition in a Referral Sample

    ERIC Educational Resources Information Center

    Richerson, Lindsay P.; Watkins, Marley W.; Beaujean, A. Alexander

    2014-01-01

    Measurement invariance of the Wechsler Intelligence Scale for Children--Fourth Edition (WISC-IV) was investigated with a group of 352 students eligible for psychoeducational evaluations tested, on average, 2.8 years apart. Configural, metric, and scalar invariance were found. However, the error variance of the Coding subtest was not constant…

  8. 29 CFR 4204.11 - Variance of the bond/escrow and sale-contract requirements.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... determining the purchaser's average net income after taxes under § 4204.13(a)(1), for any year included in the... financial statements for the specified time period. (d) Limited exemption during pendency of request...) within 30 days after the date on which it receives notice of the plan's decision. (e) Method and date of...

  9. Measuring Teacher Effectiveness through Hierarchical Linear Models: Exploring Predictors of Student Achievement and Truancy

    ERIC Educational Resources Information Center

    Subedi, Bidya Raj; Reese, Nancy; Powell, Randy

    2015-01-01

    This study explored significant predictors of student's Grade Point Average (GPA) and truancy (days absent), and also determined teacher effectiveness based on proportion of variance explained at teacher level model. We employed a two-level hierarchical linear model (HLM) with student and teacher data at level-1 and level-2 models, respectively.…

  10. Structure analysis of simulated molecular clouds with the Δ-variance

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bertram, Erik; Klessen, Ralf S.; Glover, Simon C. O.

    Here, we employ the Δ-variance analysis and study the turbulent gas dynamics of simulated molecular clouds (MCs). Our models account for a simplified treatment of time-dependent chemistry and the non-isothermal nature of the gas. We investigate simulations using three different initial mean number densities of n₀ = 30, 100 and 300 cm⁻³ that span the range of values typical for MCs in the solar neighbourhood. Furthermore, we model the CO line emission in a post-processing step using a radiative transfer code. We evaluate Δ-variance spectra for centroid velocity (CV) maps as well as for integrated intensity and column density maps for various chemical components: the total, H₂ and ¹²CO number density and the integrated intensity of both the ¹²CO and ¹³CO (J = 1 → 0) lines. The spectral slopes of the Δ-variance computed on the CV maps for the total and H₂ number density are significantly steeper compared to the different CO tracers. We find slopes for the linewidth–size relation ranging from 0.4 to 0.7 for the total and H₂ density models, while the slopes for the various CO tracers range from 0.2 to 0.4 and underestimate the values for the total and H₂ density by a factor of 1.5–3.0. We demonstrate that optical depth effects can significantly alter the Δ-variance spectra. Furthermore, we report a critical density threshold of 100 cm⁻³ at which the Δ-variance slopes of the various CO tracers change sign. We thus conclude that carbon monoxide traces the total cloud structure well only if the average cloud density lies above this limit.
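
    The Δ-variance of a map is the variance of the map after convolution with a zero-sum "French hat" filter of a given scale. A minimal one-dimensional sketch follows; it simplifies the 2D analysis used in the record, and the function name and exact filter shape are illustrative only.

```python
def delta_variance(field, scale):
    """Toy 1D Δ-variance: variance of the field convolved with a zero-sum
    filter -- a positive core of width `scale` minus a surround of the same
    total width split on either side. The slope of delta_variance vs. scale
    characterises the spatial structure of the field."""
    n = len(field)
    filtered = []
    for i in range(scale, n - 2 * scale):
        core = sum(field[i:i + scale]) / scale
        surround = (sum(field[i - scale:i]) + sum(field[i + scale:i + 2 * scale])) / (2 * scale)
        filtered.append(core - surround)
    mean = sum(filtered) / len(filtered)
    return sum((v - mean) ** 2 for v in filtered) / len(filtered)
```

    For an uncorrelated (white-noise) field the Δ-variance falls off with scale; power-law fields give power-law Δ-variance spectra, whose slopes are the quantities compared between tracers in the record above.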

  11. Confidence intervals for the between-study variance in random-effects meta-analysis using generalised heterogeneity statistics: should we use unequal tails?

    PubMed

    Jackson, Dan; Bowden, Jack

    2016-09-07

    Confidence intervals for the between-study variance are useful in random-effects meta-analyses because they quantify the uncertainty in the corresponding point estimates. Methods for calculating these confidence intervals have been developed that are based on inverting hypothesis tests using generalised heterogeneity statistics. Whilst, under the random-effects model, these new methods furnish confidence intervals with the correct coverage, the resulting intervals are usually very wide, making them uninformative. We discuss a simple strategy for obtaining 95% confidence intervals for the between-study variance with a markedly reduced width, whilst retaining the nominal coverage probability. Specifically, we consider the possibility of using methods based on generalised heterogeneity statistics with unequal tail probabilities, where the tail probability used to compute the upper bound is greater than 2.5%. This idea is assessed using four real examples and a variety of simulation studies. Supporting analytical results are also obtained. Our results provide evidence that using unequal tail probabilities can result in shorter 95% confidence intervals for the between-study variance. We also show further results for a real example that illustrate how shorter confidence intervals for the between-study variance can be useful when performing sensitivity analyses for the average effect, which is usually the parameter of primary interest. We conclude that using unequal tail probabilities when computing 95% confidence intervals for the between-study variance, with methods based on generalised heterogeneity statistics, can result in shorter confidence intervals. We suggest that those who find the case for unequal tail probabilities convincing should use the '1-4% split', in which the greater tail probability is allocated to the upper confidence bound. The 'width-optimal' interval that we present deserves further investigation.
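
    The width gain from unequal tails stems from the right skew of the pivot distributions involved. A minimal numerical sketch follows, using a plain chi-square pivot as a stand-in for the generalised heterogeneity statistics of the record; all names are illustrative.

```python
import random

def empirical_quantile(sorted_draws, p):
    """Simple empirical quantile from pre-sorted draws."""
    return sorted_draws[int(p * (len(sorted_draws) - 1))]

def ci_width_factor(sorted_draws, lower_tail, upper_tail):
    """Relative width of a confidence interval for a variance-like parameter
    obtained by inverting a chi-square-like pivot: the interval is
    [X / q_{1-upper_tail}, X / q_{lower_tail}], so (up to the observed
    statistic X) its width scales with 1/q_lo - 1/q_hi."""
    q_lo = empirical_quantile(sorted_draws, lower_tail)
    q_hi = empirical_quantile(sorted_draws, 1.0 - upper_tail)
    return 1.0 / q_lo - 1.0 / q_hi

random.seed(2)
df = 4   # few studies -> heavily right-skewed pivot
draws = sorted(sum(random.gauss(0, 1) ** 2 for _ in range(df))
               for _ in range(100000))
equal_tails = ci_width_factor(draws, 0.025, 0.025)  # conventional 2.5%/2.5%
split_tails = ci_width_factor(draws, 0.04, 0.01)    # the '1-4% split'
```

    Both intervals have 95% nominal coverage, but the unequal split yields the shorter one, because the reciprocal map penalises the low pivot quantile (which sets the upper confidence bound) far more than the high one.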

  12. Biochemical phenotypes to discriminate microbial subpopulations and improve outbreak detection.

    PubMed

    Galar, Alicia; Kulldorff, Martin; Rudnick, Wallis; O'Brien, Thomas F; Stelling, John

    2013-01-01

    Clinical microbiology laboratories worldwide constitute an invaluable resource for monitoring emerging threats and the spread of antimicrobial resistance. We studied the growing number of biochemical tests routinely performed on clinical isolates to explore their value as epidemiological markers. Microbiology laboratory results from January 2009 through December 2011, stored in WHONET, from a 793-bed hospital were examined. Variables included patient location, collection date, organism, and 47 biochemical and 17 antimicrobial susceptibility test results reported by Vitek 2. To identify biochemical tests that were particularly valuable (stable with repeat testing, but with good variability across the species) or problematic (inconsistent results with repeat testing), three types of variance analyses were performed on isolates of K. pneumoniae: descriptive analysis of discordant biochemical results in same-day isolates, an average within-patient variance index, and generalized linear mixed model variance component analysis. 4,200 isolates of K. pneumoniae were identified from 2,485 patients, 32% of whom had multiple isolates. The first two variance analyses highlighted SUCT, TyrA, GlyA, and GGT as "nuisance" biochemicals for which discordant within-patient test results impacted a high proportion of patient results, while dTAG had relatively good within-patient stability with good heterogeneity across the species. Variance component analyses confirmed the relative stability of dTAG, and identified additional biochemicals such as PHOS with a large between-patient to within-patient variance ratio. A reduced subset of biochemicals improved the robustness of strain definition for carbapenem-resistant K. pneumoniae. Surveillance analyses suggest that the reduced biochemical profile could improve the timeliness and specificity of outbreak detection algorithms.
The statistical approaches explored can improve the robust recognition of microbial subpopulations with routinely available biochemical test results, of value in the timely detection of outbreak clones and evolutionarily important genetic events.
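
    The within-patient variance index and variance component analysis mentioned above can be illustrated with a standard balanced one-way method-of-moments decomposition into between-patient and within-patient variance. This is a sketch of the general technique, not the authors' exact model.

```python
def variance_components(groups):
    """Method-of-moments (one-way ANOVA) estimates of between-group and
    within-group variance for a balanced design: `groups` is a list of
    equally sized lists of measurements (e.g. one list per patient)."""
    k = len(groups)           # number of groups (patients)
    n = len(groups[0])        # replicate measurements per group
    grand = sum(sum(g) for g in groups) / (k * n)
    means = [sum(g) / n for g in groups]
    msb = n * sum((m - grand) ** 2 for m in means) / (k - 1)   # between MS
    msw = sum(sum((x - m) ** 2 for x in g)
              for g, m in zip(groups, means)) / (k * (n - 1))  # within MS
    between = max(0.0, (msb - msw) / n)
    return between, msw
```

    A biochemical with a large between-to-within ratio is a stable, discriminating epidemiological marker; a "nuisance" biochemical has a large within-patient component.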

  13. Structure analysis of simulated molecular clouds with the Δ-variance

    DOE PAGES

    Bertram, Erik; Klessen, Ralf S.; Glover, Simon C. O.

    2015-05-27

    Here, we employ the Δ-variance analysis and study the turbulent gas dynamics of simulated molecular clouds (MCs). Our models account for a simplified treatment of time-dependent chemistry and the non-isothermal nature of the gas. We investigate simulations using three different initial mean number densities of n₀ = 30, 100 and 300 cm⁻³ that span the range of values typical for MCs in the solar neighbourhood. Furthermore, we model the CO line emission in a post-processing step using a radiative transfer code. We evaluate Δ-variance spectra for centroid velocity (CV) maps as well as for integrated intensity and column density maps for various chemical components: the total, H₂ and ¹²CO number density and the integrated intensity of both the ¹²CO and ¹³CO (J = 1 → 0) lines. The spectral slopes of the Δ-variance computed on the CV maps for the total and H₂ number density are significantly steeper compared to the different CO tracers. We find slopes for the linewidth–size relation ranging from 0.4 to 0.7 for the total and H₂ density models, while the slopes for the various CO tracers range from 0.2 to 0.4 and underestimate the values for the total and H₂ density by a factor of 1.5–3.0. We demonstrate that optical depth effects can significantly alter the Δ-variance spectra. Furthermore, we report a critical density threshold of 100 cm⁻³ at which the Δ-variance slopes of the various CO tracers change sign. We thus conclude that carbon monoxide traces the total cloud structure well only if the average cloud density lies above this limit.

  14. Design, development, and evaluation of a novel retraction device for gallbladder extraction during laparoscopic cholecystectomy.

    PubMed

    Judge, Joshua M; Stukenborg, George J; Johnston, William F; Guilford, William H; Slingluff, Craig L; Hallowell, Peter T

    2014-02-01

    A source of frustration during laparoscopic cholecystectomy involves extraction of the gallbladder through port sites smaller than the gallbladder itself. We describe the development and testing of a novel device for the safe, minimal enlargement of laparoscopic port sites to extract large, stone-filled gallbladders from the abdomen. The study device consists of a handle with a retraction tongue to shield the specimen and a guide for a scalpel to incise the fascia within the incision. Patients enrolled underwent laparoscopic cholecystectomy. Gallbladder extraction was attempted. If standard measures failed, the device was implemented. Extraction time and device utility scores were recorded for each patient. Patients returned 3-4 weeks postoperatively for assessment of pain level, cosmetic effect, and presence of infectious complications. Twenty (51 %) of 39 patients required the device. Average extraction time for the first eight patients was 120 s. After interim analysis, an improved device was used in 12 patients and average extraction time was 24 s. There were no adverse events. Postoperative pain ratings and incision cosmesis were comparable between patients with and without use of the device. The study device enables safe and rapid extraction of impacted gallbladders through the abdominal wall.

  15. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Harmon, S; Jeraj, R; Galavis, P

    Purpose: Sensitivity of PET-derived texture features to reconstruction methods has been reported for features extracted from axial planes; however, studies often utilize three-dimensional techniques. This work aims to quantify the impact of multi-plane (3D) vs. single-plane (2D) feature extraction on radiomics-based analysis, including sensitivity to reconstruction parameters and potential loss of spatial information. Methods: Twenty-three patients with solid tumors underwent [¹⁸F]FDG PET/CT scans under identical protocols. PET data were reconstructed using five sets of reconstruction parameters. Tumors were segmented using an automatic, in-house algorithm robust to reconstruction variations. 50 texture features were extracted using two methods: 2D patches along axial planes and 3D patches. For each method, sensitivity of features to reconstruction parameters was calculated as percent difference relative to the average value across reconstructions. Correlations between feature values were compared when using 2D and 3D extraction. Results: 21/50 features showed significantly different sensitivity to reconstruction parameters when extracted in 2D vs 3D (Wilcoxon, α<0.05), assessed by overall range of variation, Range_var (%). Eleven showed greater sensitivity to reconstruction in 2D extraction, primarily first-order and co-occurrence features (average Range_var increase 83%). The remaining ten showed higher variation in 3D extraction (average Range_var increase 27%), mainly co-occurrence and grey-level run-length features. Correlation between feature values extracted in 2D and in 3D was poor (R<0.5) in 12/50 features, including eight co-occurrence features. Feature-to-feature correlations in 2D were marginally higher than in 3D, |R|>0.8 in 16% and 13% of all feature combinations, respectively. Larger sensitivity to reconstruction parameters was seen for inter-feature correlation in 2D (σ=6%) than in 3D (σ<1%) extraction.
Conclusion: Sensitivity and correlation of various texture features were shown to differ significantly between 2D and 3D extraction. Additionally, inter-feature correlations were more sensitive to reconstruction variation using single-plane extraction. This work highlights a need for standardized feature extraction/selection techniques in radiomics.

  16. Extraction of the number of peroxisomes in yeast cells by automated image analysis.

    PubMed

    Niemistö, Antti; Selinummi, Jyrki; Saleem, Ramsey; Shmulevich, Ilya; Aitchison, John; Yli-Harja, Olli

    2006-01-01

    An automated image analysis method for extracting the number of peroxisomes in yeast cells is presented. Two images of the cell population are required for the method: a bright field microscope image from which the yeast cells are detected and the respective fluorescent image from which the number of peroxisomes in each cell is found. The segmentation of the cells is based on clustering the local mean-variance space. The watershed transformation is thereafter employed to separate cells that are clustered together. The peroxisomes are detected by thresholding the fluorescent image. The method is tested with several images of a budding yeast Saccharomyces cerevisiae population, and the results are compared with manually obtained results.
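
    The per-cell peroxisome count is obtained by thresholding the fluorescent image and counting connected bright regions. A toy sketch of that counting step on a small intensity grid follows (4-connectivity; the function name and threshold are illustrative):

```python
def count_spots(image, threshold):
    """Count connected bright regions (4-connectivity) in a 2D intensity
    grid after thresholding -- a toy stand-in for counting fluorescent
    peroxisome spots inside one segmented cell."""
    h, w = len(image), len(image[0])
    mask = [[v > threshold for v in row] for row in image]
    seen = [[False] * w for _ in range(h)]
    count = 0
    for i in range(h):
        for j in range(w):
            if mask[i][j] and not seen[i][j]:
                count += 1                      # new spot found
                stack = [(i, j)]
                seen[i][j] = True
                while stack:                    # flood-fill the spot
                    y, x = stack.pop()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w and mask[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            stack.append((ny, nx))
    return count
```

    The full pipeline in the record additionally segments the cells (mean-variance clustering plus watershed) before counting spots per cell.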

  17. Paper-based tuberculosis diagnostic devices with colorimetric gold nanoparticles

    NASA Astrophysics Data System (ADS)

    Tsai, Tsung-Ting; Shen, Shu-Wei; Cheng, Chao-Min; Chen, Chien-Fu

    2013-08-01

    A colorimetric sensing strategy employing gold nanoparticles and a paper assay platform has been developed for tuberculosis diagnosis. Unmodified gold nanoparticles and single-stranded detection oligonucleotides are used to achieve rapid diagnosis without complicated and time-consuming thiolated or other surface-modified probe preparation processes. To eliminate the use of sophisticated equipment for data analysis, the color variance for multiple detection results was simultaneously collected and concentrated on cellulose paper with the data readout transmitted for cloud computing via a smartphone. The results show that the 2.6 nM tuberculosis mycobacterium target sequences extracted from patients can easily be detected, and the turnaround time after the human DNA is extracted from clinical samples was approximately 1 h.

  18. Sensitivity of the Hydrogen Epoch of Reionization Array and its build-out stages to one-point statistics from redshifted 21 cm observations

    NASA Astrophysics Data System (ADS)

    Kittiwisit, Piyanat; Bowman, Judd D.; Jacobs, Daniel C.; Beardsley, Adam P.; Thyagarajan, Nithyanandan

    2018-03-01

    We present a baseline sensitivity analysis of the Hydrogen Epoch of Reionization Array (HERA) and its build-out stages to one-point statistics (variance, skewness, and kurtosis) of redshifted 21 cm intensity fluctuation from the Epoch of Reionization (EoR) based on realistic mock observations. By developing a full-sky 21 cm light-cone model, taking into account the proper field of view and frequency bandwidth, utilizing a realistic measurement scheme, and assuming perfect foreground removal, we show that HERA will be able to recover statistics of the sky model with high sensitivity by averaging over measurements from multiple fields. All build-out stages will be able to detect variance, while skewness and kurtosis should be detectable for HERA128 and larger. We identify sample variance as the limiting constraint of the measurements at the end of reionization. The sensitivity can also be further improved by performing frequency windowing. In addition, we find that strong sample variance fluctuation in the kurtosis measured from an individual field of observation indicates the presence of outlying cold or hot regions in the underlying fluctuations, a feature that can potentially be used as an EoR bubble indicator.
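
    The one-point statistics in question are simply the variance, skewness, and excess kurtosis of the brightness-temperature samples. A minimal sketch (function name illustrative):

```python
def one_point_statistics(xs):
    """Sample variance, skewness and excess kurtosis of a field's pixel
    values -- the one-point statistics considered for the redshifted
    21 cm intensity fluctuations."""
    n = len(xs)
    m = sum(xs) / n
    m2 = sum((x - m) ** 2 for x in xs) / n   # second central moment
    m3 = sum((x - m) ** 3 for x in xs) / n
    m4 = sum((x - m) ** 4 for x in xs) / n
    variance = m2
    skewness = m3 / m2 ** 1.5
    kurtosis = m4 / m2 ** 2 - 3.0            # excess (Gaussian -> 0)
    return variance, skewness, kurtosis
```

    Non-zero skewness and kurtosis flag non-Gaussian features such as the outlying cold or hot regions the record uses as an EoR bubble indicator.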

  19. Does education confer a culture of healthy behavior? Smoking and drinking patterns in Danish twins.

    PubMed

    Johnson, Wendy; Kyvik, Kirsten Ohm; Mortensen, Erik L; Skytthe, Axel; Batty, G David; Deary, Ian J

    2011-01-01

    More education is associated with healthier smoking and drinking behaviors. Most analyses of effects of education focus on mean levels. Few studies have compared variance in health-related behaviors at different levels of education or analyzed how education impacts underlying genetic and environmental sources of health-related behaviors. This study explored these influences. In a 2002 postal questionnaire, 21,522 members of the Danish Twin Registry, born during 1931-1982, reported smoking and drinking habits. The authors used quantitative genetic models to examine how these behaviors' genetic and environmental variances differed with level of education, adjusting for birth-year effects. As expected, more education was associated with less smoking, and average drinking levels were highest among the most educated. At 2 standard deviations above the mean educational level, variance in smoking and drinking was about one-third that among those at 2 standard deviations below, because fewer highly educated people reported high levels of smoking or drinking. Because shared environmental variance was particularly restricted, one explanation is that education created a culture that discouraged smoking and heavy drinking. Correlations between shared environmental influences on education and the health behaviors were substantial among the well-educated for smoking in both sexes and drinking in males, reinforcing this notion.

  20. The magnitude and colour of noise in genetic negative feedback systems

    PubMed Central

    Voliotis, Margaritis; Bowsher, Clive G.

    2012-01-01

    The comparative ability of transcriptional and small RNA-mediated negative feedback to control fluctuations or ‘noise’ in gene expression remains unexplored. Both autoregulatory mechanisms usually suppress the average (mean) of the protein level and its variability across cells. The variance of the number of proteins per molecule of mean expression is also typically reduced compared with the unregulated system, but is almost never below the value of one. This relative variance often substantially exceeds a recently obtained, theoretical lower limit for biochemical feedback systems. Adding the transcriptional or small RNA-mediated control has different effects. Transcriptional autorepression robustly reduces both the relative variance and persistence (lifetime) of fluctuations. Both benefits combine to reduce noise in downstream gene expression. Autorepression via small RNA can achieve more extreme noise reduction and typically has less effect on the mean expression level. However, it is often more costly to implement and is more sensitive to rate parameters. Theoretical lower limits on the relative variance are known to decrease slowly as a measure of the cost per molecule of mean expression increases. However, the proportional increase in cost to achieve substantial noise suppression can be different away from the optimal frontier—for transcriptional autorepression, it is frequently negligible. PMID:22581772

  1. Estimating synaptic parameters from mean, variance, and covariance in trains of synaptic responses.

    PubMed

    Scheuss, V; Neher, E

    2001-10-01

    Fluctuation analysis of synaptic transmission using the variance-mean approach has been restricted in the past to steady-state responses. Here we extend this method to short repetitive trains of synaptic responses, during which the response amplitudes are not stationary. We consider intervals between trains, long enough so that the system is in the same average state at the beginning of each train. This allows analysis of ensemble means and variances for each response in a train separately. Thus, modifications in synaptic efficacy during short-term plasticity can be attributed to changes in synaptic parameters. In addition, we provide practical guidelines for the analysis of the covariance between successive responses in trains. Explicit algorithms to estimate synaptic parameters are derived and tested by Monte Carlo simulations on the basis of a binomial model of synaptic transmission, allowing for quantal variability, heterogeneity in the release probability, and postsynaptic receptor saturation and desensitization. We find that the combined analysis of variance and covariance is advantageous in yielding an estimate for the number of release sites, which is independent of heterogeneity in the release probability under certain conditions. Furthermore, it allows one to calculate the apparent quantal size for each response in a sequence of stimuli.
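
    Under the plain binomial release model that this analysis starts from, with N sites, release probability p, and quantal size q, the ensemble mean is Npq and the variance Np(1-p)q², so the variance is a parabola in the mean: var = q·mean - mean²/N. A least-squares sketch of the variance-mean fit follows; it omits the quantal variability, heterogeneity, and covariance terms that the paper's full algorithms handle.

```python
def fit_variance_mean(points):
    """Least-squares fit of var = q*mean - mean**2/N to (mean, variance)
    pairs collected at different release probabilities, recovering the
    quantal size q and the number of release sites N (plain binomial
    model). Solves the 2x2 normal equations in the basis (mean, -mean**2)."""
    s11 = sum(m * m for m, _ in points)
    s12 = sum(-m ** 3 for m, _ in points)
    s22 = sum(m ** 4 for m, _ in points)
    t1 = sum(m * v for m, v in points)
    t2 = sum(-m * m * v for m, v in points)
    det = s11 * s22 - s12 * s12
    q = (t1 * s22 - t2 * s12) / det        # coefficient of mean
    inv_n = (s11 * t2 - s12 * t1) / det    # coefficient of -mean**2
    return q, 1.0 / inv_n
```

    In practice the extension in the record estimates these parameters from means, variances, and covariances of each response position within short trains, rather than from steady-state sweeps.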

  2. Evaluation of TRMM Ground-Validation Radar-Rain Errors Using Rain Gauge Measurements

    NASA Technical Reports Server (NTRS)

    Wang, Jianxin; Wolff, David B.

    2009-01-01

    Ground-validation (GV) radar-rain products are often utilized for validation of the Tropical Rainfall Measuring Mission (TRMM) space-based rain estimates, and hence, quantitative evaluation of the GV radar-rain product error characteristics is vital. This study uses quality-controlled gauge data to compare with TRMM GV radar rain rates in an effort to provide such error characteristics. The results show that significant differences of concurrent radar-gauge rain rates exist at various time scales ranging from 5 min to 1 day, despite lower overall long-term bias. However, the differences between the radar area-averaged rain rates and gauge point rain rates cannot be explained as due to radar error only. The error variance separation method is adapted to partition the variance of radar-gauge differences into the gauge area-point error variance and radar rain estimation error variance. The results provide relatively reliable quantitative uncertainty evaluation of TRMM GV radar rain estimates at various time scales, and are helpful to better understand the differences between measured radar and gauge rain rates. It is envisaged that this study will contribute to better utilization of GV radar rain products to validate versatile space-based rain estimates from TRMM, as well as the proposed Global Precipitation Measurement, and other satellites.
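
    The error variance separation step rests on the assumption that the radar estimation error and the gauge area-point sampling error are independent, so their variances add in the radar-gauge differences. A minimal sketch (function names illustrative; the area-point variance is taken as externally estimated, e.g. from collocated gauge pairs):

```python
def sample_variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

def separate_radar_error_variance(radar, gauge, area_point_var):
    """Error variance separation: var(radar - gauge) is assumed to be the
    sum of the radar estimation error variance and the gauge area-point
    error variance (independent errors), so the radar part follows by
    subtracting an independent estimate of the area-point variance."""
    diffs = [r - g for r, g in zip(radar, gauge)]
    return max(0.0, sample_variance(diffs) - area_point_var)
```

    The true rain field cancels in the differences, which is what makes the decomposition workable from paired observations alone.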

  3. Estimation of internal organ motion-induced variance in radiation dose in non-gated radiotherapy

    NASA Astrophysics Data System (ADS)

    Zhou, Sumin; Zhu, Xiaofeng; Zhang, Mutian; Zheng, Dandan; Lei, Yu; Li, Sicong; Bennion, Nathan; Verma, Vivek; Zhen, Weining; Enke, Charles

    2016-12-01

    In the delivery of non-gated radiotherapy (RT), owing to intra-fraction organ motion, a certain degree of RT dose uncertainty is present. Herein, we propose a novel mathematical algorithm to estimate the mean and variance of RT dose that is delivered without gating. These parameters are specific to individual internal organ motion, dependent on individual treatment plans, and relevant to the RT delivery process. This algorithm uses images from a patient’s 4D simulation study to model the actual patient internal organ motion during RT delivery. All necessary dose rate calculations are performed in fixed patient internal organ motion states. The analytical and deterministic formulae of mean and variance in dose from non-gated RT were derived directly via statistical averaging of the calculated dose rate over possible random internal organ motion initial phases, and did not require constructing relevant histograms. All results are expressed in dose rate Fourier transform coefficients for computational efficiency. Exact solutions are provided to simplified, yet still clinically relevant, cases. Results from a volumetric-modulated arc therapy (VMAT) patient case are also presented. The results obtained from our mathematical algorithm can aid clinical decisions by providing information regarding both mean and variance of radiation dose to non-gated patients prior to RT delivery.
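
    The central construction is that the delivered dose is a function of the random initial motion phase, and its mean and variance follow by averaging over that phase. A toy discrete-phase sketch follows; the paper works analytically with Fourier coefficients of the dose rate, whereas this brute-force version is illustrative only.

```python
def dose_stats(rate, beam_on):
    """Mean and variance of the delivered dose over a uniformly random
    starting phase, for a periodic dose rate sampled at len(rate) equally
    spaced motion phases; `beam_on` is the number of phase samples the
    beam stays on (it may exceed one motion period)."""
    n = len(rate)
    doses = []
    for start in range(n):  # each equally likely initial phase
        doses.append(sum(rate[(start + k) % n] for k in range(beam_on)))
    mean = sum(doses) / n
    var = sum((d - mean) ** 2 for d in doses) / n
    return mean, var
```

    When the beam-on time is an integer number of motion periods the phase dependence cancels and the variance vanishes, matching the intuition that long deliveries average out intra-fraction motion.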

  4. Information Extraction of High Resolution Remote Sensing Images Based on the Calculation of Optimal Segmentation Parameters

    PubMed Central

    Zhu, Hongchun; Cai, Lijie; Liu, Haiying; Huang, Wei

    2016-01-01

    Multi-scale image segmentation and the selection of optimal segmentation parameters are the key processes in the object-oriented information extraction of high-resolution remote sensing images. The accuracy of remote sensing special subject information depends on this extraction. On the basis of WorldView-2 high-resolution data and the optimal segmentation parameter method for object-oriented image segmentation and high-resolution image information extraction, the following processes were conducted in this study. Firstly, the best combination of bands and weights was determined for the information extraction of the high-resolution remote sensing image. An improved weighted mean-variance method was proposed and used to calculate the optimal segmentation scale. Thereafter, the best shape factor and compactness factor parameters were computed with the use of control variables and the combination of the heterogeneity and homogeneity indexes. Different types of image segmentation parameters were obtained according to the surface features. The high-resolution remote sensing images were multi-scale segmented with the optimal segmentation parameters. A hierarchical network structure was established by setting the information extraction rules to achieve object-oriented information extraction. This study presents an effective and practical method that can explain expert input judgment by reproducible quantitative measurements. Furthermore, the results of this procedure may be incorporated into a classification scheme. PMID:27362762

  5. Comparison of antimalarial activity of Artemisia turanica extract with current drugs in vivo.

    PubMed

    Taherkhani, Mahboubeh; Rustaiyan, Abdolhossein; Nahrevanian, Hossein; Naeimi, Sabah; Taherkhani, Tofigh

    2013-03-01

    The purpose of this study was to compare the antimalarial activity of Artemisia turanica Krasch, an Iranian flora, with current antimalarial drugs against Plasmodium berghei in vivo in mice. Air-dried aerial parts of the Iranian flora A. turanica were collected from Khorasan, northeastern Iran, extracted with Et2O/MeOH/Petrol and defatted. Toxicity of the herbal extracts was assessed in male NMRI mice, and their antimalarial efficacy was compared with that of antimalarial drugs [artemether, chloroquine and sulfadoxine-pyrimethamine (Fansidar)] in P. berghei-infected animals. All the groups were investigated for parasitaemia, body weight, hepatomegaly, splenomegaly and anemia. The significance of differences was determined by Analysis of Variance (ANOVA) and Student's t-test using GraphPad Prism software. The inhibitory effect of A. turanica extract on the early decline of P. berghei parasitaemia highlights its antimalarial activity; however, this effect can no longer be observed in the late infection. This may be due to metabolism of the A. turanica crude extract by the mice and the reduction of its concentration in the body. The crude extract of A. turanica showed antisymptomatic effects by stabilization of body, liver and spleen weights. This study confirmed the antimalarial effects of A. turanica extracts against murine malaria in vivo during early infection; however, the medication offers further benefits for pathophysiological symptoms.

  6. Information Extraction of High Resolution Remote Sensing Images Based on the Calculation of Optimal Segmentation Parameters.

    PubMed

    Zhu, Hongchun; Cai, Lijie; Liu, Haiying; Huang, Wei

    2016-01-01

    Multi-scale image segmentation and the selection of optimal segmentation parameters are the key processes in the object-oriented information extraction of high-resolution remote sensing images. The accuracy of remote sensing thematic information depends on this extraction. On the basis of WorldView-2 high-resolution data and the optimal-segmentation-parameters method of object-oriented image segmentation and high-resolution image information extraction, the following processes were conducted in this study. Firstly, the best combination of the bands and weights was determined for the information extraction of the high-resolution remote sensing image. An improved weighted mean-variance method was proposed and used to calculate the optimal segmentation scale. Thereafter, the best shape factor and compactness factor parameters were computed with the use of the control variables and the combination of the heterogeneity and homogeneity indexes. Different types of image segmentation parameters were obtained according to the surface features. The high-resolution remote sensing images were multi-scale segmented with the optimal segmentation parameters. A hierarchical network structure was established by setting the information extraction rules to achieve object-oriented information extraction. This study presents an effective and practical method that can explain expert input judgment by reproducible quantitative measurements. Furthermore, the results of this procedure may be incorporated into a classification scheme.

  7. Estimating time-varying conditional correlations between stock and foreign exchange markets

    NASA Astrophysics Data System (ADS)

    Tastan, Hüseyin

    2006-02-01

    This study explores the dynamic interaction between stock market returns and changes in nominal exchange rates. Many financial variables are known to exhibit fat tails and an autoregressive variance structure. It is well known that unconditional covariance and correlation coefficients also vary significantly over time, and a multivariate generalized autoregressive conditional heteroskedasticity (MGARCH) model is able to capture the time-varying variance-covariance matrix for stock market returns and changes in exchange rates. The model is applied to daily Euro-Dollar exchange rates and two stock market indexes from the US economy: the Dow Jones Industrial Average and the S&P 500 Index. The news impact surfaces are also drawn based on the model estimates to see the effects of idiosyncratic shocks in the respective markets.
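
    The time-varying covariance idea can be illustrated with a much simpler estimator than the full MGARCH model used in the study: an exponentially weighted (RiskMetrics-style) recursion for variances and covariance. This is a hedged sketch, not the paper's model; the series, the true correlation of 0.6, and the decay lam = 0.94 are illustrative assumptions.

```python
import numpy as np

def ewma_corr(x, y, lam=0.94):
    """Exponentially weighted (RiskMetrics-style) estimate of a
    time-varying correlation between two return series. The recursion
    is seeded with full-sample moments for simplicity."""
    vx, vy = np.var(x), np.var(y)
    cxy = np.cov(x, y)[0, 1]
    rho = np.empty(len(x))
    for t in range(len(x)):
        vx = lam * vx + (1 - lam) * x[t] ** 2
        vy = lam * vy + (1 - lam) * y[t] ** 2
        cxy = lam * cxy + (1 - lam) * x[t] * y[t]
        rho[t] = cxy / np.sqrt(vx * vy)
    return rho

rng = np.random.default_rng(0)
z = rng.standard_normal((2, 1000))
stock = z[0]                      # synthetic stock-index returns
fx = 0.6 * z[0] + 0.8 * z[1]      # synthetic FX changes, true corr 0.6
rho = ewma_corr(stock, fx)
```

    Unlike a fitted MGARCH model, the decay parameter here is fixed rather than estimated, which is what makes the sketch so short.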

  8. Angle-of-arrival variance of waves and rays in strong atmospheric scattering: split-step simulation results

    NASA Astrophysics Data System (ADS)

    Voelz, David; Wijerathna, Erandi; Xiao, Xifeng; Muschinski, Andreas

    2017-09-01

    The analysis of optical propagation through both deterministic and stochastic refractive-index fields may be substantially simplified if diffraction effects can be neglected. With regard to simplification, it is known that certain geometrical-optics predictions often agree well with field observations but it is not always clear why this is so. Here, a new investigation of this issue is presented involving wave optics and geometrical (ray) optics computer simulations of a beam of visible light propagating through fully turbulent, homogeneous and isotropic refractive-index fields. We compare the computationally simulated, aperture-averaged angle-of-arrival variances (for aperture diameters ranging from 0.5 to 13 Fresnel lengths) with theoretical predictions based on the Rytov theory.

  9. In vivo recovery of factor VIII and factor IX: intra- and interindividual variance in a clinical setting.

    PubMed

    Björkman, S; Folkesson, A; Berntorp, E

    2007-01-01

    In vivo recovery (IVR) is traditionally used as a parameter to characterize the pharmacokinetic properties of coagulation factors. It has also been suggested that dosing of factor VIII (FVIII) and factor IX (FIX) can be adjusted according to the need of the individual patient, based on an individually determined IVR value. This approach, however, requires that the individual IVR value is more reliably representative for the patient than the mean value in the population, i.e. that there is less variance within than between the individuals. The aim of this investigation was to compare intra- and interindividual variance in IVR (as U dL⁻¹ per U kg⁻¹) for FVIII and plasma-derived FIX in a cohort of non-bleeding patients with haemophilia. The data were collected retrospectively from six clinical studies, yielding 297 IVR determinations in 50 patients with haemophilia A and 93 determinations in 13 patients with haemophilia B. For FVIII, the mean variance within patients exceeded the between-patient variance. Thus, an individually determined IVR value is apparently no more informative than an average, or population, value for the dosing of FVIII. There was no apparent relationship between IVR and age of the patient (1.5-67 years). For FIX, the mean variance within patients was lower than the between-patient variance, and there was a significant positive relationship between IVR and age (13-69 years). From these data, it seems probable that using an individual IVR confers little advantage in comparison to using an age-specific population mean value. Dose tailoring of coagulation factor treatment has been applied successfully after determination of the entire single-dose curve of FVIII:C or FIX:C in the patient and calculation of the relevant pharmacokinetic parameters. However, the findings presented here do not support the assumption that dosing of FVIII or FIX can be individualized on the basis of a clinically determined IVR value.
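
    The study's core comparison, within- versus between-patient variance of IVR, can be sketched with a crude variance-components computation. The numbers below are synthetic stand-ins for the FVIII-like case (patient means barely differ while repeat measurements scatter widely), not the study's data, and the between-patient term here still contains sampling noise that a proper random-effects model would remove.

```python
import numpy as np

def variance_components(groups):
    """Crude within- vs between-subject variance comparison for
    repeated measurements: `groups` is a list of per-patient arrays.
    (The between term still includes sampling noise of the means.)"""
    within = float(np.mean([np.var(g, ddof=1) for g in groups]))
    means = np.array([np.mean(g) for g in groups])
    between = float(np.var(means, ddof=1))
    return within, between

# Hypothetical scenario: 50 patients, 6 repeat IVR measurements each.
rng = np.random.default_rng(1)
patients = [rng.normal(2.0 + 0.05 * rng.standard_normal(), 0.5, size=6)
            for _ in range(50)]
within, between = variance_components(patients)   # here within > between
```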

  10. THE LONGEST TIMESCALE X-RAY VARIABILITY REVEALS EVIDENCE FOR ACTIVE GALACTIC NUCLEI IN THE HIGH ACCRETION STATE

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang Youhong, E-mail: youhong.zhang@mail.tsinghua.edu.cn

    2011-01-01

    The All Sky Monitor (ASM) on board the Rossi X-ray Timing Explorer has continuously monitored a number of active galactic nuclei (AGNs) with similar sampling rates for 14 years, from 1996 January to 2009 December. Utilizing the archival ASM data of 27 AGNs, we calculate the normalized excess variances of the 300-day binned X-ray light curves on the longest timescale (between 300 days and 14 years) explored so far. The observed variance appears to be independent of AGN black-hole mass and bolometric luminosity. According to the scaling relation of black-hole mass (and bolometric luminosity) from galactic black hole X-ray binaries (GBHs) to AGNs, the break timescales that correspond to the break frequencies detected in the power spectral density (PSD) of our AGNs are larger than the binsize (300 days) of the ASM light curves. As a result, the singly broken power-law (soft-state) PSD predicts the variance to be independent of mass and luminosity. Nevertheless, the doubly broken power-law (hard-state) PSD predicts, with the widely accepted ratio of the two break frequencies, that the variance increases with increasing mass and decreases with increasing luminosity. Therefore, the independence of the observed variance on mass and luminosity suggests that AGNs should have soft-state PSDs. Taking into account the scaling of the break timescale with mass and luminosity synchronously, the observed variances are also more consistent with the soft-state than the hard-state PSD predictions. With the averaged variance of AGNs and the soft-state PSD assumption, we obtain a universal PSD amplitude of 0.030 ± 0.022. By analogy with the GBH PSDs in the high/soft state, the longest timescale variability supports the standpoint that AGNs are scaled-up GBHs in the high accretion state, as already implied by the direct PSD analysis.
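
    The normalized excess variance used here is a standard estimator: the sample variance of the binned light curve minus the mean squared measurement error, normalized by the squared mean flux. A minimal sketch on synthetic data (the 20% intrinsic rms and error level are assumed illustrative values, not the ASM numbers):

```python
import numpy as np

def normalized_excess_variance(flux, err):
    """Normalized excess variance: (sample variance minus the mean
    squared measurement error) over the squared mean flux."""
    s2 = np.var(flux, ddof=1)
    return float((s2 - np.mean(err ** 2)) / np.mean(flux) ** 2)

rng = np.random.default_rng(2)
true_flux = 10.0 * (1.0 + 0.2 * rng.standard_normal(500))  # 20% intrinsic rms
err = np.full(500, 0.5)                                    # per-bin errors
flux = true_flux + err * rng.standard_normal(500)
nxv = normalized_excess_variance(flux, err)   # recovers ~0.2**2 = 0.04
```

    Subtracting the mean squared error is what makes the statistic an estimate of *intrinsic* variability rather than measurement scatter.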

  11. Hierarchical Bayesian modeling of heterogeneous variances in average daily weight gain of commercial feedlot cattle.

    PubMed

    Cernicchiaro, N; Renter, D G; Xiang, S; White, B J; Bello, N M

    2013-06-01

    Variability in ADG of feedlot cattle can affect profits, thus making overall returns more unstable. Hence, knowledge of the factors that contribute to heterogeneity of variances in animal performance can help feedlot managers evaluate risks and minimize profit volatility when making managerial and economic decisions in commercial feedlots. The objectives of the present study were to evaluate heteroskedasticity, defined as heterogeneity of variances, in ADG of cohorts of commercial feedlot cattle, and to identify cattle demographic factors at feedlot arrival as potential sources of variance heterogeneity, accounting for cohort- and feedlot-level information in the data structure. An operational dataset compiled from 24,050 cohorts from 25 U.S. commercial feedlots in 2005 and 2006 was used for this study. Inference was based on a hierarchical Bayesian model implemented with Markov chain Monte Carlo, whereby cohorts were modeled at the residual level and feedlot-year clusters were modeled as random effects. Forward model selection based on deviance information criteria was used to screen potentially important explanatory variables for heteroskedasticity at cohort- and feedlot-year levels. The Bayesian modeling framework was preferred as it naturally accommodates the inherently hierarchical structure of feedlot data whereby cohorts are nested within feedlot-year clusters. Evidence for heterogeneity of variance components of ADG was substantial and primarily concentrated at the cohort level. Feedlot-year specific effects were, by far, the greatest contributors to ADG heteroskedasticity among cohorts, with an estimated ∼12-fold change in dispersion between the most and least extreme feedlot-year clusters. In addition, identifiable demographic factors associated with greater heterogeneity of cohort-level variance included smaller cohort sizes, fewer days on feed, and greater arrival BW, as well as feedlot arrival during summer months. These results support that heterogeneity of variances in ADG is prevalent in feedlot performance and indicate potential sources of heteroskedasticity. Further investigation of factors associated with heteroskedasticity in feedlot performance is warranted to increase consistency and uniformity in commercial beef cattle production and subsequent profitability.

  12. Beyond long memory in heart rate variability: An approach based on fractionally integrated autoregressive moving average time series models with conditional heteroscedasticity

    NASA Astrophysics Data System (ADS)

    Leite, Argentina; Paula Rocha, Ana; Eduarda Silva, Maria

    2013-06-01

    Heart Rate Variability (HRV) series exhibit long memory and time-varying conditional variance. This work considers the Fractionally Integrated AutoRegressive Moving Average (ARFIMA) models with Generalized AutoRegressive Conditional Heteroscedastic (GARCH) errors. ARFIMA-GARCH models may be used to capture and remove long memory and estimate the conditional volatility in 24 h HRV recordings. The ARFIMA-GARCH approach is applied to fifteen long term HRV series available at Physionet, leading to the discrimination among normal individuals, heart failure patients, and patients with atrial fibrillation.
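
    The GARCH part of the ARFIMA-GARCH model can be sketched as the usual conditional-variance recursion. This toy filter assumes known parameters, omits the ARFIMA long-memory component and all estimation, and is an illustration of the mechanism rather than the authors' pipeline; the parameter values are made up.

```python
import numpy as np

def garch11_variance(r, omega, alpha, beta):
    """GARCH(1,1) conditional-variance filter:
    h[t] = omega + alpha * r[t-1]**2 + beta * h[t-1],
    started from the unconditional variance omega / (1 - alpha - beta)."""
    h = np.empty(len(r))
    h[0] = omega / (1.0 - alpha - beta)
    for t in range(1, len(r)):
        h[t] = omega + alpha * r[t - 1] ** 2 + beta * h[t - 1]
    return h

# Simulate a GARCH(1,1) series with known parameters, then filter it back.
rng = np.random.default_rng(3)
omega, alpha, beta = 0.1, 0.1, 0.8    # unconditional variance = 1.0
r = np.empty(2000)
h = omega / (1.0 - alpha - beta)
for t in range(len(r)):
    r[t] = np.sqrt(h) * rng.standard_normal()
    h = omega + alpha * r[t] ** 2 + beta * h
h_filt = garch11_variance(r, omega, alpha, beta)
```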

  13. Comparison between 1-minute and 15-minute averages of turbulence parameters

    NASA Technical Reports Server (NTRS)

    Noble, John M.

    1993-01-01

    Sonic anemometers are good instruments for measuring temperature and wind speed and are fast enough to calculate the temperature and wind structure parameters used to calculate the variance in the acoustic index of refraction. However, the turbulence parameters are typically 15-minute averaged point measurements. There are several problems associated with making point measurements and using them to represent a turbulence field. Some of the sonic anemometer data analyzed from the Joint Acoustic Propagation Experiment (JAPE) conducted during July 1991 at DIRT Site located at White Sands Missile Range, New Mexico, are examined.

  14. Comparison of ASE and SFE with Soxhlet, Sonication, and Methanolic Saponification Extractions for the Determination of Organic Micropollutants in Marine Particulate Matter.

    PubMed

    Heemken, O P; Theobald, N; Wenclawiak, B W

    1997-06-01

    The methods of accelerated solvent extraction (ASE) and supercritical fluid extraction (SFE) of polycyclic aromatic hydrocarbons (PAHs), aliphatic hydrocarbons, and chlorinated hydrocarbons from marine samples were investigated. The results of extractions of a certified sediment and four samples of suspended particulate matter (SPM) were compared to classical Soxhlet (SOX), ultrasonication (USE), and methanolic saponification extraction (MSE) methods. The recovery data, including precision and systematic deviations of each method, were evaluated statistically. It was found that recoveries and precision of ASE and SFE compared well with the other methods investigated. Using SFE, the average recoveries of PAHs in three different samples ranged from 96 to 105%, for ASE the recoveries were in the range of 97-108% compared to the reference methods. Compared to the certified values of sediment HS-6, the average recoveries of SFE and ASE were 87 and 88%, most compounds being within the limits of confidence. Also, for alkanes the average recoveries by SFE and ASE were equal to the results obtained by SOX, USE, and MSE. In the case of SFE, the recoveries were in the range 93-115%, and ASE achieved recoveries of 94-107% as compared to the other methods. For ASE and SFE, the influence of water on the extraction efficiency was examined. While the natural water content of the SPM sample (56 wt %) led to insufficient recoveries in ASE and SFE, quantitative extractions were achieved in SFE after addition of anhydrous sodium sulfate to the sample. Finally, ASE was applied to SPM-loaded filter candles whereby a mixture of n-hexane/acetone as extraction solvent allowed the simultaneous determination of PAHs, alkanes, and chlorinated hydrocarbons.

  15. Effect of electron beam cooling on transversal and longitudinal emittance of an external proton beam

    NASA Astrophysics Data System (ADS)

    Kilian, K.; Machner, H.; Magiera, A.; Prasuhn, D.; von Rossen, P.; Siudak, R.; Stein, H. J.; Stockhorst, H.

    2018-02-01

    Benefits of electron cooling to the quality of extracted ion beams from storage rings are discussed. The transversal emittances of an external proton beam with and without electron cooling at injection energy are measured with the GEM detector assembly. While the horizontal emittance remains essentially unchanged, the vertical emittance shrinks through the cooling process. The longitudinal momentum variance is also reduced by cooling.

  16. Lipid emulsion improves survival in animal models of local anesthetic toxicity: a meta-analysis.

    PubMed

    Fettiplace, Michael R; McCabe, Daniel J

    2017-08-01

    The Lipid Emulsion Therapy workgroup, organized by the American Academy of Clinical Toxicology, recently conducted a systematic review, which subjectively evaluated lipid emulsion as a treatment for local anesthetic toxicity. We re-extracted data and conducted a meta-analysis of survival in animal models. We extracted survival data from 26 publications and conducted a random-effect meta-analysis based on odds ratio weighted by inverse variance. We assessed the benefit of lipid emulsion as an independent variable in resuscitative models (16 studies). We measured Cochran's Q for heterogeneity and I² to determine variance contributed by heterogeneity. Finally, we conducted a funnel plot analysis and Egger's test to assess for publication bias in studies. Lipid emulsion reduced the odds of death in resuscitative models (OR = 0.24; 95% CI: 0.1-0.56, p = .0012). Heterogeneity analysis indicated a homogenous distribution. Funnel plot analysis did not indicate publication bias in experimental models. Meta-analysis of animal data supports the use of lipid emulsion (in combination with other resuscitative measures) for the treatment of local anesthetic toxicity, specifically from bupivacaine. Our conclusion differed from the original review. Analysis of outliers reinforced the need for good life support measures (securement of airway and chest compressions) along with prompt treatment with lipid.
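
    The pooling described, inverse-variance weighting of log odds ratios with Cochran's Q and I², plus a DerSimonian-Laird random-effects step, can be sketched in a few lines. The five study effect sizes below are made-up illustrative numbers, not the 16 resuscitative studies analyzed in the paper.

```python
import numpy as np

def meta_analysis(log_or, se):
    """Inverse-variance pooling of log odds ratios with Cochran's Q,
    I^2, and a DerSimonian-Laird random-effects estimate."""
    w = 1.0 / se ** 2
    fixed = np.sum(w * log_or) / np.sum(w)
    q = float(np.sum(w * (log_or - fixed) ** 2))
    k = len(log_or)
    i2 = max(0.0, (q - (k - 1)) / q) if q > 0 else 0.0
    tau2 = max(0.0, (q - (k - 1)) /
               (np.sum(w) - np.sum(w ** 2) / np.sum(w)))
    w_re = 1.0 / (se ** 2 + tau2)
    random_eff = np.sum(w_re * log_or) / np.sum(w_re)
    return float(np.exp(fixed)), float(np.exp(random_eff)), q, i2

# Made-up study-level odds ratios of death (treatment vs control).
log_or = np.log(np.array([0.20, 0.30, 0.15, 0.40, 0.25]))
se = np.array([0.5, 0.6, 0.7, 0.5, 0.4])
or_fe, or_re, q, i2 = meta_analysis(log_or, se)
```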

  17. Assessing the Structure of the Ways of Coping Questionnaire in Fibromyalgia Patients Using Common Factor Analytic Approaches.

    PubMed

    Van Liew, Charles; Santoro, Maya S; Edwards, Larissa; Kang, Jeremy; Cronan, Terry A

    2016-01-01

    The Ways of Coping Questionnaire (WCQ) is a widely used measure of coping processes. Despite its use in a variety of populations, there has been concern about the stability and structure of the WCQ across different populations. This study examines the factor structure of the WCQ in a large sample of individuals diagnosed with fibromyalgia. The participants were 501 adults (478 women) who were part of a larger intervention study. Participants completed the WCQ at their 6-month assessment. Foundational factoring approaches were performed on the data (i.e., maximum likelihood factoring [MLF], iterative principal factoring [IPF], principal axis factoring [PAF], and principal components factoring [PCF]) with oblique oblimin rotation. Various criteria were evaluated to determine the number of factors to be extracted, including Kaiser's rule, Scree plot visual analysis, 5 and 10% unique variance explained, 70 and 80% communal variance explained, and Horn's parallel analysis (PA). It was concluded that the 4-factor PAF solution was the preferable solution, based on PA extraction and the fact that this solution minimizes nonvocality and multivocality. The present study highlights the need for more research focused on defining the limits of the WCQ and the degree to which population-specific and context-specific subscale adjustments are needed.
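
    Horn's parallel analysis, the retention criterion the authors favored, compares observed correlation-matrix eigenvalues with those of random normal data of the same dimensions. A minimal sketch on synthetic two-factor data; the 95th-percentile threshold, block loading structure, and sample size are illustrative choices, not the WCQ data.

```python
import numpy as np

def parallel_analysis(data, n_sims=200, pct=95, seed=0):
    """Horn's parallel analysis: retain components whose observed
    correlation-matrix eigenvalues exceed the chosen percentile of
    eigenvalues obtained from random normal data of the same shape."""
    n, p = data.shape
    obs = np.sort(np.linalg.eigvalsh(np.corrcoef(data, rowvar=False)))[::-1]
    rng = np.random.default_rng(seed)
    sims = np.empty((n_sims, p))
    for i in range(n_sims):
        r = rng.standard_normal((n, p))
        sims[i] = np.sort(np.linalg.eigvalsh(np.corrcoef(r, rowvar=False)))[::-1]
    return int(np.sum(obs > np.percentile(sims, pct, axis=0)))

# Synthetic data: two block factors across 10 variables, 500 cases.
rng = np.random.default_rng(4)
f = rng.standard_normal((500, 2))
load = np.zeros((2, 10))
load[0, :5] = rng.uniform(0.5, 0.9, 5)   # factor 1 loads on variables 0-4
load[1, 5:] = rng.uniform(0.5, 0.9, 5)   # factor 2 loads on variables 5-9
x = f @ load + 0.7 * rng.standard_normal((500, 10))
n_factors = parallel_analysis(x)   # expected to retain the 2 real factors
```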

  18. Assessing the Structure of the Ways of Coping Questionnaire in Fibromyalgia Patients Using Common Factor Analytic Approaches

    PubMed Central

    Edwards, Larissa; Kang, Jeremy

    2016-01-01

    The Ways of Coping Questionnaire (WCQ) is a widely used measure of coping processes. Despite its use in a variety of populations, there has been concern about the stability and structure of the WCQ across different populations. This study examines the factor structure of the WCQ in a large sample of individuals diagnosed with fibromyalgia. The participants were 501 adults (478 women) who were part of a larger intervention study. Participants completed the WCQ at their 6-month assessment. Foundational factoring approaches were performed on the data (i.e., maximum likelihood factoring [MLF], iterative principal factoring [IPF], principal axis factoring [PAF], and principal components factoring [PCF]) with oblique oblimin rotation. Various criteria were evaluated to determine the number of factors to be extracted, including Kaiser's rule, Scree plot visual analysis, 5 and 10% unique variance explained, 70 and 80% communal variance explained, and Horn's parallel analysis (PA). It was concluded that the 4-factor PAF solution was the preferable solution, based on PA extraction and the fact that this solution minimizes nonvocality and multivocality. The present study highlights the need for more research focused on defining the limits of the WCQ and the degree to which population-specific and context-specific subscale adjustments are needed. PMID:28070160

  19. Classification of radiological errors in chest radiographs, using support vector machine on the spatial frequency features of false-negative and false-positive regions

    NASA Astrophysics Data System (ADS)

    Pietrzyk, Mariusz W.; Donovan, Tim; Brennan, Patrick C.; Dix, Alan; Manning, David J.

    2011-03-01

    Aim: To optimize automated classification of radiological errors during lung nodule detection from chest radiographs (CxR) using a support vector machine (SVM) run on the spatial frequency features extracted from the local background of selected regions. Background: The majority of the unreported pulmonary nodules are visually detected but not recognized, as shown by the prolonged dwell time values at false-negative regions. Similarly, overestimated nodule locations capture substantial amounts of foveal attention. Spatial frequency properties of selected local backgrounds are correlated with human observer responses either in terms of accuracy in indicating abnormality position or in the precision of visually sampling the medical images. Methods: Seven radiologists participated in the eye tracking experiments conducted under conditions of pulmonary nodule detection from a set of 20 postero-anterior CxR. The most dwelled locations were identified and subjected to spatial frequency (SF) analysis. The image-based features of selected ROI were extracted with the un-decimated Wavelet Packet Transform. An analysis of variance was run to select SF features and an SVM schema was implemented to classify False-Negative and False-Positive from all ROI. Results: A relatively high overall accuracy was obtained for each individually developed Wavelet-SVM algorithm, with over 90% average correct ratio for error recognition from all prolonged dwell locations. Conclusion: The preliminary results show that combined eye-tracking and image-based features can be used for automated detection of radiological error with SVM. The work is still in progress and not all analytical procedures have been completed, which might have an effect on the specificity of the algorithm.
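
    The feature-screening step, an analysis of variance per spatial-frequency feature before SVM classification, can be sketched with a plain one-way F statistic. The two-class synthetic data below stands in for the false-negative/false-positive regions; the SVM itself and the wavelet features are omitted.

```python
import numpy as np

def anova_f(feature, labels):
    """One-way ANOVA F statistic for one feature across classes,
    usable as a screening score before classifier training."""
    classes = np.unique(labels)
    grand = feature.mean()
    ss_b = sum(np.sum(labels == c) * (feature[labels == c].mean() - grand) ** 2
               for c in classes)
    ss_w = sum(np.sum((feature[labels == c] - feature[labels == c].mean()) ** 2)
               for c in classes)
    df_b, df_w = len(classes) - 1, len(feature) - len(classes)
    return float((ss_b / df_b) / (ss_w / df_w))

rng = np.random.default_rng(5)
labels = np.repeat([0, 1], 100)                   # two classes, e.g. FN vs FP
informative = rng.standard_normal(200) + labels   # class means differ by 1 SD
noise = rng.standard_normal(200)                  # no class difference
f_informative = anova_f(informative, labels)
f_noise = anova_f(noise, labels)
```

    Features with a large F would be kept for the classifier; features that score like `noise` would be discarded.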

  20. An automated approach towards detecting complex behaviours in deep brain oscillations.

    PubMed

    Mace, Michael; Yousif, Nada; Naushahi, Mohammad; Abdullah-Al-Mamun, Khondaker; Wang, Shouyan; Nandi, Dipankar; Vaidyanathan, Ravi

    2014-03-15

    Extracting event-related potentials (ERPs) from neurological rhythms is of fundamental importance in neuroscience research. Standard ERP techniques typically require the associated ERP waveform to have low variance, be shape and latency invariant and require many repeated trials. Additionally, the non-ERP part of the signal needs to be sampled from an uncorrelated Gaussian process. This limits methods of analysis to quantifying simple behaviours and movements only when multi-trial data-sets are available. We introduce a method for automatically detecting events associated with complex or large-scale behaviours, where the ERP need not conform to the aforementioned requirements. The algorithm is based on the calculation of a detection contour and adaptive threshold. These are combined using logical operations to produce a binary signal indicating the presence (or absence) of an event with the associated detection parameters tuned using a multi-objective genetic algorithm. To validate the proposed methodology, deep brain signals were recorded from implanted electrodes in patients with Parkinson's disease as they participated in a large movement-based behavioural paradigm. The experiment involved bilateral recordings of local field potentials from the sub-thalamic nucleus (STN) and pedunculopontine nucleus (PPN) during an orientation task. After tuning, the algorithm is able to extract events achieving training set sensitivities and specificities of [87.5 ± 6.5, 76.7 ± 12.8, 90.0 ± 4.1] and [92.6 ± 6.3, 86.0 ± 9.0, 29.8 ± 12.3] (mean ± 1 std) for the three subjects, averaged across the four neural sites. Furthermore, the methodology has the potential for utility in real-time applications as only a single-trial ERP is required. Copyright © 2013 Elsevier B.V. All rights reserved.
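
    The detection-contour-plus-adaptive-threshold idea can be illustrated with a toy version: a moving-average contour of the rectified signal compared against a median + k·MAD threshold, combined into a binary event indicator. The window length, k = 5, and the injected event are illustrative assumptions, and the paper's multi-objective genetic-algorithm tuning is omitted.

```python
import numpy as np

def detect_events(signal, win=50, k=5.0):
    """Toy detector: smooth the rectified signal into a detection
    contour, then threshold it adaptively at median + k * MAD to get
    a binary presence/absence indicator."""
    contour = np.convolve(np.abs(signal), np.ones(win) / win, mode="same")
    med = np.median(contour)
    mad = np.median(np.abs(contour - med))
    return contour > med + k * mad

rng = np.random.default_rng(6)
x = rng.standard_normal(2000)
x[800:900] += 5.0 * np.sin(np.linspace(0.0, 10.0, 100))  # injected "event"
hits = detect_events(x)   # True inside the event, False over baseline
```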

  1. Geomagnetic field model for the last 5 My: time-averaged field and secular variation

    NASA Astrophysics Data System (ADS)

    Hatakeyama, Tadahiro; Kono, Masaru

    2002-11-01

    Structure of the geomagnetic field has been studied by using the paleomagnetic direction data of the last 5 million years obtained from lava flows. The method we used is the nonlinear version, similar to the works of Gubbins and Kelly [Nature 365 (1993) 829], Johnson and Constable [Geophys. J. Int. 122 (1995) 488; Geophys. J. Int. 131 (1997) 643], and Kelly and Gubbins [Geophys. J. Int. 128 (1997) 315], but we determined the time-averaged field (TAF) and the paleosecular variation (PSV) simultaneously. As pointed out in our previous work [Earth Planet. Space 53 (2001) 31], the observed mean field directions are affected by the fluctuation of the field, as described by the PSV model. This effect is not excessively large, but cannot be neglected while considering the mean field. We propose that the new TAF+PSV model is a better representation of the ancient magnetic field, since both the average and fluctuation of the field are consistently explained. In the inversion procedure, we used direction cosines instead of inclinations and declinations, as the latter quantities show singularity or unstable behavior at high latitudes. The obtained model gives a reasonably good fit to the observed means and variances of the direction cosines. In the TAF model, the geocentric axial dipole term (g₁⁰) is the dominant component; it is much more pronounced than in the present magnetic field. The equatorial dipole component is quite small after averaging over time. The model shows a very smooth spatial variation; the nondipole components also seem to be averaged out quite effectively over time. Among the other coefficients, the geocentric axial quadrupole term (g₂⁰) is significantly larger than the other components. On the other hand, the axial octupole term (g₃⁰) is much smaller than in a TAF model excluding the PSV effect. It is likely that the effect of PSV is most clearly seen in this term, which is consistent with the conclusion reached in our previous work. The PSV model shows a large variance of the (2,1) component, which is in good agreement with previous PSV models obtained by forward approaches. It is also indicated that the variance of the axial dipole term is very small. This is in conflict with studies based on paleointensity data, but we show that this conclusion is not inconsistent with the paleointensity data, because a substantial part of the apparent scatter in paleointensities may be attributable to effects other than the fluctuations in g₁⁰ itself.

  2. Mass fluctuation kinetics: Capturing stochastic effects in systems of chemical reactions through coupled mean-variance computations

    NASA Astrophysics Data System (ADS)

    Gómez-Uribe, Carlos A.; Verghese, George C.

    2007-01-01

    The intrinsic stochastic effects in chemical reactions, and particularly in biochemical networks, may result in behaviors significantly different from those predicted by deterministic mass action kinetics (MAK). Analyzing stochastic effects, however, is often computationally taxing and complex. The authors describe here the derivation and application of what they term the mass fluctuation kinetics (MFK), a set of deterministic equations to track the means, variances, and covariances of the concentrations of the chemical species in the system. These equations are obtained by approximating the dynamics of the first and second moments of the chemical master equation. Apart from needing knowledge of the system volume, the MFK description requires only the same information used to specify the MAK model, and is not significantly harder to write down or apply. When the effects of fluctuations are negligible, the MFK description typically reduces to MAK. The MFK equations are capable of describing the average behavior of the network substantially better than MAK, because they incorporate the effects of fluctuations on the evolution of the means. They also account for the effects of the means on the evolution of the variances and covariances, to produce quite accurate uncertainty bands around the average behavior. The MFK computations, although approximate, are significantly faster than Monte Carlo methods for computing first and second moments in systems of chemical reactions. They may therefore be used, perhaps along with a few Monte Carlo simulations of sample state trajectories, to efficiently provide a detailed picture of the behavior of a chemical system.
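
    For the simplest possible network, constant production and first-order degradation (0 → X → 0), the coupled mean-variance equations close exactly and can be integrated directly. This sketch only shows the structure of such moment equations; MFK's actual contribution is the approximate closure for nonlinear systems, which this linear example does not need. The rate values are arbitrary.

```python
# Coupled mean-variance (moment) equations for constant production k
# and first-order degradation g. For this linear system they are exact:
#   dm/dt = k - g*m
#   dv/dt = k + g*m - 2*g*v
k, g = 20.0, 1.0
m, v, dt = 0.0, 0.0, 1e-3
for _ in range(int(20.0 / dt)):       # forward-Euler to steady state
    dm = k - g * m
    dv = k + g * m - 2.0 * g * v
    m += dt * dm
    v += dt * dv
# Steady state: m = v = k/g = 20, the Poisson result expected for
# this birth-death process.
```

    For nonlinear propensities the variance would feed back into the mean equation, which is exactly the coupling the MFK approximation captures and plain mass action kinetics ignores.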

  3. A weighted least squares estimation of the polynomial regression model on paddy production in the area of Kedah and Perlis

    NASA Astrophysics Data System (ADS)

    Musa, Rosliza; Ali, Zalila; Baharum, Adam; Nor, Norlida Mohd

    2017-08-01

    The linear regression model assumes that all random error components are identically and independently distributed with constant variance. Hence, each data point provides equally precise information about the deterministic part of the total variation. In other words, the standard deviations of the error terms are constant over all values of the predictor variables. When the assumption of constant variance is violated, the ordinary least squares estimator of the regression coefficients loses its property of minimum variance in the class of linear and unbiased estimators. Weighted least squares estimation is often used to maximize the efficiency of parameter estimation. A procedure that treats all of the data equally would give less precisely measured points more influence than they should have and would give highly precise points too little influence. Optimizing the weighted fitting criterion to find the parameter estimates allows the weights to determine the contribution of each observation to the final parameter estimates. This study used a polynomial model with weighted least squares estimation to investigate the paddy production of different paddy lots based on paddy cultivation characteristics and environmental characteristics in the area of Kedah and Perlis. The results indicated that the factors affecting paddy production are mixture fertilizer application cycle, average temperature, the squared effect of average rainfall, the squared effect of pest and disease, the interaction between acreage and amount of mixture fertilizer, the interaction between paddy variety and NPK fertilizer application cycle, and the interaction between pest and disease and NPK fertilizer application cycle.
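
    The weighted fitting criterion described can be sketched with a plain normal-equations WLS solver on a synthetic quadratic model whose noise variance grows with the predictor. The coefficients and noise model are illustrative assumptions, not the paddy data.

```python
import numpy as np

def weighted_least_squares(X, y, w):
    """WLS estimate solving (X'WX) b = X'Wy, so that noisier points
    (small weights; weights ~ 1/variance) get less influence."""
    Xw = X * w[:, None]
    return np.linalg.solve(X.T @ Xw, Xw.T @ y)

# Quadratic model with noise standard deviation growing in x.
rng = np.random.default_rng(7)
x = np.linspace(0.0, 10.0, 200)
sigma = 0.1 + 0.3 * x
y = 1.0 + 2.0 * x - 0.1 * x ** 2 + sigma * rng.standard_normal(200)
X = np.column_stack([np.ones_like(x), x, x ** 2])
beta = weighted_least_squares(X, y, 1.0 / sigma ** 2)   # ~ [1.0, 2.0, -0.1]
```

    With weights set to the reciprocal error variances, the precisely measured low-x points dominate the fit, which is the behavior the paragraph above motivates.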

  4. How many drinks did you have on September 11, 2001?

    PubMed

    Perrine, M W Bud; Schroder, Kerstin E E

    2005-07-01

    This study tested the predictability of error in retrospective self-reports of alcohol consumption on September 11, 2001, among 80 Vermont light, medium and heavy drinkers. Subjects were 52 men and 28 women participating in daily self-reports of alcohol consumption for a total of 2 years, collected via interactive voice response technology (IVR). In addition, retrospective self-reports of alcohol consumption on September 11, 2001, were collected by telephone interview 4-5 days following the terrorist attacks. Retrospective error was calculated as the difference between the IVR self-report of drinking behavior on September 11 and the retrospective self-report collected by telephone interview. Retrospective error was analyzed as a function of gender and baseline drinking behavior during the 365 days preceding September 11, 2001 (termed "the baseline"). The intraclass correlation (ICC) between daily IVR and retrospective self-reports of alcohol consumption on September 11 was .80. Women provided, on average, more accurate self-reports (ICC = .96) than men (ICC = .72) but displayed more underreporting bias in retrospective responses. Amount and individual variability of alcohol consumption during the 1-year baseline explained, on average, 11% of the variance in overreporting (r = .33), 9% of the variance in underreporting (r = .30) and 25% of the variance in the overall magnitude of error (r = .50), with correlations up to .62 (r² = .38). The size and direction of error were clearly predictable from the amount and variation in drinking behavior during the 1-year baseline period. The results demonstrate the utility and detail of information that can be derived from daily IVR self-reports in the analysis of retrospective error.

  5. Differences in head impulse test results due to analysis techniques.

    PubMed

    Cleworth, Taylor W; Carpenter, Mark G; Honegger, Flurin; Allum, John H J

    2017-01-01

    Different analysis techniques are used to define vestibulo-ocular reflex (VOR) gain between eye and head angular velocity during the video head impulse test (vHIT). Comparisons would aid selection of gain techniques best related to head impulse characteristics and promote standardisation. Compare and contrast known methods of calculating vHIT VOR gain. We examined lateral canal vHIT responses recorded from 20 patients twice within 13 weeks of acute unilateral peripheral vestibular deficit onset. Ten patients were tested with an ICS Impulse system (GN Otometrics) and 10 with an EyeSeeCam (ESC) system (Interacoustics). Mean gain and variance were computed with area, average sample gain, and regression techniques over specific head angular velocity (HV) and acceleration (HA) intervals. Results for the same gain technique were not different between measurement systems. Area and average sample gain yielded equally lower variances than regression techniques. Gains computed over the whole impulse duration were larger than those computed for increasing HV. Gain over decreasing HV was associated with larger variances. Gains computed around peak HV were smaller than those computed around peak HA. The median gain over 50-70 ms was not different from gain around peak HV. However, depending on technique used, the gain over increasing HV was different from gain around peak HA. Conversion equations between gains obtained with standard ICS and ESC methods were computed. For low gains, the conversion was dominated by a constant that needed to be added to ESC gains to equal ICS gains. We recommend manufacturers standardize vHIT gain calculations using 2 techniques: area gain around peak HA and peak HV.
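
    The three gain definitions compared above can be sketched as follows over a chosen head-velocity interval. The function name is illustrative, and real analyses would add de-saccading and interval selection around peak head velocity or acceleration.

```python
import numpy as np

def vor_gains(eye, head):
    """Three common vHIT VOR gain definitions over one interval:
    - area gain: ratio of summed eye to summed head velocity
    - average sample gain: mean of the per-sample eye/head ratio
    - regression gain: slope of eye velocity on head velocity"""
    eye, head = np.asarray(eye, float), np.asarray(head, float)
    area = eye.sum() / head.sum()
    per_sample = np.mean(eye / head)
    slope = np.polyfit(head, eye, 1)[0]
    return area, per_sample, slope

head = 150.0 * np.sin(np.linspace(0.2, np.pi - 0.2, 60))  # synthetic deg/s profile
eye = 0.8 * head                                          # a perfect gain of 0.8
```

    For a perfectly proportional response all three definitions agree; they diverge on real data, which is why the abstract argues for standardizing on two of them.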

  6. New Observations of Subarcsecond Photospheric Bright Points

    NASA Technical Reports Server (NTRS)

    Berger, T. E.; Schrijver, C. J.; Shine, R. A.; Tarbell, T. D.; Title, A. M.; Scharmer, G.

    1995-01-01

    We have used an interference filter centered at 4305 A within the bandhead of the CH radical (the 'G band') and real-time image selection at the Swedish Vacuum Solar Telescope on La Palma to produce very high contrast images of subarcsecond photospheric bright points at all locations on the solar disk. During the 6 day period of 15-20 Sept. 1993 we observed active region NOAA 7581 from its appearance on the East limb to a near-disk-center position on 20 Sept. A total of 1804 bright points were selected for analysis from the disk center image using feature extraction image processing techniques. The measured FWHM distribution of the bright points in the image is lognormal with a modal value of 220 km (0.30 sec) and an average value of 250 km (0.35 sec). The smallest measured bright point diameter is 120 km (0.17 sec) and the largest is 600 km (0.69 sec). Approximately 60% of the measured bright points are circular (eccentricity approx. 1.0), the average eccentricity is 1.5, and the maximum eccentricity corresponding to filigree in the image is 6.5. The peak contrast of the measured bright points is normally distributed. The contrast distribution variance is much greater than the measurement accuracy, indicating a large spread in intrinsic bright-point contrast. When referenced to an averaged 'quiet-Sun' area in the image, the modal contrast is 29% and the maximum value is 75%; when referenced to an average intergranular lane brightness in the image, the distribution has a modal value of 61% and a maximum of 119%. The bin-averaged contrast of G-band bright points is constant across the entire measured size range. The measured area of the bright points, corrected for pixelation and selection effects, covers about 1.8% of the total image area. Large pores and micropores occupy an additional 2% of the image area, implying a total area fraction of magnetic proxy features in the image of 3.8%.
We discuss the implications of this area fraction measurement in the context of previously published measurements which show that typical active region plage has a magnetic filling factor on the order of 10% or greater. The results suggest that in the active region analyzed here, less than 50% of the small-scale magnetic flux tubes are demarcated by visible proxies such as bright points or pores.

  7. Recovery of zinc and manganese from alkaline and zinc-carbon spent batteries

    NASA Astrophysics Data System (ADS)

    De Michelis, I.; Ferella, F.; Karakaya, E.; Beolchini, F.; Vegliò, F.

    This paper concerns the recovery of zinc and manganese from alkaline and zinc-carbon spent batteries. The metals were dissolved by reductive acid leaching with sulphuric acid in the presence of oxalic acid as reductant. Leaching tests were realised according to a full factorial design, and simple regression equations for Mn, Zn and Fe extraction were then determined from the experimental data as a function of pulp density, sulphuric acid concentration, temperature and oxalic acid concentration. The main effects and interactions were investigated by analysis of variance (ANOVA). This analysis identified the best operating conditions for the reductive acid leaching: 70% of the manganese and 100% of the zinc were extracted after 5 h at 80 °C with 20% pulp density, 1.8 M sulphuric acid and 59.4 g L-1 of oxalic acid. Manganese and zinc extraction yields both higher than 96% were obtained by using two sequential leaching steps.
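
    In a two-level full factorial design like the one above, main effects and interactions can be read off directly from cell means. The factors, coded levels and responses below are hypothetical, chosen only to show the mechanics.

```python
import numpy as np

def main_effect(levels, response):
    """Main effect in a two-level factorial design: mean response at
    the high (+1) level minus mean response at the low (-1) level.
    Passing the elementwise product of two factors gives their
    interaction effect."""
    levels = np.asarray(levels)
    response = np.asarray(response, dtype=float)
    return response[levels == +1].mean() - response[levels == -1].mean()

# Hypothetical 2^2 design: temperature and acid concentration (coded -1/+1)
temp = np.array([-1, +1, -1, +1])
acid = np.array([-1, -1, +1, +1])
extraction = np.array([40.0, 60.0, 50.0, 90.0])  # % metal extracted
```

    ANOVA then tests whether each such effect is large relative to the residual variance.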

  8. Fuel spill identification using solid-phase extraction and solid-phase microextraction. 1. Aviation turbine fuels.

    PubMed

    Lavine, B K; Brzozowski, D M; Ritter, J; Moores, A J; Mayfield, H T

    2001-12-01

    The water-soluble fraction of aviation jet fuels is examined using solid-phase extraction and solid-phase microextraction. Gas chromatographic profiles of solid-phase extracts and solid-phase microextracts of the water-soluble fraction of kerosene- and nonkerosene-based jet fuels reveal that each jet fuel possesses a unique profile. Pattern recognition analysis reveals fingerprint patterns within the data characteristic of fuel type. By using a novel genetic algorithm (GA) that emulates human pattern recognition through machine learning, it is possible to identify features characteristic of the chromatographic profile of each fuel class. The pattern recognition GA identifies a set of features that optimize the separation of the fuel classes in a plot of the two largest principal components of the data. Because principal components maximize variance, the bulk of the information encoded by the selected features is primarily about the differences between the fuel classes.
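
    The visualization step described above, plotting observations on the two largest principal components, can be sketched with a plain SVD. The data here are synthetic; the study's GA-based feature selection step is not reproduced.

```python
import numpy as np

def top2_scores(X):
    """Scores of each observation on the two largest principal
    components of the centered data: the projection used to check
    how well selected features separate the classes."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:2].T

rng = np.random.default_rng(1)
X = rng.normal(size=(30, 6))
X[:, 0] *= 5.0                 # give one direction dominant variance
scores = top2_scores(X)
```

    Because the principal components are ordered by variance, PC1 always captures at least as much spread as PC2, which is why class differences encoded in high-variance features dominate such plots.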

  9. Assumption-free estimation of the genetic contribution to refractive error across childhood.

    PubMed

    Guggenheim, Jeremy A; St Pourcain, Beate; McMahon, George; Timpson, Nicholas J; Evans, David M; Williams, Cathy

    2015-01-01

    Studies in relatives have generally yielded high heritability estimates for refractive error: twins 75-90%, families 15-70%. However, because related individuals often share a common environment, these estimates are inflated (via misallocation of unique/common environment variance). We calculated a lower-bound heritability estimate for refractive error free from such bias. Between the ages 7 and 15 years, participants in the Avon Longitudinal Study of Parents and Children (ALSPAC) underwent non-cycloplegic autorefraction at regular research clinics. At each age, an estimate of the variance in refractive error explained by single nucleotide polymorphism (SNP) genetic variants was calculated using genome-wide complex trait analysis (GCTA) using high-density genome-wide SNP genotype information (minimum N at each age=3,404). The variance in refractive error explained by the SNPs ("SNP heritability") was stable over childhood: Across age 7-15 years, SNP heritability averaged 0.28 (SE=0.08, p<0.001). The genetic correlation for refractive error between visits varied from 0.77 to 1.00 (all p<0.001) demonstrating that a common set of SNPs was responsible for the genetic contribution to refractive error across this period of childhood. Simulations suggested lack of cycloplegia during autorefraction led to a small underestimation of SNP heritability (adjusted SNP heritability=0.35; SE=0.09). To put these results in context, the variance in refractive error explained (or predicted) by the time participants spent outdoors was <0.005 and by the time spent reading was <0.01, based on a parental questionnaire completed when the child was aged 8-9 years old. Genetic variation captured by common SNPs explained approximately 35% of the variation in refractive error between unrelated subjects. 
This value sets an upper limit for predicting refractive error using existing SNP genotyping arrays, although higher-density genotyping in larger samples and inclusion of interaction effects is expected to raise this figure toward twin- and family-based heritability estimates. The same SNPs influenced refractive error across much of childhood. Notwithstanding the strong evidence of association between time outdoors and myopia, and time reading and myopia, less than 1% of the variance in myopia at age 15 was explained by crude measures of these two risk factors, indicating that their effects may be limited, at least when averaged over the whole population.

  10. [Factors influencing the quality of life of elderly living in a pre-fabricated housing complex in the Sichuan earthquake area].

    PubMed

    Guo, Hong-Xia; Chen, Hong; Wong, Teresa Bik-Kwan Tsien; Chen, Qian; Au, May-Lan; Li, Yun

    2012-02-01

    The 2008 Sichuan Earthquake caused great damage to the environment and property. In the aftermath, many citizens were relocated to live in newly constructed prefabricated (prefab) communities. This paper explored the current quality of life (QOL) of elderly residents living in prefabricated communities in areas damaged by the Sichuan earthquake and identified factors of influence on QOL values. The ultimate objective was to provide evidence-based guidance for health improvement measures. The authors used the short form WHOQOL-BREF to assess the quality of life of 191 elderly residents of prefabricated communities in the Sichuan Province 2008 earthquake zone. A Student's t-test, variance analysis, and stepwise multivariate regression methods were used to test the impact of various factors on QOL. Results indicate the self-assessed QOL of participants as good, although scores in the physical (average 56.2) and psychological (average 45.7) domains were significantly lower than the norm in China. Marital status, capital loss in the earthquake, number of children, level of perceived stress, income, interest, and family harmony each correlated with at least one of the short form WHOQOL-BREF domains in t-test and one-way analyses. After excluding for factor interaction effects using multivariate regression, we found interest, family harmony, monthly income and stress to be significant predictors of physical domain QOL, explaining 13.8% of total variance. Family harmony and interest explained 15.3% of total variance for psychological domain QOL; stress, marital status, family harmony, capital loss in the earthquake, number of children and interest explained 19.5% of total variance for social domain QOL; and stress, family harmony and interest explained 16.5% of total variance for environmental domain QOL. Family harmony and interest were significant factors across all domains, while others influenced a smaller proportion. 
Quality of life for elderly living in prefab communities should be improved. The authors hope study findings will increase awareness among healthcare providers regarding the quality of life of this vulnerable population. Study results suggest that key steps to promoting QOL in this population include improving family harmony, helping to cultivate well-rounded interests, alleviating economic stresses, providing necessary medical and psychological counseling services, and affording more social support.

  11. Assessing DNA recovery from chewing gum.

    PubMed

    Eychner, Alison M; Schott, Kelly M; Elkins, Kelly M

    2017-01-01

    The purpose of this study was to evaluate which DNA extraction method yields the highest quantity of DNA from chewing gum. In this study, several popular extraction methods were tested, including Chelex-100, phenol-chloroform-isoamyl alcohol (PCIA), DNA IQ, PrepFiler, and QIAamp Investigator, and the quantity of DNA recovered from chewing gum was determined using real-time polymerase chain reaction with Quantifiler. Chewed gum control samples were submitted by anonymous healthy adult donors, and discarded environmental chewing gum samples simulating forensic evidence were collected from outside public areas (e.g., campus bus stops, streets, and sidewalks). As expected, results indicate that all methods tested yielded sufficient amplifiable human DNA from chewing gum using the wet-swab method. The QIAamp performed best when DNA was extracted from whole pieces of control gum (142.7 ng on average), and the DNA IQ method performed best on the environmental whole gum samples (29.0 ng on average). On average, the QIAamp kit also recovered the most DNA from saliva swabs. The PCIA method demonstrated the highest yield with wet swabs of the environmental gum (26.4 ng of DNA on average). However, this method should be avoided with whole gum samples (no DNA yield) due to the action of the organic reagents in dissolving and softening the gum and inhibiting DNA recovery during the extraction.

  12. Assessment of collagen changes in ovarian tissue by extracting optical scattering coefficient from OCT images

    NASA Astrophysics Data System (ADS)

    Yang, Yi; Wang, Tianheng; Biswal, Nrusingh; Wang, Xiaohong; Sanders, Melinda; Brewer, Molly; Zhu, Quing

    2012-01-01

    Optical scattering coefficients from ex-vivo unfixed normal and malignant ovarian tissue were quantitatively extracted by fitting optical coherence tomography (OCT) A-line signals to a single scattering model. 1097 average A-line measurements at a wavelength of 1310 nm were performed at 108 sites obtained from 18 ovaries. The average scattering coefficient for the normal group (833 measurements from 88 sites) was 2.41 mm⁻¹ (+/-0.59), while the average coefficient for the malignant group (264 measurements from 20 sites) was 1.55 mm⁻¹ (+/-0.46). Using a threshold of 2 mm⁻¹ for each ovary, a sensitivity of 100% and a specificity of 100% were achieved. The amount of collagen within the OCT imaging depth was analyzed from tissue histological sections stained with Sirius Red. The average collagen area fraction (CAF) for the normal group was 48.4% (+/-12.3%), while the average CAF for the malignant group was 11.4% (+/-4.7%). The difference in collagen content between the two groups was statistically significant (p < 0.001). These preliminary data demonstrate that quantitative extraction of the optical scattering coefficient from OCT images could be a powerful method for ovarian cancer detection and diagnosis.
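
    Under a single-scattering model the detected OCT signal decays as I(z) ≈ I0·exp(-2·μs·z), so μs can be recovered from the slope of the log signal. The sketch below is a simplified, assumed version of such a fit: the confocal and sensitivity roll-off terms of a real OCT fit are omitted, and the data are synthetic.

```python
import numpy as np

def fit_scattering_coefficient(depth_mm, intensity):
    """Fit log(I) = log(I0) - 2*mu_s*z by linear regression and
    return the scattering coefficient mu_s in mm^-1."""
    slope, _ = np.polyfit(depth_mm, np.log(intensity), 1)
    return -slope / 2.0

z = np.linspace(0.1, 1.0, 50)            # depth in mm
signal = 3.0 * np.exp(-2.0 * 2.41 * z)   # synthetic A-line with mu_s = 2.41 mm^-1
```

    On real A-lines the fit would be restricted to the depth range where the single-scattering assumption holds.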

  13. Sensitivity analysis of simulated SOA loadings using a variance-based statistical approach: SENSITIVITY ANALYSIS OF SOA

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shrivastava, Manish; Zhao, Chun; Easter, Richard C.

    We investigate the sensitivity of secondary organic aerosol (SOA) loadings simulated by a regional chemical transport model to 7 selected tunable model parameters: 4 involving emissions of anthropogenic and biogenic volatile organic compounds, anthropogenic semi-volatile and intermediate volatility organics (SIVOCs), and NOx, 2 involving dry deposition of SOA precursor gases, and one involving particle-phase transformation of SOA to low volatility. We adopt a quasi-Monte Carlo sampling approach to effectively sample the high-dimensional parameter space, and perform a 250 member ensemble of simulations using a regional model, accounting for some of the latest advances in SOA treatments based on our recent work. We then conduct a variance-based sensitivity analysis using the generalized linear model method to study the responses of simulated SOA loadings to the tunable parameters. Analysis of SOA variance from all 250 simulations shows that the volatility transformation parameter, which controls whether particle-phase transformation of SOA from semi-volatile SOA to non-volatile is on or off, is the dominant contributor to variance of simulated surface-level daytime SOA (65% domain average contribution). We also split the simulations into 2 subsets of 125 each, depending on whether the volatility transformation is turned on/off. For each subset, the SOA variances are dominated by the parameters involving biogenic VOC and anthropogenic SIVOC emissions. Furthermore, biogenic VOC emissions have a larger contribution to SOA variance when the SOA transformation to non-volatile is on, while anthropogenic SIVOC emissions have a larger contribution when the transformation is off. NOx contributes less than 4.3% to SOA variance, and this low contribution is mainly attributed to dominance of intermediate to high NOx conditions throughout the simulated domain. 
The two parameters related to dry deposition of SOA precursor gases also have very low contributions to SOA variance. This study highlights the large sensitivity of SOA loadings to the particle-phase transformation of SOA volatility, which is neglected in most previous models.
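
    The core idea of variance-based sensitivity analysis for a discrete parameter (such as the on/off volatility switch above) is that its first-order contribution equals the variance of the conditional means of the output across the parameter's levels, divided by the total output variance. A toy sketch with synthetic data, not the study's GLM-based implementation:

```python
import numpy as np

def first_order_contribution(x, y):
    """First-order variance contribution of a discrete parameter x to
    output y: Var(E[y | x]) / Var(y), estimated from samples."""
    x = np.asarray(x)
    y = np.asarray(y, dtype=float)
    levels = np.unique(x)
    cond_means = np.array([y[x == v].mean() for v in levels])
    weights = np.array([np.mean(x == v) for v in levels])
    var_cond = np.sum(weights * (cond_means - y.mean()) ** 2)
    return var_cond / y.var()
```

    A parameter that fully determines the output has a contribution of 1; one that leaves the conditional means unchanged contributes 0.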

  14. Protective Effect of Hydroalcoholic Extract of Tribulus Terrestris on Cisplatin Induced Renal Tissue Damage in Male Mice

    PubMed Central

    Raoofi, Amir; Khazaei, Mozafar; Ghanbari, Ali

    2015-01-01

    Background: Given the beneficial effects of Tribulus terrestris (TT) extract on tissue damage, the present study investigated the influence of a hydroalcoholic extract of TT on cisplatin (CIS) (EBEWE Pharma, Unterach, Austria)-induced renal tissue damage in male mice. Methods: Thirty mice were divided into five groups (n = 6). The first group (control) was treated with normal saline (0.9% NaCl), and the experimental groups were treated intraperitoneally with CIS (E1), CIS + 100 mg/kg TT extract (E2), CIS + 300 mg/kg TT extract (E3), or CIS + 500 mg/kg TT extract (E4). The kidneys were removed after 4 days of injections, and histological evaluations were performed. Results: The data were analyzed using one-way analysis of variance followed by Tukey's post-hoc test, paired-sample t-test, and Kruskal–Wallis and Mann–Whitney tests. In the CIS-treated group, the kidney tissue showed increased dilatation of Bowman's capsule, medullar congestion, and dilatation of collecting tubules, together with decreases in body weight and kidney weight. These parameters returned to the normal range after administration of TT fruit extract for 4 days. Conclusions: The results suggest that oral administration of TT fruit extract at doses of 100, 300 and 500 mg/kg body weight provided protection against CIS-induced toxicity in mice. PMID:25789143

  15. Method for simulating dose reduction in digital mammography using the Anscombe transformation.

    PubMed

    Borges, Lucas R; Oliveira, Helder C R de; Nunes, Polyana F; Bakic, Predrag R; Maidment, Andrew D A; Vieira, Marcelo A C

    2016-06-01

    This work proposes an accurate method for simulating dose reduction in digital mammography starting from a clinical image acquired with a standard dose. The method developed in this work consists of scaling a mammogram acquired at the standard radiation dose and adding signal-dependent noise. The algorithm accounts for specific issues relevant in digital mammography images, such as anisotropic noise, spatial variations in pixel gain, and the effect of dose reduction on the detective quantum efficiency. The scaling process takes into account the linearity of the system and the offset of the detector elements. The inserted noise is obtained by acquiring images of a flat-field phantom at the standard radiation dose and at the simulated dose. Using the Anscombe transformation, a relationship is created between the calculated noise mask and the scaled image, resulting in a clinical mammogram with the same noise and gray level characteristics as an image acquired at the lower radiation dose. The performance of the proposed algorithm was validated using real images acquired with an anthropomorphic breast phantom at four different doses, with five exposures for each dose and 256 nonoverlapping ROIs extracted from each image, and using uniform images. The authors simulated lower-dose images and compared these with the real images. The authors evaluated the similarity between the normalized noise power spectrum (NNPS) and power spectrum (PS) of simulated images and real images acquired with the same dose. The maximum relative error was less than 2.5% for every ROI. The added noise was also evaluated by measuring the local variance in the real and simulated images. The relative average error for the local variance was smaller than 1%. A new method is proposed for simulating dose reduction in clinical mammograms. In this method, the dependency between image noise and image signal is addressed using a novel application of the Anscombe transformation. 
NNPS, PS, and local noise metrics confirm that this method is capable of precisely simulating various dose reductions.
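
    The Anscombe transformation at the heart of the method maps Poisson (signal-dependent) noise to approximately unit-variance Gaussian noise, so noise can be manipulated as if it were signal-independent and then mapped back. A minimal sketch of the forward and simple algebraic inverse transforms (the unbiased inverse used in practice differs slightly):

```python
import numpy as np

def anscombe(x):
    """Anscombe transformation: Poisson counts -> approximately
    unit-variance Gaussian data."""
    return 2.0 * np.sqrt(np.asarray(x, dtype=float) + 3.0 / 8.0)

def inverse_anscombe(y):
    """Simple algebraic inverse of the Anscombe transformation."""
    return (np.asarray(y, dtype=float) / 2.0) ** 2 - 3.0 / 8.0
```

    For counts above roughly 4 the variance of the transformed data is close to 1 regardless of the underlying signal level, which is the property the dose-simulation method exploits.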

  16. New Software for the Fast Estimation of Population Recombination Rates (FastEPRR) in the Genomic Era.

    PubMed

    Gao, Feng; Ming, Chen; Hu, Wangjie; Li, Haipeng

    2016-06-01

    Genetic recombination is a very important evolutionary mechanism that mixes parental haplotypes and produces new raw material for organismal evolution. As a result, information on recombination rates is critical for biological research. In this paper, we introduce a new extremely fast open-source software package (FastEPRR) that uses machine learning to estimate recombination rate [Formula: see text] (=[Formula: see text]) from intraspecific DNA polymorphism data. When [Formula: see text] and the number of sampled diploid individuals is large enough ([Formula: see text]), the variance of [Formula: see text] remains slightly smaller than that of [Formula: see text]. The new estimate [Formula: see text] (calculated by averaging [Formula: see text] and [Formula: see text]) has the smallest variance of all cases. When estimating [Formula: see text], the finite-site model was employed to analyze cases with a high rate of recurrent mutations, and an additional method is proposed to consider the effect of variable recombination rates within windows. Simulations encompassing a wide range of parameters demonstrate that different evolutionary factors, such as demography and selection, may not increase the false positive rate of recombination hotspots. Overall, accuracy of FastEPRR is similar to the well-known method, LDhat, but requires far less computation time. Genetic maps for each human population (YRI, CEU, and CHB) extracted from the 1000 Genomes OMNI data set were obtained in less than 3 d using just a single CPU core. The Pearson pairwise correlation coefficient between the [Formula: see text] and [Formula: see text] maps is very high, ranging between 0.929 and 0.987 at a 5-Mb scale. 
Considering that sample sizes for these kinds of data are increasing dramatically with advances in next-generation sequencing technologies, FastEPRR (freely available at http://www.picb.ac.cn/evolgen/) is expected to become a widely used tool for establishing genetic maps and studying recombination hotspots in the population genomic era. Copyright © 2016 Gao et al.

  17. Spatiotemporal variation in diabetes mortality in China: multilevel evidence from 2006 and 2012.

    PubMed

    Zhou, Maigeng; Astell-Burt, Thomas; Yin, Peng; Feng, Xiaoqi; Page, Andrew; Liu, Yunning; Liu, Jiangmei; Li, Yichong; Liu, Shiwei; Wang, Limin; Wang, Lijun; Wang, Linhong

    2015-07-10

    Despite previous studies reporting spatial inequality in diabetes prevalence across China, potential geographic variations in diabetes mortality have not been explored. Age- and gender-stratified annual diabetes mortality counts for 161 counties were extracted from the China Mortality Surveillance System and interrogated using multilevel negative binomial regression. Random slopes were used to investigate spatiotemporal variation and the proportion of variance explained was used to assess the relative importance of geographical region, urbanization, mean temperature, local diabetes prevalence, behavioral risk factors and relevant biomarkers. Diabetes mortality tended to reduce between 2006 and 2012, though there appeared to be an increase in diabetes mortality in urban (age standardized rate (ASR) 2006-2012: 10.5-13.6) and rural (ASR 10.8-13.0) areas in the Southwest region. A Median Rate Ratio of 1.47, slope variance of 0.006 (SE 0.001) and covariance of 0.268 (SE 0.007) indicated spatiotemporal variation. Fully adjusted models accounted for 37% of this geographical variation, with diabetes mortality higher in the Northwest (RR 2.55, 95% CI 1.74, 3.73) and Northeast (RR 2.68, 95% CI 1.70, 4.21) compared with the South. Diabetes mortality was higher in urbanized areas (RR tertile 3 versus tertile 1 ('RRt3vs1') 1.39, 95% CI 1.17, 1.66), with higher mean body mass index (RRt3vs1 1.46, 95% CI 1.18, 1.80) and with higher average temperatures (RR 1.05 95% CI 1.03, 1.08). Diabetes mortality was lower where consumption of alcohol was excessive (RRt3vs1 0.84, 95% CI 0.72, 0.99). No association was observed with smoking, overconsumption of red meat, high mean sedentary time, systolic blood pressure, cholesterol, and diabetes prevalence. Declines in diabetes mortality between 2006 and 2012 have been unequally distributed across China, which may imply differentials in diagnosis, management, and the provision of services that warrant further investigation.

  18. The Impact of Retardance Pattern Variability on Nerve Fiber Layer Measurements over Time Using GDx with Variable and Enhanced Corneal Compensation

    PubMed Central

    Grewal, Dilraj S.; Sehi, Mitra; Cook, Richard J.

    2011-01-01

    Purpose. To examine the impact of retardance pattern variability on retinal nerve fiber layer (RNFL) measurements over time using scanning laser polarimetry with variable (GDxVCC) and enhanced corneal compensation (GDxECC; both by Carl Zeiss Meditec, Inc., Dublin, CA). Methods. Glaucoma suspect and glaucomatous eyes with 4 years of follow-up participating in the Advanced Imaging in Glaucoma Study were prospectively enrolled. All eyes underwent standard automated perimetry (SAP), GDxVCC, and GDxECC imaging every 6 months. SAP progression was determined with point-wise linear regression analysis of SAP sensitivity values. Typical scan score (TSS) values were extracted as a measure of retardance image quality; an atypical retardation pattern (ARP) was defined as TSS < 80. TSS fluctuation over time was measured using three parameters: change in TSS from baseline, absolute difference (maximum minus minimum TSS value), and TSS variance. Linear mixed-effects models that accommodated the association between the two eyes were constructed to evaluate the relationship between change in TSS and RNFL thickness over time. Results. Eighty-six eyes (51 suspected glaucoma, 35 glaucomatous) of 45 patients were enrolled. Twenty (23.3%) eyes demonstrated SAP progression. There was significantly greater fluctuation in TSS over time with GDxVCC compared with GDxECC as measured by absolute difference (18.40 ± 15.35 units vs. 2.50 ± 4.69 units; P < 0.001), TSS variance (59.63 ± 87.27 units vs. 3.82 ± 9.63 units, P < 0.001), and change in TSS from baseline (−0.83 ± 11.2 vs. 0.25 ± 2.9, P = 0.01). The change in TSS over time significantly (P = 0.006) influenced the TSNIT average RNFL thickness when measured by GDxVCC but not by GDxECC. Conclusions. Longitudinal images obtained with GDxECC have significantly less variability in TSS and retardance patterns and have reduced bias produced by ARP on RNFL progression assessment. PMID:21296821
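
    The three TSS fluctuation summaries used in the study are straightforward to compute; the sketch below assumes "change from baseline" means last-visit minus first-visit TSS, which is one plausible reading of the abstract.

```python
import numpy as np

def tss_fluctuation(tss):
    """Summaries of typical-scan-score (TSS) fluctuation over follow-up:
    change from baseline (last minus first value), absolute difference
    (maximum minus minimum), and sample variance."""
    tss = np.asarray(tss, dtype=float)
    return tss[-1] - tss[0], tss.max() - tss.min(), tss.var(ddof=1)
```

    Smaller values on all three summaries, as reported for GDxECC, indicate more stable retardance image quality across visits.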

  19. Data Assimilation by Ensemble Kalman Filter during One-Dimensional Nonlinear Consolidation in Randomly Heterogeneous Highly Compressible Aquitards

    NASA Astrophysics Data System (ADS)

    Zapata Norberto, B.; Morales-Casique, E.; Herrera, G. S.

    2017-12-01

    Severe land subsidence due to groundwater extraction may occur in multiaquifer systems where highly compressible aquitards are present. The highly compressible nature of the aquitards leads to nonlinear consolidation where the groundwater flow parameters are stress-dependent. The case is further complicated by the heterogeneity of the hydrogeologic and geotechnical properties of the aquitards. We explore the effect of realistic vertical heterogeneity of hydrogeologic and geotechnical parameters on the consolidation of highly compressible aquitards by means of 1-D Monte Carlo numerical simulations. 2000 realizations are generated for each of the following parameters: hydraulic conductivity (K), compression index (Cc) and void ratio (e). The correlation structure, the mean and the variance for each parameter were obtained from a literature review about field studies in the lacustrine sediments of Mexico City. The results indicate that among the parameters considered, random K has the largest effect on the ensemble average behavior of the system. Random K leads to the largest variance (and therefore largest uncertainty) of total settlement, groundwater flux and time to reach steady state conditions. We further propose a data assimilation scheme by means of ensemble Kalman filter to estimate the ensemble mean distribution of K, pore-pressure and total settlement. We consider the case where pore-pressure measurements are available at given time intervals. We test our approach by generating a 1-D realization of K with exponential spatial correlation, and solving the nonlinear flow and consolidation problem. These results are taken as our "true" solution. We take pore-pressure "measurements" at different times from this "true" solution. The ensemble Kalman filter method is then employed to estimate ensemble mean distribution of K, pore-pressure and total settlement based on the sequential assimilation of these pore-pressure measurements. 
The ensemble-mean estimates from this procedure closely approximate those from the "true" solution. This procedure can be easily extended to other random variables such as compression index and void ratio.
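
    The analysis step of the ensemble Kalman filter described above can be sketched for a single scalar observation. This is a schematic stochastic-EnKF sketch under simplifying assumptions (no localization, inflation, or time stepping), not the authors' implementation.

```python
import numpy as np

def enkf_update(ensemble, obs, obs_var, H, seed=0):
    """Stochastic EnKF analysis step for one scalar observation.
    ensemble: (n_members, n_state); H: (n_state,) linear observation
    operator mapping state (e.g., log-K and pore-pressure values) to
    the observed quantity."""
    rng = np.random.default_rng(seed)
    n = ensemble.shape[0]
    A = ensemble - ensemble.mean(axis=0)           # ensemble anomalies
    P = A.T @ A / (n - 1)                          # sample covariance
    K = P @ H / (H @ P @ H + obs_var)              # Kalman gain, (n_state,)
    perturbed_obs = obs + rng.normal(0.0, np.sqrt(obs_var), n)
    innovation = perturbed_obs - ensemble @ H
    return ensemble + innovation[:, None] * K[None, :]

rng = np.random.default_rng(1)
prior = rng.normal(0.0, 1.0, size=(500, 1))        # toy 1-state prior ensemble
posterior = enkf_update(prior, obs=5.0, obs_var=0.5, H=np.array([1.0]))
```

    Each assimilated pore-pressure measurement pulls the ensemble mean toward the observation and shrinks the ensemble spread, which is how the sequential scheme refines the K and settlement estimates.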

  20. Psychometric Properties of the Serbian Version of the Maslach Burnout Inventory-Human Services Survey: A Validation Study among Anesthesiologists from Belgrade Teaching Hospitals

    PubMed Central

    Matejić, Bojana; Milenović, Miodrag; Kisić Tepavčević, Darija; Simić, Dušica; Pekmezović, Tatjana; Worley, Jody A.

    2015-01-01

    We report findings from a validation study of the translated and culturally adapted Serbian version of the Maslach Burnout Inventory-Human Services Survey (MBI-HSS), for a sample of anesthesiologists working in tertiary healthcare. The results showed sufficient overall reliability (Cronbach's α = 0.72) of the scores (items 1–22). The results of Bartlett's test of sphericity (χ 2 = 1983.75, df = 231, p < 0.001) and the Kaiser-Meyer-Olkin measure of sampling adequacy (0.866) provided solid justification for factor analysis. In order to increase the sensitivity of this questionnaire, we fitted an unrestricted factor analysis model (retaining factors with eigenvalues greater than 1), which enabled us to extract the most suitable factor structure for our study instrument. The exploratory factor analysis revealed five factors with eigenvalues greater than 1.0, explaining 62.0% of the cumulative variance. Velicer's MAP test supported the five-factor model with the smallest average squared correlation of 0.184. This study indicated that the Serbian version of the MBI-HSS is a reliable and valid instrument to measure burnout among a population of anesthesiologists. Results confirmed strong psychometric characteristics of the study instrument, with recommendations for interpretation of two new factors that may be unique to the Serbian version of the MBI-HSS. PMID:26090517

  1. Single cell visualization of transcription kinetics variance of highly mobile identical genes using 3D nanoimaging

    PubMed Central

    Annibale, Paolo; Gratton, Enrico

    2015-01-01

    Multi-cell biochemical assays and single-cell fluorescence measurements have revealed that the elongation rate of Polymerase II (PolII) in eukaryotes varies widely across different cell types and genes. However, there is not yet a consensus on whether intrinsic factors, such as the position and local mobility of a genetic locus or its engagement by an active molecular mechanism, determine the observed heterogeneity. Here, by employing high-speed 3D fluorescence nanoimaging techniques, we resolve and track at the single-cell level multiple, distinct regions of mRNA synthesis within the model system of a large transgene array. We demonstrate that these regions are active transcription sites that release mRNA molecules into the nucleoplasm. Using fluctuation spectroscopy and the phasor analysis approach, we were able to extract the local PolII elongation rate at each site as a function of time. We measured a four-fold variation in the average elongation rate between identical copies of the same gene measured simultaneously within the same cell, and demonstrated a correlation between local transcription kinetics and the movement of the transcription site. Together, these observations demonstrate that local factors, such as local chromatin mobility and the microenvironment of the transcription site, are an important source of transcription kinetics variability. PMID:25788248

  2. Construct Validity and Reliability of the Beliefs Toward Mental Illness Scale for American, Japanese, and Korean Women.

    PubMed

    Saint Arnault, Denise M; Gang, Moonhee; Woo, Seoyoon

    2017-11-01

    The aim of this study was to evaluate the psychometric properties of the Beliefs Toward Mental Illness Scale (BMI) across women from the United States, Japan, and South Korea. A cross-sectional study design was employed. The sample comprised 564 women aged 21-64 years who were recruited in the United States and Korea (American = 127, Japanese immigrants in the United States = 204, and Korean = 233). We carried out item analysis, construct validity by confirmatory factor analysis (CFA), and internal consistency using SPSS Version 22 and AMOS Version 22. An acceptable model fit for a 20-item BMI (Beliefs Toward Mental Illness Scale-Revised [BMI-R]) with 3 factors was confirmed using CFA. Construct validity of the BMI-R was acceptable: convergent validity (average variance extracted [AVE] ≥ 0.5, construct reliability [CR] ≥ 0.7) and discriminant validity (r = .65-.89, AVE > .79). Cronbach's alpha for the BMI-R was .92. These results showed that the BMI is a reliable tool for studying beliefs about mental illness across cultures. Our findings also suggest that continued efforts to reduce stigma in culturally specific contexts within and between countries are necessary to promote help-seeking for those suffering from psychological distress.
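
    The convergent-validity benchmarks quoted above (AVE ≥ 0.5, CR ≥ 0.7) come from the standard Fornell-Larcker formulas over standardized factor loadings, which are easy to compute directly. A minimal sketch with hypothetical loadings for one factor:

    ```python
    def ave(loadings):
        """Average variance extracted: mean squared standardized loading."""
        return sum(l * l for l in loadings) / len(loadings)

    def composite_reliability(loadings):
        """CR = (sum of loadings)^2 / ((sum of loadings)^2 + sum of error variances)."""
        s = sum(loadings)
        err = sum(1.0 - l * l for l in loadings)   # error variance per indicator
        return s * s / (s * s + err)

    lam = [0.8, 0.7, 0.6]                 # hypothetical standardized loadings
    # ave(lam)                   ≈ 0.497  -> just below the 0.5 benchmark
    # composite_reliability(lam) ≈ 0.745  -> above the 0.7 benchmark
    ```

    This also illustrates why a factor can pass the CR cutoff while narrowly failing the AVE cutoff, a pattern reported in several of the validation studies in this collection.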

  3. Validation of the Malay Version of the Inventory of Functional Status after Childbirth Questionnaire

    PubMed Central

    Noor, Norhayati Mohd; Aziz, Aniza Abd.; Mostapa, Mohd Rosmizaki; Awang, Zainudin

    2015-01-01

    Objective. This study was designed to examine the psychometric properties of the Malay version of the Inventory of Functional Status after Childbirth (IFSAC). Design. A cross-sectional study. Materials and Methods. A total of 108 postpartum mothers attending the Obstetrics and Gynaecology Clinic of a tertiary teaching hospital in Malaysia were involved. Construct validity and internal consistency were assessed after the translation, content validity, and face validity process. The data were analyzed using Analysis of Moment Structure version 18 and Statistical Packages for the Social Sciences version 20. Results. The final model consists of four constructs, namely, infant care, personal care, household activities, and social and community activities, with 18 items demonstrating acceptable factor loadings, domain-to-domain correlation, and best fit (Chi-squared/degrees of freedom = 1.678; Tucker-Lewis index = 0.923; comparative fit index = 0.936; and root mean square error of approximation = 0.080). Composite reliability and average variance extracted of the domains ranged from 0.659 to 0.921 and from 0.499 to 0.628, respectively. Conclusion. The study suggests that the four-factor, 18-item model of the Malay version of the IFSAC is acceptable for measuring functional status after childbirth because it is valid, reliable, and simple. PMID:25667932

  5. Enhancing Electromagnetic Side-Channel Analysis in an Operational Environment

    NASA Astrophysics Data System (ADS)

    Montminy, David P.

    Side-channel attacks exploit the unintentional emissions from cryptographic devices to determine the secret encryption key. This research identifies methods to make attacks demonstrated in an academic environment more operationally relevant. Algebraic cryptanalysis is used to reconcile redundant information extracted from side-channel attacks on the AES key schedule. A novel thresholding technique is used to select key-byte guesses for a satisfiability solver, yielding a 97.5% success rate in scenarios where standard methods failed for 100% of attacks. Two techniques are developed to compensate for differences in emissions between training and test devices, dramatically improving the effectiveness of cross-device template attacks. Mean and variance normalization improves same-part-number attack success rates from 65.1% to 100%, and increases the number of locations at which an attack can be performed by 226%. When normalization is combined with a novel technique to identify and filter signals in collected traces not related to the encryption operation, the number of traces required to perform a successful attack is reduced by 85.8% on average. Finally, software-defined radios are shown to be an effective low-cost method for collecting side-channel emissions in real time, eliminating the need to modify or profile the target encryption device to gain precise timing information.

  6. European Organization for Research and Treatment of Cancer Quality of Life Questionnaire Core 30: factorial models to Brazilian cancer patients

    PubMed Central

    Campos, Juliana Alvares Duarte Bonini; Spexoto, Maria Cláudia Bernardes; da Silva, Wanderson Roberto; Serrano, Sergio Vicente; Marôco, João

    2018-01-01

    ABSTRACT Objective To evaluate the psychometric properties of the seven theoretical models proposed in the literature for the European Organization for Research and Treatment of Cancer Quality of Life Questionnaire Core 30 (EORTC QLQ-C30), when applied to a sample of Brazilian cancer patients. Methods Content and construct validity (factorial, convergent, discriminant) were estimated. Confirmatory factor analysis was performed. Convergent validity was analyzed using the average variance extracted. Discriminant validity was analyzed using correlational analysis. Internal consistency and composite reliability were used to assess the reliability of the instrument. Results A total of 1,020 cancer patients participated. The mean age was 53.3±13.0 years, and 62% were female. All models showed adequate factorial validity for the study sample. Convergent and discriminant validity and reliability were compromised in all of the models for all of the single items referring to symptoms, as well as for the "physical function" and "cognitive function" factors. Conclusion All theoretical models assessed in this study presented adequate factorial validity when applied to Brazilian cancer patients. The choice of the best model for use in research and/or clinical protocols should be centered on the purpose and underlying theory of each model. PMID:29694609

  7. Development of a problematic mobile phone use scale for Turkish adolescents.

    PubMed

    Güzeller, Cem Oktay; Coşguner, Tolga

    2012-04-01

    The aim of this study was to evaluate the psychometric properties of the Problematic Mobile Phone Use Scale (PMPUS) for Turkish adolescents. The psychometric properties of the PMPUS were tested in two separate sample groups drawn from 950 Turkish high school students. The first sample group (n=309) was used to determine the factor structure of the scale. The second sample group (n=461) was used to test data conformity with the identified structure, discriminant validity and concurrent scale validity, internal consistency reliability calculations, and item statistics. The results of exploratory factor analyses indicated that the scale had three factors: interference with negative effect, compulsion/persistence, and withdrawal/tolerance. The results showed that item and construct reliability values were generally satisfactory for the three-factor structure. On the other hand, the average variance extracted value remained below the recommended cutoff for three subscales. The scores for the scale significantly correlated with depression and loneliness. In addition, the discriminant validity value was above the cutoff in all sub-dimensions except one. Based on these data, the reliability of the PMPUS appears to be satisfactory, and the scale provides good internal consistency. Therefore, with limited exceptions, the PMPUS was found to be reliable and valid in the context of Turkish adolescents.

  8. Chemical composition and biological activity of star anise Illicium verum extracts against maize weevil, Sitophilus zeamais adults

    PubMed Central

    Wei, Linlin; Hua, Rimao; Li, Maoye; Huang, Yanzhang; Li, Shiguang; He, Yujie; Shen, Zonghai

    2014-01-01

    This study aims to develop eco-friendly botanical pesticides. Dried fruits of star anise (Illicium verum Hook.f. (Austrobaileyales: Schisandraceae)) were extracted with methyl alcohol (MA), ethyl acetate (EA), and petroleum ether (PE) at 25°C. The constituents were determined by gas chromatography-mass spectrometry, and the repellency and contact toxicity of the extracts against Sitophilus zeamais Motschulsky (Coleoptera: Curculionidae) adults were tested. Forty-four compounds with concentrations above 0.2% were separated and identified from the MA, EA, and PE extracts. The extraction yields of trans-anethole, the most abundant biologically active compound in I. verum, were 9.7%, 7.5%, and 10.1% in the MA, EA, and PE extracts, respectively. Repellency increased with increasing extract dose. The average repellency rate of the extracts against S. zeamais adults peaked at 125.79 µg/cm² at 72 hr after treatment. The percentage repellency of the EA extract reached 76.9%, making it a class IV repellent. Contact toxicity assays showed average mortalities of 85.4% (MA), 94.5% (EA), and 91.1% (PE). The EA extract had the lowest median lethal dose, 21.2 µg/cm² at 72 hr after treatment. The results suggest that I. verum fruit extracts and trans-anethole can potentially be developed as grain protectants to control stored-product insect pests. Other active constituents in the EA extract merit further research. PMID:25368036

  9. Prediction of activity-related energy expenditure using accelerometer-derived physical activity under free-living conditions: a systematic review.

    PubMed

    Jeran, S; Steinbrecher, A; Pischon, T

    2016-08-01

    Activity-related energy expenditure (AEE) might be an important factor in the etiology of chronic diseases. However, measurement of free-living AEE is usually not feasible in large-scale epidemiological studies and has traditionally been estimated from self-reported physical activity. Recently, accelerometry has been proposed for objective assessment of physical activity, but it is unclear to what extent this method explains the variance in AEE. We conducted a systematic review of the MEDLINE database (through 2014) for studies that estimated AEE based on accelerometry-assessed physical activity in adults under free-living conditions (using the doubly labeled water method). Extracted study characteristics were sample size, accelerometer (type (uniaxial, triaxial), metrics (for example, activity counts, steps, acceleration), recording period, body position, wear time), explained variance of AEE (R²), and number of additional predictors. The relation of univariate and multivariate R² with study characteristics was analyzed using nonparametric tests. Nineteen articles were identified. Examination of various accelerometers or subpopulations within one article was treated separately, resulting in 28 studies. Sample sizes ranged from 10 to 149. In most studies the accelerometer was triaxial, worn at the trunk during waking hours, and reported activity counts as the output metric. Recording periods ranged from 5 to 15 days. The variance of AEE explained by accelerometer-assessed physical activity ranged from 4 to 80% (median crude R² = 26%). Sample size was inversely related to the explained variance. Inclusion of 1 to 3 other predictors in addition to accelerometer output significantly increased the explained variance to a range of 12.5-86% (median total R² = 41%). The increase did not depend on the number of added predictors. We conclude that there is large heterogeneity across studies in the explained variance of AEE when estimated based on accelerometry. Thus, data on predicted AEE based on accelerometry-assessed physical activity need to be interpreted cautiously.

  10. Statistical Analysis of SSMIS Sea Ice Concentration Threshold at the Arctic Sea Ice Edge during Summer Based on MODIS and Ship-Based Observational Data.

    PubMed

    Ji, Qing; Li, Fei; Pang, Xiaoping; Luo, Cong

    2018-04-05

    The threshold of sea ice concentration (SIC) is the basis for accurately calculating sea ice extent based on passive microwave (PM) remote sensing data. However, the PM SIC threshold at the sea ice edge used in previous studies and released sea ice products has not always been consistent. To explore a representative value of the PM SIC threshold corresponding, on average, to the position of the Arctic sea ice edge during summer in recent years, we extracted sea ice edge boundaries from the Moderate-resolution Imaging Spectroradiometer (MODIS) sea ice product (MOD29, with a spatial resolution of 1 km), MODIS images (250 m), and sea ice ship-based observation points (1 km) during the fifth (CHINARE-2012) and sixth (CHINARE-2014) Chinese National Arctic Research Expeditions, and compared them in an overlay analysis with PM SIC derived from the Special Sensor Microwave Imager Sounder (SSMIS, with a spatial resolution of 25 km) in the summers of 2012 and 2014. Results showed that the average SSMIS SIC threshold at the Arctic sea ice edge based on ice-water boundary lines extracted from MOD29 was 33%, which was higher than the commonly used 15% discriminant threshold. The average SIC threshold at the sea ice edge based on ice-water boundary lines extracted by visual interpretation from four scenes of MODIS imagery was 35%, compared to an average value of 36% from the MOD29-extracted ice edge pixels for the same days. The average SIC of 31% at the sea ice edge points extracted from ship-based observations also confirmed that a SIC threshold of around 30% is recommended for summer sea ice extent calculations based on SSMIS PM data. These results can provide a reference for further studying the variation of sea ice in the rapidly changing Arctic.

  11. Supercritical multicomponent solvent coal extraction

    NASA Technical Reports Server (NTRS)

    Corcoran, W. H.; Fong, W. S.; Pichaichanarong, P.; Chan, P. C. F.; Lawson, D. D. (Inventor)

    1983-01-01

    The yield of organic extract from the supercritical extraction of coal with larger diameter organic solvents such as toluene is increased by use of a minor amount of from 0.1 to 10% by weight of a second solvent such as methanol having a molecular diameter significantly smaller than the average pore diameter of the coal.

  12. Feasibility of harvesting southern hardwood trees by extraction

    Treesearch

    Donald L. Sirois

    1977-01-01

    A Rome TXH Tree Extractor was used to explore the harvesting of four species of southern hardwoods by extraction. The tests indicate that harvesting by extraction is feasible if tree size is limited to 9 inches DBH or less. Stump and below-ground biomass averaged 18 percent of total tree biomass.

  13. [CONTENT OF OXIDATIVE STRESS MARKERS IN BLOOD PLASMA UNDER THE ACTION OF EXTRACTS OF GRATIOLA OFFICINALIS L., HELICHRYSUM ARENARIUM (L.) MOENCH, AND ANTHOCYANIN FORMS OF ZEA MAYS L].

    PubMed

    Durnova, N A; Afanas'eva, G A; Kurchatova, M N; Zaraeva, N V; Golikov, A G; Bucharskaya, A B; Golikov, A G; Bucharskaya, A B; Plastun, V O; Andreeva, N V

    2015-01-01

    The effect of aqueous solutions of dry ethanol extracts of Gratiola officinalis L., Helichrysum arenarium (L.) Moench, and anthocyanin forms of Zea mays L. on dioxidin-induced lipid peroxidation in blood has been studied in rats. It is established that all these extracts are capable of reducing the amount of average-mass (AM) molecules and malonic dialdehyde (MDA) in rat blood plasma. The extract of Gratiola officinalis L. reduces the concentration of AM and MDA molecules by 43%. The extract of Helichrysum arenarium (L.) Moench reduces the concentration of AM molecules on average by 18.66% (range 9.22-34.81%) and MDA by 49.36% (range 34.12-79.75%). The extract of anthocyanin forms of Zea mays L. does not reduce the concentration of AM molecules, but reduces the amount of MDA in the blood of rats on average by 27.88% (range 21.58-37.82%) (p < 0.01).

  14. Feasibility of Coherent and Incoherent Backscatter Experiments from the AMPS Laboratory. Technical Section

    NASA Technical Reports Server (NTRS)

    Mozer, F. S.

    1976-01-01

    A computer program simulated the spectrum which results when a radar signal is transmitted into the ionosphere for a finite time and received for an equal finite interval. The spectrum derived from this signal is statistical in nature because the signal is scattered by the ionosphere, which is itself statistical in nature. Many estimates of any ionospheric property can be made; their average value approaches the true average of the property being measured. Because of the statistical nature of the spectrum itself, the estimators vary about this average. The square root of the variance about this average is the standard deviation, an estimate of the error in any particular radar measurement. To determine the feasibility of the space shuttle radar, the magnitude of these errors for measurements of physical interest must be understood.
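
    The error behavior described above can be illustrated with a small Monte Carlo. For noise-like backscatter, a single-look power estimate is modeled here, by assumption, as exponentially distributed, so one look has a relative standard deviation of 1; averaging N independent looks shrinks it to roughly 1/√N:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    true_power = 1.0        # the "average property" being measured
    n_looks = 100           # independent spectral estimates averaged per measurement

    # 10,000 simulated radar measurements, each the mean of n_looks single looks
    looks = rng.exponential(scale=true_power, size=(10_000, n_looks))
    estimates = looks.mean(axis=1)

    # standard deviation of the averaged estimator, relative to the true power:
    rel_std = estimates.std() / true_power      # ~ 1 / sqrt(n_looks) = 0.1
    ```

    This is the trade-off the feasibility study must quantify: longer integration (more looks) buys a smaller standard deviation at the cost of time resolution.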

  15. The SAT Prediction of Grades for Mexican-American Versus Anglo-American Students at the University of California, Riverside.

    ERIC Educational Resources Information Center

    Goldman, Roy D.; Richards, Regina

    The predictive validity of the Scholastic Aptitude Test (SAT) for Mexican-Americans is investigated. Forty-two Mexican-American freshmen students who entered the University of California, Riverside, in the Fall 1971 participated in the study. Analyses of variance concerning ethnic groups on GPA (grade point average) and SAT verbal (SATV) and math…

  16. Health-Related Variables and Academic Performance among First-Year College Students: Implications for Sleep and Other Behaviors.

    ERIC Educational Resources Information Center

    Trockel, Mickey T.; Barnes, Michael D.; Egget, Dennis L.

    2000-01-01

    Analyzed the effect of several health behaviors and health-related variables on college freshmen's grade point averages (GPAs). Survey data indicated that sleep habits, particularly wake-up time, accounted for the most variance in GPAs. Higher GPAs related to strength training and study of spiritually oriented material. Lower GPAs related to…

  17. Implementation of learning outcome attainment measurement system in aviation engineering higher education

    NASA Astrophysics Data System (ADS)

    Salleh, I. Mohd; Mat Rani, M.

    2017-12-01

    This paper aims to discuss the effectiveness of the Learning Outcome Attainment Measurement System in assisting Outcome Based Education (OBE) for aviation engineering higher education in Malaysia. Direct assessments are discussed to show the implementation processes that play a key role in a successful outcome measurement system. The case study presented in this paper investigates the implementation of the system in the Aircraft Structure course of the Bachelor in Aircraft Engineering Technology program at UniKL-MIAT. Data were collected over five semesters, from July 2014 until July 2016. The study instruments include the reports generated in the Learning Outcome Attainment Measurement System (LOAMS), which contain individual course learning outcome (CLO) performance and course average performance. Analysis of the LOAMS reports revealed a positive significant correlation between the individual performance and course average performance reports. Analysis of variance further revealed a significant difference in OBE grade scores among the reports. Independent-samples F-test results, on the other hand, indicate that the variances of the two populations are unequal.

  18. Friends in low places: The impact of locations and companions on 21st birthday drinking.

    PubMed

    Rodriguez, Lindsey M; Young, Chelsie M; Tomkins, Mary M; DiBello, Angelo M; Krieger, Heather; Neighbors, Clayton

    2016-01-01

    The present research examined how various locations and companions were associated with hazardous drinking during 21st birthday celebrations. The sample included 912 college students (57% female) who completed an online survey to examine 21st birthday drinking. Locations included bars, friends' houses, restaurants, outdoor barbecues, homes, parents' homes, and Fraternity/Sorority houses. Companions included friends, family members, casual acquaintances, roommates, significant others, Fraternity/Sorority members, and none (alone). Participants consumed an average of 7.6 drinks and reached an average eBAC of .15 during their 21st birthday celebrations. Locations accounted for 20%/18% of the variance in number of drinks and eBAC, respectively, whereas companions accounted for 23%/20% of the variance. Drinking with romantic partners was associated with less drinking, whereas drinking with Fraternity/Sorority members was associated with more drinking. Stepwise regressions combining locations and companions suggested that, overall, celebrating in a bar setting and with Fraternity and Sorority members were the strongest variables associated with drinking. With the exception of a bar setting, companions were the most important contextual factors associated with 21st birthday drinking. Copyright © 2015 Elsevier Ltd. All rights reserved.

  19. Brief report: understanding intention to be physically active and physical activity behaviour in adolescents from a low socio-economic status background: an application of the Theory of Planned Behaviour.

    PubMed

    Duncan, Michael J; Rivis, Amanda; Jordan, Caroline

    2012-06-01

    The aim of this brief report is to examine the utility of the Theory of Planned Behaviour (TPB) for predicting the physical activity intentions and behaviour of British adolescents from lower-than-average socio-economic backgrounds. A prospective questionnaire design was employed with 197 adolescents aged 13-14 years (76 males, 121 females). At Time 1, participants completed standard measures of TPB variables. One week later (Time 2), participants completed the Physical Activity Questionnaire for Adolescents (PAQ-A) as a measure of physical activity behaviour. Hierarchical regression analyses showed that attitude and perceived behavioural control (PBC) jointly accounted for 25% of the variance in intention (p = 0.0001). Perceived behavioural control emerged as the only significant predictor of physical activity behaviour and explained 3.7% of the variance (p = 0.001). Therefore, attitude and PBC successfully predict intention towards physical activity, and PBC predicts physical activity behaviour, in British adolescents from lower-than-average socio-economic backgrounds. Copyright © 2011 The Foundation for Professionals in Services for Adolescents. Published by Elsevier Ltd. All rights reserved.
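
    Variance-explained figures like those above come from hierarchical regression, where the increment in R² when a predictor is added measures its unique contribution. A minimal sketch on synthetic (hypothetical) data, with variable names chosen only for illustration:

    ```python
    import numpy as np

    def r_squared(X, y):
        """Proportion of variance in y explained by an OLS fit on X (with intercept)."""
        X1 = np.column_stack([np.ones(len(y)), X])
        beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
        resid = y - X1 @ beta
        return 1.0 - resid.var() / y.var()

    # synthetic stand-ins for the survey measures
    rng = np.random.default_rng(2)
    attitude = rng.normal(size=300)
    pbc = rng.normal(size=300)
    intention = 0.4 * attitude + 0.3 * pbc + rng.normal(scale=1.0, size=300)

    r2_attitude = r_squared(attitude[:, None], intention)           # step 1
    r2_both = r_squared(np.column_stack([attitude, pbc]), intention)  # step 2
    delta_r2 = r2_both - r2_attitude    # PBC's incremental variance explained
    ```

    Because the models are nested and fitted by least squares on the same data, R² can only increase when a predictor is added; the size of that increase is what the hierarchical procedure tests for significance.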

  20. δ2H isotopic flux partitioning of evapotranspiration over a grass field following a water pulse and subsequent dry down

    NASA Astrophysics Data System (ADS)

    Good, Stephen P.; Soderberg, Keir; Guan, Kaiyu; King, Elizabeth G.; Scanlon, Todd M.; Caylor, Kelly K.

    2014-02-01

    The partitioning of surface vapor flux (FET) into evaporation (FE) and transpiration (FT) is theoretically possible because of distinct differences in end-member stable isotope composition. In this study, we combine high-frequency laser spectroscopy with eddy covariance techniques to critically evaluate isotope flux partitioning of FET over a grass field during a 15-day experiment. Following the application of a 30 mm water pulse, green grass coverage at the study site increased from 0 to 10% of ground surface area after 6 days and then began to senesce. Using isotope flux partitioning, transpiration increased as a fraction of total vapor flux from 0% to 40% during the green-up phase, after which this ratio decreased while exhibiting hysteresis with respect to green grass coverage. Daily daytime leaf-level gas exchange measurements compare well with daily isotope flux partitioning averages (RMSE = 0.0018 g m⁻² s⁻¹). Overall, the average ratio of FT to FET was 29%, where uncertainties in Keeling plot intercepts and transpiration composition resulted in an average uncertainty of ~5% in our isotopic partitioning of FET. Flux-variance similarity partitioning was partially consistent with the isotope-based approach, with divergence occurring after rainfall and when the grass was stressed. Over the average diurnal cycle, local meteorological conditions, particularly net radiation and relative humidity, are shown to control partitioning. At longer time scales, green leaf area and available soil water control FT/FET. Finally, we demonstrate the feasibility of combining isotope flux partitioning and flux-variance similarity theory to estimate water use efficiency at the landscape scale.
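
    For two end members, isotope flux partitioning reduces to a simple mixing equation: once the Keeling-plot intercept gives the isotopic composition of the total vapor flux, the transpired fraction follows from isotope mass balance. A minimal sketch (the δ²H values below are hypothetical, chosen only to illustrate a 29% transpired fraction like the study's average):

    ```python
    def transpiration_fraction(delta_et, delta_e, delta_t):
        """FT / FET from two-end-member delta-2H mass balance (values in per mil).

        delta_et : composition of total evapotranspiration (Keeling intercept)
        delta_e  : evaporation end member
        delta_t  : transpiration end member
        """
        return (delta_et - delta_e) / (delta_t - delta_e)

    # hypothetical compositions: evaporated vapor is strongly depleted,
    # transpired vapor close to source water
    f_t = transpiration_fraction(delta_et=-76.8, delta_e=-100.0, delta_t=-20.0)
    # f_t -> 0.29, i.e. transpiration contributes 29% of the total vapor flux
    ```

    The ~5% uncertainty quoted in the abstract propagates through this ratio from the uncertainties in the Keeling intercept (delta_et) and the end-member compositions.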

  1. Duration of surgical-orthodontic treatment.

    PubMed

    Häll, Birgitta; Jämsä, Tapio; Soukka, Tero; Peltomäki, Timo

    2008-10-01

    To study the duration of surgical-orthodontic treatment with special reference to patients' age and the type of tooth movements, i.e. extraction vs. non-extraction and intrusion before or extrusion after surgery to level the curve of Spee. The material consisted of the files of 37 consecutive surgical-orthodontic patients. The files were reviewed, and gender, diagnosis, type of malocclusion, age at the initiation of treatment, duration of treatment, type of tooth movements (extraction vs. non-extraction and levelling of the curve of Spee before or after the operation), and type of operation were retrieved. For statistical analyses, two-sample t-tests, Kruskal-Wallis, and Spearman rank correlation tests were used. Mean treatment duration of the sample was 26.8 months, of which pre-surgical orthodontics took on average 17.5 months. Patients with extractions as part of the treatment had statistically and clinically significantly longer treatment duration, on average 8 months longer, than those without extractions. No other studied variable appeared to have an impact on the treatment time. The present small sample size prevents reliable conclusions from being made. However, the findings suggest, and patients should be informed, that extractions included in the treatment plan increase the likelihood of a longer surgical-orthodontic treatment.

  2. The effect of developer age on the detection of approximal caries using three dental films.

    PubMed

    Syriopoulos, K; Velders, X L; Sanderink, G C; van Ginkel, F C; van Amerongen, J P; van der Stelt, P F

    1999-07-01

    To compare the diagnostic accuracy of three dental X-ray films for the detection of approximal caries using fresh and aged processing chemicals. Fifty-six extracted, unrestored premolars were radiographed under standardized conditions using the new Dentus M2 (Agfa-Gevaert, Mortsel, Belgium), Ektaspeed Plus, and Ultra-speed (Kodak Eastman Co, Rochester, USA) dental films. The films were processed manually using Agfa chemicals (Heraeus Kulzer, Dormagen, Germany). The procedure was repeated once a week until the complete exhaustion of the chemicals (6 weeks). Three independent observers assessed 210 radiographs using the following rating scale: 0 = sound; 1 = enamel lesion; 2 = lesion reaching the ADJ; 3 = dentinal lesion. True caries depth was determined by histological examination (14 sound surfaces, 11 enamel lesions, eight lesions reaching the ADJ, and 23 dentinal lesions). True caries depth was subtracted from the values given by the observers, and an analysis of variance was performed. The null hypothesis was rejected when P < 0.05. No significant differences in diagnostic accuracy were found between the three films when using chemicals up to 3 weeks old (P = 0.056). After the third week, Ultra-speed was significantly better than the other two films (P = 0.012). On average, caries depth was underestimated. A similar level of diagnostic accuracy for approximal caries is achieved with all three films. Dentus M2 and Ektaspeed Plus are at present the fastest available films and should therefore be recommended for clinical practice. Agfa chemicals should be renewed every 3 weeks; a 50% reduction in average gradient indicates that the processing chemicals need renewal.

  3. Countermovement jump height: gender and sport-specific differences in the force-time variables.

    PubMed

    Laffaye, Guillaume; Wagner, Phillip P; Tombleson, Tom I L

    2014-04-01

    The goal of this study was to assess (a) the eccentric rate of force development, the concentric force, and selected time variables on vertical performance during the countermovement jump, (b) the existence of gender differences in these variables, and (c) sport-specific differences. The sample was composed of 189 males and 84 females, all elite athletes involved in college and professional sports (primarily football, basketball, baseball, and volleyball). The subjects performed a series of 6 countermovement jumps on a force plate (500 Hz). Average eccentric rate of force development (ECC-RFD), total time (TIME), eccentric time (ECC-T), the ratio between eccentric and total time (ECC-T:T) and average force (CON-F) were extracted from force-time curves, and vertical jumping performance was measured by the impulse-momentum method. Results show that CON-F (r = 0.57; p < 0.001) and ECC-RFD (r = 0.52, p < 0.001) are strongly correlated with jump height (JH), whereas the time variables are slightly and negatively correlated (r = -0.21 to -0.23, p < 0.01). Force variables differed between the sexes (p < 0.01), whereas time variables did not, indicating a similar temporal structure. The best way to jump high is to increase CON-F and ECC-RFD while minimizing ECC-T. Principal component analysis (PCA) accounted for 76.8% of the JH variance and revealed that JH is predicted by a temporal and a force component. Furthermore, a PCA comparison among athletes revealed sport-specific signatures: volleyball players showed a temporal-prevailing profile, basketball players a weak-force profile with a large ECC-T:T, and football and baseball players explosive and powerful profiles.
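    The "variance accounted for" figure above comes from reading explained-variance ratios off the eigenvalues of the standardized covariance matrix. A minimal PCA sketch on synthetic stand-ins for two correlated force variables and one time variable (not the study's data):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins: two correlated "force" variables (CON-F, ECC-RFD)
# and one unrelated "time" variable (ECC-T), for 50 hypothetical athletes.
n = 50
force = rng.normal(size=n)
con_f = force + 0.3 * rng.normal(size=n)
ecc_rfd = force + 0.3 * rng.normal(size=n)
ecc_t = rng.normal(size=n)
X = np.column_stack([con_f, ecc_rfd, ecc_t])

# Standardize, then diagonalize the covariance (= correlation) matrix.
Z = (X - X.mean(axis=0)) / X.std(axis=0)
cov = np.cov(Z, rowvar=False)
eigvals = np.linalg.eigvalsh(cov)[::-1]   # eigenvalues, descending
explained = eigvals / eigvals.sum()       # variance explained per component
print([round(r, 2) for r in explained])
```

    With this construction the first component (the shared "force" factor) dominates, mirroring how a force component and a temporal component can jointly account for most of the JH variance.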

  4. Recovery correction technique for NMR spectroscopy of perchloric acid extracts using DL-valine-2,3-d2: validation and application to 5-fluorouracil-induced brain damage.

    PubMed

    Nakagami, Ryutaro; Yamaguchi, Masayuki; Ezawa, Kenji; Kimura, Sadaaki; Hamamichi, Shusei; Sekine, Norio; Furukawa, Akira; Niitsu, Mamoru; Fujii, Hirofumi

    2014-01-01

    We explored a recovery correction technique that can correct metabolite loss during perchloric acid (PCA) extraction and minimize inter-assay variance in quantitative (1)H nuclear magnetic resonance (NMR) spectroscopy of the brain, and evaluated its efficacy in 5-fluorouracil (5-FU)- and saline-administered rats. We measured the recovery of creatine and dl-valine-2,3-d2 from a PCA extract containing both compounds (0.5 to 8 mM). We intravenously administered either 5-FU for 4 days (total, 100 mg/kg body weight) or saline to 2 groups of 11 rats each. We subsequently performed PCA extraction of the whole brain on Day 9, externally adding 7 µmol of dl-valine-2,3-d2. We estimated metabolite concentrations using an NMR spectrometer with recovery correction, correcting metabolite concentrations based on the recovery factor of dl-valine-2,3-d2. For each metabolite concentration, we calculated the coefficient of variation (CEV) and compared differences between the 2 groups using an unpaired t-test. Equivalent recoveries of dl-valine-2,3-d2 (89.4 ± 3.9%) and creatine (89.7 ± 3.9%) in the PCA extract of the mixed solution indicated the suitability of dl-valine-2,3-d2 as an internal reference. In the rat study, recovery of dl-valine-2,3-d2 was 90.6 ± 9.2%. Nine major metabolite concentrations adjusted by the recovery of dl-valine-2,3-d2 in saline-administered rats were comparable to data in the literature. CEVs of these metabolites were reduced from 10-17% before correction to 7-16% after correction. The significance of differences in alanine and taurine between the 5-FU- and saline-administered groups was detected only after recovery correction (0.75 ± 0.12 versus 0.86 ± 0.07 for alanine; 5.17 ± 0.59 versus 5.66 ± 0.42 for taurine [µmol/g brain tissue]; P < 0.05). The new recovery correction technique corrected metabolite loss during PCA extraction, minimized inter-assay variance in quantitative (1)H NMR spectroscopy of brain tissue, and effectively detected inter-group differences in brain metabolite concentrations between 5-FU- and saline-administered rats.
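    The correction itself is a one-line rescaling: divide each apparent concentration by the internal reference's fractional recovery for that assay, which removes assay-to-assay recovery variation from the coefficient of variation. A minimal sketch with invented numbers (not the study's data):

```python
from statistics import mean, stdev

def recovery_correct(apparent_conc, reference_recovery):
    """Rescale an apparent metabolite concentration by the fractional
    recovery of the internal reference (e.g. dl-valine-2,3-d2)."""
    return apparent_conc / reference_recovery

def cv_percent(values):
    """Coefficient of variation, in percent: 100 * sample stdev / mean."""
    return 100 * stdev(values) / mean(values)

# Hypothetical apparent alanine concentrations (umol/g) from four assays,
# each paired with its own measured reference recovery.
apparent   = [0.68, 0.80, 0.71, 0.77]
recoveries = [0.85, 0.95, 0.88, 0.93]
corrected = [recovery_correct(c, r) for c, r in zip(apparent, recoveries)]
print(round(cv_percent(apparent), 1), round(cv_percent(corrected), 1))
```

    Because much of the spread in the apparent values here tracks the recovery differences, the corrected CV comes out smaller, which is the effect the abstract reports.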

  5. Recovery of Technetium Adsorbed on Charcoal

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Engelmann, Mark D.; Metz, Lori A.; Ballou, Nathan E.

    2006-05-01

    Two methods capable of near-complete recovery of technetium adsorbed on charcoal are presented. The first involves liquid extraction of the technetium from the charcoal with hot 4M nitric acid; an average recovery of 98% (n=3) is obtained after three rounds of extraction. The second method involves dry ashing with air in a quartz combustion tube at 400-450 °C and yields an average recovery of 96% (n=5). Other thermal methods were attempted, but resulted in reduced recovery and incomplete material balance.

  6. Dispersion of aerosol particles undergoing Brownian motion

    NASA Astrophysics Data System (ADS)

    Alonso, Manuel; Endo, Yoshiyuki

    2001-12-01

    The variance of the position distribution for a Brownian particle is derived in the general case where the particle is suspended in a flowing medium and, at the same time, is acted upon by an external field of force. It is shown that, for uniform force and flow fields, the variance is equal to that for a free particle. When the force field is not uniform but depends on spatial location, the variance can be larger or smaller than that for a free particle, depending on whether the average motion of the particles takes place toward, respectively, increasing or decreasing absolute values of the field strength. A few examples concerning aerosol particles are discussed, with special attention paid to the mobility classification of charged aerosols by a non-uniform electric field. As a practical application of these ideas, a new design of particle-size electrostatic classifier (differential mobility analyser, DMA) is proposed in which the aerosol particles migrate between the electrodes in a direction opposite to that in a conventional DMA, thereby improving the resolving power of the instrument.
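    The uniform-field result is easy to check numerically: a uniform drift shifts the mean position to v*t but leaves the position variance at the free-particle value 2*D*t. A minimal Monte Carlo sketch with illustrative parameters:

```python
import random
from statistics import pvariance

random.seed(1)

D = 0.5    # diffusion coefficient (arbitrary units)
t = 2.0    # elapsed time
v = 3.0    # uniform drift velocity (flow plus uniform force field)

# For a Brownian particle in uniform flow/force fields, the position is
# Gaussian with mean v*t and variance 2*D*t -- the free-particle variance.
sigma = (2 * D * t) ** 0.5
positions = [v * t + random.gauss(0.0, sigma) for _ in range(50000)]

print(round(sum(positions) / len(positions), 1))   # near v*t = 6.0
print(round(pvariance(positions), 1))              # near 2*D*t = 2.0
```

    A non-uniform field would make the drift position-dependent, and the sampled variance would then deviate from 2*D*t in the direction the abstract describes.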

  7. Evaluation of tomotherapy MVCT image enhancement program for tumor volume delineation

    PubMed Central

    Martin, Spencer; Rodrigues, George; Chen, Quan; Pavamani, Simon; Read, Nancy; Ahmad, Belal; Hammond, J. Alex; Venkatesan, Varagur; Renaud, James

    2011-01-01

    The aims of this study were to investigate the variability between physicians in delineation of head and neck tumors on original tomotherapy megavoltage CT (MVCT) studies and corresponding software enhanced MVCT images, and to establish an optimal approach for evaluation of image improvement. Five physicians contoured the gross tumor volume (GTV) for three head and neck cancer patients on 34 original and enhanced MVCT studies. Variation between original and enhanced MVCT studies was quantified by DICE coefficient and the coefficient of variance. Based on volume of agreement between physicians, higher correlation in terms of average DICE coefficients was observed in GTV delineation for enhanced MVCT for patients 1, 2, and 3 by 15%, 3%, and 7%, respectively, while delineation variance among physicians was reduced using enhanced MVCT for 12 of 17 weekly image studies. Enhanced MVCT provides advantages in reduction of variance among physicians in delineation of the GTV. Agreement on contouring by the same physician on both original and enhanced MVCT was equally high. PACS numbers: 87.57.N‐, 87.57.np, 87.57.nt
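    The DICE coefficient used above to score inter-physician agreement is twice the overlap volume divided by the sum of the two contoured volumes. A minimal sketch on toy voxel sets (hypothetical contours, not the study's data):

```python
def dice(a, b):
    """DICE similarity coefficient between two sets of voxel indices:
    2|A ∩ B| / (|A| + |B|), from 0 (disjoint) to 1 (identical)."""
    if not a and not b:
        return 1.0
    return 2 * len(a & b) / (len(a) + len(b))

# Two hypothetical GTV contours rasterized to voxel index sets:
# a 10x10 patch and the same patch shifted by 2 voxels.
gtv_original = {(x, y, 0) for x in range(10) for y in range(10)}
gtv_enhanced = {(x, y, 0) for x in range(2, 12) for y in range(10)}

print(round(dice(gtv_original, gtv_enhanced), 2))  # → 0.8
```

    Averaging such pairwise scores over all physician pairs gives the per-study agreement values compared between original and enhanced MVCT.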

  8. Constraining Particle Variation in Lunar Regolith for Simulant Design

    NASA Technical Reports Server (NTRS)

    Schrader, Christian M.; Rickman, Doug; Stoeser, Douglas; Hoelzer, Hans

    2008-01-01

    Simulants are used by the lunar engineering community to develop and test technologies for In Situ Resource Utilization (ISRU), excavation and drilling, and for mitigation of hazards to machinery and human health. Working with the United States Geological Survey (USGS), other NASA centers, private industry and academia, Marshall Space Flight Center (MSFC) is leading NASA's lunar regolith simulant program. There are two main efforts: simulant production and simulant evaluation. This work requires a highly detailed understanding of regolith particle type, size, and shape distribution, and of bulk density. The project has developed Figure of Merit (FoM) algorithms to quantitatively compare these characteristics between two materials. The FoM can be used to compare two lunar regolith samples, regolith to simulant, or two parcels of simulant. In the work presented here, we use the FoM algorithm to examine the variance of particle type in Apollo 16 highlands regolith core and surface samples. For this analysis we have used internally consistent particle type data for the 90-150 µm fraction of Apollo core 64001/64002 from station 4, core 60009/60010 from station 10, and surface samples from various Apollo 16 stations. We calculate mean modal compositions for each core and for the group of surface samples and quantitatively compare samples of each group to its mean as a measurement of within-group variance; we also calculate an FoM for every sample against the mean composition of 64001/64002. This gives variation with depth at two locations and between Apollo 16 stations. Of the tested groups, core 60009/60010 has the highest internal variance, with an average FoM score of 0.76, and core 64001/64002 has the lowest, with an average FoM of 0.92. The surface samples have a low but intermediate internal variance, with an average FoM of 0.79. FoMs calculated against the 64001/64002 mean reference composition range from 0.79-0.97 for 64001/64002, from 0.41-0.91 for 60009/60010, and from 0.54-0.93 for the surface samples. Six samples fall below 0.70, and they are also the least mature (i.e., have the lowest I(sub s)/FeO). Because agglutinates are the dominant particle type and the agglutinate population increases with sample maturity (I(sub s)/FeO), the maturity of the sample relative to the reference is a prime determinant of the particle type FoM score within these highland samples.
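    The abstract does not give the FoM algorithm itself, but the comparisons it describes, scoring a sample's modal composition against a group mean, can be illustrated with one plausible stand-in: 1 minus half the L1 distance between two composition vectors. This is a hypothetical construction for illustration only, not the project's actual FoM.

```python
def modal_similarity(sample, reference):
    """Hypothetical similarity score in the spirit of a Figure of Merit:
    1 minus half the L1 distance between two modal-composition vectors
    (fractions summing to 1), so identical compositions score 1.0.
    NOT the project's actual FoM algorithm, which is not given here."""
    assert abs(sum(sample) - 1) < 1e-9 and abs(sum(reference) - 1) < 1e-9
    return 1 - 0.5 * sum(abs(s - r) for s, r in zip(sample, reference))

# Invented modal fractions (agglutinates, plagioclase, glass, lithics).
core_mean = [0.55, 0.25, 0.12, 0.08]
sample    = [0.45, 0.30, 0.15, 0.10]
print(round(modal_similarity(sample, core_mean), 2))
```

    Scoring every sample against its group mean and averaging the results would then give a within-group variance summary of the kind reported above.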

  9. Development of Extraction Tests for Determining the Bioavailability of Metals in Soil

    DTIC Science & Technology

    2005-06-01

    Liability Information System COV coefficient of variance Cr(III) trivalent chromium Cr(VI) hexavalent chromium DCB dithionite citrate bicarbonate...indicated that bioavailability was a less important issue for chromium than understanding the form of chromium (i.e., trivalent or hexavalent) that is...7.3.3 Chromium 50 7.3.4 Lead 50 7.3.5 Summary of In Vitro Testing for Wildlife Receptors 51 7.4 References 51 Supplemental Materials for

  10. Design, Development, and Evaluation of a Novel Retraction Device for Gallbladder Extraction During Laparoscopic Cholecystectomy

    PubMed Central

    Judge, Joshua M.; Stukenborg, George J.; Johnston, William F.; Guilford, William H.; Slingluff, Craig L.; Hallowell, Peter T.

    2015-01-01

    Background A source of frustration during laparoscopic cholecystectomy involves extraction of the gallbladder through port sites smaller than the gallbladder itself. We describe the development and testing of a novel device for the safe, minimal enlargement of laparoscopic port sites to extract large, stone-filled gallbladders from the abdomen. Methods The study device consists of a handle with a retraction tongue to shield the specimen and a guide for a scalpel to incise the fascia within the incision. Patients enrolled underwent laparoscopic cholecystectomy. Gallbladder extraction was attempted. If standard measures failed, the device was implemented. Extraction time and device utility scores were recorded for each patient. Patients returned 3 - 4 weeks post-operatively for assessment of pain level, cosmetic effect, and presence of infectious complications. Results Twenty (51%) of 39 patients required the device. Average extraction time for the first 8 patients was 120 seconds. After interim analysis, an improved device was used in twelve patients, and average extraction time was 24 seconds. There were no adverse events. Post-operative pain ratings and incision cosmesis were comparable between patients with and without use of the device. Conclusion The study device enables safe and rapid extraction of impacted gallbladders through the abdominal wall. PMID:23897085

  11. Shared genetic variance between obesity and white matter integrity in Mexican Americans.

    PubMed

    Spieker, Elena A; Kochunov, Peter; Rowland, Laura M; Sprooten, Emma; Winkler, Anderson M; Olvera, Rene L; Almasy, Laura; Duggirala, Ravi; Fox, Peter T; Blangero, John; Glahn, David C; Curran, Joanne E

    2015-01-01

    Obesity is a chronic metabolic disorder that may also lead to reduced white matter integrity, potentially due to shared genetic risk factors. Genetic correlation analyses were conducted in a large cohort of Mexican American families in San Antonio (N = 761, 58% females, ages 18-81 years; mean 41.3 ± 14.5) from the Genetics of Brain Structure and Function Study. Shared genetic variance was calculated between measures of adiposity [body mass index (BMI; kg/m(2)) and waist circumference (WC; in)] and whole-brain and regional measurements of cerebral white matter integrity (fractional anisotropy, FA). Whole-brain average and regional FA values for 10 major white matter tracts were calculated from high angular resolution diffusion tensor imaging data (DTI; 1.7 × 1.7 × 3 mm; 55 directions). Additive genetic factors explained intersubject variance in BMI (heritability, h(2) = 0.58), WC (h(2) = 0.57), and FA (h(2) = 0.49). FA shared significant portions of genetic variance with BMI in the genu (ρG = -0.25), body (ρG = -0.30), and splenium (ρG = -0.26) of the corpus callosum, the internal capsule (ρG = -0.29), and the thalamic radiation (ρG = -0.31) (all p's ≤ 0.043). The strongest evidence of shared variance was between BMI/WC and FA in the superior fronto-occipital fasciculus (ρG = -0.39, p = 0.020; ρG = -0.39, p = 0.030), which highlights region-specific variation in the neural correlates of obesity. This may suggest that increased obesity and reduced white matter integrity share common genetic risk factors.

  12. Sexual selection in a lekking bird: the relative opportunity for selection by female choice and male competition.

    PubMed

    DuVal, Emily H; Kempenaers, Bart

    2008-09-07

    Leks are classic models for studies of sexual selection due to extreme variance in male reproductive success, but the relative influence of intrasexual competition and female mate choice in creating this skew is debatable. In the lekking lance-tailed manakin (Chiroxiphia lanceolata), these selective episodes are temporally separated into intrasexual competition for alpha status and female mate choice among alpha males that rarely interact. Variance in reproductive success between status classes of adult males (alpha versus non-alpha) can therefore be attributed to male-male competition, whereas that within status largely reflects female mate choice. This provides an excellent opportunity for quantifying the relative contribution of each of these mechanisms of sexual selection to the overall opportunity for sexual selection on males (Imales). To calculate variance in actual reproductive success, we assigned genetic paternity to 92.3% of 447 chicks sampled in seven years. Reproduction by non-alphas was rare and apparently reflected status misclassifications or opportunistic copulations en route to attaining alpha status rather than alternative mating strategies. On average 31% (range 7-44%, n=6 years) of the total Imales was due to variance in reproductive success between alphas and non-alphas. Similarly, in a cohort of same-aged males followed for six years, 44-58% of the total Imales was attributed to variance between males of different status. Thus, both intrasexual competition for status and female mate choice among lekking alpha males contribute substantially to the potential for sexual selection in this species.
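    The opportunity for selection quantified above is the variance in reproductive success divided by the square of its mean. A minimal sketch with invented sire counts (not the study's paternity data):

```python
from statistics import mean, pvariance

def opportunity_for_selection(offspring_counts):
    """I = variance in reproductive success / (mean reproductive success)^2.
    Skewed success across males inflates I; equal success gives I = 0."""
    m = mean(offspring_counts)
    return pvariance(offspring_counts, mu=m) / m ** 2

# Hypothetical chicks sired by ten males: a few alphas take most matings,
# the lek-like skew that drives I upward.
counts = [12, 8, 5, 2, 1, 0, 0, 0, 0, 0]
print(round(opportunity_for_selection(counts), 2))
```

    Partitioning the variance into between-status and within-status components (as the abstract does) then attributes shares of the total I to male-male competition and to female choice, respectively.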

  13. Sexual selection in a lekking bird: the relative opportunity for selection by female choice and male competition

    PubMed Central

    DuVal, Emily H; Kempenaers, Bart

    2008-01-01

    Leks are classic models for studies of sexual selection due to extreme variance in male reproductive success, but the relative influence of intrasexual competition and female mate choice in creating this skew is debatable. In the lekking lance-tailed manakin (Chiroxiphia lanceolata), these selective episodes are temporally separated into intrasexual competition for alpha status and female mate choice among alpha males that rarely interact. Variance in reproductive success between status classes of adult males (alpha versus non-alpha) can therefore be attributed to male–male competition whereas that within status largely reflects female mate choice. This provides an excellent opportunity for quantifying the relative contribution of each of these mechanisms of sexual selection to the overall opportunity for sexual selection on males (Imales). To calculate variance in actual reproductive success, we assigned genetic paternity to 92.3% of 447 chicks sampled in seven years. Reproduction by non-alphas was rare and apparently reflected status misclassifications or opportunistic copulations en route to attaining alpha status rather than alternative mating strategies. On average 31% (range 7–44%, n=6 years) of the total Imales was due to variance in reproductive success between alphas and non-alphas. Similarly, in a cohort of same-aged males followed for six years, 44–58% of the total Imales was attributed to variance between males of different status. Thus, both intrasexual competition for status and female mate choice among lekking alpha males contribute substantially to the potential for sexual selection in this species. PMID:18495620

  14. Green Synthesis and Catalytic Activity of Gold Nanoparticles Synthesized by Artemisia capillaris Water Extract

    NASA Astrophysics Data System (ADS)

    Lim, Soo Hyeon; Ahn, Eun-Young; Park, Youmie

    2016-10-01

    Gold nanoparticles were synthesized using a water extract of Artemisia capillaris (AC-AuNPs) under different extract concentrations, and their catalytic activity was evaluated in a 4-nitrophenol reduction reaction in the presence of sodium borohydride. The AC-AuNPs showed violet or wine colors with characteristic surface plasmon resonance bands at 534-543 nm that were dependent on the extract concentration. Spherical nanoparticles with average sizes of 16.88 ± 5.47 to 29.93 ± 9.80 nm were observed by transmission electron microscopy. A blue shift in the maximum surface plasmon resonance was observed with increasing extract concentration. The face-centered cubic structure of the AC-AuNPs was confirmed by high-resolution X-ray diffraction analysis. Based on phytochemical screening and Fourier transform infrared spectra, flavonoids, phenolic compounds, and amino acids present in the extract contributed to the reduction of Au ions to AC-AuNPs. The average size of the AC-AuNPs decreased as the extract concentration during the synthesis was increased, and higher 4-nitrophenol reduction rate constants were observed for the smaller sizes. The extract in the AC-AuNPs was removed by centrifugation to investigate the effect of the extract on the reduction reaction. Interestingly, the removal of extracts greatly enhanced their catalytic activity, by up to 50.4%. The proposed experimental method, which uses simple centrifugation, can be applied to other metallic nanoparticles that are green-synthesized with plant extracts to enhance their catalytic activity.

  15. Highly Effective DNA Extraction Method for Nuclear Short Tandem Repeat Testing of Skeletal Remains from Mass Graves

    PubMed Central

    Davoren, Jon; Vanek, Daniel; Konjhodzić, Rijad; Crews, John; Huffine, Edwin; Parsons, Thomas J.

    2007-01-01

    Aim To quantitatively compare a silica extraction method with a commonly used phenol/chloroform extraction method for DNA analysis of specimens exhumed from mass graves. Methods DNA was extracted from twenty randomly chosen femur samples, using the International Commission on Missing Persons (ICMP) silica method, based on Qiagen Blood Maxi Kit, and compared with the DNA extracted by the standard phenol/chloroform-based method. The efficacy of extraction methods was compared by real time polymerase chain reaction (PCR) to measure DNA quantity and the presence of inhibitors and by amplification with the PowerPlex 16 (PP16) multiplex nuclear short tandem repeat (STR) kit. Results DNA quantification results showed that the silica-based method extracted on average 1.94 ng of DNA per gram of bone (range 0.25-9.58 ng/g), compared with only 0.68 ng/g by the organic method extracted (range 0.0016-4.4880 ng/g). Inhibition tests showed that there were on average significantly lower levels of PCR inhibitors in DNA isolated by the organic method. When amplified with PP16, all samples extracted by silica-based method produced 16 full loci profiles, while only 75% of the DNA extracts obtained by organic technique amplified 16 loci profiles. Conclusions The silica-based extraction method showed better results in nuclear STR typing from degraded bone samples than a commonly used phenol/chloroform method. PMID:17696302

  16. Average capacity of the ground to train communication link of a curved track in the turbulence of gamma-gamma distribution

    NASA Astrophysics Data System (ADS)

    Yang, Yanqiu; Yu, Lin; Zhang, Yixin

    2017-04-01

    A model of the average capacity of an optical wireless communication link with pointing errors for the ground-to-train channel of a curved track is established based on non-Kolmogorov turbulence. By adopting the gamma-gamma distribution model, we derive the average capacity expression for this channel. The numerical analysis reveals that heavier fog reduces the average capacity of the link. For a larger average link capacity, the strength of atmospheric turbulence, the variance of pointing errors, and the covered track length need to be reduced, while the normalized beamwidth and the average signal-to-noise ratio (SNR) of the turbulence-free link need to be increased. We can increase the transmit aperture to expand the beamwidth and enhance the signal intensity, thereby decreasing the impact of beam wander. If the system adopts automatic beam tracking at the receiver positioned on the roof of the train, eliminating the pointing errors caused by beam wander and train vibration, the equivalent average capacity of the channel reaches a maximum. The impact of variation in the non-Kolmogorov spectral index on the average capacity of the link can be ignored.

  17. [A method for rapid extracting three-dimensional root model of vivo tooth from cone beam computed tomography data based on the anatomical characteristics of periodontal ligament].

    PubMed

    Zhao, Y J; Wang, S W; Liu, Y; Wang, Y

    2017-02-18

    To explore a new method for rapidly extracting and rebuilding three-dimensional (3D) digital root models of vivo teeth from cone beam computed tomography (CBCT) data based on the anatomical characteristics of the periodontal ligament, and to evaluate the extraction accuracy of the method. In the study, 15 extracted teeth (11 with a single root, 4 with double roots) were collected from an oral clinic, and 3D digital root models of each tooth were obtained in STL format by a 3D dental scanner with a high accuracy of 0.02 mm. CBCT data for each patient were acquired before tooth extraction, and DICOM data with a voxel size of 0.3 mm were imported into Mimics 18.0 software. Segmentation, Morphology operations, Boolean operations and the Smart expanded function in Mimics were used to edit the tooth, bone and periodontal ligament threshold masks, and the root threshold mask was automatically acquired after a series of mask operations. 3D digital root models were finally extracted in STL format. The 3D morphology deviation between the extracted root models and the corresponding vivo root models was compared in Geomagic Studio 2012 software. The 3D size errors along the long axis and in the bucco-lingual and mesio-distal directions were also calculated. The average 3D morphology deviation for the 15 roots, obtained by calculating the root mean square (RMS) value, was 0.22 mm; the average size errors in the mesio-distal direction, the bucco-lingual direction and the long axis were 0.46 mm, 0.36 mm and -0.68 mm, respectively. The average time of this new method for extracting a single root was about 2-3 min. It could meet the accuracy requirement of root 3D reconstruction for oral clinical use. This study established a new method for rapidly extracting 3D root models of vivo teeth from CBCT data. It could simplify the traditional manual operation and improve the efficiency and automation of single root extraction. The strategy of this method for complete dentition extraction needs further research.
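    The RMS value used above to summarize 3D morphology deviation is the square root of the mean squared surface deviation. A minimal sketch with invented deviation samples (not the study's measurements):

```python
def rms(deviations):
    """Root mean square of signed surface deviations (mm): sqrt(mean(d^2)).
    Unlike a plain mean, RMS does not let positive and negative
    deviations cancel, so it reflects the magnitude of mismatch."""
    return (sum(d * d for d in deviations) / len(deviations)) ** 0.5

# Hypothetical signed deviations (mm) between an extracted root surface
# and the scanned reference surface at sampled points.
devs = [0.10, -0.25, 0.30, -0.15, 0.20]
print(round(rms(devs), 3))
```

    In practice the deviation samples come from point-to-surface distances computed by the comparison software over the whole root surface.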

  18. Development of new composite biosorbents from olive pomace wastes

    NASA Astrophysics Data System (ADS)

    Pagnanelli, Francesca; Viggi, Carolina Cruz; Toro, Luigi

    2010-06-01

    In this study, olive pomace was used as a source of binding substances for the development of composite biosorbents to be used in heavy metal removal from aqueous solutions. The aim was to obtain a biosorbent material with an increased concentration of binding sites. The effects of two different extraction procedures (one using only methanol, the other hexane followed by methanol) on the binding properties of olive pomace were tested by potentiometric titrations and batch biosorption tests for copper and cadmium removal. Titration modelling showed that both kinds of extraction generated a solid with a reduced amount of protonatable sites. Biosorption tests were organized according to full factorial designs. Analysis of variance showed that both kinds of extraction had a statistically significant negative effect on metal biosorption. In the case of cadmium, the extractions also caused a significant decrease in selectivity with respect to olive pomace. After the acid-base and binding properties of the extracted substances were determined, the substances were adsorbed onto a synthetic resin (octadecylsilane) and calcium alginate beads. In this way, two kinds of composite biosorbents were obtained, both having an increased concentration of binding substances with respect to native olive pomace and both working more efficiently in metal removal.

  19. The segmentation of bones in pelvic CT images based on extraction of key frames.

    PubMed

    Yu, Hui; Wang, Haijun; Shi, Yao; Xu, Ke; Yu, Xuyao; Cao, Yuzhen

    2018-05-22

    Bone segmentation is important in computed tomography (CT) imaging of the pelvis, which assists physicians in the early diagnosis of pelvic injury, in planning operations, and in evaluating the effects of surgical treatment. This study developed a new algorithm for the accurate, fast, and efficient segmentation of the pelvis. The proposed method consists of two main parts: the extraction of key frames and the segmentation of pelvic CT images. Key frames were extracted based on pixel difference, mutual information and normalized correlation coefficient. In the pelvis segmentation phase, skeleton extraction from CT images and a marker-based watershed algorithm were combined to segment the pelvis. To meet the requirements of clinical application, physician's judgment is needed. Therefore the proposed methodology is semi-automated. In this paper, 5 sets of CT data were used to test the overlapping area, and 15 CT images were used to determine the average deviation distance. The average overlapping area of the 5 sets was greater than 94%, and the minimum average deviation distance was approximately 0.58 pixels. In addition, the key frame extraction efficiency and the running time of the proposed method were evaluated on 20 sets of CT data. For each set, approximately 13% of the images were selected as key frames, and the average processing time was approximately 2 min (the time for manual marking was not included). The proposed method is able to achieve accurate, fast, and efficient segmentation of pelvic CT image sequences. Segmentation results not only provide an important reference for early diagnosis and decisions regarding surgical procedures, they also offer more accurate data for medical image registration, recognition and 3D reconstruction.

  20. Development of a comprehensive screening method for more than 300 organic chemicals in water samples using a combination of solid-phase extraction and liquid chromatography-time-of-flight-mass spectrometry.

    PubMed

    Chau, Hong Thi Cam; Kadokami, Kiwao; Ifuku, Tomomi; Yoshida, Yusuke

    2017-12-01

    A comprehensive screening method for 311 organic compounds with a wide range of physicochemical properties (log Pow -2.2 to 8.53) in water samples was developed by combining solid-phase extraction with liquid chromatography-high-resolution time-of-flight mass spectrometry. Method optimization using 128 pesticides revealed that tandem extraction with styrene-divinylbenzene polymer and activated carbon solid-phase extraction cartridges at pH 7.0 was optimal. The developed screening method was able to extract 190 model compounds from spiked reagent water with an average recovery of 80.8% and an average relative standard deviation (RSD) of 13.5% at 0.20 μg/L, and with 87.1% recovery and 10.8% RSD at 0.05 μg/L. Spike-recovery testing (0.20 μg/L) using real sewage treatment plant effluents resulted in an average recovery and average RSD for the 190 model compounds of 77.4% and 13.1%, respectively. The method was applied to the influent and effluent of five sewage treatment plants in Kitakyushu, Japan, with 29 out of the 311 analytes being observed at least once. The results showed that this method can screen for a large number of chemicals with a wide range of physicochemical properties quickly and at low operational cost, something that is difficult to achieve using conventional analytical methods. This method will find utility in target screening of high-risk hazardous chemicals in environmental waters, and for confirming the safety of water after environmental incidents.
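    Spike-recovery and RSD figures like those above are computed per compound from replicate measurements of a sample spiked at a known level. A minimal sketch with invented replicates (not the study's data):

```python
from statistics import mean, stdev

def spike_recovery_percent(measured, spiked_conc):
    """Average measured concentration as a percentage of the spiked level."""
    return 100 * mean(measured) / spiked_conc

def rsd_percent(measured):
    """Relative standard deviation: 100 * sample stdev / mean."""
    return 100 * stdev(measured) / mean(measured)

# Hypothetical replicate results (ug/L) for one pesticide spiked at 0.20 ug/L.
replicates = [0.17, 0.16, 0.18, 0.15, 0.17]
print(round(spike_recovery_percent(replicates, 0.20), 1))  # recovery, %
print(round(rsd_percent(replicates), 1))                   # RSD, %
```

    Averaging these two figures over all model compounds gives summary numbers of the kind reported for the spiked reagent water and effluent tests.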
