Sample records for finite sample bias

  1. Assessing Compliance-Effect Bias in the Two Stage Least Squares Estimator

    ERIC Educational Resources Information Center

    Reardon, Sean; Unlu, Fatih; Zhu, Pei; Bloom, Howard

    2011-01-01

    The proposed paper studies the bias in the two-stage least squares, or 2SLS, estimator that is caused by the compliance-effect covariance (hereafter, the compliance-effect bias). It starts by deriving the formula for the bias in an infinite sample (i.e., in the absence of finite sample bias) under different circumstances. Specifically, it…

  2. Data-Adaptive Bias-Reduced Doubly Robust Estimation.

    PubMed

    Vermeulen, Karel; Vansteelandt, Stijn

    2016-05-01

    Doubly robust estimators have now been proposed for a variety of target parameters in the causal inference and missing data literature. These consistently estimate the parameter of interest under a semiparametric model when one of two nuisance working models is correctly specified, regardless of which. The recently proposed bias-reduced doubly robust estimation procedure aims to partially retain this robustness in more realistic settings where both working models are misspecified. These so-called bias-reduced doubly robust estimators make use of special (finite-dimensional) nuisance parameter estimators that are designed to locally minimize the squared asymptotic bias of the doubly robust estimator in certain directions of these finite-dimensional nuisance parameters under misspecification of both parametric working models. In this article, we extend this idea to incorporate the use of data-adaptive estimators (infinite-dimensional nuisance parameters), by exploiting the bias reduction estimation principle in the direction of only one nuisance parameter. We additionally provide an asymptotic linearity theorem which gives the influence function of the proposed doubly robust estimator under correct specification of a parametric nuisance working model for the missingness mechanism/propensity score but a possibly misspecified (finite- or infinite-dimensional) outcome working model. Simulation studies confirm the desirable finite-sample performance of the proposed estimators relative to a variety of other doubly robust estimators.

  3. Adaptively biased sequential importance sampling for rare events in reaction networks with comparison to exact solutions from finite buffer dCME method

    PubMed Central

    Cao, Youfang; Liang, Jie

    2013-01-01

    Critical events that occur rarely in biological processes are of great importance, but are challenging to study using Monte Carlo simulation. By introducing biases to reaction selection and reaction rates, weighted stochastic simulation algorithms based on importance sampling allow rare events to be sampled more effectively. However, existing methods do not address the important issue of barrier crossing, which often arises from multistable networks and systems with complex probability landscape. In addition, the proliferation of parameters and the associated computing cost pose significant problems. Here we introduce a general theoretical framework for obtaining optimized biases in sampling individual reactions for estimating probabilities of rare events. We further describe a practical algorithm called adaptively biased sequential importance sampling (ABSIS) method for efficient probability estimation. By adopting a look-ahead strategy and by enumerating short paths from the current state, we estimate the reaction-specific and state-specific forward and backward moving probabilities of the system, which are then used to bias reaction selections. The ABSIS algorithm can automatically detect barrier-crossing regions, and can adjust bias adaptively at different steps of the sampling process, with bias determined by the outcome of exhaustively generated short paths. In addition, there are only two bias parameters to be determined, regardless of the number of the reactions and the complexity of the network. We have applied the ABSIS method to four biochemical networks: the birth-death process, the reversible isomerization, the bistable Schlögl model, and the enzymatic futile cycle model. For comparison, we have also applied the finite buffer discrete chemical master equation (dCME) method recently developed to obtain exact numerical solutions of the underlying discrete chemical master equations of these problems. This allows us to assess sampling results objectively by comparing simulation results with true answers. Overall, ABSIS can accurately and efficiently estimate rare event probabilities for all examples, often with smaller variance than other importance sampling algorithms. The ABSIS method is general and can be applied to study rare events of other stochastic networks with complex probability landscape. PMID:23862966
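
    The general idea of biasing reaction selection while preserving an unbiased probability estimate can be illustrated with a short weighted-SSA sketch in Python. This is not the ABSIS algorithm itself (no look-ahead, no adaptive bias); it only shows how a likelihood-ratio weight compensates for biased reaction selection. The birth-death rate constants, threshold, and bias factor are made-up values.

    ```python
    import numpy as np

    # Weighted SSA sketch: estimate P(X reaches X_RARE before time T_MAX) for a
    # birth-death process (0 -> X at rate K_BIRTH, X -> 0 at rate K_DEATH * X).
    # Reaction selection is biased toward births; the likelihood ratio
    # (true selection prob / biased selection prob) is accumulated as a weight.
    # All parameter values below are illustrative assumptions.
    rng = np.random.default_rng(0)
    K_BIRTH, K_DEATH = 1.0, 0.025     # hypothetical rate constants
    X0, X_RARE, T_MAX = 40, 70, 50.0
    BIAS = 2.0                        # bias factor favoring the birth reaction

    def weighted_trajectory():
        x, t, w = X0, 0.0, 1.0
        while t < T_MAX:
            a = np.array([K_BIRTH, K_DEATH * x])     # true propensities
            a0 = a.sum()
            if a0 == 0.0:
                break
            t += rng.exponential(1.0 / a0)           # waiting time from true a0
            b = np.array([BIAS * a[0], a[1]])        # biased propensities
            p_true, p_bias = a / a0, b / b.sum()
            j = rng.choice(2, p=p_bias)              # biased reaction selection
            w *= p_true[j] / p_bias[j]               # likelihood-ratio weight
            x += 1 if j == 0 else -1
            if x >= X_RARE:
                return w                             # rare event reached
        return 0.0

    estimates = [weighted_trajectory() for _ in range(5000)]
    print("estimated rare-event probability:", np.mean(estimates))
    ```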

  4. Adaptively biased sequential importance sampling for rare events in reaction networks with comparison to exact solutions from finite buffer dCME method

    NASA Astrophysics Data System (ADS)

    Cao, Youfang; Liang, Jie

    2013-07-01

    Critical events that occur rarely in biological processes are of great importance, but are challenging to study using Monte Carlo simulation. By introducing biases to reaction selection and reaction rates, weighted stochastic simulation algorithms based on importance sampling allow rare events to be sampled more effectively. However, existing methods do not address the important issue of barrier crossing, which often arises from multistable networks and systems with complex probability landscape. In addition, the proliferation of parameters and the associated computing cost pose significant problems. Here we introduce a general theoretical framework for obtaining optimized biases in sampling individual reactions for estimating probabilities of rare events. We further describe a practical algorithm called adaptively biased sequential importance sampling (ABSIS) method for efficient probability estimation. By adopting a look-ahead strategy and by enumerating short paths from the current state, we estimate the reaction-specific and state-specific forward and backward moving probabilities of the system, which are then used to bias reaction selections. The ABSIS algorithm can automatically detect barrier-crossing regions, and can adjust bias adaptively at different steps of the sampling process, with bias determined by the outcome of exhaustively generated short paths. In addition, there are only two bias parameters to be determined, regardless of the number of the reactions and the complexity of the network. We have applied the ABSIS method to four biochemical networks: the birth-death process, the reversible isomerization, the bistable Schlögl model, and the enzymatic futile cycle model. For comparison, we have also applied the finite buffer discrete chemical master equation (dCME) method recently developed to obtain exact numerical solutions of the underlying discrete chemical master equations of these problems. This allows us to assess sampling results objectively by comparing simulation results with true answers. Overall, ABSIS can accurately and efficiently estimate rare event probabilities for all examples, often with smaller variance than other importance sampling algorithms. The ABSIS method is general and can be applied to study rare events of other stochastic networks with complex probability landscape.

  5. Adaptively biased sequential importance sampling for rare events in reaction networks with comparison to exact solutions from finite buffer dCME method.

    PubMed

    Cao, Youfang; Liang, Jie

    2013-07-14

    Critical events that occur rarely in biological processes are of great importance, but are challenging to study using Monte Carlo simulation. By introducing biases to reaction selection and reaction rates, weighted stochastic simulation algorithms based on importance sampling allow rare events to be sampled more effectively. However, existing methods do not address the important issue of barrier crossing, which often arises from multistable networks and systems with complex probability landscape. In addition, the proliferation of parameters and the associated computing cost pose significant problems. Here we introduce a general theoretical framework for obtaining optimized biases in sampling individual reactions for estimating probabilities of rare events. We further describe a practical algorithm called adaptively biased sequential importance sampling (ABSIS) method for efficient probability estimation. By adopting a look-ahead strategy and by enumerating short paths from the current state, we estimate the reaction-specific and state-specific forward and backward moving probabilities of the system, which are then used to bias reaction selections. The ABSIS algorithm can automatically detect barrier-crossing regions, and can adjust bias adaptively at different steps of the sampling process, with bias determined by the outcome of exhaustively generated short paths. In addition, there are only two bias parameters to be determined, regardless of the number of the reactions and the complexity of the network. We have applied the ABSIS method to four biochemical networks: the birth-death process, the reversible isomerization, the bistable Schlögl model, and the enzymatic futile cycle model. For comparison, we have also applied the finite buffer discrete chemical master equation (dCME) method recently developed to obtain exact numerical solutions of the underlying discrete chemical master equations of these problems. This allows us to assess sampling results objectively by comparing simulation results with true answers. Overall, ABSIS can accurately and efficiently estimate rare event probabilities for all examples, often with smaller variance than other importance sampling algorithms. The ABSIS method is general and can be applied to study rare events of other stochastic networks with complex probability landscape.

  6. Errors in the estimation of approximate entropy and other recurrence-plot-derived indices due to the finite resolution of RR time series.

    PubMed

    García-González, Miguel A; Fernández-Chimeno, Mireya; Ramos-Castro, Juan

    2009-02-01

    An analysis of the errors due to the finite resolution of RR time series in the estimation of the approximate entropy (ApEn) is described. The quantification errors in the discrete RR time series produce considerable errors in the ApEn estimation (bias and variance) when the signal variability or the sampling frequency is low. Similar errors can be found in indices related to the quantification of recurrence plots. An easy way to calculate a figure of merit [the signal to resolution of the neighborhood ratio (SRN)] is proposed in order to predict when the bias in the indices could be high. When SRN is close to an integer value n, the bias is higher than when near n - 1/2 or n + 1/2. Moreover, if SRN is close to an integer value, the lower this value, the greater the bias is.
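
    The quantization effect described above can be reproduced with a small numerical experiment: compute approximate entropy on a simulated RR series before and after rounding it to a finite resolution. The ApEn implementation below follows the standard Pincus definition; the RR statistics and the 4 ms resolution are illustrative assumptions rather than values from the paper.

    ```python
    import numpy as np

    def apen(x, m=2, r=None):
        """Approximate entropy (Pincus): ApEn = Phi_m - Phi_{m+1}."""
        x = np.asarray(x, dtype=float)
        if r is None:
            r = 0.2 * x.std()
        def phi(mm):
            n = len(x) - mm + 1
            emb = np.array([x[i:i + mm] for i in range(n)])        # embedded vectors
            d = np.max(np.abs(emb[:, None, :] - emb[None, :, :]), axis=2)
            c = (d <= r).mean(axis=1)                              # includes self-match
            return np.mean(np.log(c))
        return phi(m) - phi(m + 1)

    rng = np.random.default_rng(1)
    rr = 800 + 25 * rng.standard_normal(600)      # hypothetical RR series, in ms
    rr_quantized = np.round(rr / 4.0) * 4.0       # finite resolution of 4 ms

    print("ApEn, continuous series:", apen(rr, m=2, r=0.2 * rr.std()))
    print("ApEn, quantized series :", apen(rr_quantized, m=2, r=0.2 * rr.std()))
    ```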

  7. Bias of shear wave elasticity measurements in thin layer samples and a simple correction strategy.

    PubMed

    Mo, Jianqiang; Xu, Hao; Qiang, Bo; Giambini, Hugo; Kinnick, Randall; An, Kai-Nan; Chen, Shigao; Luo, Zongping

    2016-01-01

    Shear wave elastography (SWE) is an emerging technique for measuring biological tissue stiffness. However, the application of SWE in thin-layer tissues is limited by bias due to the influence of geometry on the measured shear wave speed. In this study, we investigated the bias of Young's modulus measured by SWE in thin-layer gelatin-agar phantoms, and compared the results with finite element method and Lamb wave model simulations. The results indicated that the Young's modulus measured by SWE decreased continuously as the sample thickness decreased, and this effect was more pronounced at smaller thicknesses. We proposed a new empirical formula that conveniently corrects the bias without the need for complicated mathematical modeling. In summary, we confirmed the nonlinear relation between thickness and Young's modulus measured by SWE in thin-layer samples, and offered a simple and practical correction strategy that is convenient for clinicians to use.

  8. Probabilistic treatment of the uncertainty from the finite size of weighted Monte Carlo data

    NASA Astrophysics Data System (ADS)

    Glüsenkamp, Thorsten

    2018-06-01

    Parameter estimation in HEP experiments often involves Monte Carlo simulation to model the experimental response function. Typical applications are forward-folding likelihood analyses with re-weighting, or time-consuming minimization schemes with a new simulation set for each parameter value. Problematically, the finite size of such Monte Carlo samples carries intrinsic uncertainty that can lead to a substantial bias in parameter estimation if it is neglected and the sample size is small. We introduce a probabilistic treatment of this problem by replacing the usual likelihood functions with novel generalized probability distributions that incorporate the finite statistics via suitable marginalization. These new PDFs are analytic, and can be used to replace the Poisson, multinomial, and sample-based unbinned likelihoods, covering many use cases in high-energy physics. In the limit of infinite statistics, they reduce to the respective standard probability distributions. In the general case of arbitrary Monte Carlo weights, the expressions involve the fourth Lauricella function F_D, for which we find a new finite-sum representation in a certain parameter setting. The result also represents an exact form for Carlson's Dirichlet average R_n with n > 0, and thereby an efficient way to calculate the probability generating function of the Dirichlet-multinomial distribution, the extended divided difference of a monomial, or arbitrary moments of univariate B-splines. We demonstrate the bias reduction of our approach with a typical toy Monte Carlo problem, estimating the normalization of a peak in a falling energy spectrum, and compare the results with previously published methods from the literature.
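
    The underlying problem (though not the paper's generalized likelihoods) can be seen in a toy calculation: when a single-bin expectation is estimated from a small Monte Carlo sample and the Monte Carlo uncertainty is ignored, the maximum-likelihood normalization n/λ̂ is biased upward, since E[1/λ̂] > 1/λ by Jensen's inequality. All numbers below are illustrative.

    ```python
    import numpy as np

    # Toy illustration of the bias from finite Monte Carlo statistics (not the
    # generalized likelihoods of the paper).  A single-bin expectation lam_true
    # is estimated from a small MC sample; ignoring the MC uncertainty and
    # fitting the normalization mu by Poisson maximum likelihood (mu_hat =
    # data / lam_hat) gives an upward bias because E[1/lam_hat] > 1/lam_true.
    rng = np.random.default_rng(2)

    lam_true, mu_true = 5.0, 1.0
    n_mc = 10                       # very small MC sample (illustrative)

    n_trials = 200_000
    mc_weight = lam_true / n_mc     # weight carried by each MC event
    lam_hat = mc_weight * rng.poisson(n_mc, size=n_trials)   # MC template estimates
    data = rng.poisson(mu_true * lam_true, size=n_trials)    # observed counts

    ok = lam_hat > 0                # avoid division by zero for empty templates
    mu_hat = data[ok] / lam_hat[ok]
    print("mean fitted normalization (true value 1):", mu_hat.mean())
    ```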

  9. Hypothesis Testing Using Factor Score Regression

    PubMed Central

    Devlieger, Ines; Mayer, Axel; Rosseel, Yves

    2015-01-01

    In this article, an overview is given of four methods to perform factor score regression (FSR), namely regression FSR, Bartlett FSR, the bias avoiding method of Skrondal and Laake, and the bias correcting method of Croon. The bias correcting method is extended to include a reliable standard error. The four methods are compared with each other and with structural equation modeling (SEM) by using analytic calculations and two Monte Carlo simulation studies to examine their finite sample characteristics. Several performance criteria are used, such as the bias using the unstandardized and standardized parameterization, efficiency, mean square error, standard error bias, type I error rate, and power. The results show that the bias correcting method, with the newly developed standard error, is the only suitable alternative for SEM. While it has a higher standard error bias than SEM, it has a comparable bias, efficiency, mean square error, power, and type I error rate. PMID:29795886

  10. A New Source Biasing Approach in ADVANTG

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bevill, Aaron M; Mosher, Scott W

    2012-01-01

    The ADVANTG code has been developed at Oak Ridge National Laboratory to generate biased sources and weight window maps for MCNP using the CADIS and FW-CADIS methods. In preparation for an upcoming RSICC release, a new approach for generating a biased source has been developed. This improvement streamlines user input and improves reliability. Previous versions of ADVANTG generated the biased source from ADVANTG input, writing an entirely new general fixed-source definition (SDEF). Because volumetric sources were translated into SDEF-format as a finite set of points, the user had to perform a convergence study to determine whether the number of source points used accurately represented the source region. Further, the large number of points that must be written in SDEF-format made the MCNP input and output files excessively long and difficult to debug. ADVANTG now reads SDEF-format distributions and generates corresponding source biasing cards, eliminating the need for a convergence study. Many problems of interest use complicated source regions that are defined using cell rejection. In cell rejection, the source distribution in space is defined using an arbitrarily complex cell and a simple bounding region. Source positions are sampled within the bounding region but accepted only if they fall within the cell; otherwise, the position is resampled entirely. When biasing in space is applied to sources that use rejection sampling, current versions of MCNP do not account for the rejection in setting the source weight of histories, resulting in an 'unfair game'. This problem was circumvented in previous versions of ADVANTG by translating volumetric sources into a finite set of points, which does not alter the mean history weight (w̄). To use biasing parameters without otherwise modifying the original cell-rejection SDEF-format source, ADVANTG users now apply a correction factor for w̄ in post-processing. A stratified-random sampling approach in ADVANTG is under development to automatically report the correction factor with estimated uncertainty. This study demonstrates the use of ADVANTG's new source biasing method, including the application of w̄.
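
    The cell-rejection sampling described above is easy to visualize with a toy Monte Carlo: candidate positions are drawn in a simple bounding box and accepted only if they fall inside a more complex cell, and the acceptance fraction is estimated with its binomial standard error. This is only a sketch of the rejection step with a hypothetical cell geometry; ADVANTG's actual mean-weight correction for biased, rejection-sampled sources involves more than this single number.

    ```python
    import numpy as np

    # Toy illustration of cell-rejection source sampling: candidates are drawn
    # in a bounding box and accepted only if they fall inside a more complex
    # cell (here, hypothetically, a unit sphere minus an axial cylinder).
    rng = np.random.default_rng(10)

    def inside_cell(p):
        """Hypothetical 'complex' cell: unit sphere minus an axial cylindrical hole."""
        in_sphere = np.sum(p ** 2, axis=1) <= 1.0
        in_hole = p[:, 0] ** 2 + p[:, 1] ** 2 <= 0.3 ** 2
        return in_sphere & ~in_hole

    n = 1_000_000
    candidates = rng.uniform(-1.0, 1.0, size=(n, 3))    # bounding box [-1, 1]^3
    accepted = inside_cell(candidates)

    frac = accepted.mean()
    stderr = np.sqrt(frac * (1.0 - frac) / n)           # binomial standard error
    print(f"acceptance fraction: {frac:.4f} +/- {stderr:.4f}")
    ```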

  11. Multiple Imputation in Two-Stage Cluster Samples Using The Weighted Finite Population Bayesian Bootstrap.

    PubMed

    Zhou, Hanzhi; Elliott, Michael R; Raghunathan, Trivellore E

    2016-06-01

    Multistage sampling is often employed in survey samples for cost and convenience. However, accounting for clustering features when generating datasets for multiple imputation is a nontrivial task, particularly when, as is often the case, cluster sampling is accompanied by unequal probabilities of selection, necessitating case weights. Thus, multiple imputation often ignores complex sample designs and assumes simple random sampling when generating imputations, even though failing to account for complex sample design features is known to yield biased estimates and confidence intervals that have incorrect nominal coverage. In this article, we extend a recently developed, weighted, finite-population Bayesian bootstrap procedure to generate synthetic populations conditional on complex sample design data that can be treated as simple random samples at the imputation stage, obviating the need to directly model design features for imputation. We develop two forms of this method: one where the probabilities of selection are known at the first and second stages of the design, and the other, more common in public use files, where only the final weight based on the product of the two probabilities is known. We show that this method has advantages in terms of bias, mean square error, and coverage properties over methods where sample designs are ignored, with little loss in efficiency, even when compared with correct fully parametric models. An application is made using the National Automotive Sampling System Crashworthiness Data System, a multistage, unequal probability sample of U.S. passenger vehicle crashes, which suffers from a substantial amount of missing data in "Delta-V," a key crash severity measure.

  12. Multiple Imputation in Two-Stage Cluster Samples Using The Weighted Finite Population Bayesian Bootstrap

    PubMed Central

    Zhou, Hanzhi; Elliott, Michael R.; Raghunathan, Trivellore E.

    2017-01-01

    Multistage sampling is often employed in survey samples for cost and convenience. However, accounting for clustering features when generating datasets for multiple imputation is a nontrivial task, particularly when, as is often the case, cluster sampling is accompanied by unequal probabilities of selection, necessitating case weights. Thus, multiple imputation often ignores complex sample designs and assumes simple random sampling when generating imputations, even though failing to account for complex sample design features is known to yield biased estimates and confidence intervals that have incorrect nominal coverage. In this article, we extend a recently developed, weighted, finite-population Bayesian bootstrap procedure to generate synthetic populations conditional on complex sample design data that can be treated as simple random samples at the imputation stage, obviating the need to directly model design features for imputation. We develop two forms of this method: one where the probabilities of selection are known at the first and second stages of the design, and the other, more common in public use files, where only the final weight based on the product of the two probabilities is known. We show that this method has advantages in terms of bias, mean square error, and coverage properties over methods where sample designs are ignored, with little loss in efficiency, even when compared with correct fully parametric models. An application is made using the National Automotive Sampling System Crashworthiness Data System, a multistage, unequal probability sample of U.S. passenger vehicle crashes, which suffers from a substantial amount of missing data in “Delta-V,” a key crash severity measure. PMID:29226161

  13. Bias correction of risk estimates in vaccine safety studies with rare adverse events using a self-controlled case series design.

    PubMed

    Zeng, Chan; Newcomer, Sophia R; Glanz, Jason M; Shoup, Jo Ann; Daley, Matthew F; Hambidge, Simon J; Xu, Stanley

    2013-12-15

    The self-controlled case series (SCCS) method is often used to examine the temporal association between vaccination and adverse events using only data from patients who experienced such events. Conditional Poisson regression models are used to estimate incidence rate ratios, and these models perform well with large or medium-sized case samples. However, in some vaccine safety studies, the adverse events studied are rare and the maximum likelihood estimates may be biased. Several bias correction methods have been examined in case-control studies using conditional logistic regression, but none of these methods have been evaluated in studies using the SCCS design. In this study, we used simulations to evaluate 2 bias correction approaches-the Firth penalized maximum likelihood method and Cordeiro and McCullagh's bias reduction after maximum likelihood estimation-with small sample sizes in studies using the SCCS design. The simulations showed that the bias under the SCCS design with a small number of cases can be large and is also sensitive to a short risk period. The Firth correction method provides finite and less biased estimates than the maximum likelihood method and Cordeiro and McCullagh's method. However, limitations still exist when the risk period in the SCCS design is short relative to the entire observation period.

  14. Estimation After a Group Sequential Trial.

    PubMed

    Milanzi, Elasma; Molenberghs, Geert; Alonso, Ariel; Kenward, Michael G; Tsiatis, Anastasios A; Davidian, Marie; Verbeke, Geert

    2015-10-01

    Group sequential trials are one important instance of studies for which the sample size is not fixed a priori but rather takes one of a finite set of pre-specified values, dependent on the observed data. Much work has been devoted to the inferential consequences of this design feature. Molenberghs et al. (2012) and Milanzi et al. (2012) reviewed and extended the existing literature, focusing on a collection of seemingly disparate, but related, settings, namely completely random sample sizes, group sequential studies with deterministic and random stopping rules, incomplete data, and random cluster sizes. They showed that the ordinary sample average is a viable option for estimation following a group sequential trial, for a wide class of stopping rules and for random outcomes with a distribution in the exponential family. Their results are somewhat surprising in the sense that the sample average is not optimal, and further, there does not exist an optimal, or even unbiased, linear estimator. However, the sample average is asymptotically unbiased, both conditionally upon the observed sample size as well as marginalized over it. By exploiting ignorability they showed that the sample average is the conventional maximum likelihood estimator. They also showed that a conditional maximum likelihood estimator is finite-sample unbiased, but is less efficient than the sample average and has a larger mean squared error. Asymptotically, the sample average and the conditional maximum likelihood estimator are equivalent. This previous work is restricted, however, to the situation in which the random sample size can take only two values, N = n or N = 2n. In this paper, we consider the more practically useful setting of sample sizes in the finite set {n_1, n_2, …, n_L}. It is shown that the sample average is then a justifiable estimator, in the sense that it follows from joint likelihood estimation, and it is consistent and asymptotically unbiased. We also show why simulations can give the false impression of bias in the sample average when considered conditional upon the sample size. The consequence is that no corrections need to be made to estimators following sequential trials. When small-sample bias is of concern, the conditional likelihood estimator provides a relatively straightforward modification to the sample average. Finally, it is shown that classical likelihood-based standard errors and confidence intervals can be applied, obviating the need for technical corrections.
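
    The conditional-versus-marginal point can be checked with a simulation of a hypothetical two-stage design (stop at n1 observations if the interim mean exceeds a threshold, otherwise continue to n2 = 3 n1). The sketch below reports the bias of the ordinary sample average conditionally on the realized sample size and marginally, for two design sizes, showing both shrink as the stages grow; it does not implement the paper's conditional likelihood estimator, and all design constants are illustrative.

    ```python
    import numpy as np

    # Hypothetical two-stage design: stop at n1 observations if the interim mean
    # exceeds c * sigma / sqrt(n1), otherwise continue to n2 = 3 * n1.  Reports
    # the bias of the ordinary sample average conditionally on the realized
    # sample size and marginally; both shrink as the stage sizes grow,
    # consistent with asymptotic unbiasedness.
    rng = np.random.default_rng(3)
    mu, sigma, c, reps = 0.0, 1.0, 1.0, 50_000

    for n1 in (20, 200):
        n2, thr = 3 * n1, c * sigma / np.sqrt(n1)
        means, stopped = np.empty(reps), np.empty(reps, dtype=bool)
        for i in range(reps):
            stage1 = rng.normal(mu, sigma, n1)
            stopped[i] = stage1.mean() > thr
            if stopped[i]:
                means[i] = stage1.mean()                      # stopped early at n1
            else:
                means[i] = np.concatenate(
                    [stage1, rng.normal(mu, sigma, n2 - n1)]).mean()
        print(f"n1={n1:4d}  bias | stopped early: {means[stopped].mean() - mu:+.4f}"
              f"  bias | full sample: {means[~stopped].mean() - mu:+.4f}"
              f"  marginal bias: {means.mean() - mu:+.4f}")
    ```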

  15. A Design of Finite Memory Residual Generation Filter for Sensor Fault Detection

    NASA Astrophysics Data System (ADS)

    Kim, Pyung Soo

    2017-04-01

    In the current paper, a residual generation filter with a finite memory structure is proposed for sensor fault detection. The proposed finite memory residual generation filter provides the residual by real-time filtering of the fault vector using only the most recent finite measurements and inputs on the window. It is shown that the residual given by the proposed residual generation filter provides the exact fault for noise-free systems. The proposed residual generation filter is cast in a digital filter structure for amenability to hardware implementation. Finally, to illustrate the capability of the proposed residual generation filter, extensive simulations are performed for the discretized DC motor system with two types of sensor faults, an incipient soft bias-type fault and an abrupt bias-type fault. In particular, according to diverse noise levels and window lengths, meaningful simulation results are given for the abrupt bias-type fault.

  16. Overrepresentation of extreme events in decision making reflects rational use of cognitive resources.

    PubMed

    Lieder, Falk; Griffiths, Thomas L; Hsu, Ming

    2018-01-01

    People's decisions and judgments are disproportionately swayed by improbable but extreme eventualities, such as terrorism, that come to mind easily. This article explores whether such availability biases can be reconciled with rational information processing by taking into account the fact that decision makers value their time and have limited cognitive resources. Our analysis suggests that to make optimal use of their finite time, decision makers should overrepresent the most important potential consequences relative to less important, but potentially more probable, outcomes. To evaluate this account, we derive and test a model we call utility-weighted sampling. Utility-weighted sampling estimates the expected utility of potential actions by simulating their outcomes. Critically, outcomes with more extreme utilities have a higher probability of being simulated. We demonstrate that this model can explain not only people's availability bias in judging the frequency of extreme events but also a wide range of cognitive biases in decisions from experience, decisions from description, and memory recall. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
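
    A minimal sketch of the sampling scheme described in general terms above: outcomes are simulated with probability proportional to p(o)·|u(o)| and re-weighted by p/q, so extreme outcomes are simulated far more often than their probability warrants while the expected-utility estimate remains consistent. The outcome probabilities and utilities are made-up assumptions.

    ```python
    import numpy as np

    # Utility-weighted importance sampling sketch: simulate outcomes with
    # probability proportional to p(o) * |u(o)| and correct with importance
    # weights p(o) / q(o).  Extreme-utility outcomes are oversampled, yet the
    # expected-utility estimate remains consistent.
    rng = np.random.default_rng(4)

    p = np.array([0.80, 0.15, 0.04, 0.01])   # outcome probabilities (made up)
    u = np.array([1.0, -2.0, 5.0, -50.0])    # outcome utilities (one extreme loss)

    q = p * np.abs(u)
    q = q / q.sum()                          # utility-weighted proposal distribution

    n = 2000
    idx = rng.choice(len(p), size=n, p=q)    # biased simulation of outcomes
    weights = p[idx] / q[idx]                # importance weights

    print("true expected utility        :", np.dot(p, u))
    print("utility-weighted IS estimate :", np.mean(weights * u[idx]))
    print("fraction of simulations hitting the extreme outcome:",
          np.mean(idx == 3), "(vs. true probability 0.01)")
    ```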

  17. Extreme Quantum Memory Advantage for Rare-Event Sampling

    NASA Astrophysics Data System (ADS)

    Aghamohammadi, Cina; Loomis, Samuel P.; Mahoney, John R.; Crutchfield, James P.

    2018-02-01

    We introduce a quantum algorithm for memory-efficient biased sampling of rare events generated by classical memoryful stochastic processes. Two efficiency metrics are used to compare quantum and classical resources for rare-event sampling. For a fixed stochastic process, the first is the classical-to-quantum ratio of required memory. We show for two example processes that there exists an infinite number of rare-event classes for which the memory ratio for sampling is larger than r, for any large real number r. Then, for a sequence of processes each labeled by an integer size N, we compare how the classical and quantum required memories scale with N. In this setting, since both memories can diverge as N → ∞, the efficiency metric tracks how fast they diverge. An extreme quantum memory advantage exists when the classical memory diverges in the limit N → ∞, but the quantum memory has a finite bound. We then show that finite-state Markov processes and spin chains exhibit memory advantage for sampling of almost all of their rare-event classes.

  18. Classifier performance prediction for computer-aided diagnosis using a limited dataset.

    PubMed

    Sahiner, Berkman; Chan, Heang-Ping; Hadjiiski, Lubomir

    2008-04-01

    In a practical classifier design problem, the true population is generally unknown and the available sample is finite-sized. A common approach is to use a resampling technique to estimate the performance of the classifier that will be trained with the available sample. We conducted a Monte Carlo simulation study to compare the ability of the different resampling techniques in training the classifier and predicting its performance under the constraint of a finite-sized sample. The true population for the two classes was assumed to be multivariate normal distributions with known covariance matrices. Finite sets of sample vectors were drawn from the population. The true performance of the classifier is defined as the area under the receiver operating characteristic curve (AUC) when the classifier designed with the specific sample is applied to the true population. We investigated methods based on the Fukunaga-Hayes and the leave-one-out techniques, as well as three different types of bootstrap methods, namely, the ordinary, 0.632, and 0.632+ bootstrap. Fisher's linear discriminant analysis was used as the classifier. The dimensionality of the feature space was varied from 3 to 15. The sample size n2 from the positive class was varied between 25 and 60, while the number of cases from the negative class was either equal to n2 or 3n2. Each experiment was performed with an independent dataset randomly drawn from the true population. Using a total of 1000 experiments for each simulation condition, we compared the bias, the variance, and the root-mean-squared error (RMSE) of the AUC estimated using the different resampling techniques relative to the true AUC (obtained from training on a finite dataset and testing on the population). Our results indicated that, under the study conditions, there can be a large difference in the RMSE obtained using different resampling methods, especially when the feature space dimensionality is relatively large and the sample size is small. Under these conditions, the 0.632 and 0.632+ bootstrap methods have the lowest RMSE, indicating that the difference between the estimated and the true performances obtained using the 0.632 and 0.632+ bootstrap will be statistically smaller than those obtained using the other three resampling methods. Of the three bootstrap methods, the 0.632+ bootstrap provides the lowest bias. Although this investigation is performed under some specific conditions, it reveals important trends for the problem of classifier performance prediction under the constraint of a limited dataset.
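
    As a concrete, textbook-style illustration of the 0.632 bootstrap discussed above, the sketch below estimates the AUC of a linear discriminant classifier from a single finite sample by combining the apparent AUC with the average out-of-bag AUC. The data dimensions, class separation, and number of bootstrap replicates are illustrative assumptions, not the simulation design of the paper.

    ```python
    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.metrics import roc_auc_score

    # 0.632 bootstrap estimate of classifier AUC from a single finite sample:
    # AUC_632 = 0.368 * apparent AUC + 0.632 * mean out-of-bag AUC.
    rng = np.random.default_rng(5)
    dim, n_per_class = 5, 30
    X = np.vstack([rng.normal(0.0, 1.0, (n_per_class, dim)),
                   rng.normal(0.5, 1.0, (n_per_class, dim))])
    y = np.repeat([0, 1], n_per_class)

    clf = LinearDiscriminantAnalysis()
    auc_apparent = roc_auc_score(y, clf.fit(X, y).decision_function(X))

    oob_aucs = []
    for _ in range(200):                              # bootstrap replicates
        boot = rng.integers(0, len(y), len(y))        # resample with replacement
        oob = np.setdiff1d(np.arange(len(y)), boot)   # out-of-bag cases
        if len(np.unique(y[boot])) < 2 or len(np.unique(y[oob])) < 2:
            continue                                  # need both classes present
        clf.fit(X[boot], y[boot])
        oob_aucs.append(roc_auc_score(y[oob], clf.decision_function(X[oob])))

    auc_632 = 0.368 * auc_apparent + 0.632 * np.mean(oob_aucs)
    print("apparent (resubstitution) AUC:", round(auc_apparent, 3))
    print("0.632 bootstrap AUC estimate :", round(auc_632, 3))
    ```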

  19. Proportional hazards model with varying coefficients for length-biased data.

    PubMed

    Zhang, Feipeng; Chen, Xuerong; Zhou, Yong

    2014-01-01

    Length-biased data arise in many important applications including epidemiological cohort studies, cancer prevention trials and studies of labor economics. Such data are also often subject to right censoring due to loss of follow-up or the end of study. In this paper, we consider a proportional hazards model with varying coefficients for right-censored and length-biased data, which is used to study the nonlinear interaction effects of covariates with an exposure variable. A local estimating equation method is proposed for the unknown coefficients and the intercept function in the model. The asymptotic properties of the proposed estimators are established by using martingale theory and kernel smoothing techniques. Our simulation studies demonstrate that the proposed estimators have an excellent finite-sample performance. The Channing House data are analyzed to demonstrate the applications of the proposed method.

  20. The empirical Bayes estimators of fine-scale population structure in high gene flow species.

    PubMed

    Kitada, Shuichi; Nakamichi, Reiichiro; Kishino, Hirohisa

    2017-11-01

    An empirical Bayes (EB) pairwise F_ST estimator was previously introduced and evaluated for its performance by numerical simulation. In this study, we conducted coalescent simulations and generated genetic population structure mechanistically, and compared the performance of the EB F_ST with Nei's G_ST, Nei and Chesser's bias-corrected G_ST (G_ST_NC), Weir and Cockerham's θ (θ_WC) and θ with finite sample correction (θ_WC_F). We also introduced EB estimators for Hedrick's G'_ST and Jost's D. We applied these estimators to publicly available SNP genotypes of Atlantic herring. We also examined the power to detect the environmental factors causing the population structure. Our coalescent simulations revealed that the finite sample correction of θ_WC is necessary to assess population structure using pairwise F_ST values. For microsatellite markers, EB F_ST performed the best among the present estimators regarding both bias and precision under high gene flow scenarios (F_ST ≤ 0.032). For 300 SNPs, EB F_ST had the highest precision in all cases, but the bias was negative and greater than those for G_ST_NC and θ_WC_F in all cases. G_ST_NC and θ_WC_F performed very similarly at all levels of F_ST. As the number of loci increased up to 10 000, the precision of G_ST_NC and θ_WC_F became slightly better than for EB F_ST for cases with F_ST ≥ 0.004, even though the size of the bias remained constant. The EB estimators described the fine-scale population structure of the herring and revealed that ~56% of the genetic differentiation was caused by sea surface temperature and salinity. The R package finepop for implementing all estimators used here is available on CRAN. © 2017 The Authors. Molecular Ecology Resources Published by John Wiley & Sons Ltd.
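
    For readers unfamiliar with the estimators being compared, the sketch below computes the plain, uncorrected Nei's G_ST for one locus from per-population allele frequencies; it fixes notation only and is not the empirical Bayes estimator of the paper, and the allele frequencies are hypothetical.

    ```python
    import numpy as np

    def nei_gst(freqs):
        """Nei's G_ST for one locus from per-population allele frequencies.

        freqs: array of shape (n_populations, n_alleles), rows sum to 1.
        This is the plain, uncorrected estimator (no finite-sample correction);
        it is not the EB estimator of the paper.
        """
        freqs = np.asarray(freqs, dtype=float)
        h_s = np.mean(1.0 - np.sum(freqs ** 2, axis=1))  # mean within-pop heterozygosity
        p_bar = freqs.mean(axis=0)                       # mean allele frequencies
        h_t = 1.0 - np.sum(p_bar ** 2)                   # total heterozygosity
        return (h_t - h_s) / h_t

    # Two hypothetical populations with slightly different allele frequencies
    # (a weak-differentiation, "high gene flow" situation).
    print(nei_gst([[0.60, 0.40],
                   [0.55, 0.45]]))
    ```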

  1. Effective dimension reduction for sparse functional data

    PubMed Central

    YAO, F.; LEI, E.; WU, Y.

    2015-01-01

    Summary We propose a method of effective dimension reduction for functional data, emphasizing the sparse design where one observes only a few noisy and irregular measurements for some or all of the subjects. The proposed method borrows strength across the entire sample and provides a way to characterize the effective dimension reduction space, via functional cumulative slicing. Our theoretical study reveals a bias-variance trade-off associated with the regularizing truncation and decaying structures of the predictor process and the effective dimension reduction space. A simulation study and an application illustrate the superior finite-sample performance of the method. PMID:26566293

  2. Cavity-coupled double-quantum dot at finite bias: Analogy with lasers and beyond

    NASA Astrophysics Data System (ADS)

    Kulkarni, Manas; Cotlet, Ovidiu; Türeci, Hakan E.

    2014-09-01

    We present a theoretical and experimental study of photonic and electronic transport properties of a voltage biased InAs semiconductor double quantum dot (DQD) that is dipole coupled to a superconducting transmission line resonator. We obtain the master equation for the reduced density matrix of the coupled system of cavity photons and DQD electrons accounting systematically for both the presence of phonons and the effect of leads at finite voltage bias. We subsequently derive analytical expressions for transmission, phase response, photon number, and the nonequilibrium steady-state electron current. We show that the coupled system under finite bias realizes an unconventional version of a single-atom laser and analyze the spectrum and the statistics of the photon flux leaving the cavity. In the transmission mode, the system behaves as a saturable single-atom amplifier for the incoming photon flux. Finally, we show that the back action of the photon emission on the steady-state current can be substantial. Our analytical results are compared to exact master equation results establishing regimes of validity of various analytical models. We compare our findings to available experimental measurements.

  3. Generalized Sample Size Determination Formulas for Investigating Contextual Effects by a Three-Level Random Intercept Model.

    PubMed

    Usami, Satoshi

    2017-03-01

    Behavioral and psychological researchers have shown strong interest in investigating contextual effects (i.e., the influences of combinations of individual- and group-level predictors on individual-level outcomes). The present research provides generalized formulas for determining the sample size needed in investigating contextual effects according to the desired level of statistical power as well as width of confidence interval. These formulas are derived within a three-level random intercept model that includes one predictor/contextual variable at each level, to simultaneously cover the various kinds of contextual effects in which researchers may be interested. The relative influences of indices included in the formulas on the standard errors of contextual effects estimates are investigated with the aim of further simplifying sample size determination procedures. In addition, simulation studies are performed to investigate the finite sample behavior of the calculated statistical power, showing that estimated sample sizes based on the derived formulas can be both positively and negatively biased, due to the complex effects of unreliability of contextual variables, multicollinearity, and violation of the assumption of known variances. Thus, it is advisable to compare estimated sample sizes under various specifications of the indices and to evaluate their potential bias, as illustrated in the example.

  4. Impurity effects on electrical conductivity of doped bilayer graphene in the presence of a bias voltage

    NASA Astrophysics Data System (ADS)

    Lotfi, E.; Rezania, H.; Arghavaninia, B.; Yarmohammadi, M.

    2016-07-01

    We address the electrical conductivity of bilayer graphene as a function of temperature, impurity concentration, and scattering strength in the presence of a finite bias voltage at finite doping, beginning with a description of the tight-binding model using the linear response theory and Green’s function approach. Our results show a linear behavior at high doping for the case of high bias voltage. The effects of electron doping on the electrical conductivity have been studied via changing the electronic chemical potential. We also discuss and analyze how the bias voltage affects the temperature behavior of the electrical conductivity. Finally, we study the behavior of the electrical conductivity as a function of the impurity concentration and scattering strength for different bias voltages and chemical potentials respectively. The electrical conductivity is found to be monotonically decreasing with impurity scattering strength due to the increased scattering among electrons at higher impurity scattering strength.

  5. On the importance of incorporating sampling weights in ...

    EPA Pesticide Factsheets

    Occupancy models are used extensively to assess wildlife-habitat associations and to predict species distributions across large geographic regions. Occupancy models were developed as a tool to properly account for imperfect detection of a species. Current guidelines on survey design requirements for occupancy models focus on the number of sample units and the pattern of revisits to a sample unit within a season. We focus on the sampling design or how the sample units are selected in geographic space (e.g., stratified, simple random, unequal probability, etc). In a probability design, each sample unit has a sample weight which quantifies the number of sample units it represents in the finite (oftentimes areal) sampling frame. We demonstrate the importance of including sampling weights in occupancy model estimation when the design is not a simple random sample or equal probability design. We assume a finite areal sampling frame as proposed for a national bat monitoring program. We compare several unequal and equal probability designs and varying sampling intensity within a simulation study. We found the traditional single season occupancy model produced biased estimates of occupancy and lower confidence interval coverage rates compared to occupancy models that accounted for the sampling design. We also discuss how our findings inform the analyses proposed for the nascent North American Bat Monitoring Program and other collaborative synthesis efforts that propose h
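
    A stripped-down illustration of why the sampling weights matter (assuming perfect detection, which real occupancy models do not): if high-quality sites are both more likely to be occupied and more likely to be sampled, the unweighted sample proportion overstates occupancy, while a weighted (Hájek-type) estimator does not. The covariate, inclusion probabilities, and occupancy model below are made-up assumptions.

    ```python
    import numpy as np

    # Unequal-probability site sampling: sites with higher habitat quality are
    # both more likely to be occupied and more likely to be sampled, so the
    # unweighted sample proportion overstates occupancy, while the weighted
    # (Hajek-type) estimator does not.  Detection is assumed perfect here.
    rng = np.random.default_rng(6)

    N = 10_000                                      # finite areal sampling frame
    quality = rng.uniform(0, 1, N)                  # hypothetical habitat covariate
    occupied = rng.random(N) < 0.2 + 0.6 * quality  # true occupancy status

    pi = 0.005 + 0.045 * quality                    # unequal inclusion probabilities
    sampled = rng.random(N) < pi
    w = 1.0 / pi[sampled]                           # sampling weights

    print("true occupancy proportion :", occupied.mean())
    print("unweighted sample estimate:", occupied[sampled].mean())
    print("weighted (Hajek) estimate :",
          np.average(occupied[sampled].astype(float), weights=w))
    ```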

  6. Innovative Liner Concepts: Experiments and Impedance Modeling of Liners Including the Effect of Bias Flow

    NASA Technical Reports Server (NTRS)

    Kelly, Jeff; Betts, Juan Fernando; Fuller, Chris

    2000-01-01

    The normal impedance of perforated plate acoustic liners, including the effect of bias flow, was studied. Two impedance models were developed by modeling the internal flows of perforate orifices as infinite tubes with the inclusion of end corrections to handle finite length effects. These models assumed incompressible and compressible flows, respectively, between the far field and the perforate orifice. The incompressible model was used to predict impedance results for perforated plates with percent open areas ranging from 5% to 15%. The predicted resistance results showed better agreement with experiments for the higher percent open area samples. The agreement also tended to deteriorate as bias flow was increased. For perforated plates with percent open areas ranging from 1% to 5%, the compressible model was used to predict impedance results. The model predictions were closer to the experimental resistance results for the 2% to 3% open area samples. The predictions tended to deteriorate as bias flow was increased. The reactance results were well predicted by the models for the higher percent open areas, but deteriorated as the percent open area was lowered (5%) and bias flow was increased. The incompressible model was then fitted to the experimental database. The fit was performed using an optimization routine that found the optimal set of multiplication coefficients for the non-dimensional groups that minimized the least squares slope error between predictions and experiments. The result of the fit indicated that terms not associated with bias flow required a greater degree of correction than the terms associated with the bias flow. This fitted model improved agreement with experiments by nearly 15% for the low percent open area (5%) samples when compared to the unfitted model. The fitted model and the unfitted model performed equally well for the higher percent open areas (10% and 15%).

  7. Nature of magnetization and lateral spin-orbit interaction in gated semiconductor nanowires.

    PubMed

    Karlsson, H; Yakimenko, I I; Berggren, K-F

    2018-05-31

    Semiconductor nanowires are interesting candidates for the realization of spintronics devices. In this paper we study electronic states and effects of lateral spin-orbit coupling (LSOC) in a one-dimensional asymmetrically biased nanowire using the Hartree-Fock method with Dirac interaction. We have shown that spin polarization can be triggered by LSOC at finite source-drain bias, as a result of numerical noise representing a random magnetic field due to wiring or a random background field such as the Earth's magnetic field. The electrons spontaneously arrange into spin rows in the wire due to electron interactions, leading to a finite spin polarization. The direction of polarization is, however, random at zero source-drain bias. We have found that LSOC has an effect on the orientation of spin rows only in the case when a source-drain bias is applied.

  8. Nature of magnetization and lateral spin–orbit interaction in gated semiconductor nanowires

    NASA Astrophysics Data System (ADS)

    Karlsson, H.; Yakimenko, I. I.; Berggren, K.-F.

    2018-05-01

    Semiconductor nanowires are interesting candidates for the realization of spintronics devices. In this paper we study electronic states and effects of lateral spin–orbit coupling (LSOC) in a one-dimensional asymmetrically biased nanowire using the Hartree–Fock method with Dirac interaction. We have shown that spin polarization can be triggered by LSOC at finite source-drain bias, as a result of numerical noise representing a random magnetic field due to wiring or a random background field such as the Earth's magnetic field. The electrons spontaneously arrange into spin rows in the wire due to electron interactions, leading to a finite spin polarization. The direction of polarization is, however, random at zero source-drain bias. We have found that LSOC has an effect on the orientation of spin rows only in the case when a source-drain bias is applied.

  9. Gini estimation under infinite variance

    NASA Astrophysics Data System (ADS)

    Fontanari, Andrea; Taleb, Nassim Nicholas; Cirillo, Pasquale

    2018-07-01

    We study the problems related to the estimation of the Gini index in the presence of a fat-tailed data generating process, i.e. one in the stable distribution class with finite mean but infinite variance (i.e. with tail index α ∈ (1, 2)). We show that, in such a case, the Gini coefficient cannot be reliably estimated using conventional nonparametric methods, because of a downward bias that emerges under fat tails. This has important implications for the ongoing discussion about economic inequality. We start by discussing how the nonparametric estimator of the Gini index undergoes a phase transition in the symmetry structure of its asymptotic distribution, as the data distribution shifts from the domain of attraction of a light-tailed distribution to that of a fat-tailed one, especially in the case of infinite variance. We also show how the nonparametric Gini bias increases with lower values of α. We then prove that maximum likelihood estimation outperforms nonparametric methods, requiring a much smaller sample size to reach efficiency. Finally, for fat-tailed data, we provide a simple correction mechanism to the small sample bias of the nonparametric estimator based on the distance between the mode and the mean of its asymptotic distribution.
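
    The downward bias of the nonparametric estimator under infinite variance is easy to reproduce: for a Pareto (Type I) distribution with tail index α, the true Gini index is 1/(2α − 1), and sample estimates with α ∈ (1, 2) fall systematically below it. The sample size and α below are illustrative choices, not those used in the paper.

    ```python
    import numpy as np

    # Downward bias of the nonparametric Gini estimator under fat tails.
    # For a Pareto (Type I) distribution with tail index alpha, the true Gini
    # index is 1 / (2 * alpha - 1); with alpha in (1, 2) the variance is
    # infinite and the sample estimator is biased downward.
    rng = np.random.default_rng(7)

    def gini(x):
        """Standard nonparametric Gini estimator from a sample."""
        x = np.sort(np.asarray(x, dtype=float))
        n = len(x)
        i = np.arange(1, n + 1)
        return np.sum((2 * i - n - 1) * x) / (n * np.sum(x))

    alpha, n, reps = 1.5, 200, 5000
    true_gini = 1.0 / (2.0 * alpha - 1.0)
    # numpy's pareto() is Lomax; adding 1 gives a classical Pareto with x_min = 1
    estimates = [gini(rng.pareto(alpha, n) + 1.0) for _ in range(reps)]

    print("true Gini           :", true_gini)
    print("mean sample estimate:", np.mean(estimates))   # systematically below
    ```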

  10. Nontrivial transition of transmission in a highly open quantum point contact in the quantum Hall regime

    NASA Astrophysics Data System (ADS)

    Hong, Changki; Park, Jinhong; Chung, Yunchul; Choi, Hyungkook; Umansky, Vladimir

    2017-11-01

    Transmission through a quantum point contact (QPC) in the quantum Hall regime usually exhibits multiple resonances as a function of gate voltage and high nonlinearity in bias. Such behavior is unpredictable and changes from sample to sample. Here, we report the observation of a sharp transition of the transmission through an open QPC at finite bias, which was observed consistently for all the tested QPCs. It is found that the bias dependence of the transition can be fitted to the Fermi-Dirac distribution function through universal scaling. The fitted temperature matches quite nicely the electron temperature measured via shot-noise thermometry. While the origin of the transition is unclear, we propose a phenomenological model based on our experimental results that may help to understand such a sharp transition. Similar transitions are observed in the fractional quantum Hall regime, and it is found that the temperature of the system can be measured by rescaling the quasiparticle energy with the effective charge (e* = e/3). We believe that the observed phenomena can be exploited as a tool for measuring the electron temperature of the system and for studying the quasiparticle charges of the fractional quantum Hall states.
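
    As an illustration of the kind of Fermi-Dirac fit mentioned above, the sketch below fits a Fermi-Dirac step to synthetic transmission-versus-bias data with scipy.optimize.curve_fit and reads off an effective electron temperature. The data, transition position, and temperature are made-up assumptions, not experimental values.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    # Fit a sharp transmission transition to a Fermi-Dirac step to extract an
    # effective electron temperature.  The "data" are synthetic.
    KB = 8.617e-5 * 1e6   # Boltzmann constant in ueV / K

    def fermi_step(v_uev, v0_uev, t_kelvin):
        """Transmission modeled as a Fermi-Dirac step in bias energy (ueV)."""
        return 1.0 / (np.exp((v_uev - v0_uev) / (KB * t_kelvin)) + 1.0)

    rng = np.random.default_rng(9)
    bias = np.linspace(-40, 40, 81)                               # ueV
    data = fermi_step(bias, 5.0, 30e-3) + 0.01 * rng.standard_normal(bias.size)

    (v0_fit, t_fit), _ = curve_fit(fermi_step, bias, data, p0=[0.0, 50e-3])
    print(f"fitted transition position : {v0_fit:.2f} ueV")
    print(f"fitted electron temperature: {t_fit * 1e3:.1f} mK")
    ```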

  11. Unifying quantum heat transfer in a nonequilibrium spin-boson model with full counting statistics

    NASA Astrophysics Data System (ADS)

    Wang, Chen; Ren, Jie; Cao, Jianshu

    2017-02-01

    To study the full counting statistics of quantum heat transfer in a driven nonequilibrium spin-boson model, we develop a generalized nonequilibrium polaron-transformed Redfield equation with an auxiliary counting field. This enables us to study the impact of qubit-bath coupling ranging from the weak to the strong regime. Without external modulations, we observe maximal values of both steady-state heat flux and noise power in moderate coupling regimes, below which we find that these two transport quantities are enhanced by the finite qubit energy bias. With external modulations, the geometric-phase-induced heat flux shows a monotonic decrease upon increasing the qubit-bath coupling at zero qubit energy bias, whereas under finite qubit energy bias it exhibits an interesting reversal behavior in the strong coupling regime. Our results unify the seemingly contradictory results in the weak and strong qubit-bath coupling regimes and provide detailed dissections of the quantum fluctuations of nonequilibrium heat transfer.

  12. Estimating interevent time distributions from finite observation periods in communication networks

    NASA Astrophysics Data System (ADS)

    Kivelä, Mikko; Porter, Mason A.

    2015-11-01

    A diverse variety of processes—including recurrent disease episodes, neuron firing, and communication patterns among humans—can be described using interevent time (IET) distributions. Many such processes are ongoing, although event sequences are only available during a finite observation window. Because the observation time window is more likely to begin or end during long IETs than during short ones, the analysis of such data is susceptible to a bias induced by the finite observation period. In this paper, we illustrate how this length bias arises and how it can be corrected without assuming any particular shape for the IET distribution. To do this, we model event sequences using stationary renewal processes, and we formulate simple heuristics for determining the severity of the bias. To illustrate our results, we focus on the example of empirical communication networks, which are temporal networks that are constructed from communication events. The IET distributions of such systems guide efforts to build models of human behavior, and the variance of IETs is very important for estimating the spreading rate of information in networks of temporal interactions. We analyze several well-known data sets from the literature, and we find that the resulting bias can lead to systematic underestimates of the variance in the IET distributions and that correcting for the bias can lead to qualitatively different results for the tails of the IET distributions.
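
    The window-induced bias can be demonstrated with a short simulation: events are generated by a renewal process with heavy-tailed interevent times, only IETs whose endpoints both fall inside a finite window are kept, and their mean and variance are compared with those of the full IET sample. The lognormal parameters and window length are illustrative assumptions; the paper's correction procedure is not implemented here.

    ```python
    import numpy as np

    # Finite-window bias in interevent time (IET) statistics.  Only IETs whose
    # both endpoints fall inside a window of length W are observed; because
    # windows preferentially cut long IETs, the observed IETs underestimate
    # the variance (and here also the mean) of the true IET distribution.
    rng = np.random.default_rng(8)

    iets = rng.lognormal(mean=0.0, sigma=1.5, size=2_000_000)   # true IETs
    times = np.cumsum(iets)                                     # event times
    W = 50.0                                                    # window length

    observed = []
    for _ in range(2000):                                       # many random windows
        t0 = rng.uniform(0.0, times[-1] - W)
        lo, hi = np.searchsorted(times, [t0, t0 + W])
        if hi - lo > 1:
            observed.append(np.diff(times[lo:hi]))              # fully observed IETs
    observed = np.concatenate(observed)

    print("true IET mean / variance    :", iets.mean(), iets.var())
    print("windowed IET mean / variance:", observed.mean(), observed.var())
    ```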

  13. Chiral tunneling modulated by a time-periodic potential on the surface states of a topological insulator

    PubMed Central

    Li, Yuan; Jalil, Mansoor B. A.; Tan, S. G.; Zhao, W.; Bai, R.; Zhou, G. H.

    2014-01-01

    Time-periodic perturbation can be used to modify the transport properties of the surface states of topological insulators, specifically their chiral tunneling property. Using the scattering matrix method, we study the tunneling transmission of the surface states of a topological insulator under the influence of a time-dependent potential and finite gate bias voltage. It is found that perfect transmission is obtained for electrons which are injected normally into the time-periodic potential region in the absence of any bias voltage. However, this signature of Klein tunneling is destroyed when a bias voltage is applied, with the transmission probability of normally incident electrons decreasing with increasing gate bias voltage. Likewise, the overall conductance of the system decreases significantly when a gate bias voltage is applied. The characteristic left-handed helicity of the transmitted spin polarization is also broken by the finite gate bias voltage. In addition, the time-dependent potential modifies the large-angle transmission profile, which exhibits an oscillatory or resonance-like behavior. Finally, time-dependent transport modes (with oscillating potential in the THz frequency) can result in enhanced overall conductance, irrespective of the presence or absence of the gate bias voltage. PMID:24713634

  14. Steady-State Density Functional Theory for Finite Bias Conductances.

    PubMed

    Stefanucci, G; Kurth, S

    2015-12-09

    In the framework of density functional theory, a formalism to describe electronic transport in the steady state is proposed which uses the density on the junction and the steady current as basic variables. We prove that, in a finite window around zero bias, there is a one-to-one map between the basic variables and both the local potential on and the bias across the junction. The resulting Kohn-Sham system features two exchange-correlation (xc) potentials, a local xc potential and an xc contribution to the bias. For weakly coupled junctions the xc potentials exhibit steps in the density-current plane which are shown to be crucial to describe the Coulomb blockade diamonds. At small currents these steps emerge as the equilibrium xc discontinuity bifurcates. The formalism is applied to a model benzene junction, finding perfect agreement with the orthodox theory of Coulomb blockade.

  15. Validation of the Aura Microwave Limb Sounder Temperature and Geopotential Height Measurements

    NASA Technical Reports Server (NTRS)

    Schwartz, M. J.; Lambert, A.; Manney, G. L.; Read, W. G.; Livesey, N. J.; Froidevaux, L.; Ao, C. O.; Bernath, P. F.; Boone, C. D.; Cofield, R. E.

    2007-01-01

    This paper describes the retrieval algorithm used to determine temperature and height from radiance measurements by the Microwave Limb Sounder on EOS Aura. MLS is a "limb-scanning" instrument, meaning that it views the atmosphere along paths that do not intersect the surface - it actually looks forwards from the Aura satellite. This means that the temperature retrievals are for a "profile" of the atmosphere somewhat ahead of the satellite. Because of the need to view a finite sample of the atmosphere, the sample spans a box about 1.5 km deep and several tens of kilometers in width; the optical characteristics of the atmosphere mean that the sample is representative of a tube about 200-300 km long in the direction of view. The retrievals use temperature analyses from NASA's Goddard Earth Observing System, Version 5 (GEOS-5) data assimilation system as a priori states. The temperature retrievals are somewhat dependent on these a priori states, especially in the lower stratosphere. An important part of the validation of any new dataset involves comparison with other, independent datasets. A large part of this study is concerned with such comparisons, using a number of independent space-based measurements obtained using different techniques, and with meteorological analyses. The MLS temperature data are shown to have biases that vary with height, but also depend on the validation dataset. MLS data are apparently biased slightly cold relative to correlative data in the upper troposphere and slightly warm in the middle stratosphere. A warm MLS bias in the upper stratosphere may be due to a cold bias in GEOS-5 temperatures.

  16. Finite-size effects and magnetic exchange coupling in thin CoO layers

    NASA Astrophysics Data System (ADS)

    Ambrose, Thomas Francis

    Finite-size effects in CoO have been observed in CoO/SiO2 multilayers. The Néel temperatures of the CoO layers, as determined by dc susceptibility measurements, follow a finite-size scaling relation with a shift exponent λ = 1.55 ± 0.05. This exponent is close to the theoretical value for finite-size scaling in an Ising system. The zero-temperature correlation length has also been determined to be 18 Å, while antiferromagnetic ordering persists down to a CoO layer thickness of 10 Å. The properties of exchange biasing have been extensively studied in NiFe/CoO bilayers. The effects of the cooling field (H_FC), up to 50 kOe, on the resultant exchange field (H_E) and coercivity (H_C) have been examined. The value of H_E increases rapidly at low cooling fields (H_FC < 1 kOe) and levels off for H_FC larger than 4 kOe. The value of H_C also depends upon H_FC, but less sensitively. The bilayer thickness also influences exchange biasing. We find that H_E varies inversely with both t_FM and t_AF, where t_FM and t_AF are the ferromagnetic and antiferromagnetic layer thicknesses, respectively. Because of the 1/t_AF dependence, the simple picture of interfacial coupling between ferromagnet and antiferromagnet spins appears to be inadequate. The assertion of long-range coupling between ferromagnetic and antiferromagnetic layers has been verified by the observation of antiferromagnetic exchange coupling across spacer layers in NiFe/NM/CoO trilayers, where NM is a non-magnetic material. Exchange biasing has been observed in trilayers with metallic spacer layers up to 50 Å thick using Ag, Cu, and Au, while no exchange field was observed for insulating spacer layers of any thickness using Al2O3, SiO2, and MgO. The temperature dependence of H_E and H_C and the effect of the deposition order have been studied in a series of bilayer (NiFe/CoO and CoO/NiFe) and trilayer (NiFe/CoO/NiFe) films. A profound difference in H_E was observed in samples with NiFe deposited on top of CoO compared to samples with CoO deposited on top of NiFe. When CoO is on top of NiFe, H_E varies linearly with temperature, while for samples with NiFe on top of CoO, H_E has a plateau followed by a rapid decrease. These distinct temperature dependences have been reproduced in NiFe/CoO/NiFe trilayers, which contain both geometries. Structural analysis using transmission electron microscopy indicates no apparent differences between the top and bottom interfaces. The angular dependence of the exchange coupling in a NiFe/CoO bilayer has been measured. Both H_E and H_C, with unidirectional and uniaxial characteristics, respectively, are integral parts of the exchange coupling. The values of H_E can be expressed by a series of odd-angle cosine terms, while the values of H_C can be expressed by a series of even-angle cosine terms. Finally, exchange biasing has been used to "spin engineer" ferromagnetic layers in NiFe/CoO/NiFe trilayers. Four different spin structures have been observed. A phase diagram for the four spin structures, and the conditions under which each is obtained, has been determined. (Abstract shortened by UMI.)
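
    A minimal sketch of the kind of shift-exponent analysis described above, assuming the standard finite-size shift form (T_N(bulk) - T_N(t))/T_N(bulk) = (xi_0/t)^lambda; the layer thicknesses, Néel temperatures, and bulk value used below are hypothetical stand-ins, not the thesis data.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical CoO layer thicknesses (Angstrom) and measured Neel temperatures (K).
t_layer = np.array([25.0, 40.0, 60.0, 100.0, 200.0, 400.0])
T_neel = np.array([116.0, 207.0, 246.0, 271.0, 284.0, 289.0])

TN_BULK = 291.0  # assumed bulk Neel temperature of CoO (K)

def finite_size_shift(t, xi0, lam):
    """Shift form: (TN_bulk - TN(t)) / TN_bulk = (xi0 / t)**lam."""
    return TN_BULK * (1.0 - (xi0 / t) ** lam)

(xi0, lam), _ = curve_fit(finite_size_shift, t_layer, T_neel,
                          p0=[15.0, 1.5], bounds=([1.0, 0.5], [30.0, 3.0]))
print(f"zero-temperature correlation length xi0 ~ {xi0:.1f} A, shift exponent lambda ~ {lam:.2f}")
```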

  17. A New Method for Calculating Counts in Cells

    NASA Astrophysics Data System (ADS)

    Szapudi, István

    1998-04-01

    In the near future, a new generation of CCD-based galaxy surveys will enable high-precision determination of the N-point correlation functions. The resulting information will help to resolve the ambiguities associated with two-point correlation functions, thus constraining theories of structure formation, biasing, and Gaussianity of initial conditions independently of the value of Ω. As one of the most successful methods of extracting the amplitude of higher order correlations is based on measuring the distribution of counts in cells, this work presents an advanced way of measuring it with unprecedented accuracy. Szapudi & Colombi identified the main sources of theoretical errors in extracting counts in cells from galaxy catalogs. One of these sources, termed as measurement error, stems from the fact that conventional methods use a finite number of sampling cells to estimate counts in cells. This effect can be circumvented by using an infinite number of cells. This paper presents an algorithm, which in practice achieves this goal; that is, it is equivalent to throwing an infinite number of sampling cells in finite time. The errors associated with sampling cells are completely eliminated by this procedure, which will be essential for the accurate analysis of future surveys.
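
    To make the "measurement error" concrete, the sketch below implements the conventional finite-sampling estimator of the count-in-cells distribution P_N, whose fluctuation with the number of randomly thrown cells is exactly the error the algorithm above is designed to eliminate; the point set and cell size are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical clustered "galaxy" positions in a unit box.
centers = rng.random((50, 2))
gal = (centers[rng.integers(0, 50, 5000)] + 0.01 * rng.normal(size=(5000, 2))) % 1.0

def counts_in_cells(points, cell_size, n_cells, rng):
    """Conventional estimator: histogram of counts in n_cells randomly thrown square cells."""
    corners = rng.random((n_cells, 2)) * (1.0 - cell_size)
    counts = np.empty(n_cells, dtype=int)
    for i, corner in enumerate(corners):
        inside = np.all((points >= corner) & (points < corner + cell_size), axis=1)
        counts[i] = inside.sum()
    return np.bincount(counts) / n_cells  # estimate of P_N

# The estimate still fluctuates with the (finite) number of sampling cells.
for n in (100, 1000, 10000):
    print(n, np.round(counts_in_cells(gal, 0.05, n, rng)[:5], 4))
```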

  18. Managing distance and covariate information with point-based clustering.

    PubMed

    Whigham, Peter A; de Graaf, Brandon; Srivastava, Rashmi; Glue, Paul

    2016-09-01

    Geographic perspectives of disease and the human condition often involve point-based observations and questions of clustering or dispersion within a spatial context. These problems involve a finite set of point observations and are constrained by a larger, but finite, set of locations where the observations could occur. Developing a rigorous method for pattern analysis in this context requires handling spatial covariates, a method for constrained finite spatial clustering, and addressing bias in geographic distance measures. An approach based on Ripley's K, applied to the problem of clustering of deliberate self-harm (DSH), is presented. Point-based Monte-Carlo simulation of Ripley's K, accounting for socio-economic deprivation and sources of distance measurement bias, was developed to estimate clustering of DSH at a range of spatial scales. A rotated Minkowski L1 distance metric allowed variation in physical distance and clustering to be assessed. Self-harm data were derived from an audit of 2 years' emergency hospital presentations (n = 136) in a New Zealand town (population ~50,000). The study area was defined by residential (housing) land parcels representing a finite set of possible point addresses. Area-based deprivation was spatially correlated. Accounting for deprivation and distance bias showed evidence for clustering of DSH at spatial scales up to 500 m with a one-sided 95% CI, suggesting that social contagion may be present for this urban cohort. Many problems involve finite locations in geographic space that require estimates of distance-based clustering at many scales. A Monte-Carlo approach to Ripley's K, incorporating covariates and models for distance bias, is crucial when assessing health-related clustering. The case study showed that social network structure defined at the neighbourhood level may account for aspects of neighbourhood clustering of DSH. Accounting for covariate measures that exhibit spatial clustering, such as deprivation, is crucial when assessing point-based clustering.
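
    A minimal sketch of the core ingredients (Ripley's K over a finite set of candidate addresses, with a one-sided Monte-Carlo envelope); the covariate weighting and the rotated Minkowski L1 distance correction used in the study are omitted, and the coordinates below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical candidate residential addresses (metres) and observed cases drawn from them.
candidates = rng.random((2000, 2)) * 5000.0
cases = candidates[rng.choice(2000, 136, replace=False)]

def ripley_k(points, radii, area):
    """Naive (edge-uncorrected) Ripley's K at the given radii."""
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)
    n = len(points)
    return np.array([area * (d < r).sum() / (n * n) for r in radii])

radii = np.linspace(50.0, 1000.0, 20)
area = 5000.0 ** 2
k_obs = ripley_k(cases, radii, area)

# Null model: cases fall at random over the finite candidate-address set.
sims = np.array([ripley_k(candidates[rng.choice(2000, 136, replace=False)], radii, area)
                 for _ in range(99)])
upper = np.percentile(sims, 95, axis=0)
print("scales with one-sided evidence of clustering:", radii[k_obs > upper])
```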

  19. United States Air Force Graduate Student Research Program. Program Management Report

    DTIC Science & Technology

    1988-12-01

    Report excerpts: "Preliminary Structural Design/Optimization," by Richard A. Swift. Abstract: "Finite element analysis for use in structural design has advanced to the point where..." Also listed: "...Plates Subjected to Low Velocity Impact" (Gregory Schoeppner; same report as Prof. William Wolfe) and "Finite Element Analysis for Preliminary..." (Richard...). Further excerpt: "...and dynamic load conditions using both radial and bias-ply tires. A detailed three-dimensional finite-element model of the wheel was generated for..."

  20. Topology of Large-Scale Structures of Galaxies in two Dimensions—Systematic Effects

    NASA Astrophysics Data System (ADS)

    Appleby, Stephen; Park, Changbom; Hong, Sungwook E.; Kim, Juhan

    2017-02-01

    We study the two-dimensional topology of the galaxy distribution when projected onto two-dimensional spherical shells. Using the latest Horizon Run 4 simulation data, we construct the genus of the two-dimensional field and consider how this statistic is affected by late-time nonlinear effects, principally gravitational collapse and redshift space distortion (RSD). We also consider systematic and numerical artifacts, such as shot noise, galaxy bias, and finite pixel effects. We model the systematics using a Hermite polynomial expansion and perform a comprehensive analysis of known effects on the two-dimensional genus, with a view toward using the statistic for cosmological parameter estimation. We find that the finite pixel effect is dominated by an amplitude drop and can be made less than 1% by adopting pixels smaller than 1/3 of the angular smoothing length. Nonlinear gravitational evolution introduces time-dependent coefficients of the zeroth, first, and second Hermite polynomials, but the genus amplitude changes by less than 1% between z = 1 and z = 0 for smoothing scales R_G > 9 Mpc/h. Non-zero terms are measured up to third order in the Hermite polynomial expansion when studying RSD. Differences in the shapes of the genus curves in real and redshift space are small when we adopt thick redshift shells, but the amplitude change remains a significant ~O(10%) effect. The combined effects of galaxy biasing and shot noise produce systematic effects up to the second Hermite polynomial. It is shown that, when sampling, the use of galaxy mass cuts significantly reduces the effect of shot noise relative to random sampling.

  1. Effect of spatial bias on the nonequilibrium phase transition in a system of coagulating and fragmenting particles.

    PubMed

    Rajesh, R; Krishnamurthy, Supriya

    2002-10-01

    We examine the effect of spatial bias on a nonequilibrium system in which masses on a lattice evolve through the elementary moves of diffusion, coagulation, and fragmentation. When there is no preferred directionality in the motion of the masses, the model is known to exhibit a nonequilibrium phase transition between two different types of steady state, in all dimensions. We show analytically that introducing a preferred direction in the motion of the masses inhibits the occurrence of the phase transition in one dimension, in the thermodynamic limit. A finite-size system, however, continues to show a signature of the original transition, and we characterize the finite-size scaling implications of this. Our analysis is supported by numerical simulations. In two dimensions, bias is shown to be irrelevant.

  2. A bias correction for covariance estimators to improve inference with generalized estimating equations that use an unstructured correlation matrix.

    PubMed

    Westgate, Philip M

    2013-07-20

    Generalized estimating equations (GEEs) are routinely used for the marginal analysis of correlated data. The efficiency of GEE depends on how closely the working covariance structure resembles the true structure, and therefore accurate modeling of the working correlation of the data is important. A popular approach is the use of an unstructured working correlation matrix, as it is not as restrictive as simpler structures such as exchangeable and AR-1 and thus can theoretically improve efficiency. However, because of the potential for having to estimate a large number of correlation parameters, variances of regression parameter estimates can be larger than theoretically expected when utilizing the unstructured working correlation matrix. Therefore, standard error estimates can be negatively biased. To account for this additional finite-sample variability, we derive a bias correction that can be applied to typical estimators of the covariance matrix of parameter estimates. Via simulation and in application to a longitudinal study, we show that our proposed correction improves standard error estimation and statistical inference. Copyright © 2012 John Wiley & Sons, Ltd.

  3. Propagation Characteristics of Finite Ground Coplanar Waveguide on Si Substrates With Porous Si and Polyimide Interface Layers

    NASA Technical Reports Server (NTRS)

    Ponchak, George E.; Itotia, Isaac K.; Drayton, Rhonda Franklin

    2003-01-01

    Measured and modeled propagation characteristics of Finite Ground Coplanar (FGC) waveguide fabricated on a 15 ohm-cm Si substrate with a 23 micron thick, 68% porous Si layer and a 20 micron thick polyimide interface layer are presented for the first time. Attenuation and effective permittivity as a function of the FGC geometry and the bias between the center conductor and the ground planes are presented. It is shown that the porous Si reduces the attenuation by 1 dB/cm compared to FGC lines with only polyimide interface layers, and the polyimide on porous silicon demonstrates negligible bias dependence.

  4. Effect of driving voltages in dual capacitively coupled radio frequency plasma: A study by nonlinear global model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bora, B., E-mail: bbora@cchen.cl

    2015-10-15

    On the basis of a nonlinear global model, a dual-frequency capacitively coupled radio frequency plasma driven at 13.56 MHz and 27.12 MHz has been studied to investigate the influence of the driving voltages on the generation of dc self-bias and on plasma heating. Fluid equations for the ions inside the plasma sheath have been used to determine the voltage-charge relations of the plasma sheath. Geometrically symmetric as well as asymmetric cases, with a finite geometrical asymmetry of 1.2 (ratio of electrode areas), have been considered to make the study more representative of experiment. The electrical asymmetry effect (EAE) and the finite geometrical asymmetry are found to work differently in controlling the dc self-bias. The amount of EAE is primarily controlled by the phase angle between the two consecutive harmonic waveforms. Incorporating the finite geometrical asymmetry in the calculations shifts the dc self-bias towards negative polarity, while increasing the amount of EAE increases the dc self-bias in either direction. For phase angles between the two waveforms of ϕ = 0 and ϕ = π/2, the amount of EAE increases significantly with increasing low-frequency voltage, whereas no such increase is found with increasing high-frequency voltage. In contrast to the geometrically symmetric case, where the variations of the dc self-bias with driving voltage for phase angles ϕ = 0 and π/2 are just opposite in polarity, the variation for the geometrically asymmetric case differs between ϕ = 0 and π/2. In the asymmetric case, for ϕ = 0, the dc self-bias increases towards the negative direction with increasing low- and high-frequency voltages, but for ϕ = π/2, the dc self-bias increases towards the positive direction with increasing low-frequency voltage and towards the negative direction with increasing high-frequency voltage.
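
    The full nonlinear global model is not reproduced here, but the phase-angle control of the dc self-bias can be illustrated with the widely used electrical-asymmetry estimate eta ~ -(phi_max + eps*phi_min)/(1 + eps), where phi_max and phi_min are the extrema of the applied dual-frequency waveform and eps is a symmetry parameter absorbing the geometric asymmetry; the voltages and eps below are illustrative assumptions.

```python
import numpy as np

def dc_self_bias(V_lf, V_hf, phase, eps=1.2, n=100_000):
    """Approximate dc self-bias of a dual-frequency CCRF discharge.

    phi(t) = V_lf*cos(w*t + phase) + V_hf*cos(2*w*t); the self-bias is estimated
    from the waveform extrema via eta ~ -(phi_max + eps*phi_min)/(1 + eps).
    Schematic only; not the nonlinear global model of the study above.
    """
    wt = np.linspace(0.0, 2.0 * np.pi, n)
    phi = V_lf * np.cos(wt + phase) + V_hf * np.cos(2.0 * wt)
    return -(phi.max() + eps * phi.min()) / (1.0 + eps)

for phase in (0.0, np.pi / 2.0):
    for V_lf in (100.0, 200.0, 300.0):
        eta = dc_self_bias(V_lf, V_hf=150.0, phase=phase)
        print(f"phase = {phase:4.2f} rad, V_lf = {V_lf:5.0f} V  ->  dc self-bias ~ {eta:7.1f} V")
```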

  5. Technical note: Alternatives to reduce adipose tissue sampling bias.

    PubMed

    Cruz, G D; Wang, Y; Fadel, J G

    2014-10-01

    Understanding the mechanisms by which nutritional and pharmaceutical factors can manipulate adipose tissue growth and development in production animals has direct and indirect effects in the profitability of an enterprise. Adipocyte cellularity (number and size) is a key biological response that is commonly measured in animal science research. The variability and sampling of adipocyte cellularity within a muscle has been addressed in previous studies, but no attempt to critically investigate these issues has been proposed in the literature. The present study evaluated 2 sampling techniques (random and systematic) in an attempt to minimize sampling bias and to determine the minimum number of samples from 1 to 15 needed to represent the overall adipose tissue in the muscle. Both sampling procedures were applied on adipose tissue samples dissected from 30 longissimus muscles from cattle finished either on grass or grain. Briefly, adipose tissue samples were fixed with osmium tetroxide, and size and number of adipocytes were determined by a Coulter Counter. These results were then fit in a finite mixture model to obtain distribution parameters of each sample. To evaluate the benefits of increasing number of samples and the advantage of the new sampling technique, the concept of acceptance ratio was used; simply stated, the higher the acceptance ratio, the better the representation of the overall population. As expected, a great improvement on the estimation of the overall adipocyte cellularity parameters was observed using both sampling techniques when sample size number increased from 1 to 15 samples, considering both techniques' acceptance ratio increased from approximately 3 to 25%. When comparing sampling techniques, the systematic procedure slightly improved parameters estimation. The results suggest that more detailed research using other sampling techniques may provide better estimates for minimum sampling.

  6. Photocurrent generation in a metallic transition-metal dichalcogenide

    NASA Astrophysics Data System (ADS)

    Mehmood, Naveed; Rasouli, Hamid Reza; Çakıroğlu, Onur; Kasırga, T. Serkan

    2018-05-01

    Photocurrent generation is unexpected in metallic 2D layered materials unless a photothermal mechanism is prevalent. Yet, the typically high thermal conductivity and low absorption of the visible spectrum prevent photothermal current generation in metals. Here, we report photoresponse from two-terminal devices of mechanically exfoliated metallic 3R-NbS2 thin crystals using scanning photocurrent microscopy (SPCM) both at zero and finite bias. SPCM measurements reveal that the photocurrent predominantly emerges from the metal/NbS2 junctions of the two-terminal device at zero bias. At finite biases, along with the photocurrent generated at the metal/NbS2 junctions, a negative photoresponse from all over the NbS2 crystal is also evident. We find that the observed photocurrent can be explained by local heating caused by the laser excitation. These findings show that NbS2 is among the few metallic materials in which photocurrent generation is possible.

  7. Empirical evidence for resource-rational anchoring and adjustment.

    PubMed

    Lieder, Falk; Griffiths, Thomas L; M Huys, Quentin J; Goodman, Noah D

    2018-04-01

    People's estimates of numerical quantities are systematically biased towards their initial guess. This anchoring bias is usually interpreted as sign of human irrationality, but it has recently been suggested that the anchoring bias instead results from people's rational use of their finite time and limited cognitive resources. If this were true, then adjustment should decrease with the relative cost of time. To test this hypothesis, we designed a new numerical estimation paradigm that controls people's knowledge and varies the cost of time and error independently while allowing people to invest as much or as little time and effort into refining their estimate as they wish. Two experiments confirmed the prediction that adjustment decreases with time cost but increases with error cost regardless of whether the anchor was self-generated or provided. These results support the hypothesis that people rationally adapt their number of adjustments to achieve a near-optimal speed-accuracy tradeoff. This suggests that the anchoring bias might be a signature of the rational use of finite time and limited cognitive resources rather than a sign of human irrationality.

  8. Influence of the properties of soft collective spin wave modes on the magnetization reversal in finite arrays of dipolarly coupled magnetic dots

    NASA Astrophysics Data System (ADS)

    Stebliy, Maxim; Ognev, Alexey; Samardak, Alexander; Chebotkevich, Ludmila; Verba, Roman; Melkov, Gennadiy; Tiberkevich, Vasil; Slavin, Andrei

    2015-06-01

    Magnetization reversal in finite chains and square arrays of closely packed cylindrical magnetic dots, having vortex ground state in the absence of the external bias field, has been studied experimentally by measuring static hysteresis loops, and also analyzed theoretically. It has been shown that the field Bn of a vortex nucleation in a dot as a function of the finite number N of dots in the array's side may exhibit a monotonic or an oscillatory behavior depending on the array geometry and the direction of the external bias magnetic field. The oscillations in the dependence Bn(N) are shown to be caused by the quantization of the collective soft spin wave mode, which corresponds to the vortex nucleation in a finite array of dots. These oscillations are directly related to the form and symmetry of the dispersion law of the soft SW mode: the oscillation could appear only if the minimum of the soft mode spectrum is not located at any of the symmetric points inside the first Brillouin zone of the array's lattice. Thus, the purely static measurements of the hysteresis loops in finite arrays of coupled magnetic dots can yield important information about the properties of the collective spin wave excitations in these arrays.

  9. Is the permeability of naturally fractured rocks scale dependent?

    NASA Astrophysics Data System (ADS)

    Azizmohammadi, Siroos; Matthäi, Stephan K.

    2017-09-01

    The equivalent permeability, keq of stratified fractured porous rocks and its anisotropy is important for hydrocarbon reservoir engineering, groundwater hydrology, and subsurface contaminant transport. However, it is difficult to constrain this tensor property as it is strongly influenced by infrequent large fractures. Boreholes miss them and their directional sampling bias affects the collected geostatistical data. Samples taken at any scale smaller than that of interest truncate distributions and this bias leads to an incorrect characterization and property upscaling. To better understand this sampling problem, we have investigated a collection of outcrop-data-based Discrete Fracture and Matrix (DFM) models with mechanically constrained fracture aperture distributions, trying to establish a useful Representative Elementary Volume (REV). Finite-element analysis and flow-based upscaling have been used to determine keq eigenvalues and anisotropy. While our results indicate a convergence toward a scale-invariant keq REV with increasing sample size, keq magnitude can have multi-modal distributions. REV size relates to the length of dilated fracture segments as opposed to overall fracture length. Tensor orientation and degree of anisotropy also converge with sample size. However, the REV for keq anisotropy is larger than that for keq magnitude. Across scales, tensor orientation varies spatially, reflecting inhomogeneity of the fracture patterns. Inhomogeneity is particularly pronounced where the ambient stress selectively activates late- as opposed to early (through-going) fractures. While we cannot detect any increase of keq with sample size as postulated in some earlier studies, our results highlight a strong keq anisotropy that influences scale dependence.

  10. Toward a Principled Sampling Theory for Quasi-Orders

    PubMed Central

    Ünlü, Ali; Schrepp, Martin

    2016-01-01

    Quasi-orders, that is, reflexive and transitive binary relations, have numerous applications. In educational theories, the dependencies of mastery among the problems of a test can be modeled by quasi-orders. Methods such as item tree or Boolean analysis that mine for quasi-orders in empirical data are sensitive to the underlying quasi-order structure. These data mining techniques have to be compared based on extensive simulation studies, with unbiased samples of randomly generated quasi-orders at their basis. In this paper, we develop techniques that can provide the required quasi-order samples. We introduce a discrete doubly inductive procedure for incrementally constructing the set of all quasi-orders on a finite item set. A randomization of this deterministic procedure allows us to generate representative samples of random quasi-orders. With an outer level inductive algorithm, we consider the uniform random extensions of the trace quasi-orders to higher dimension. This is combined with an inner level inductive algorithm to correct the extensions that violate the transitivity property. The inner level correction step entails sampling biases. We propose three algorithms for bias correction and investigate them in simulation. It is evident that, on even up to 50 items, the new algorithms create close to representative quasi-order samples within acceptable computing time. Hence, the principled approach is a significant improvement to existing methods that are used to draw quasi-orders uniformly at random but cannot cope with reasonably large item sets. PMID:27965601
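
    The doubly inductive construction itself is not reproduced here; the sketch below shows the kind of naive baseline (a random reflexive relation followed by a Warshall transitive closure) whose sampling bias motivates the principled generators discussed above. Item count and edge density are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(7)

def naive_random_quasi_order(n_items, p=0.2, rng=rng):
    """Naive baseline: random reflexive relation, then Warshall transitive closure.

    This does NOT sample uniformly over quasi-orders: taking the closure
    over-represents large (dense) quasi-orders, which is precisely the kind of
    bias a principled sampler must avoid.
    """
    R = rng.random((n_items, n_items)) < p
    np.fill_diagonal(R, True)                      # reflexivity
    for k in range(n_items):                       # Warshall transitive closure
        R = R | (R[:, k:k + 1] & R[k:k + 1, :])
    return R

sizes = [int(naive_random_quasi_order(10).sum()) for _ in range(1000)]
print("mean relation size over 1000 draws:", np.mean(sizes))  # skewed toward dense quasi-orders
```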

  12. Stabilized Finite Elements in FUN3D

    NASA Technical Reports Server (NTRS)

    Anderson, W. Kyle; Newman, James C.; Karman, Steve L.

    2017-01-01

    A Streamline Upwind Petrov-Galerkin (SUPG) stabilized finite-element discretization has been implemented as a library into the FUN3D unstructured-grid flow solver. Motivation for the selection of this methodology is given, details of the implementation are provided, and the discretization for the interior scheme is verified for linear and quadratic elements by using the method of manufactured solutions. A methodology is also described for capturing shocks, and simulation results are compared to the finite-volume formulation that is currently the primary method employed for routine engineering applications. The finite-element methodology is demonstrated to be more accurate than the finite-volume technology, particularly on tetrahedral meshes where the solutions obtained using the finite-volume scheme can suffer from adverse effects caused by bias in the grid. Although no effort has been made to date to optimize computational efficiency, the finite-element scheme is competitive with the finite-volume scheme in terms of computer time to reach convergence.

  13. SEMIPARAMETRIC ADDITIVE RISKS REGRESSION FOR TWO-STAGE DESIGN SURVIVAL STUDIES

    PubMed Central

    Li, Gang; Wu, Tong Tong

    2011-01-01

    In this article we study a semiparametric additive risks model (McKeague and Sasieni (1994)) for two-stage design survival data where accurate information is available only on second stage subjects, a subset of the first stage study. We derive two-stage estimators by combining data from both stages. Large sample inferences are developed. As a by-product, we also obtain asymptotic properties of the single stage estimators of McKeague and Sasieni (1994) when the semiparametric additive risks model is misspecified. The proposed two-stage estimators are shown to be asymptotically more efficient than the second stage estimators. They also demonstrate smaller bias and variance for finite samples. The developed methods are illustrated using small intestine cancer data from the SEER (Surveillance, Epidemiology, and End Results) Program. PMID:21931467

  14. Biased decoy-state measurement-device-independent quantum cryptographic conferencing with finite resources.

    PubMed

    Chen, RuiKe; Bao, WanSu; Zhou, Chun; Li, Hongwei; Wang, Yang; Bao, HaiZe

    2016-03-21

    In recent years, a large amount of work has been done to narrow the gap between theory and practice in quantum key distribution (QKD). However, most of it focuses on two-party protocols. Very recently, Yao Fu et al. proposed a measurement-device-independent quantum cryptographic conferencing (MDI-QCC) protocol and proved its security in the limit of infinitely long keys. As a step towards practical application of MDI-QCC, we design a biased decoy-state measurement-device-independent quantum cryptographic conferencing protocol and analyze its performance in both the finite-key and infinite-key regimes. Numerical simulations show that our decoy-state analysis is tighter than that of Yao Fu et al.: we can achieve a nonzero asymptotic secret key rate at long distances of approximately 200 km, and we also demonstrate that with a finite data size (say 10^11 to 10^13 signals) it is possible to perform secure MDI-QCC over reasonable distances.

  15. The Orthogonally Partitioned EM Algorithm: Extending the EM Algorithm for Algorithmic Stability and Bias Correction Due to Imperfect Data.

    PubMed

    Regier, Michael D; Moodie, Erica E M

    2016-05-01

    We propose an extension of the EM algorithm that exploits the common assumption of unique parameterization, corrects for biases due to missing data and measurement error, converges for the specified model when a standard implementation of the EM algorithm has a low probability of convergence, and reduces a potentially complex algorithm into a sequence of smaller, simpler, self-contained EM algorithms. We use the theory surrounding the EM algorithm to derive the theoretical results of our proposal, showing that an optimal solution over the parameter space is obtained. A simulation study is used to explore the finite sample properties of the proposed extension when there is missing data and measurement error. We observe that partitioning the EM algorithm into simpler steps may provide better bias reduction in the estimation of model parameters. The ability to break down a complicated problem into a series of simpler, more accessible problems will permit a broader implementation of the EM algorithm, permit the use of software packages that now implement and/or automate the EM algorithm, and make the EM algorithm more accessible to a wider and more general audience.

  16. The effects of assembly bias on the inference of matter clustering from galaxy-galaxy lensing and galaxy clustering

    NASA Astrophysics Data System (ADS)

    McEwen, Joseph E.; Weinberg, David H.

    2018-07-01

    The combination of galaxy-galaxy lensing and galaxy clustering is a promising route to measuring the amplitude of matter clustering and testing modified gravity theories of cosmic acceleration. Halo occupation distribution (HOD) modelling can extend the approach down to non-linear scales, but galaxy assembly bias could introduce systematic errors by causing the HOD to vary with the large-scale environment at fixed halo mass. We investigate this problem using the mock galaxy catalogs created by Hearin & Watson (2013, HW13), which exhibit significant assembly bias because galaxy luminosity is tied to halo peak circular velocity and galaxy colour is tied to halo formation time. The preferential placement of galaxies (especially red galaxies) in older haloes affects the cutoff of the mean occupation function ⟨N_cen(M_min)⟩ for central galaxies, with haloes in overdense regions more likely to host galaxies. The effect of assembly bias on the satellite galaxy HOD is minimal. We introduce an extended, environment-dependent HOD (EDHOD) prescription to describe these results and fit galaxy correlation measurements. Crucially, we find that the galaxy-matter cross-correlation coefficient, r_gm(r) ≡ ξ_gm(r)[ξ_mm(r)ξ_gg(r)]^{-1/2}, is insensitive to assembly bias on scales r ≳ 1 h^{-1} Mpc, even though ξ_gm(r) and ξ_gg(r) are both affected individually. We can therefore recover the correct ξ_mm(r) from the HW13 galaxy-galaxy and galaxy-matter correlations using either a standard HOD or EDHOD fitting method. For M_r ≤ -19 or M_r ≤ -20 samples the recovery of ξ_mm(r) is accurate to 2 per cent or better. For a sample of red M_r ≤ -20 galaxies, we achieve 2 per cent recovery at r ≳ 2 h^{-1} Mpc with EDHOD modelling but lower accuracy at smaller scales or with a standard HOD fit. Most of our mock galaxy samples are consistent with r_gm = 1 down to r = 1 h^{-1} Mpc, to within the uncertainties set by our finite simulation volume.
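
    A small numerical illustration of the recovery step described above, assuming r_gm(r) = ξ_gm(r)/[ξ_mm(r)ξ_gg(r)]^{1/2} ≈ 1 so that ξ_mm ≈ ξ_gm^2/ξ_gg; the correlation-function values below are hypothetical stand-ins for measured galaxy clustering and lensing data.

```python
import numpy as np

# Hypothetical measured correlation functions on large scales (r in Mpc/h).
r     = np.array([1.0, 2.0, 4.0, 8.0, 16.0])
xi_gg = np.array([2.10, 0.85, 0.30, 0.095, 0.028])   # galaxy-galaxy clustering
xi_gm = np.array([1.45, 0.60, 0.21, 0.066, 0.019])   # galaxy-matter (from lensing)

# If r_gm = xi_gm / sqrt(xi_mm * xi_gg) is ~1, the matter autocorrelation follows
# directly from the two observables:
xi_mm_est = xi_gm ** 2 / xi_gg

for ri, xm in zip(r, xi_mm_est):
    print(f"r = {ri:5.1f} Mpc/h   xi_mm (assuming r_gm = 1) = {xm:.3f}")
```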

  17. The effects of assembly bias on the inference of matter clustering from galaxy-galaxy lensing and galaxy clustering

    NASA Astrophysics Data System (ADS)

    McEwen, Joseph E.; Weinberg, David H.

    2018-04-01

    The combination of galaxy-galaxy lensing (GGL) and galaxy clustering is a promising route to measuring the amplitude of matter clustering and testing modified gravity theories of cosmic acceleration. Halo occupation distribution (HOD) modeling can extend the approach down to nonlinear scales, but galaxy assembly bias could introduce systematic errors by causing the HOD to vary with large-scale environment at fixed halo mass. We investigate this problem using the mock galaxy catalogs created by Hearin & Watson (2013, HW13), which exhibit significant assembly bias because galaxy luminosity is tied to halo peak circular velocity and galaxy colour is tied to halo formation time. The preferential placement of galaxies (especially red galaxies) in older halos affects the cutoff of the mean occupation function for central galaxies, with halos in overdense regions more likely to host galaxies. The effect of assembly bias on the satellite galaxy HOD is minimal. We introduce an extended, environment-dependent HOD (EDHOD) prescription to describe these results and fit galaxy correlation measurements. Crucially, we find that the galaxy-matter cross-correlation coefficient, r_gm(r) ≡ ξ_gm(r)[ξ_mm(r)ξ_gg(r)]^{-1/2}, is insensitive to assembly bias on scales r ≳ 1 h^{-1} Mpc, even though ξ_gm(r) and ξ_gg(r) are both affected individually. We can therefore recover the correct ξ_mm(r) from the HW13 galaxy-galaxy and galaxy-matter correlations using either a standard HOD or EDHOD fitting method. For M_r ≤ -19 or M_r ≤ -20 samples the recovery of ξ_mm(r) is accurate to 2% or better. For a sample of red M_r ≤ -20 galaxies we achieve 2% recovery at r ≳ 2 h^{-1} Mpc with EDHOD modeling but lower accuracy at smaller scales or with a standard HOD fit. Most of our mock galaxy samples are consistent with r_gm = 1 down to r = 1 h^{-1} Mpc, to within the uncertainties set by our finite simulation volume.

  18. High resolution subsurface imaging using resonance-enhanced detection in 2nd-harmonic KPFM.

    PubMed

    Cadena, Maria Jose; Reifenberger, Ronald G; Raman, Arvind

    2018-06-28

    Second-harmonic Kelvin probe force microscopy is a robust mechanism for subsurface imaging at the nanoscale. Here we exploit resonance-enhanced detection as a way to boost the subsurface contrast with higher force sensitivity using lower bias voltages, in comparison to the traditional off-resonance case. In this mode, the second-harmonic signal of the electrostatic force is acquired at one of the eigenmode frequencies of the microcantilever. As a result, high-resolution subsurface images are obtained in a variety of nanocomposites. To further understand subsurface imaging based on electrostatic forces, we use a finite element model that approximates the geometry of the probe and sample. This allows investigation of the contrast mechanism, the depth sensitivity, and the lateral resolution depending on tip-sample properties. © 2018 IOP Publishing Ltd.

  19. Electronic correlation effects and the Coulomb gap at finite temperature.

    PubMed

    Sandow, B; Gloos, K; Rentzsch, R; Ionov, A N; Schirmacher, W

    2001-02-26

    We have investigated the effect of the long-range Coulomb interaction on the one-particle excitation spectrum of n-type germanium, using tunneling spectroscopy on mechanically controllable break junctions. At low temperatures, the tunnel conductance shows a minimum at zero bias voltage due to the Coulomb gap. Above 1 K, the gap is filled by thermal excitations. This behavior is reflected in the variable-range hopping resistivity measured on the same samples: up to a few degrees Kelvin the Efros-Shklovskii ln R ∝ T^(-1/2) law is obeyed, whereas at higher temperatures deviations from this law occur. The type of crossover differs from that considered previously in the literature.
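
    A small sketch of the standard way such hopping data are analyzed: fitting ln R = ln R_0 + (T_ES/T)^{1/2}, which is linear in T^{-1/2}; the resistance values below are synthetic, not the measured germanium data.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic resistance data (Ohm) in the Efros-Shklovskii regime: R = R0 * exp(sqrt(T_ES / T)).
T = np.array([0.3, 0.5, 0.8, 1.2, 2.0, 3.0])                        # K
R = 50.0 * np.exp(np.sqrt(12.0 / T)) * (1.0 + 0.02 * rng.normal(size=T.size))

# ln R is linear in T**(-1/2); the slope equals sqrt(T_ES).
slope, intercept = np.polyfit(T ** -0.5, np.log(R), 1)
print(f"T_ES ~ {slope ** 2:.1f} K,  R0 ~ {np.exp(intercept):.1f} Ohm")
```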

  20. Implementation of Coupled Skin Temperature Analysis and Bias Correction in a Global Atmospheric Data Assimilation System

    NASA Technical Reports Server (NTRS)

    Radakovich, Jon; Bosilovich, M.; Chern, Jiun-dar; daSilva, Arlindo

    2004-01-01

    The NASA/NCAR Finite Volume GCM (fvGCM) with the NCAR CLM (Community Land Model) version 2.0 was integrated into the NASA/GMAO Finite Volume Data Assimilation System (fvDAS). A new method was developed for coupled skin temperature assimilation and bias correction where the analysis increment and bias correction term is passed into the CLM2 and considered a forcing term in the solution to the energy balance. For our purposes, the fvDAS CLM2 was run at 1 deg. x 1.25 deg. horizontal resolution with 55 vertical levels. We assimilate the ISCCP-DX (30 km resolution) surface temperature product. The atmospheric analysis was performed 6-hourly, while the skin temperature analysis was performed 3-hourly. The bias correction term, which was updated at the analysis times, was added to the skin temperature tendency equation at every timestep. In this presentation, we focus on the validation of the surface energy budget at the in situ reference sites for the Coordinated Enhanced Observation Period (CEOP). We will concentrate on sites that include independent skin temperature measurements and complete energy budget observations for the month of July 2001. In addition, MODIS skin temperature will be used for validation. Several assimilations were conducted and preliminary results will be presented.

  1. Understanding Non-Equilibrium Charge Transport and Rectification at Chromophore/Metal Interfaces

    NASA Astrophysics Data System (ADS)

    Darancet, Pierre

    Understanding non-equilibrium charge and energy transport across nanoscale interfaces is central to developing an intuitive picture of fundamental processes in solar energy conversion applications. In this talk, I will discuss our theoretical studies of finite-bias transport at organic/metal interfaces. First, I will show how the finite-bias electronic structure of such systems can be quantitatively described using density functional theory in conjunction with simple models of non-local correlations and bias-induced Stark effects. Using these methods, I will discuss the conditions of emergence of highly non-linear current-voltage characteristics in bilayers made of prototypical organic materials, and their implications in the context of hole- and electron-blocking layers in organic photovoltaics. In particular, I will show how the use of strongly-hybridized, fullerene-coated metallic surfaces as electrodes is a viable route to maximizing the diodic behavior and electrical functionality of molecular components. The submitted manuscript has been created by UChicago Argonne, LLC, Operator of Argonne National Laboratory (Argonne). Argonne, a U.S. Department of Energy Office of Science laboratory, is operated under Contract No. DE-AC02-06CH11357.

  2. Mechanical Pre-Stressing a Transducer through a Negative DC Biasing Field

    DTIC Science & Technology

    2017-04-21

    Report excerpt (author: Stephen C. Butler). Abbreviations: AC = Alternating Current, DC = Direct Current, FEA = Finite Element Analysis, NUWC = Naval... Text excerpt: "...at resonance into tension is shown in figure 3; it was estimated from finite element analysis (FEA) that the tensional stresses exceeded 2000 psi..." [Standard report documentation fields omitted.]

  3. In situ biasing and off-axis electron holography of a ZnO nanowire

    NASA Astrophysics Data System (ADS)

    den Hertog, Martien; Donatini, Fabrice; McLeod, Robert; Monroy, Eva; Sartel, Corinne; Sallet, Vincent; Pernot, Julien

    2018-01-01

    Quantitative characterization of electrically active dopants and surface charges in nano-objects is challenging, since most characterization techniques using electrons [1-3], ions [4] or field ionization effects [5-7] study the chemical presence of dopants, which are not necessarily electrically active. We perform cathodoluminescence and voltage contrast experiments on a contacted and biased ZnO nanowire with a Schottky contact and measure the depletion length as a function of reverse bias. We compare these results with state-of-the-art off-axis electron holography in combination with electrical in situ biasing on the same nanowire. The extension of the depletion length under bias observed in scanning electron microscopy based techniques is unusual as it follows a linear rather than square root dependence, and is therefore difficult to model by bulk equations or finite element simulations. In contrast, the analysis of the axial depletion length observed by holography may be compared with three-dimensional simulations, which allows estimating an n-doping level of 1 × 10^18 cm^-3 and negative sidewall surface charge of 2.5 × 10^12 cm^-2 of the nanowire, resulting in a radial surface depletion to a depth of 36 nm. We found excellent agreement between the simulated diameter of the undepleted core and the active thickness observed in the experimental data. By combining TEM holography experiments and finite element simulation of the NW electrostatics, the bulk-like character of the nanowire core is revealed.
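
    For reference, the square-root bias dependence that the bulk picture would predict follows from the textbook Schottky depletion width W = sqrt(2*eps*(V_bi + V_r)/(q*N_d)); the built-in potential and relative permittivity below are illustrative values, with N_d set to the ~1e18 cm^-3 level estimated above.

```python
import numpy as np

Q    = 1.602e-19     # elementary charge (C)
EPS0 = 8.854e-12     # vacuum permittivity (F/m)

def schottky_depletion_width(V_reverse, N_d=1e24, eps_r=8.5, V_bi=0.7):
    """Bulk 1D depletion width W = sqrt(2*eps*(V_bi + V_r) / (q*N_d)), in metres.

    N_d = 1e24 m^-3 corresponds to ~1e18 cm^-3; eps_r and V_bi are illustrative
    values for a ZnO Schottky junction.
    """
    return np.sqrt(2.0 * eps_r * EPS0 * (V_bi + V_reverse) / (Q * N_d))

for vr in (0.0, 1.0, 2.0, 4.0):
    print(f"V_r = {vr:3.1f} V  ->  W ~ {schottky_depletion_width(vr) * 1e9:5.1f} nm")
```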

  4. Influence of finite geometrical asymmetry of the electrodes in capacitively coupled radio frequency plasma

    NASA Astrophysics Data System (ADS)

    Bora, B.; Soto, L.

    2014-08-01

    Capacitively coupled radio frequency (CCRF) plasmas have been widely studied in recent decades owing to the versatile applicability of energetic ions, chemically active species, radicals, and energetic neutral species in many materials processing fields, including microelectronics, aerospace, and biology. A dc self-bias is known to develop naturally in a geometrically asymmetric CCRF plasma because of the difference in electrode sizes, known as the geometrical asymmetry of the electrodes, in order to balance the electron and ion fluxes to each electrode within one rf period. The plasma series resonance effect also comes into play due to the geometrical asymmetry and excites several harmonics of the fundamental in low-pressure CCRF plasma. In this work, a 13.56 MHz CCRF plasma is studied on the basis of the nonlinear global model of an asymmetric CCRF discharge to understand the influence of the finite geometrical asymmetry of the electrodes on the generation of the dc self-bias and on plasma heating. The nonlinear global model of the asymmetric discharge has been modified by considering the sheath at the grounded electrode to take into account the finite geometrical asymmetry of the electrodes. The ion density inside both sheaths has been taken into account by incorporating the steady-state fluid equations for ions, considering that the applied rf frequency is higher than the typical ion plasma frequency. Detailed results on the influence of the geometrical asymmetry on the generation of the dc self-bias and on plasma heating are discussed.

  5. Near-optimal protocols in complex nonequilibrium transformations

    DOE PAGES

    Gingrich, Todd R.; Rotskoff, Grant M.; Crooks, Gavin E.; ...

    2016-08-29

    The development of sophisticated experimental means to control nanoscale systems has motivated efforts to design driving protocols that minimize the energy dissipated to the environment. Computational models are a crucial tool in this practical challenge. In this paper, we describe a general method for sampling an ensemble of finite-time, nonequilibrium protocols biased toward a low average dissipation. In addition, we show that this scheme can be carried out very efficiently in several limiting cases. As an application, we sample the ensemble of low-dissipation protocols that invert the magnetization of a 2D Ising model and explore how the diversity of the protocols varies in response to constraints on the average dissipation. In this example, we find that there is a large set of protocols with average dissipation close to the optimal value, which we argue is a general phenomenon.

  6. Estimating Sampling Biases and Measurement Uncertainties of AIRS-AMSU-A Temperature and Water Vapor Observations Using MERRA Reanalysis

    NASA Technical Reports Server (NTRS)

    Hearty, Thomas J.; Savtchenko, Andrey K.; Tian, Baijun; Fetzer, Eric; Yung, Yuk L.; Theobald, Michael; Vollmer, Bruce; Fishbein, Evan; Won, Young-In

    2014-01-01

    We use MERRA (Modern Era Retrospective-Analysis for Research Applications) temperature and water vapor data to estimate the sampling biases of climatologies derived from the AIRS/AMSU-A (Atmospheric Infrared Sounder/Advanced Microwave Sounding Unit-A) suite of instruments. We separate the total sampling bias into temporal and instrumental components. The temporal component is caused by the AIRS/AMSU-A orbit and swath that are not able to sample all of time and space. The instrumental component is caused by scenes that prevent successful retrievals. The temporal sampling biases are generally smaller than the instrumental sampling biases except in regions with large diurnal variations, such as the boundary layer, where the temporal sampling biases of temperature can be +/- 2 K and water vapor can be 10% wet. The instrumental sampling biases are the main contributor to the total sampling biases and are mainly caused by clouds. They are up to 2 K cold and greater than 30% dry over mid-latitude storm tracks and tropical deep convective cloudy regions and up to 20% wet over stratus regions. However, other factors such as surface emissivity and temperature can also influence the instrumental sampling bias over deserts where the biases can be up to 1 K cold and 10% wet. Some instrumental sampling biases can vary seasonally and/or diurnally. We also estimate the combined measurement uncertainties of temperature and water vapor from AIRS/AMSU-A and MERRA by comparing similarly sampled climatologies from both data sets. The measurement differences are often larger than the sampling biases and have longitudinal variations.
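
    A schematic of the decomposition described above, using a reanalysis-like field sub-sampled first by the orbit/swath and then by retrieval success; the field, masks, and the rule that retrievals fail preferentially in one tail (a crude stand-in for cloud screening) are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical reanalysis temperature field (time, lat, lon) with full coverage.
T_full = 250.0 + 10.0 * rng.normal(size=(240, 90, 180))

# Times/locations viewed by the sounder (orbit + swath) ...
overpass = rng.random(T_full.shape) < 0.3
# ... and successful retrievals: here the coldest scenes fail, mimicking cloud screening.
retrieved = overpass & (T_full > np.percentile(T_full, 20))

true_mean     = T_full.mean()
temporal_bias = T_full[overpass].mean()  - true_mean   # orbit/swath sampling only
total_bias    = T_full[retrieved].mean() - true_mean   # orbit/swath + retrieval yield
instr_bias    = total_bias - temporal_bias             # attributable to failed scenes

print(f"temporal {temporal_bias:+.2f} K, instrumental {instr_bias:+.2f} K, total {total_bias:+.2f} K")
```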

  7. Spatial Convergence of Three Dimensional Turbulent Flows

    NASA Technical Reports Server (NTRS)

    Park, Michael A.; Anderson, W. Kyle

    2016-01-01

    Finite-volume and finite-element schemes, both implemented within the FUN3D flow solver, are evaluated for several test cases described on the Turbulence-Modeling Resource (TMR) web site. The cases include subsonic flow over a hemisphere cylinder, subsonic flow over a swept bump configuration, and supersonic flow in a square duct. The finite-volume and finite-element schemes are both used to obtain solutions for the first two cases, whereas only the finite-volume scheme is used for the supersonic duct. For the hemisphere cylinder, finite-element solutions obtained on tetrahedral meshes are compared with finite-volume solutions on mixed-element meshes. For the swept bump, finite-volume solutions have been obtained for both hexahedral and tetrahedral meshes and are compared with finite-element solutions obtained on tetrahedral meshes. For the hemisphere cylinder and the swept bump, solutions are obtained on a series of meshes with varying grid density and comparisons are made between drag coefficients, pressure distributions, velocity profiles, and profiles of the turbulence working variable. The square duct shows small variation due to element type or the spatial accuracy of turbulence model convection. It is demonstrated that the finite-element scheme on tetrahedral meshes yields similar accuracy as the finite-volume scheme on mixed-element and hexahedral grids, and demonstrates less sensitivity to the mesh topology (biased tetrahedral grids) than the finite-volume scheme.

  8. Has the 2008 financial crisis affected stock market efficiency? The case of Eurozone

    NASA Astrophysics Data System (ADS)

    Anagnostidis, P.; Varsakelis, C.; Emmanouilides, C. J.

    2016-04-01

    In this paper, the impact of the 2008 financial crisis on the weak-form efficiency of twelve Eurozone stock markets is investigated empirically. Efficiency is tested via the Generalized Hurst Exponent method, while dynamic Hurst exponents are estimated by means of the rolling window technique. To account for biases associated with the finite sample size and the leptokurtosis of the financial data, the statistical significance of the Hurst exponent estimates is assessed through a series of Monte-Carlo simulations drawn from the class of α-stable distributions. According to our results, the 2008 crisis has adversely affected stock price efficiency in most of the Eurozone capital markets, leading to the emergence of significant mean-reverting patterns in stock price movements.
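
    A minimal sketch of a generalized Hurst exponent estimate via q-th order structure functions, K_q(tau) = <|x(t+tau) - x(t)|^q> ~ tau^(q*H(q)), evaluated in rolling windows; the log-price series, window, and step are hypothetical, and the alpha-stable Monte-Carlo significance test used in the study is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(2)
log_price = np.cumsum(0.01 * rng.normal(size=3000))   # hypothetical log-price random walk

def generalized_hurst(x, q=2, taus=range(1, 20)):
    """H(q) from the scaling K_q(tau) = <|x(t+tau) - x(t)|^q> ~ tau**(q*H(q))."""
    taus = np.asarray(list(taus))
    kq = np.array([np.mean(np.abs(x[tau:] - x[:-tau]) ** q) for tau in taus])
    slope, _ = np.polyfit(np.log(taus), np.log(kq), 1)
    return slope / q

window, step = 500, 100   # illustrative rolling-window settings
H = [generalized_hurst(log_price[s:s + window]) for s in range(0, len(log_price) - window, step)]
print(np.round(H, 3))     # values near 0.5 are consistent with weak-form efficiency
```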

  9. Boundary Closures for Fourth-order Energy Stable Weighted Essentially Non-Oscillatory Finite Difference Schemes

    NASA Technical Reports Server (NTRS)

    Fisher, Travis C.; Carpenter, Mark H.; Yamaleev, Nail K.; Frankel, Steven H.

    2009-01-01

    A general strategy exists for constructing Energy Stable Weighted Essentially Non Oscillatory (ESWENO) finite difference schemes up to eighth-order on periodic domains. These ESWENO schemes satisfy an energy norm stability proof for both continuous and discontinuous solutions of systems of linear hyperbolic equations. Herein, boundary closures are developed for the fourth-order ESWENO scheme that maintain wherever possible the WENO stencil biasing properties, while satisfying the summation-by-parts (SBP) operator convention, thereby ensuring stability in an L2 norm. Second-order, and third-order boundary closures are developed that achieve stability in diagonal and block norms, respectively. The global accuracy for the second-order closures is three, and for the third-order closures is four. A novel set of non-uniform flux interpolation points is necessary near the boundaries to simultaneously achieve 1) accuracy, 2) the SBP convention, and 3) WENO stencil biasing mechanics.

  10. Rational Learning and Information Sampling: On the "Naivety" Assumption in Sampling Explanations of Judgment Biases

    ERIC Educational Resources Information Center

    Le Mens, Gael; Denrell, Jerker

    2011-01-01

    Recent research has argued that several well-known judgment biases may be due to biases in the available information sample rather than to biased information processing. Most of these sample-based explanations assume that decision makers are "naive": They are not aware of the biases in the available information sample and do not correct for them.…

  11. Experimental phase diagram of zero-bias conductance peaks in superconductor/semiconductor nanowire devices

    PubMed Central

    Chen, Jun; Yu, Peng; Stenger, John; Hocevar, Moïra; Car, Diana; Plissard, Sébastien R.; Bakkers, Erik P. A. M.; Stanescu, Tudor D.; Frolov, Sergey M.

    2017-01-01

    Topological superconductivity is an exotic state of matter characterized by spinless p-wave Cooper pairing of electrons and by Majorana zero modes at the edges. The first signature of topological superconductivity is a robust zero-bias peak in tunneling conductance. We perform tunneling experiments on semiconductor nanowires (InSb) coupled to superconductors (NbTiN) and establish the zero-bias peak phase in the space of gate voltage and external magnetic field. Our findings are consistent with calculations for a finite-length topological nanowire and provide means for Majorana manipulation as required for braiding and topological quantum bits. PMID:28913432

  12. Field-Tuned Superconductor-Insulator Transition with and without Current Bias.

    PubMed

    Bielejec, E; Wu, Wenhao

    2002-05-20

    The magnetic-field-tuned superconductor-insulator transition has been studied in ultrathin beryllium films quench condensed near 20 K. In the zero-current limit, a finite-size scaling analysis yields the scaling exponent product νz = 1.35 ± 0.10 and a critical sheet resistance, R_c, of about 1.2 R_Q, with R_Q = h/4e^2. However, in the presence of dc bias currents that are smaller than the zero-field critical currents, νz becomes 0.75 ± 0.10. This new set of exponents suggests that the field-tuned transitions with and without a dc bias current belong to different universality classes.
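
    A rough sketch of the scaling-collapse idea behind such an analysis, assuming the commonly used form R(B, T) = R_c F((B - B_c) T^{-1/(nu z)}) and synthetic isotherms; the collapse-quality measure below is a crude heuristic, not the procedure of the paper.

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic sheet-resistance isotherms R(B, T) built from an assumed scaling form
# R = R_c * exp(u), with u = (B - B_c) * T**(-1/(nu*z)).
B_c, R_c, nuz_true = 1.00, 7.7e3, 1.35
B = np.linspace(0.8, 1.2, 41)
T = np.array([0.05, 0.10, 0.20, 0.40])                      # K
R = np.array([R_c * np.exp((B - B_c) * t ** (-1.0 / nuz_true)) for t in T])
R *= 1.0 + 0.002 * rng.normal(size=R.shape)                 # 0.2% measurement noise

def collapse_mismatch(nuz):
    """RMS mismatch between coldest and warmest isotherms after rescaling the field axis."""
    u_cold = (B - B_c) * T[0] ** (-1.0 / nuz)
    u_warm = (B - B_c) * T[-1] ** (-1.0 / nuz)
    warm_on_cold = np.interp(u_cold, u_warm, np.log(R[-1]))
    overlap = (u_cold >= u_warm.min()) & (u_cold <= u_warm.max())
    return np.sqrt(np.mean((np.log(R[0])[overlap] - warm_on_cold[overlap]) ** 2))

trial = np.linspace(0.6, 2.0, 141)
best = trial[np.argmin([collapse_mismatch(n) for n in trial])]
print(f"best-collapse nu*z ~ {best:.2f}  (synthetic data generated with nu*z = {nuz_true})")
```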

  13. Mapping species distributions with MAXENT using a geographically biased sample of presence data: a performance assessment of methods for correcting sampling bias.

    PubMed

    Fourcade, Yoan; Engler, Jan O; Rödder, Dennis; Secondi, Jean

    2014-01-01

    MAXENT is now a common species distribution modeling (SDM) tool used by conservation practitioners for predicting the distribution of a species from a set of records and environmental predictors. However, datasets of species occurrence used to train the model are often biased in the geographical space because of unequal sampling effort across the study area. This bias may be a source of strong inaccuracy in the resulting model and could lead to incorrect predictions. Although a number of sampling bias correction methods have been proposed, there is no consensual guideline to account for it. We compared here the performance of five methods of bias correction on three datasets of species occurrence: one "virtual" derived from a land cover map, and two actual datasets for a turtle (Chrysemys picta) and a salamander (Plethodon cylindraceus). We subjected these datasets to four types of sampling biases corresponding to potential types of empirical biases. We applied five correction methods to the biased samples and compared the outputs of distribution models to unbiased datasets to assess the overall correction performance of each method. The results revealed that the ability of methods to correct the initial sampling bias varied greatly depending on bias type, bias intensity and species. However, the simple systematic sampling of records consistently ranked among the best performing across the range of conditions tested, whereas other methods performed more poorly in most cases. The strong effect of initial conditions on correction performance highlights the need for further research to develop a step-by-step guideline to account for sampling bias. However, this method seems to be the most efficient in correcting sampling bias and should be advised in most cases.
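
    A minimal sketch of the "systematic sampling of records" idea (keep at most one occurrence per cell of a regular grid before model fitting); this is a generic one-per-cell thinning, not necessarily the exact procedure of the study, and the occurrence coordinates and cell size are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(11)

# Hypothetical occurrence records (lon, lat), heavily oversampled just north of lat 45.
occ = np.column_stack([rng.uniform(-5.0, 5.0, 3000),
                       45.0 + np.abs(rng.normal(0.0, 0.3, 3000))])

def systematic_thin(records, cell_deg=0.25, rng=rng):
    """Keep at most one record per grid cell to even out geographic sampling effort."""
    cells = np.floor(records / cell_deg).astype(int)
    keep = []
    for cell in np.unique(cells, axis=0):
        idx = np.where((cells == cell).all(axis=1))[0]
        keep.append(rng.choice(idx))
    return records[np.array(keep)]

thinned = systematic_thin(occ)
print(len(occ), "->", len(thinned), "records after one-per-cell thinning")
```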

  15. The anchoring bias reflects rational use of cognitive resources.

    PubMed

    Lieder, Falk; Griffiths, Thomas L; M Huys, Quentin J; Goodman, Noah D

    2018-02-01

    Cognitive biases, such as the anchoring bias, pose a serious challenge to rational accounts of human cognition. We investigate whether rational theories can meet this challenge by taking into account the mind's bounded cognitive resources. We asked what reasoning under uncertainty would look like if people made rational use of their finite time and limited cognitive resources. To answer this question, we applied a mathematical theory of bounded rationality to the problem of numerical estimation. Our analysis led to a rational process model that can be interpreted in terms of anchoring-and-adjustment. This model provided a unifying explanation for ten anchoring phenomena including the differential effect of accuracy motivation on the bias towards provided versus self-generated anchors. Our results illustrate the potential of resource-rational analysis to provide formal theories that can unify a wide range of empirical results and reconcile the impressive capacities of the human mind with its apparently irrational cognitive biases.

  16. Quantum dynamics of a Josephson junction driven cavity mode system in the presence of voltage bias noise

    NASA Astrophysics Data System (ADS)

    Wang, Hui; Blencowe, M. P.; Armour, A. D.; Rimberg, A. J.

    2017-09-01

    We give a semiclassical analysis of the average photon number as well as photon number variance (Fano factor F ) for a Josephson junction (JJ) embedded microwave cavity system, where the JJ is subject to a fluctuating (i.e., noisy) bias voltage with finite dc average. Through the ac Josephson effect, the dc voltage bias drives the effectively nonlinear microwave cavity mode into an amplitude squeezed state (F <1 ), as has been established previously [Armour et al., Phys. Rev. Lett. 111, 247001 (2013), 10.1103/PhysRevLett.111.247001], but bias noise acts to degrade this squeezing. We find that the sensitivity of the Fano factor to bias voltage noise depends qualitatively on which stable fixed point regime the system is in for the corresponding classical nonlinear steady-state dynamics. Furthermore, we show that the impact of voltage bias noise is most significant when the cavity is excited to states with large average photon number.

  17. Multiphysics elastodynamic finite element analysis of space debris deorbit stability and efficiency by electrodynamic tethers

    NASA Astrophysics Data System (ADS)

    Li, Gangqiang; Zhu, Zheng H.; Ruel, Stephane; Meguid, S. A.

    2017-08-01

    This paper developed a new multiphysics finite element method for the elastodynamic analysis of space debris deorbit by a bare flexible electrodynamic tether. Orbital motion limited theory and dynamics of flexible electrodynamic tethers are discretized by the finite element method, where the motional electric field is variant along the tether and coupled with tether deflection and motion. Accordingly, the electrical current and potential bias profiles of tether are solved together with the tether dynamics by the nodal position finite element method. The newly proposed multiphysics finite element method is applied to analyze the deorbit dynamics of space debris by electrodynamic tethers with a two-stage energy control strategy to ensure an efficient and stable deorbit process. Numerical simulations are conducted to study the coupled effect between the motional electric field and the tether dynamics. The results reveal that the coupling effect has a significant influence on the tether stability and the deorbit performance. It cannot be ignored when the libration and deflection of the tether are significant.

  18. Moment and maximum likelihood estimators for Weibull distributions under length- and area-biased sampling

    Treesearch

    Jeffrey H. Gove

    2003-01-01

    Many of the most popular sampling schemes used in forestry are probability proportional to size methods. These methods are also referred to as size biased because sampling is actually from a weighted form of the underlying population distribution. Length- and area-biased sampling are special cases of size-biased sampling where the probability weighting comes from a...

  19. Biased Brownian dynamics for rate constant calculation.

    PubMed

    Zou, G; Skeel, R D; Subramaniam, S

    2000-08-01

    An enhanced sampling method-biased Brownian dynamics-is developed for the calculation of diffusion-limited biomolecular association reaction rates with high energy or entropy barriers. Biased Brownian dynamics introduces a biasing force in addition to the electrostatic force between the reactants, and it associates a probability weight with each trajectory. A simulation loses weight when movement is along the biasing force and gains weight when movement is against the biasing force. The sampling of trajectories is then biased, but the sampling is unbiased when the trajectory outcomes are multiplied by their weights. With a suitable choice of the biasing force, more reacted trajectories are sampled. As a consequence, the variance of the estimate is reduced. In our test case, biased Brownian dynamics gives a sevenfold improvement in central processing unit (CPU) time with the choice of a simple centripetal biasing force.
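
    The reweighting idea above can be illustrated with a short sketch. It assumes overdamped Brownian dynamics with Gaussian displacement steps; the harmonic force, the biasing force and the parameter values are hypothetical placeholders rather than the force fields used by the authors. The per-step weight is the ratio of the unbiased to the biased transition density, so weighted averages over trajectories remain unbiased.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def biased_bd_step(x, force, bias_force, D=1.0, kT=1.0, dt=1e-3):
        """One overdamped Brownian-dynamics step with an added biasing force.

        Returns the new position and the per-step importance weight, i.e. the
        ratio of the unbiased to the biased Gaussian transition density."""
        mu_unbiased = (D / kT) * force(x) * dt
        mu_biased = (D / kT) * (force(x) + bias_force(x)) * dt
        sigma2 = 2.0 * D * dt
        dx = mu_biased + rng.normal(scale=np.sqrt(sigma2))
        log_w = (-(dx - mu_unbiased) ** 2 + (dx - mu_biased) ** 2) / (2.0 * sigma2)
        return x + dx, np.exp(log_w)

    # Hypothetical forces: a harmonic well plus a bias pulling the particle inward.
    force = lambda x: -x
    bias_force = lambda x: -2.0 * x

    x, w = 1.0, 1.0
    for _ in range(1000):
        x, dw = biased_bd_step(x, force, bias_force)
        w *= dw   # the trajectory loses weight when it moves along the bias force
    ```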

  20. Diffusion in different models of active Brownian motion

    NASA Astrophysics Data System (ADS)

    Lindner, B.; Nicola, E. M.

    2008-04-01

    Active Brownian particles (ABP) have served as phenomenological models of self-propelled motion in biology. We study the effective diffusion coefficient of two one-dimensional ABP models (simplified depot model and Rayleigh-Helmholtz model) differing in their nonlinear friction functions. Depending on the choice of the friction function the diffusion coefficient does or does not attain a minimum as a function of noise intensity. We furthermore discuss the case of an additional bias breaking the left-right symmetry of the system. We show that this bias induces a drift and that it generally reduces the diffusion coefficient. For a finite range of values of the bias, both models can exhibit a maximum in the diffusion coefficient vs. noise intensity.
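
    As a minimal illustration of estimating an effective diffusion coefficient in the presence of a bias-induced drift, the sketch below uses a plain biased random walk (not the depot or Rayleigh-Helmholtz friction functions) and computes D_eff from the variance of the displacement, which removes the drift contribution.

    ```python
    import numpy as np

    def effective_diffusion(trajs, dt):
        """D_eff from an ensemble of 1D trajectories: Var[x(t)] / (2 t) at the
        final time, so a constant drift (bias) does not contaminate the estimate.
        trajs has shape (n_trajectories, n_steps); column j is the position
        after (j + 1) steps."""
        t_total = dt * trajs.shape[1]
        return np.var(trajs[:, -1]) / (2.0 * t_total)

    # Illustrative ensemble: plain Brownian steps with a constant bias (drift).
    rng = np.random.default_rng(1)
    n_traj, n_steps, dt = 2000, 5000, 1e-2
    bias, D0 = 0.3, 1.0                      # hypothetical drift and bare diffusion
    steps = bias * dt + np.sqrt(2 * D0 * dt) * rng.normal(size=(n_traj, n_steps))
    trajs = np.cumsum(steps, axis=1)

    print(effective_diffusion(trajs, dt))    # close to D0 = 1.0 despite the drift
    ```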

  1. Pore-scale simulations of drainage in granular materials: Finite size effects and the representative elementary volume

    NASA Astrophysics Data System (ADS)

    Yuan, Chao; Chareyre, Bruno; Darve, Félix

    2016-09-01

    A pore-scale model is introduced for two-phase flow in dense packings of polydisperse spheres. The model is developed as a component of a more general hydromechanical coupling framework based on the discrete element method, which will be elaborated in future papers and will apply to various processes of interest in soil science, in geomechanics and in oil and gas production. Here the emphasis is on the generation of a network of pores mapping the void space between spherical grains, and the definition of local criteria governing the primary drainage process. The pore space is decomposed by Regular Triangulation, from which a set of pores connected by throats are identified. A local entry capillary pressure is evaluated for each throat, based on the balance of capillary pressure and surface tension at equilibrium. The model reflects the possible entrapment of disconnected patches of the receding wetting phase. It is validated by a comparison with drainage experiments. In the last part of the paper, a series of simulations are reported to illustrate size and boundary effects, key questions when studying small samples made of spherical particles, be it in simulations or experiments. Repeated tests on samples of different sizes give evolutions of water content that are not only scattered but also strongly biased for small sample sizes. More than 20,000 spheres are needed to reduce the bias on saturation below 0.02. Additional statistics are generated by subsampling a large sample of 64,000 spheres. They suggest that the minimal sampling volume for evaluating saturation is one hundred times greater than the sampling volume needed for measuring porosity with the same accuracy. This requirement in terms of sample size induces a need for efficient computer codes. The method described herein has a low algorithmic complexity in order to satisfy this requirement. It will be well suited to further developments toward coupled flow-deformation problems in which evolution of the microstructure requires frequent updates of the pore network.

  2. Stabilizing Selection, Purifying Selection, and Mutational Bias in Finite Populations

    PubMed Central

    Charlesworth, Brian

    2013-01-01

    Genomic traits such as codon usage and the lengths of noncoding sequences may be subject to stabilizing selection rather than purifying selection. Mutations affecting these traits are often biased in one direction. To investigate the potential role of stabilizing selection on genomic traits, the effects of mutational bias on the equilibrium value of a trait under stabilizing selection in a finite population were investigated, using two different mutational models. Numerical results were generated using a matrix method for calculating the probability distribution of variant frequencies at sites affecting the trait, as well as by Monte Carlo simulations. Analytical approximations were also derived, which provided useful insights into the numerical results. A novel conclusion is that the scaled intensity of selection acting on individual variants is nearly independent of the effective population size over a wide range of parameter space and is strongly determined by the logarithm of the mutational bias parameter. This is true even when there is a very small departure of the mean from the optimum, as is usually the case. This implies that studies of the frequency spectra of DNA sequence variants may be unable to distinguish between stabilizing and purifying selection. A similar investigation of purifying selection against deleterious mutations was also carried out. Contrary to previous suggestions, the scaled intensity of purifying selection with synergistic fitness effects is sensitive to population size, which is inconsistent with the general lack of sensitivity of codon usage to effective population size. PMID:23709636

  3. 75 FR 48815 - Medicaid Program and Children's Health Insurance Program (CHIP); Revisions to the Medicaid...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-08-11

    ... size may be reduced by the finite population correction factor. The finite population correction is a statistical formula utilized to determine sample size where the population is considered finite rather than... program may notify us and the annual sample size will be reduced by the finite population correction...

  4. Thickness and temperature dependent electrical characteristics of crystalline BaxSr1-xTiO3 thin films

    NASA Astrophysics Data System (ADS)

    Panda, B.; Roy, A.; Dhar, A.; Ray, S. K.

    2007-03-01

    Polycrystalline Ba1-xSrxTiO3 (BST) thin films with three different compositions have been deposited by radio-frequency magnetron sputtering technique on platinum coated silicon substrates. Samples with buffer and barrier layers for different film thicknesses and processing temperatures have been studied. Crystallite size of BST films has been found to increase with increasing substrate temperature. Thickness dependent dielectric constant has been studied and discussed in the light of an interfacial dead layer and the finite screening length of the electrode. Ferroelectric properties of the films have also been studied for various deposition conditions. The electrical resistivity of the films measured at different temperatures shows a positive temperature coefficient of resistance under a constant bias voltage.

  5. Space charge limited current measurements on conjugated polymer films using conductive atomic force microscopy.

    PubMed

    Reid, Obadiah G; Munechika, Keiko; Ginger, David S

    2008-06-01

    We describe local (~150 nm resolution), quantitative measurements of charge carrier mobility in conjugated polymer films that are commonly used in thin-film transistors and nanostructured solar cells. We measure space charge limited currents (SCLC) through these films using conductive atomic force microscopy (c-AFM) and in macroscopic diodes. The current densities we measure with c-AFM are substantially higher than those observed in planar devices at the same bias. This leads to an overestimation of carrier mobility by up to 3 orders of magnitude when using the standard Mott-Gurney law to fit the c-AFM data. We reconcile this apparent discrepancy between c-AFM and planar device measurements by accounting for the proper tip-sample geometry using finite element simulations of tip-sample currents. We show that a semiempirical scaling factor based on the ratio of the tip contact area diameter to the sample thickness can be used to correct c-AFM current-voltage curves and thus extract mobilities that are in good agreement with values measured in the conventional planar device geometry.

  6. Analytical and sampling constraints in ²¹⁰Pb dating.

    PubMed

    MacKenzie, A B; Hardie, S M L; Farmer, J G; Eades, L J; Pulford, I D

    2011-03-01

    ²¹⁰Pb dating provides a valuable, widely used means of establishing recent chronologies for sediments and other accumulating natural deposits. The Constant Rate of Supply (CRS) model is the most versatile and widely used method for establishing ²¹⁰Pb chronologies but, when using this model, care must be taken to account for limitations imposed by sampling and analytical factors. In particular, incompatibility of finite values for empirical data, which are constrained by detection limit and core length, with terms in the age calculation, which represent integrations to infinity, can generate erroneously old ages for deeper sections of cores. The bias in calculated ages increases with poorer limit of detection and the magnitude of the disparity increases with age. The origin and magnitude of this effect are considered below, firstly for an idealized, theoretical ²¹⁰Pb profile and secondly for a freshwater lake sediment core. A brief consideration is presented of the implications of this potential artefact for sampling and analysis. Copyright © 2011 Elsevier B.V. All rights reserved.
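
    A minimal sketch of the CRS age calculation and of the truncation effect described above, assuming unsupported ²¹⁰Pb activities measured on successive core slices; the activities and slice masses are idealized placeholders. Because A(x), the unsupported inventory below depth x, is approximated by a finite sum down the core, inventory lost below the detection limit or core bottom inflates the ratio A(0)/A(x) and hence the calculated ages at depth.

    ```python
    import numpy as np

    LAMBDA_PB210 = np.log(2) / 22.3   # 210Pb decay constant in yr^-1 (22.3 yr half-life)

    def crs_ages(unsupported_activity, slice_dry_mass):
        """Constant Rate of Supply ages for successive core slices (top slice first).

        unsupported_activity : unsupported 210Pb activity per slice (e.g. Bq/kg)
        slice_dry_mass       : dry mass per unit area per slice (e.g. kg/m^2)

        A(x) is approximated by summing slice inventories from slice x downward,
        so whatever lies below the deepest measured slice is missing."""
        inventory = np.asarray(unsupported_activity) * np.asarray(slice_dry_mass)
        below = np.cumsum(inventory[::-1])[::-1]    # inventory below the top of each slice
        return (1.0 / LAMBDA_PB210) * np.log(below[0] / below)

    # Idealized profile: constant sedimentation, slices each spanning 5 yr of accumulation.
    ages_true = np.arange(0, 150, 5)
    activity = 100.0 * np.exp(-LAMBDA_PB210 * ages_true)
    mass = np.ones_like(activity)

    print(crs_ages(activity, mass)[:6])             # close to the true ages 0, 5, ..., 25 yr
    print(crs_ages(activity[:10], mass[:10])[:6])   # same top slices, truncated core: ages biased old
    ```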

  7. Are most samples of animals systematically biased? Consistent individual trait differences bias samples despite random sampling.

    PubMed

    Biro, Peter A

    2013-02-01

    Sampling animals from the wild for study is something nearly every biologist has done, but despite our best efforts to obtain random samples of animals, 'hidden' trait biases may still exist. For example, consistent behavioral traits can affect trappability/catchability, independent of obvious factors such as size and gender, and these traits are often correlated with other repeatable physiological and/or life history traits. If so, systematic sampling bias may exist for any of these traits. The extent to which this is a problem, of course, depends on the magnitude of bias, which is presently unknown because the underlying trait distributions in populations are usually unknown, or unknowable. Indeed, our present knowledge about sampling bias comes from samples (not complete population censuses), which can possess bias to begin with. I had the unique opportunity to create naturalized populations of fish by seeding each of four small fishless lakes with equal densities of slow-, intermediate-, and fast-growing fish. Using sampling methods that are not size-selective, I observed that fast-growing fish were up to two-times more likely to be sampled than slower-growing fish. This indicates substantial and systematic bias with respect to an important life history trait (growth rate). If correlations between behavioral, physiological and life-history traits are as widespread as the literature suggests, then many animal samples may be systematically biased with respect to these traits (e.g., when collecting animals for laboratory use), and affect our inferences about population structure and abundance. I conclude with a discussion on ways to minimize sampling bias for particular physiological/behavioral/life-history types within animal populations.

  8. INFERRING THE ECCENTRICITY DISTRIBUTION

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hogg, David W.; Bovy, Jo; Myers, Adam D., E-mail: david.hogg@nyu.ed

    2010-12-20

    Standard maximum-likelihood estimators for binary-star and exoplanet eccentricities are biased high, in the sense that the estimated eccentricity tends to be larger than the true eccentricity. As with most non-trivial observables, a simple histogram of estimated eccentricities is not a good estimate of the true eccentricity distribution. Here, we develop and test a hierarchical probabilistic method for performing the relevant meta-analysis, that is, inferring the true eccentricity distribution, taking as input the likelihood functions for the individual star eccentricities, or samplings of the posterior probability distributions for the eccentricities (under a given, uninformative prior). The method is a simple implementation of a hierarchical Bayesian model; it can also be seen as a kind of heteroscedastic deconvolution. It can be applied to any quantity measured with finite precision (other orbital parameters, or indeed any astronomical measurements of any kind, including magnitudes, distances, or photometric redshifts), so long as the measurements have been communicated as a likelihood function or a posterior sampling.
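
    The hierarchical step can be sketched as follows. The sketch assumes each star's eccentricity posterior was sampled under a flat interim prior on [0, 1] and that the population is modeled as a Beta(a, b) distribution; the per-star marginal likelihood is then approximated by averaging the population density over that star's posterior samples. The toy data, the Beta model and all parameter values are illustrative assumptions, not the choices made in the paper.

    ```python
    import numpy as np
    from scipy.stats import beta
    from scipy.optimize import minimize

    def neg_log_likelihood(log_ab, posterior_samples):
        """Marginal negative log-likelihood of Beta(a, b) population parameters.

        Each entry of posterior_samples holds K posterior draws of one star's
        eccentricity obtained under a flat interim prior on [0, 1], so the
        per-star likelihood is approximated by the mean population density
        over those draws."""
        a, b = np.exp(log_ab)                      # keep a, b positive
        per_star = [np.mean(beta.pdf(e, a, b)) for e in posterior_samples]
        return -np.sum(np.log(per_star))

    # Toy data: true population Beta(1, 3); per-star "posteriors" are noisy draws.
    rng = np.random.default_rng(2)
    true_e = rng.beta(1.0, 3.0, size=50)
    posts = [np.clip(e + 0.05 * rng.normal(size=200), 1e-4, 1 - 1e-4) for e in true_e]

    fit = minimize(neg_log_likelihood, x0=np.log([2.0, 2.0]), args=(posts,), method="Nelder-Mead")
    print(np.exp(fit.x))   # should land in the neighbourhood of (1, 3) for this toy
    ```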

  9. Rational learning and information sampling: on the "naivety" assumption in sampling explanations of judgment biases.

    PubMed

    Le Mens, Gaël; Denrell, Jerker

    2011-04-01

    Recent research has argued that several well-known judgment biases may be due to biases in the available information sample rather than to biased information processing. Most of these sample-based explanations assume that decision makers are "naive": They are not aware of the biases in the available information sample and do not correct for them. Here, we show that this "naivety" assumption is not necessary. Systematically biased judgments can emerge even when decision makers process available information perfectly and are also aware of how the information sample has been generated. Specifically, we develop a rational analysis of Denrell's (2005) experience sampling model, and we prove that when information search is interested rather than disinterested, even rational information sampling and processing can give rise to systematic patterns of errors in judgments. Our results illustrate that a tendency to favor alternatives for which outcome information is more accessible can be consistent with rational behavior. The model offers a rational explanation for behaviors that had previously been attributed to cognitive and motivational biases, such as the in-group bias or the tendency to prefer popular alternatives. 2011 APA, all rights reserved

  10. Direct estimation and correction of bias from temporally variable non-stationary noise in a channelized Hotelling model observer.

    PubMed

    Fetterly, Kenneth A; Favazza, Christopher P

    2016-08-07

    Channelized Hotelling model observer (CHO) methods were developed to assess performance of an x-ray angiography system. The analytical methods included correction for known bias error due to finite sampling. Detectability indices ([Formula: see text]) corresponding to disk-shaped objects with diameters in the range 0.5-4 mm were calculated. Application of the CHO for variable detector target dose (DTD) in the range 6-240 nGy frame(-1) resulted in [Formula: see text] estimates which were as much as 2.9×  greater than expected of a quantum limited system. Over-estimation of [Formula: see text] was presumed to be a result of bias error due to temporally variable non-stationary noise. Statistical theory which allows for independent contributions of 'signal' from a test object (o) and temporally variable non-stationary noise (ns) was developed. The theory demonstrates that the biased [Formula: see text] is the sum of the detectability indices associated with the test object [Formula: see text] and non-stationary noise ([Formula: see text]). Given the nature of the imaging system and the experimental methods, [Formula: see text] cannot be directly determined independent of [Formula: see text]. However, methods to estimate [Formula: see text] independent of [Formula: see text] were developed. In accordance with the theory, [Formula: see text] was subtracted from experimental estimates of [Formula: see text], providing an unbiased estimate of [Formula: see text]. Estimates of [Formula: see text] exhibited trends consistent with expectations of an angiography system that is quantum limited for high DTD and compromised by detector electronic readout noise for low DTD conditions. Results suggest that these methods provide [Formula: see text] estimates which are accurate and precise for [Formula: see text]. Further, results demonstrated that the source of bias was detector electronic readout noise. In summary, this work presents theory and methods to test for the presence of bias in Hotelling model observers due to temporally variable non-stationary noise and correct this bias when the temporally variable non-stationary noise is independent and additive with respect to the test object signal.

  11. An experimental verification of laser-velocimeter sampling bias and its correction

    NASA Technical Reports Server (NTRS)

    Johnson, D. A.; Modarress, D.; Owen, F. K.

    1982-01-01

    The existence of 'sampling bias' in individual-realization laser velocimeter measurements is experimentally verified and shown to be independent of sample rate. The experiments were performed in a simple two-stream mixing shear flow with the standard for comparison being laser-velocimeter results obtained under continuous-wave conditions. It is also demonstrated that the errors resulting from sampling bias can be removed by a proper interpretation of the sampling statistics. In addition, data obtained in a shock-induced separated flow and in the near-wake of airfoils are presented, both bias-corrected and uncorrected, to illustrate the effects of sampling bias in the extreme.

  12. Calibrating the Planck Cluster Mass Scale with Cluster Velocity Dispersions

    NASA Astrophysics Data System (ADS)

    Amodeo, Stefania; Mei, Simona; Stanford, Spencer A.; Bartlett, James G.; Melin, Jean-Baptiste; Lawrence, Charles R.; Chary, Ranga-Ram; Shim, Hyunjin; Marleau, Francine; Stern, Daniel

    2017-08-01

    We measure the Planck cluster mass bias using dynamical mass measurements based on velocity dispersions of a subsample of 17 Planck-detected clusters. The velocity dispersions were calculated using redshifts determined from spectra that were obtained at the Gemini observatory with the GMOS multi-object spectrograph. We correct our estimates for effects due to finite aperture, Eddington bias, and correlated scatter between velocity dispersion and the Planck mass proxy. The result for the mass bias parameter, (1-b), depends on the value of the galaxy velocity bias, b_v, adopted from simulations: (1-b) = (0.51 +/- 0.09) b_v^3. Using a velocity bias of b_v = 1.08 from Munari et al., we obtain (1-b) = 0.64 +/- 0.11, i.e., an error of 17% on the mass bias measurement with 17 clusters. This mass bias value is consistent with most previous weak-lensing determinations. It lies within 1σ of the value that is needed to reconcile the Planck cluster counts with the Planck primary cosmic microwave background constraints. We emphasize that uncertainty in the velocity bias severely hampers the precision of the measurements of the mass bias using velocity dispersions. On the other hand, when we fix the Planck mass bias using the constraints from Penna-Lima et al., based on weak-lensing measurements, we obtain a positive velocity bias of b_v ≳ 0.9 at 3σ.
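
    The quoted numbers can be checked with a small Monte Carlo error propagation, assuming Gaussian, independent uncertainties; the 5% uncertainty assigned to b_v in the last two lines is a hypothetical value used only to illustrate how strongly the cubic dependence amplifies velocity-bias uncertainty.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    # (1 - b) = (0.51 +/- 0.09) * b_v**3, with b_v = 1.08 taken from simulations.
    coeff = rng.normal(0.51, 0.09, size=200_000)
    b_v = 1.08
    one_minus_b = coeff * b_v**3
    print(one_minus_b.mean(), one_minus_b.std())        # ~0.64 +/- 0.11, as quoted

    # Hypothetical 5% uncertainty on b_v: the cubic dependence triples its relative
    # contribution, noticeably widening the spread on (1 - b).
    b_v_samples = rng.normal(1.08, 0.05, size=200_000)
    print((coeff * b_v_samples**3).std())
    ```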

  13. Testing for gene-environment interaction under exposure misspecification.

    PubMed

    Sun, Ryan; Carroll, Raymond J; Christiani, David C; Lin, Xihong

    2017-11-09

    Complex interplay between genetic and environmental factors characterizes the etiology of many diseases. Modeling gene-environment (GxE) interactions is often challenged by the unknown functional form of the environment term in the true data-generating mechanism. We study the impact of misspecification of the environmental exposure effect on inference for the GxE interaction term in linear and logistic regression models. We first examine the asymptotic bias of the GxE interaction regression coefficient, allowing for confounders as well as arbitrary misspecification of the exposure and confounder effects. For linear regression, we show that under gene-environment independence and some confounder-dependent conditions, when the environment effect is misspecified, the regression coefficient of the GxE interaction can be unbiased. However, inference on the GxE interaction is still often incorrect. In logistic regression, we show that the regression coefficient is generally biased if the genetic factor is associated with the outcome directly or indirectly. Further, we show that the standard robust sandwich variance estimator for the GxE interaction does not perform well in practical GxE studies, and we provide an alternative testing procedure that has better finite sample properties. © 2017, The International Biometric Society.

  14. Estimation of water table level and nitrate pollution based on geostatistical and multiple mass transport models

    NASA Astrophysics Data System (ADS)

    Matiatos, Ioannis; Varouhakis, Emmanouil A.; Papadopoulou, Maria P.

    2015-04-01

    As the sustainable use of groundwater resources is a great challenge for many countries in the world, groundwater modeling has become a very useful and well established tool for studying groundwater management problems. Based on various methods used to numerically solve algebraic equations representing groundwater flow and contaminant mass transport, numerical models are mainly divided into Finite Difference-based and Finite Element-based models. The present study aims at evaluating the performance of a finite difference-based (MODFLOW-MT3DMS), a finite element-based (FEFLOW) and a hybrid finite element and finite difference (Princeton Transport Code-PTC) groundwater numerical model in simulating groundwater flow and nitrate mass transport in the alluvial aquifer of the Trizina region in NE Peloponnese, Greece. The calibration of groundwater flow in all models was performed using groundwater hydraulic head data from seven stress periods and the validation was based on a series of hydraulic head data for two stress periods in sufficient numbers of observation locations. The same periods were used for the calibration of nitrate mass transport. The calibration and validation of the three models revealed that the simulated values of hydraulic heads and nitrate mass concentrations coincide well with the observed ones. The models' performance was assessed by performing a statistical analysis of these different types of numerical algorithms. A number of metrics, such as Mean Absolute Error (MAE), Root Mean Square Error (RMSE), Bias, Nash Sutcliffe Model Efficiency (NSE) and Reliability Index (RI), were used, allowing the direct comparison of the models' performance. Spatiotemporal Kriging (STRK) was also applied using separable and non-separable spatiotemporal variograms to predict water table level and nitrate concentration at each sampling station for two selected hydrological stress periods. The predictions were validated using the respective measured values. Maps of water table level and nitrate concentrations were produced and compared with those obtained from groundwater and mass transport numerical models. Preliminary results showed that the spatiotemporal geostatistical method was similar in efficiency to the numerical models. However, the data requirements of the former were significantly lower. Advantages and disadvantages of the methods' performance were analysed and discussed, indicating the characteristics of the different approaches.
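
    Minimal implementations of the comparison metrics named above (MAE, RMSE, bias and NSE) are sketched below; the Reliability Index is omitted because several definitions circulate in the literature, and the observed and simulated heads are hypothetical numbers.

    ```python
    import numpy as np

    def mae(obs, sim):
        return np.mean(np.abs(sim - obs))

    def rmse(obs, sim):
        return np.sqrt(np.mean((sim - obs) ** 2))

    def bias(obs, sim):
        return np.mean(sim - obs)

    def nse(obs, sim):
        """Nash-Sutcliffe efficiency: 1 is a perfect fit, 0 means the model does no
        better than predicting the observed mean, negative values are worse."""
        return 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - np.mean(obs)) ** 2)

    # Hypothetical hydraulic-head observations and simulations (metres):
    obs = np.array([12.3, 11.8, 11.1, 10.6, 10.2])
    sim = np.array([12.1, 11.9, 11.4, 10.4, 10.1])
    print(mae(obs, sim), rmse(obs, sim), bias(obs, sim), nse(obs, sim))
    ```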

  15. Nonequilibrium self-energies, Ng approach, and heat current of a nanodevice for small bias voltage and temperature

    NASA Astrophysics Data System (ADS)

    Aligia, A. A.

    2014-03-01

    Using nonequilibrium renormalized perturbation theory to second order in the renormalized Coulomb repulsion, we calculate the lesser Σ< and greater Σ> self-energies of the impurity Anderson model, which describes the current through a quantum dot, in the general asymmetric case. While in general a numerical integration is required to evaluate the perturbative result, we derive an analytical approximation for small frequency ω, bias voltage V, and temperature T, which is exact to total second order in these quantities. The approximation is valid when the corresponding energies ℏω, eV, and kBT are small compared to kBTK, where TK is the Kondo temperature. The result of the numerical integration is compared with the analytical one and with the Ng approximation, in which Σ< and Σ> are assumed proportional to the retarded self-energy Σr times an average Fermi function. While it fails at T = 0 for ℏ|ω| ≲ eV, we find that the Ng approximation is excellent for kBT > eV/2 and improves for asymmetric coupling to the leads. Even at T = 0, the effect of the Ng approximation on the total occupation at the dot is very small. The dependences on ω and V are discussed in comparison with a Ward identity that is fulfilled by the three approaches. We also calculate the heat currents between the dot and any of the leads at finite bias voltage. One of the heat currents changes sign with the applied bias voltage at finite temperature.

  16. Remote sensing of earth terrain

    NASA Technical Reports Server (NTRS)

    Kong, Jin AU; Yueh, Herng-Aung; Shin, Robert T.

    1991-01-01

    Abstracts from 46 refereed journal and conference papers are presented for research on remote sensing of earth terrain. The topics covered related to remote sensing include the following: mathematical models, vegetation cover, sea ice, finite difference theory, electromagnetic waves, polarimetry, neural networks, random media, synthetic aperture radar, electromagnetic bias, and others.

  17. Fractional-order active fault-tolerant force-position controller design for the legged robots using saturated actuator with unknown bias and gain degradation

    NASA Astrophysics Data System (ADS)

    Farid, Yousef; Majd, Vahid Johari; Ehsani-Seresht, Abbas

    2018-05-01

    In this paper, a novel fault accommodation strategy is proposed for the legged robots subject to the actuator faults including actuation bias and effective gain degradation as well as the actuator saturation. First, the combined dynamics of two coupled subsystems consisting of the dynamics of the legs subsystem and the body subsystem are developed. Then, the interaction of the robot with the environment is formulated as the contact force optimization problem with equality and inequality constraints. The desired force is obtained by a dynamic model. A robust super twisting fault estimator is proposed to precisely estimate the defective torque amplitude of the faulty actuator in finite time. Defining a novel fractional sliding surface, a fractional nonsingular terminal sliding mode control law is developed. Moreover, by introducing a suitable auxiliary system and using its state vector in the designed controller, the proposed fault-tolerant control (FTC) scheme guarantees the finite-time stability of the closed-loop control system. The robustness and finite-time convergence of the proposed control law is established using the Lyapunov stability theory. Finally, numerical simulations are performed on a quadruped robot to demonstrate the stable walking of the robot with and without actuator faults, and actuator saturation constraints, and the results are compared to results with an integer order fault-tolerant controller.

  18. Amplifier for measuring low-level signals in the presence of high common mode voltage

    NASA Technical Reports Server (NTRS)

    Lukens, F. E. (Inventor)

    1985-01-01

    A high common mode rejection differential amplifier wherein two serially arranged Darlington amplifier stages are employed and any common mode voltage is divided between them by a resistance network. The input to the first Darlington amplifier stage is coupled to a signal input resistor via an amplifier which isolates the input and presents a high impedance across this resistor. The output of the second Darlington stage is transposed in scale via an amplifier stage which has as its input a biasing circuit that effects a finite biasing of the two Darlington amplifier stages.

  19. The effects of sampling bias and model complexity on the predictive performance of MaxEnt species distribution models.

    PubMed

    Syfert, Mindy M; Smith, Matthew J; Coomes, David A

    2013-01-01

    Species distribution models (SDMs) trained on presence-only data are frequently used in ecological research and conservation planning. However, users of SDM software are faced with a variety of options, and it is not always obvious how selecting one option over another will affect model performance. Working with MaxEnt software and with tree fern presence data from New Zealand, we assessed whether (a) choosing to correct for geographical sampling bias and (b) using complex environmental response curves have strong effects on goodness of fit. SDMs were trained on tree fern data, obtained from an online biodiversity data portal, with two sources that differed in size and geographical sampling bias: a small, widely-distributed set of herbarium specimens and a large, spatially clustered set of ecological survey records. We attempted to correct for geographical sampling bias by incorporating sampling bias grids in the SDMs, created from all georeferenced vascular plants in the datasets, and explored model complexity issues by fitting a wide variety of environmental response curves (known as "feature types" in MaxEnt). In each case, goodness of fit was assessed by comparing predicted range maps with tree fern presences and absences using an independent national dataset to validate the SDMs. We found that correcting for geographical sampling bias led to major improvements in goodness of fit, but did not entirely resolve the problem: predictions made with clustered ecological data were inferior to those made with the herbarium dataset, even after sampling bias correction. We also found that the choice of feature type had negligible effects on predictive performance, indicating that simple feature types may be sufficient once sampling bias is accounted for. Our study emphasizes the importance of reducing geographical sampling bias, where possible, in datasets used to train SDMs, and the effectiveness and essentialness of sampling bias correction within MaxEnt.
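
    A sampling-bias grid of the kind described above can be sketched as a smoothed density of all target-group records, which MaxEnt can then take as a bias file. The grid resolution, Gaussian smoothing width, extent and the synthetic, spatially clustered records below are illustrative assumptions, not the settings used in the study.

    ```python
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def sampling_bias_grid(lons, lats, extent, shape=(200, 200), sigma=2.0):
        """Relative sampling-effort surface built from the coordinates of every
        record of a target group (e.g. all georeferenced vascular plants).

        extent = (lon_min, lon_max, lat_min, lat_max). The result is a smoothed
        record density rescaled to (0, 1]; the small floor keeps all cells
        strictly positive, which a MaxEnt bias file generally requires."""
        lon_min, lon_max, lat_min, lat_max = extent
        counts, _, _ = np.histogram2d(
            lats, lons, bins=shape,
            range=[[lat_min, lat_max], [lon_min, lon_max]],
        )
        density = gaussian_filter(counts, sigma=sigma)
        return np.clip(density / density.max(), 1e-3, None)

    # Hypothetical, strongly clustered sampling of 5000 records:
    rng = np.random.default_rng(4)
    lons = np.concatenate([rng.normal(172.5, 0.3, 4000), rng.uniform(166, 179, 1000)])
    lats = np.concatenate([rng.normal(-43.5, 0.3, 4000), rng.uniform(-47, -34, 1000)])
    grid = sampling_bias_grid(lons, lats, extent=(166, 179, -47, -34))
    ```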

  20. Bias Assessment of General Chemistry Analytes using Commutable Samples.

    PubMed

    Koerbin, Gus; Tate, Jillian R; Ryan, Julie; Jones, Graham Rd; Sikaris, Ken A; Kanowski, David; Reed, Maxine; Gill, Janice; Koumantakis, George; Yen, Tina; St John, Andrew; Hickman, Peter E; Simpson, Aaron; Graham, Peter

    2014-11-01

    Harmonisation of reference intervals for routine general chemistry analytes has been a goal for many years. Analytical bias may prevent this harmonisation. To determine if analytical bias is present when comparing methods, commutable samples, i.e. samples that have the same properties as the clinical samples routinely analysed, should be used as reference samples to eliminate the possibility of matrix effects. The use of commutable samples has improved the identification of unacceptable analytical performance in the Netherlands and Spain. The International Federation of Clinical Chemistry and Laboratory Medicine (IFCC) has undertaken a pilot study using commutable samples in an attempt to determine not only country-specific reference intervals but also to make them comparable between countries. Australia and New Zealand, through the Australasian Association of Clinical Biochemists (AACB), have also undertaken an assessment of analytical bias using commutable samples and determined that of the 27 general chemistry analytes studied, 19 showed between-method biases sufficiently small as not to prevent harmonisation of reference intervals. Application of evidence-based approaches, including the determination of analytical bias using commutable material, is necessary when seeking to harmonise reference intervals.

  1. The late Neandertal supraorbital fossils from Vindija Cave, Croatia: a biased sample?

    PubMed

    Ahern, James C M; Lee, Sang-Hee; Hawks, John D

    2002-09-01

    The late Neandertal sample from Vindija (Croatia) has been described as transitional between the earlier Central European Neandertals from Krapina (Croatia) and modern humans. However, the morphological differences indicating this transition may rather be the result of different sex and/or age compositions between the samples. This study tests the hypothesis that the metric differences between the Krapina and Vindija supraorbital samples are due to sampling bias. We focus upon the supraorbital region because past studies have posited this region as particularly indicative of the Vindija sample's transitional nature. Furthermore, the supraorbital region varies significantly with both age and sex. We analyzed four chords and two derived indices of supraorbital torus form as defined by Smith & Ranyard (1980, Am. J. Phys. Anthrop. 93, pp. 589-610). For each variable, we analyzed relative sample bias of the Krapina and Vindija samples using three sampling methods. In order to test the hypothesis that the Vindija sample contains an over-representation of females and/or young while the Krapina sample is normal or also female/young biased, we determined the probability of drawing a sample of the same size as and with a mean equal to or less than Vindija's from a Krapina-based population. In order to test the hypothesis that the Vindija sample is female/young biased while the Krapina sample is male/old biased, we determined the probability of drawing a sample of the same size as and with a mean equal to or less than Vindija's from a generated population whose mean is halfway between Krapina's and Vindija's. Finally, in order to test the hypothesis that the Vindija sample is normal while the Krapina sample contains an over-representation of males and/or old, we determined the probability of drawing a sample of the same size as and with a mean equal to or greater than Krapina's from a Vindija-based population. Unless we assume that the Vindija sample is female/young and the Krapina sample is male/old biased, our results falsify the hypothesis that the metric differences between the Krapina and Vindija samples are due to sample bias.
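
    The resampling probabilities described above can be computed with a short Monte Carlo routine of the following kind. The torus measurements in the example are invented placeholders, not the published Krapina or Vindija data, and drawing with replacement from the observed values stands in for the authors' Krapina-based population.

    ```python
    import numpy as np

    def prob_mean_leq(reference, n_draw, observed_mean, n_iter=100_000, seed=5):
        """Monte Carlo probability of drawing a sample of size n_draw (with
        replacement) from `reference` whose mean is <= observed_mean."""
        rng = np.random.default_rng(seed)
        draws = rng.choice(reference, size=(n_iter, n_draw), replace=True)
        return np.mean(draws.mean(axis=1) <= observed_mean)

    # Placeholder supraorbital chord values (mm); NOT the published Krapina/Vindija data.
    krapina = np.array([16.1, 17.4, 15.2, 18.0, 16.8, 17.9, 15.7, 16.5])
    vindija = np.array([14.2, 15.0, 13.8, 14.9])

    p = prob_mean_leq(krapina, n_draw=len(vindija), observed_mean=vindija.mean())
    print(p)   # a small p makes pure sampling bias an unlikely explanation
    ```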

  2. Effects of sample size on estimates of population growth rates calculated with matrix models.

    PubMed

    Fiske, Ian J; Bruna, Emilio M; Bolker, Benjamin M

    2008-08-28

    Matrix models are widely used to study the dynamics and demography of populations. An important but overlooked issue is how the number of individuals sampled influences estimates of the population growth rate (lambda) calculated with matrix models. Even unbiased estimates of vital rates do not ensure unbiased estimates of lambda: Jensen's Inequality implies that even when the estimates of the vital rates are accurate, small sample sizes lead to biased estimates of lambda due to increased sampling variance. We investigated if sampling variability and the distribution of sampling effort among size classes lead to biases in estimates of lambda. Using data from a long-term field study of plant demography, we simulated the effects of sampling variance by drawing vital rates and calculating lambda for increasingly larger populations drawn from a total population of 3842 plants. We then compared these estimates of lambda with those based on the entire population and calculated the resulting bias. Finally, we conducted a review of the literature to determine the sample sizes typically used when parameterizing matrix models used to study plant demography. We found significant bias at small sample sizes when survival was low (survival = 0.5), and that sampling with a more-realistic inverse J-shaped population structure exacerbated this bias. However, our simulations also demonstrate that these biases rapidly become negligible with increasing sample sizes or as survival increases. For many of the sample sizes used in demographic studies, matrix models are probably robust to the biases resulting from sampling variance of vital rates. However, this conclusion may depend on the structure of populations or the distribution of sampling effort in ways that are unexplored. We suggest more intensive sampling of populations when individual survival is low and greater sampling of stages with high elasticities.
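
    The Jensen's Inequality effect can be illustrated with a toy two-stage projection matrix: vital rates are re-estimated from binomial and Poisson samples of different sizes and lambda is recomputed as the dominant eigenvalue. The matrix structure, vital rates and sample sizes below are hypothetical and are not taken from the study.

    ```python
    import numpy as np

    rng = np.random.default_rng(6)

    def growth_rate(s_juv, s_adult, fecundity):
        """Dominant eigenvalue (lambda) of a simple two-stage projection matrix."""
        A = np.array([[0.0, fecundity],
                      [s_juv, s_adult]])
        return np.max(np.real(np.linalg.eigvals(A)))

    true_sj, true_sa, true_f = 0.5, 0.8, 1.2           # hypothetical vital rates
    true_lambda = growth_rate(true_sj, true_sa, true_f)

    for n in (10, 50, 250, 1000):                      # individuals sampled per stage
        lams = []
        for _ in range(2000):
            sj_hat = rng.binomial(n, true_sj) / n      # sampling variance in survival
            sa_hat = rng.binomial(n, true_sa) / n
            f_hat = rng.poisson(true_f * n) / n        # sampling variance in fecundity
            lams.append(growth_rate(sj_hat, sa_hat, f_hat))
        print(n, np.mean(lams) - true_lambda)          # the bias shrinks as n grows
    ```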

  3. Statistical properties of Fourier-based time-lag estimates

    NASA Astrophysics Data System (ADS)

    Epitropakis, A.; Papadakis, I. E.

    2016-06-01

    Context. The study of X-ray time-lag spectra in active galactic nuclei (AGN) is currently an active research area, since it has the potential to illuminate the physics and geometry of the innermost region (i.e. close to the putative super-massive black hole) in these objects. To obtain reliable information from these studies, the statistical properties of time-lags estimated from data must be known as accurately as possible. Aims: We investigated the statistical properties of Fourier-based time-lag estimates (i.e. based on the cross-periodogram), using evenly sampled time series with no missing points. Our aim is to provide practical "guidelines" on estimating time-lags that are minimally biased (i.e. whose mean is close to their intrinsic value) and have known errors. Methods: Our investigation is based on both analytical work and extensive numerical simulations. The latter consisted of generating artificial time series with various signal-to-noise ratios and sampling patterns/durations similar to those offered by AGN observations with present and past X-ray satellites. We also considered a range of different model time-lag spectra commonly assumed in X-ray analyses of compact accreting systems. Results: Discrete sampling, binning and finite light curve duration cause the mean of the time-lag estimates to have a smaller magnitude than their intrinsic values. Smoothing (i.e. binning over consecutive frequencies) of the cross-periodogram can add extra bias at low frequencies. The use of light curves with low signal-to-noise ratio reduces the intrinsic coherence, and can introduce a bias to the sample coherence, time-lag estimates, and their predicted error. Conclusions: Our results have direct implications for X-ray time-lag studies in AGN, but can also be applied to similar studies in other research fields. We find that: a) time-lags should be estimated at frequencies lower than ≈ 1/2 the Nyquist frequency to minimise the effects of discrete binning of the observed time series; b) smoothing of the cross-periodogram should be avoided, as this may introduce significant bias to the time-lag estimates, which can be taken into account by assuming a model cross-spectrum (and not just a model time-lag spectrum); c) time-lags should be estimated by dividing observed time series into a number, say m, of shorter data segments and averaging the resulting cross-periodograms; d) if the data segments have a duration ≳ 20 ks, the time-lag bias is ≲15% of its intrinsic value for the model cross-spectra and power-spectra considered in this work. This bias should be estimated in practice (by considering possible intrinsic cross-spectra that may be applicable to the time-lag spectra at hand) to assess the reliability of any time-lag analysis; e) the effects of experimental noise can be minimised by only estimating time-lags in the frequency range where the sample coherence is larger than 1.2/(1 + 0.2m). In this range, the amplitude of noise variations caused by measurement errors is smaller than the amplitude of the signal's intrinsic variations. As long as m ≳ 20, time-lags estimated by averaging over individual data segments have analytical error estimates that are within 95% of the true scatter around their mean, and their distribution is similar, albeit not identical, to a Gaussian.
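
    Points c) and e) above translate into a short segment-averaging recipe, sketched below for evenly sampled series with no missing points. The sign convention, the toy light curves and the choice of m = 20 segments are illustrative assumptions; no frequency smoothing is applied, in line with point b).

    ```python
    import numpy as np

    def time_lags(x, y, dt, n_seg):
        """Time-lag spectrum from two evenly sampled series with no missing points.

        The series are cut into n_seg non-overlapping segments, the cross-
        periodogram is averaged over segments (no frequency smoothing), and the
        lag is the cross-spectrum phase divided by 2*pi*f. With this sign
        convention a positive lag means y lags behind x."""
        seg_len = len(x) // n_seg
        freqs = np.fft.rfftfreq(seg_len, d=dt)[1:]          # drop the zero frequency
        cross = np.zeros(len(freqs), dtype=complex)
        for k in range(n_seg):
            xs = x[k * seg_len:(k + 1) * seg_len]
            ys = y[k * seg_len:(k + 1) * seg_len]
            X = np.fft.rfft(xs - xs.mean())[1:]
            Y = np.fft.rfft(ys - ys.mean())[1:]
            cross += X * np.conj(Y)
        return freqs, np.angle(cross / n_seg) / (2.0 * np.pi * freqs)

    # Toy check: y is x delayed by 5 bins plus measurement noise, m = 20 segments.
    rng = np.random.default_rng(7)
    x = np.convolve(rng.normal(size=40_000), np.ones(20) / 20, mode="same")
    y = np.roll(x, 5) + 0.1 * rng.normal(size=x.size)
    f, lag = time_lags(x, y, dt=1.0, n_seg=20)
    print(lag[:5])     # close to +5 at the lowest frequencies
    ```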

  4. Charging in the ac Conductance of a Double Barrier Resonant Tunneling Structure

    NASA Technical Reports Server (NTRS)

    Anantram, M. P.; Saini, Subhash (Technical Monitor)

    1998-01-01

    There have been many studies of the linear response ac conductance of a double barrier resonant tunneling structure (DBRTS), both at zero and finite dc biases. While these studies are important, they fail to self-consistently include the effect of the time dependent charge density in the well. In this paper, we calculate the ac conductance at both zero and finite dc biases by including the effect of the time dependent charge density in the well in a self-consistent manner. The charge density in the well contributes to both the flow of displacement currents in the contacts and the time dependent potential in the well. We find that including these effects can make a significant difference to the ac conductance and the total ac current is not equal to the simple average of the non-self-consistently calculated conduction currents in the two contacts. This is illustrated by comparing the results obtained with and without the effect of the time dependent charge density included correctly. Some possible experimental scenarios to observe these effects are suggested.

  5. Effect of electron-vibration interactions on the thermoelectric efficiency of molecular junctions.

    PubMed

    Hsu, Bailey C; Chiang, Chi-Wei; Chen, Yu-Chang

    2012-07-11

    From first-principles approaches, we investigate the thermoelectric efficiency of a molecular junction where a benzene molecule is connected directly to the platinum electrodes. We calculate the thermoelectric figure of merit ZT in the presence of electron-vibration interactions with and without local heating under two scenarios: linear response and finite bias regimes. In the linear response regime, ZT saturates around the electrode temperature T(e) = 25 K in the elastic case, while in the inelastic case we observe a non-saturated and a much larger ZT beyond T(e) = 25 K attributed to the tail of the Fermi-Dirac distribution. In the finite bias regime, the inelastic effects reveal the signatures of the molecular vibrations in the low-temperature regime. The normal modes exhibiting structures in the inelastic profile are characterized by large components of atomic vibrations along the current density direction on top of each individual atom. In all cases, the inclusion of local heating leads to a higher wire temperature T(w) and thus magnifies further the influence of the electron-vibration interactions due to the increased number of local phonons.

  6. Distinguishing topological Majorana bound states from trivial Andreev bound states: Proposed tests through differential tunneling conductance spectroscopy

    NASA Astrophysics Data System (ADS)

    Liu, Chun-Xiao; Sau, Jay D.; Das Sarma, S.

    2018-06-01

    Trivial Andreev bound states arising from chemical-potential variations could lead to zero-bias tunneling conductance peaks at finite magnetic field in class-D nanowires, precisely mimicking the predicted zero-bias conductance peaks arising from the topological Majorana bound states. This finding raises a serious question on the efficacy of using zero-bias tunneling conductance peaks, by themselves, as evidence supporting the existence of topological Majorana bound states in nanowires. In the current work, we provide specific experimental protocols for tunneling spectroscopy measurements to distinguish between Andreev and Majorana bound states without invoking more demanding nonlocal measurements which have not yet been successfully performed in nanowire systems. In particular, we discuss three distinct experimental schemes involving the response of the zero-bias peak to local perturbations of the tunnel barrier, the overlap of bound states from the wire ends, and, most compellingly, introducing a sharp localized potential in the wire itself to perturb the zero-bias tunneling peaks. We provide extensive numerical simulations clarifying and supporting our theoretical predictions.

  7. Grouping methods for estimating the prevalences of rare traits from complex survey data that preserve confidentiality of respondents.

    PubMed

    Hyun, Noorie; Gastwirth, Joseph L; Graubard, Barry I

    2018-03-26

    Originally, 2-stage group testing was developed for efficiently screening individuals for a disease. In response to the HIV/AIDS epidemic, 1-stage group testing was adopted for estimating prevalences of a single or multiple traits from testing groups of size q, so individuals were not tested. This paper extends the methodology of 1-stage group testing to surveys with sample weighted complex multistage-cluster designs. Sample weighted-generalized estimating equations are used to estimate the prevalences of categorical traits while accounting for the error rates inherent in the tests. Two difficulties arise when using group testing in complex samples: (1) How does one weight the results of the test on each group as the sample weights will differ among observations in the same group. Furthermore, if the sample weights are related to positivity of the diagnostic test, then group-level weighting is needed to reduce bias in the prevalence estimation; (2) How does one form groups that will allow accurate estimation of the standard errors of prevalence estimates under multistage-cluster sampling allowing for intracluster correlation of the test results. We study 5 different grouping methods to address the weighting and cluster sampling aspects of complex designed samples. Finite sample properties of the estimators of prevalences, variances, and confidence interval coverage for these grouping methods are studied using simulations. National Health and Nutrition Examination Survey data are used to illustrate the methods. Copyright © 2018 John Wiley & Sons, Ltd.
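
    For a single trait and equal-probability sampling, the core inversion behind group-testing prevalence estimation can be sketched as below; sample weights, clustering and the GEE machinery of the paper are deliberately left out, and the sensitivity, specificity and prevalence values are hypothetical.

    ```python
    import numpy as np

    def group_test_prevalence(n_positive_groups, n_groups, q, sens=0.95, spec=0.98):
        """Individual-level prevalence from 1-stage group testing of groups of size q.

        A group tests positive with probability
            P+ = sens - (sens + spec - 1) * (1 - p)**q,
        which is inverted at the observed proportion of positive groups. Sample
        weights and cluster sampling are deliberately ignored in this sketch."""
        p_plus = n_positive_groups / n_groups
        core = (sens - p_plus) / (sens + spec - 1.0)
        core = np.clip(core, 1e-12, 1.0)          # guard against sampling noise
        return 1.0 - core ** (1.0 / q)

    # Simulated check: true prevalence 2%, groups of 10, imperfect test.
    rng = np.random.default_rng(8)
    p_true, q, n_groups, sens, spec = 0.02, 10, 5000, 0.95, 0.98
    group_has_case = (rng.random((n_groups, q)) < p_true).any(axis=1)
    test_positive = np.where(group_has_case,
                             rng.random(n_groups) < sens,
                             rng.random(n_groups) < 1 - spec)
    print(group_test_prevalence(test_positive.sum(), n_groups, q, sens, spec))  # ~0.02
    ```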

  8. Neural Network and Nearest Neighbor Algorithms for Enhancing Sampling of Molecular Dynamics.

    PubMed

    Galvelis, Raimondas; Sugita, Yuji

    2017-06-13

    The free energy calculations of complex chemical and biological systems with molecular dynamics (MD) are inefficient due to multiple local minima separated by high-energy barriers. The minima can be escaped using an enhanced sampling method such as metadynamics, which applies a bias (i.e., importance sampling) along a set of collective variables (CV), but the maximum number of CVs (or dimensions) is severely limited. We propose a high-dimensional bias potential method (NN2B) based on two machine learning algorithms: the nearest neighbor density estimator (NNDE) and the artificial neural network (ANN) for the bias potential approximation. The bias potential is constructed iteratively from short biased MD simulations accounting for correlation among CVs. Our method is capable of achieving ergodic sampling and calculating free energy of polypeptides with an up to 8-dimensional bias potential.

  9. Discrete Fractional Component Monte Carlo Simulation Study of Dilute Nonionic Surfactants at the Air-Water Interface.

    PubMed

    Yoo, Brian; Marin-Rimoldi, Eliseo; Mullen, Ryan Gotchy; Jusufi, Arben; Maginn, Edward J

    2017-09-26

    We present a newly developed Monte Carlo scheme to predict bulk surfactant concentrations and surface tensions at the air-water interface for various surfactant interfacial coverages. Since the concentration regimes of these systems of interest are typically very dilute (≪10⁻⁵ mol. frac.), Monte Carlo simulations with the use of insertion/deletion moves can provide the ability to overcome finite system size limitations that often prohibit the use of modern molecular simulation techniques. In performing these simulations, we use the discrete fractional component Monte Carlo (DFCMC) method in the Gibbs ensemble framework, which allows us to separate the bulk and air-water interface into two separate boxes and efficiently swap tetraethylene glycol surfactants C₁₀E₄ between boxes. Combining this move with preferential translations, volume biased insertions, and Wang-Landau biasing vastly enhances sampling and helps overcome the classical "insertion problem", often encountered in non-lattice Monte Carlo simulations. We demonstrate that this methodology is both consistent with the original molecular thermodynamic theory (MTT) of Blankschtein and co-workers, as well as their recently modified theory (MD/MTT), which incorporates the results of surfactant infinite dilution transfer free energies and surface tension calculations obtained from molecular dynamics simulations.

  10. Comparison of Relative Bias, Precision, and Efficiency of Sampling Methods for Natural Enemies of Soybean Aphid (Hemiptera: Aphididae).

    PubMed

    Bannerman, J A; Costamagna, A C; McCornack, B P; Ragsdale, D W

    2015-06-01

    Generalist natural enemies play an important role in controlling soybean aphid, Aphis glycines (Hemiptera: Aphididae), in North America. Several sampling methods are used to monitor natural enemy populations in soybean, but there has been little work investigating their relative bias, precision, and efficiency. We compare five sampling methods: quadrats, whole-plant counts, sweep-netting, walking transects, and yellow sticky cards to determine the most practical methods for sampling the three most prominent species, which included Harmonia axyridis (Pallas), Coccinella septempunctata L. (Coleoptera: Coccinellidae), and Orius insidiosus (Say) (Hemiptera: Anthocoridae). We show an important time by sampling method interaction indicated by diverging community similarities within and between sampling methods as the growing season progressed. Similarly, correlations between sampling methods for the three most abundant species over multiple time periods indicated differences in relative bias between sampling methods and suggests that bias is not consistent throughout the growing season, particularly for sticky cards and whole-plant samples. Furthermore, we show that sticky cards produce strongly biased capture rates relative to the other four sampling methods. Precision and efficiency differed between sampling methods and sticky cards produced the most precise (but highly biased) results for adult natural enemies, while walking transects and whole-plant counts were the most efficient methods for detecting coccinellids and O. insidiosus, respectively. Based on bias, precision, and efficiency considerations, the most practical sampling methods for monitoring in soybean include walking transects for coccinellid detection and whole-plant counts for detection of small predators like O. insidiosus. Sweep-netting and quadrat samples are also useful for some applications, when efficiency is not paramount. © The Authors 2015. Published by Oxford University Press on behalf of Entomological Society of America. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  11. Avoiding treatment bias of REDD+ monitoring by sampling with partial replacement.

    PubMed

    Köhl, Michael; Scott, Charles T; Lister, Andrew J; Demon, Inez; Plugge, Daniel

    2015-12-01

    Implementing REDD+ renders the development of a measurement, reporting and verification (MRV) system necessary to monitor carbon stock changes. MRV systems generally apply a combination of remote sensing techniques and in-situ field assessments. In-situ assessments can be based on 1) permanent plots, which are assessed on all successive occasions, 2) temporary plots, which are assessed only once, and 3) a combination of both. The current study focuses on in-situ assessments and addresses the effect of treatment bias, which is introduced by managing permanent sampling plots differently than the surrounding forests. Temporary plots are not subject to treatment bias, but are associated with large sampling errors and low cost-efficiency. Sampling with partial replacement (SPR) utilizes both permanent and temporary plots. We apply a scenario analysis with different intensities of deforestation and forest degradation to show that SPR combines cost-efficiency with the handling of treatment bias. Without treatment bias, permanent plots generally provide lower sampling errors for change estimates than SPR and temporary plots, but they do not provide reliable estimates if treatment bias occurs. SPR allows for change estimates that are comparable to those provided by permanent plots, offers the flexibility to adjust sample sizes in the course of time, and allows data on permanent versus temporary plots to be compared for detecting treatment bias. Equivalence of biomass or carbon stock estimates between permanent and temporary plots serves as an indication of the absence of treatment bias, while differences suggest that there is evidence for treatment bias. SPR is a flexible tool for estimating emission factors from successive measurements. It does not entirely depend on sample plots that are installed at the first occasion but allows for the adjustment of sample sizes and placement of new plots at any occasion. This ensures that in-situ samples provide representative estimates over time. SPR offers the possibility to increase sampling intensity in areas with high degradation intensities or to establish new plots in areas where permanent plots are lost due to deforestation. SPR is also an ideal approach to mitigate concerns about treatment bias.

  12. Characterizing sampling and quality screening biases in infrared and microwave limb sounding

    NASA Astrophysics Data System (ADS)

    Millán, Luis F.; Livesey, Nathaniel J.; Santee, Michelle L.; von Clarmann, Thomas

    2018-03-01

    This study investigates orbital sampling biases and evaluates the additional impact caused by data quality screening for the Michelson Interferometer for Passive Atmospheric Sounding (MIPAS) and the Aura Microwave Limb Sounder (MLS). MIPAS acts as a proxy for typical infrared limb emission sounders, while MLS acts as a proxy for microwave limb sounders. These biases were calculated for temperature and several trace gases by interpolating model fields to real sampling patterns and, additionally, screening those locations as directed by their corresponding quality criteria. Both instruments have dense uniform sampling patterns typical of limb emission sounders, producing almost identical sampling biases. However, there is a substantial difference between the number of locations discarded. MIPAS, as a mid-infrared instrument, is very sensitive to clouds, and measurements affected by them are thus rejected from the analysis. For example, in the tropics, the MIPAS yield is strongly affected by clouds, while MLS is mostly unaffected. The results show that upper-tropospheric sampling biases in zonally averaged data, for both instruments, can be up to 10 to 30 %, depending on the species, and up to 3 K for temperature. For MIPAS, the sampling reduction due to quality screening worsens the biases, leading to values as large as 30 to 100 % for the trace gases and expanding the 3 K bias region for temperature. This type of sampling bias is largely induced by the geophysical origins of the screening (e.g. clouds). Further, analysis of long-term time series reveals that these additional quality screening biases may affect the ability to accurately detect upper-tropospheric long-term changes using such data. In contrast, MLS data quality screening removes sufficiently few points that no additional bias is introduced, although its penetration is limited to the upper troposphere, while MIPAS may cover well into the mid-troposphere in cloud-free scenarios. We emphasize that the results of this study refer only to the representativeness of the respective data, not to their intrinsic quality.
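
    As a rough illustration of how such sampling and screening biases can be quantified, the sketch below subsamples a synthetic "truth" field at randomly chosen observation locations, applies a cloud-like quality screen that preferentially removes enhanced scenes, and compares the resulting zonal monthly means with the full-field means. The field, the sampling pattern and the screening rule are invented stand-ins, not MIPAS or MLS data.

        import numpy as np

        rng = np.random.default_rng(0)

        # Synthetic daily upper-tropospheric trace-gas field on a latitude/longitude grid.
        nday, nlat, nlon = 30, 45, 90
        lat = np.linspace(-88, 88, nlat)
        base = 50 + 30 * np.cos(np.deg2rad(lat))[None, :, None]
        truth = base + rng.normal(0, 8, (nday, nlat, nlon))

        # Orbital sampling: the instrument observes only a subset of grid boxes each day.
        sampled = rng.random((nday, nlat, nlon)) < 0.15

        # Quality screening with a geophysical origin: "cloudy" (preferentially enhanced)
        # scenes are rejected, mimicking the cloud sensitivity of an infrared sounder.
        cloudy = rng.random((nday, nlat, nlon)) < 1 / (1 + np.exp(-(truth - base) / 4))
        screened = sampled & ~cloudy

        def zonal_mean(field, mask):
            """Zonal monthly mean using only the points flagged as observed."""
            return np.nanmean(np.where(mask, field, np.nan), axis=(0, 2))

        full = truth.mean(axis=(0, 2))
        bias_sampling = 100 * (zonal_mean(truth, sampled) - full) / full
        bias_screened = 100 * (zonal_mean(truth, screened) - full) / full
        print("max |bias|, sampling only (%):       ", np.nanmax(np.abs(bias_sampling)).round(2))
        print("max |bias|, sampling + screening (%):", np.nanmax(np.abs(bias_screened)).round(2))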

  13. Efficiently estimating salmon escapement uncertainty using systematically sampled data

    USGS Publications Warehouse

    Reynolds, Joel H.; Woody, Carol Ann; Gove, Nancy E.; Fair, Lowell F.

    2007-01-01

    Fish escapement is generally monitored using nonreplicated systematic sampling designs (e.g., via visual counts from towers or hydroacoustic counts). These sampling designs support a variety of methods for estimating the variance of the total escapement. Unfortunately, all the methods give biased results, with the magnitude of the bias being determined by the underlying process patterns. Fish escapement commonly exhibits positive autocorrelation and nonlinear patterns, such as diurnal and seasonal patterns. For these patterns, poor choice of variance estimator can needlessly increase the uncertainty managers have to deal with in sustaining fish populations. We illustrate the effect of sampling design and variance estimator choice on variance estimates of total escapement for anadromous salmonids from systematic samples of fish passage. Using simulated tower counts of sockeye salmon Oncorhynchus nerka escapement on the Kvichak River, Alaska, five variance estimators for nonreplicated systematic samples were compared to determine the least biased. Using the least biased variance estimator, four confidence interval estimators were compared for expected coverage and mean interval width. Finally, five systematic sampling designs were compared to determine the design giving the smallest average variance estimate for total annual escapement. For nonreplicated systematic samples of fish escapement, all variance estimators were positively biased. Compared to the other estimators, the least biased estimator reduced bias by, on average, from 12% to 98%. All confidence intervals gave effectively identical results. Replicated systematic sampling designs consistently provided the smallest average estimated variance among those compared.
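
    A minimal sketch of the kind of comparison described, using simulated hourly passage counts with a diurnal cycle and a seasonal trend: a nonreplicated systematic sample (one hour in every six) is expanded to a total, and the naive simple-random-sampling variance estimator is contrasted with a successive-difference estimator, which is usually less biased for smooth autocorrelated patterns. The counts, sampling interval and estimators chosen are illustrative and are not taken from the paper.

        import numpy as np

        rng = np.random.default_rng(7)

        # Synthetic hourly fish-passage counts for one season (diurnal cycle + trend).
        hours = np.arange(24 * 60)                              # 60 days of hourly counts
        passage = (500 + 400 * np.sin(2 * np.pi * hours / 24)   # diurnal pattern
                   + 5 * hours / 24                             # seasonal trend
                   + rng.normal(0, 60, hours.size)).clip(min=0)

        # Nonreplicated systematic sample: count 1 hour out of every k hours.
        k = 6
        start = rng.integers(k)
        sample = passage[start::k]
        n = sample.size

        # Estimated total escapement (expansion estimator).
        total_hat = k * sample.sum()

        # (1) Naive SRS variance estimator (ignores autocorrelation and trend).
        var_srs = k**2 * n * (1 - 1 / k) * sample.var(ddof=1)
        # (2) Successive-difference estimator, less biased for smooth diurnal patterns.
        sd2 = np.diff(sample) ** 2
        var_sdc = k**2 * n * (1 - 1 / k) * sd2.sum() / (2 * (n - 1))

        print(f"estimated total: {total_hat:.0f} (true {passage.sum():.0f})")
        print(f"SRS-based SE: {var_srs**0.5:.0f}   successive-difference SE: {var_sdc**0.5:.0f}")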

  14. Connes' embedding problem and winning strategies for quantum XOR games

    NASA Astrophysics Data System (ADS)

    Harris, Samuel J.

    2017-12-01

    We consider quantum XOR games, defined in the work of Regev and Vidick [ACM Trans. Comput. Theory 7, 43 (2015)], from the perspective of unitary correlations defined in the work of Harris and Paulsen [Integr. Equations Oper. Theory 89, 125 (2017)]. We show that the winning bias of a quantum XOR game in the tensor product model (respectively, the commuting model) is equal to the norm of its associated linear functional on the unitary correlation set from the appropriate model. We show that Connes' embedding problem has a positive answer if and only if every quantum XOR game has entanglement bias equal to the commuting bias. In particular, the embedding problem is equivalent to determining whether every quantum XOR game G with a winning strategy in the commuting model also has a winning strategy in the approximate finite-dimensional model.

  15. Empirical Validation of a Procedure to Correct Position and Stimulus Biases in Matching-to-Sample

    ERIC Educational Resources Information Center

    Kangas, Brian D.; Branch, Marc N.

    2008-01-01

    The development of position and stimulus biases often occurs during initial training on matching-to-sample tasks. Furthermore, without intervention, these biases can be maintained via intermittent reinforcement provided by matching-to-sample contingencies. The present study evaluated the effectiveness of a correction procedure designed to…

  16. Method and apparatus for differential spectroscopic atomic-imaging using scanning tunneling microscopy

    DOEpatents

    Kazmerski, Lawrence L.

    1990-01-01

    A method and apparatus for differential spectroscopic atomic imaging is disclosed that spatially resolves and displays not only individual atoms on a sample surface but also the bonding and the specific atomic species involved in such bonds. The apparatus includes a scanning tunneling microscope (STM) that is modified to include photon biasing, preferably a tuneable laser, modulating electronic surface biasing for the sample, and temperature biasing, preferably a vibration-free refrigerated sample mounting stage. Computer control, data processing, and visual display components are also included. The method includes modulating the electronic bias voltage with and without selected photon wavelengths and frequency biasing under a stabilizing (usually cold) bias temperature to detect bonding and specific atomic species in the bonds as the STM rasters the sample. These data are processed along with atomic spatial topography data obtained from the STM raster scan to create a real-time visual image of the atoms on the sample surface.

  17. A partially reflecting random walk on spheres algorithm for electrical impedance tomography

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Maire, Sylvain, E-mail: maire@univ-tln.fr; Simon, Martin, E-mail: simon@math.uni-mainz.de

    2015-12-15

    In this work, we develop a probabilistic estimator for the voltage-to-current map arising in electrical impedance tomography. This novel so-called partially reflecting random walk on spheres estimator enables Monte Carlo methods to compute the voltage-to-current map in an embarrassingly parallel manner, which is an important issue with regard to the corresponding inverse problem. Our method uses the well-known random walk on spheres algorithm inside subdomains where the diffusion coefficient is constant and employs replacement techniques motivated by finite difference discretization to deal with both mixed boundary conditions and interface transmission conditions. We analyze the global bias and the variance of the new estimator both theoretically and experimentally. Subsequently, the variance of the new estimator is considerably reduced via a novel control variate conditional sampling technique which yields a highly efficient hybrid forward solver coupling probabilistic and deterministic algorithms.

  18. Mitigate the impact of transmitter finite extinction ratio using K-means clustering algorithm for 16QAM signal

    NASA Astrophysics Data System (ADS)

    Yu, Miao; Li, Yan; Shu, Tong; Zhang, Yifan; Hong, Xiaobin; Qiu, Jifang; Zuo, Yong; Guo, Hongxiang; Li, Wei; Wu, Jian

    2018-02-01

    A method of recognizing 16QAM signal based on k-means clustering algorithm is proposed to mitigate the impact of transmitter finite extinction ratio. There are pilot symbols with 0.39% overhead assigned to be regarded as initial centroids of k-means clustering algorithm. Simulation result in 10 GBaud 16QAM system shows that the proposed method obtains higher precision of identification compared with traditional decision method for finite ER and IQ mismatch. Specially, the proposed method improves the required OSNR by 5.5 dB, 4.5 dB, 4 dB and 3 dB at FEC limit with ER= 12 dB, 16 dB, 20 dB and 24 dB, respectively, and the acceptable bias error and IQ mismatch range is widened by 767% and 360% with ER =16 dB, respectively.
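
    The sketch below shows the general idea with scikit-learn's KMeans: known pilot symbols provide the initial centroids, so the cluster labels inherit the constellation-point ordering, and received symbols are then decided by cluster membership rather than by fixed decision boundaries. The toy channel (a DC offset plus IQ gain imbalance standing in for finite extinction ratio, plus Gaussian noise) and the pilot overhead are assumptions for illustration, not the paper's simulation setup.

        import numpy as np
        from sklearn.cluster import KMeans

        rng = np.random.default_rng(3)

        # Ideal 16QAM constellation (index k = 4*i + q over the I and Q levels).
        levels = np.array([-3.0, -1.0, 1.0, 3.0])
        ideal = (levels[:, None] + 1j * levels[None, :]).ravel()

        n_sym, pilots_per_point = 20000, 5
        tx_idx = rng.integers(16, size=n_sym)
        pilot_idx = np.repeat(np.arange(16), pilots_per_point)

        def channel(sym):
            """Toy impairment: finite-ER-like offset, IQ gain mismatch and AWGN."""
            distorted = 0.95 * sym.real + 1.05j * sym.imag + (0.25 + 0.25j)
            noise = rng.normal(0, 0.35, sym.shape) + 1j * rng.normal(0, 0.35, sym.shape)
            return distorted + noise

        rx = channel(ideal[tx_idx])
        rx_pilot = channel(ideal[pilot_idx])

        # Initial centroids from the received pilots, one per constellation point.
        init = np.array([[rx_pilot[pilot_idx == k].real.mean(),
                          rx_pilot[pilot_idx == k].imag.mean()] for k in range(16)])

        km = KMeans(n_clusters=16, init=init, n_init=1).fit(np.c_[rx.real, rx.imag])
        ser = np.mean(km.labels_ != tx_idx)   # cluster labels follow the pilot ordering
        print(f"symbol error rate with pilot-initialised k-means decision: {ser:.4f}")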

  19. Finite strain analysis of metavolcanics and metapyroclastics in gold-bearing shear zone of the Dungash area, Central Eastern Desert, Egypt

    NASA Astrophysics Data System (ADS)

    Kassem, Osama M. K.; Abd El Rahim, Said H.

    2014-11-01

    The Dungash gold mine area is situated in an EW-trending quartz vein along a shear zone in metavolcanic and metasedimentary host rocks in the Eastern Desert of Egypt. These rocks are associated with the major geologic structures, which are attributed to various deformational stages of the Neoproterozoic basement rocks. Field geology, finite strain and microstructural analyses were carried out and the relationships between the lithological contacts and major/minor structures have been studied. The Rf/ϕ and Fry methods were applied to the metavolcano-sedimentary and metapyroclastic samples, comprising 5 quartz vein samples, 7 metavolcanic samples, 3 metasedimentary samples and 4 metapyroclastic samples from the Dungash area. Finite-strain data show a low to moderate degree of deformation of the metavolcano-sedimentary samples; axial ratios in the XZ section range from 1.70 to 4.80 for the Rf/ϕ method and from 1.65 to 4.50 for the Fry method. We conclude that finite strain in the deformed rocks is of the same order of magnitude for all units of metavolcano-sedimentary rocks. Furthermore, the contact between principal rock units is sheared in the Dungash area under brittle to semi-ductile deformation conditions. In this case, the accumulated finite strain is associated with the deformation during thrusting to assemble the nappe structure. This indicates that the sheared contacts formed during the accumulation of finite strain.

  20. Tunnel transport and interlayer excitons in bilayer fractional quantum Hall systems

    NASA Astrophysics Data System (ADS)

    Zhang, Yuhe; Jain, J. K.; Eisenstein, J. P.

    2017-05-01

    In a bilayer system consisting of a composite-fermion (CF) Fermi sea in each layer, the tunnel current is exponentially suppressed at zero bias, followed by a strong peak at a finite-bias voltage Vmax. This behavior, which is qualitatively different from that observed for the electron Fermi sea, provides fundamental insight into the strongly correlated non-Fermi-liquid nature of the CF Fermi sea and, in particular, offers a window into the short-distance high-energy physics of this highly nontrivial state. We identify the exciton responsible for the peak current and provide a quantitative account of the value of Vmax. The excitonic attraction is shown to be quantitatively significant, and its variation accounts for the increase of Vmax with the application of an in-plane magnetic field. We also estimate the critical Zeeman energy where transition occurs from a fully spin-polarized composite-fermion Fermi sea to a partially spin-polarized one, carefully incorporating corrections due to finite width and Landau level mixing, and find it to be in satisfactory agreement with the Zeeman energy where a qualitative change has been observed for the onset bias voltage [J. P. Eisenstein et al., Phys. Rev. B 94, 125409 (2016), 10.1103/PhysRevB.94.125409]. For fractional quantum Hall states, we predict a substantial discontinuous jump in Vmax when the system undergoes a transition from a fully spin-polarized state to a spin singlet or a partially spin-polarized state.

  1. Can we estimate molluscan abundance and biomass on the continental shelf?

    NASA Astrophysics Data System (ADS)

    Powell, Eric N.; Mann, Roger; Ashton-Alcox, Kathryn A.; Kuykendall, Kelsey M.; Chase Long, M.

    2017-11-01

    Few empirical studies have focused on the effect of sample density on the estimate of abundance of the dominant carbonate-producing fauna of the continental shelf. Here, we present such a study and consider the implications of suboptimal sampling design on estimates of abundance and size-frequency distribution. We focus on a principal carbonate producer of the U.S. Atlantic continental shelf, the Atlantic surfclam, Spisula solidissima. To evaluate the degree to which the results are typical, we analyze a dataset for the principal carbonate producer of Mid-Atlantic estuaries, the Eastern oyster Crassostrea virginica, obtained from Delaware Bay. These two species occupy different habitats and display different lifestyles, yet demonstrate similar challenges to survey design and similar trends with sampling density. The median of a series of simulated survey mean abundances, the central tendency obtained over a large number of surveys of the same area, always underestimated true abundance at low sample densities. More dramatic were the trends in the probability of a biased outcome. As sample density declined, the probability of a survey availability event, defined as a survey yielding indices >125% or <75% of the true population abundance, increased and that increase was disproportionately biased towards underestimates. For these cases where a single sample accessed about 0.001-0.004% of the domain, 8-15 random samples were required to reduce the probability of a survey availability event below 40%. The problem of differential bias, in which the probabilities of a biased-high and a biased-low survey index were distinctly unequal, was resolved with fewer samples than the problem of overall bias. These trends suggest that the influence of sampling density on survey design comes with a series of incremental challenges. At woefully inadequate sampling density, the probability of a biased-low survey index will substantially exceed the probability of a biased-high index. The survey time series on the average will return an estimate of the stock that underestimates true stock abundance. If sampling intensity is increased, the frequency of biased indices balances between high and low values. Incrementing sample number from this point steadily reduces the likelihood of a biased survey; however, the number of samples necessary to drive the probability of survey availability events to a preferred level of infrequency may be daunting. Moreover, certain size classes will be disproportionately susceptible to such events and the impact on size frequency will be species specific, depending on the relative dispersion of the size classes.
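
    A stripped-down sketch of the kind of resampling exercise described: draw many simulated surveys of n random samples from a patchy synthetic abundance field and estimate how often the survey index falls below 75% or above 125% of the true mean. The lognormal field, the sample sizes and the number of simulated surveys are arbitrary choices for illustration, not values from the study.

        import numpy as np

        rng = np.random.default_rng(11)

        # Synthetic patchy abundance field (most cells near zero, a few dense patches),
        # standing in for clam density over a survey domain.
        ncell = 200_000
        abundance = rng.lognormal(mean=0.0, sigma=2.0, size=ncell)
        true_mean = abundance.mean()

        def p_availability_event(n_samples, n_surveys=2000):
            """Fraction of simulated surveys whose mean index is <75% or >125% of truth."""
            idx = rng.integers(ncell, size=(n_surveys, n_samples))
            survey_means = abundance[idx].mean(axis=1)
            low = np.mean(survey_means < 0.75 * true_mean)
            high = np.mean(survey_means > 1.25 * true_mean)
            return low, high

        for n in (2, 4, 8, 15, 30):
            low, high = p_availability_event(n)
            print(f"n={n:3d}  P(biased low)={low:.2f}  P(biased high)={high:.2f}")

    With a strongly right-skewed field, the low-biased outcomes dominate at small n, which is the asymmetry the abstract describes.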

  2. Improved variance estimation of classification performance via reduction of bias caused by small sample size.

    PubMed

    Wickenberg-Bolin, Ulrika; Göransson, Hanna; Fryknäs, Mårten; Gustafsson, Mats G; Isaksson, Anders

    2006-03-13

    Supervised learning for classification of cancer employs a set of design examples to learn how to discriminate between tumors. In practice it is crucial to confirm that the classifier is robust with good generalization performance to new examples, or at least that it performs better than random guessing. A suggested alternative is to obtain a confidence interval of the error rate using repeated design and test sets selected from available examples. However, it is known that even in the ideal situation of repeated designs and tests with completely novel samples in each cycle, a small test set size leads to a large bias in the estimate of the true variance between design sets. Therefore different methods for small sample performance estimation such as a recently proposed procedure called Repeated Random Sampling (RSS) is also expected to result in heavily biased estimates, which in turn translates into biased confidence intervals. Here we explore such biases and develop a refined algorithm called Repeated Independent Design and Test (RIDT). Our simulations reveal that repeated designs and tests based on resampling in a fixed bag of samples yield a biased variance estimate. We also demonstrate that it is possible to obtain an improved variance estimate by means of a procedure that explicitly models how this bias depends on the number of samples used for testing. For the special case of repeated designs and tests using new samples for each design and test, we present an exact analytical expression for how the expected value of the bias decreases with the size of the test set. We show that via modeling and subsequent reduction of the small sample bias, it is possible to obtain an improved estimate of the variance of classifier performance between design sets. However, the uncertainty of the variance estimate is large in the simulations performed indicating that the method in its present form cannot be directly applied to small data sets.

  3. Large-scale galaxy bias

    NASA Astrophysics Data System (ADS)

    Jeong, Donghui; Desjacques, Vincent; Schmidt, Fabian

    2018-01-01

    Here, we briefly introduce the key results of the recent review (arXiv:1611.09787), whose abstract is as follows. This review presents a comprehensive overview of galaxy bias, that is, the statistical relation between the distribution of galaxies and matter. We focus on large scales where cosmic density fields are quasi-linear. On these scales, the clustering of galaxies can be described by a perturbative bias expansion, and the complicated physics of galaxy formation is absorbed by a finite set of coefficients of the expansion, called bias parameters. The review begins with a detailed derivation of this very important result, which forms the basis of the rigorous perturbative description of galaxy clustering, under the assumptions of General Relativity and Gaussian, adiabatic initial conditions. Key components of the bias expansion are all leading local gravitational observables, which include the matter density but also tidal fields and their time derivatives. We hence expand the definition of local bias to encompass all these contributions. This derivation is followed by a presentation of the peak-background split in its general form, which elucidates the physical meaning of the bias parameters, and a detailed description of the connection between bias parameters and galaxy (or halo) statistics. We then review the excursion set formalism and peak theory which provide predictions for the values of the bias parameters. In the remainder of the review, we consider the generalizations of galaxy bias required in the presence of various types of cosmological physics that go beyond pressureless matter with adiabatic, Gaussian initial conditions: primordial non-Gaussianity, massive neutrinos, baryon-CDM isocurvature perturbations, dark energy, and modified gravity. Finally, we discuss how the description of galaxy bias in the galaxies' rest frame is related to clustering statistics measured from the observed angular positions and redshifts in actual galaxy catalogs.

  4. Large-scale galaxy bias

    NASA Astrophysics Data System (ADS)

    Desjacques, Vincent; Jeong, Donghui; Schmidt, Fabian

    2018-02-01

    This review presents a comprehensive overview of galaxy bias, that is, the statistical relation between the distribution of galaxies and matter. We focus on large scales where cosmic density fields are quasi-linear. On these scales, the clustering of galaxies can be described by a perturbative bias expansion, and the complicated physics of galaxy formation is absorbed by a finite set of coefficients of the expansion, called bias parameters. The review begins with a detailed derivation of this very important result, which forms the basis of the rigorous perturbative description of galaxy clustering, under the assumptions of General Relativity and Gaussian, adiabatic initial conditions. Key components of the bias expansion are all leading local gravitational observables, which include the matter density but also tidal fields and their time derivatives. We hence expand the definition of local bias to encompass all these contributions. This derivation is followed by a presentation of the peak-background split in its general form, which elucidates the physical meaning of the bias parameters, and a detailed description of the connection between bias parameters and galaxy statistics. We then review the excursion-set formalism and peak theory which provide predictions for the values of the bias parameters. In the remainder of the review, we consider the generalizations of galaxy bias required in the presence of various types of cosmological physics that go beyond pressureless matter with adiabatic, Gaussian initial conditions: primordial non-Gaussianity, massive neutrinos, baryon-CDM isocurvature perturbations, dark energy, and modified gravity. Finally, we discuss how the description of galaxy bias in the galaxies' rest frame is related to clustering statistics measured from the observed angular positions and redshifts in actual galaxy catalogs.

  5. Sampling for area estimation: A comparison of full-frame sampling with the sample segment approach. [Kansas

    NASA Technical Reports Server (NTRS)

    Hixson, M. M.; Bauer, M. E.; Davis, B. J.

    1979-01-01

    The effect of sampling on the accuracy (precision and bias) of crop area estimates made from classifications of LANDSAT MSS data was investigated. Full-frame classifications of wheat and non-wheat for eighty counties in Kansas were repetitively sampled to simulate alternative sampling plans. Four sampling schemes involving different numbers of samples and different sizes of sampling units were evaluated. The precision of the wheat area estimates increased as the segment size decreased and the number of segments was increased. Although the average bias associated with the various sampling schemes was not significantly different, the maximum absolute bias was directly related to sampling unit size.

  6. Improved finite difference schemes for transonic potential calculations

    NASA Technical Reports Server (NTRS)

    Hafez, M.; Osher, S.; Whitlow, W., Jr.

    1984-01-01

    Engquist and Osher (1980) have introduced a finite difference scheme for solving the transonic small disturbance equation, taking into account cases in which only compression shocks are admitted. Osher et al. (1983) studied a class of schemes for the full potential equation. It is proved that these schemes satisfy a new discrete 'entropy inequality' which rules out expansion shocks. However, the conducted analysis is restricted to steady two-dimensional flows. The present investigation is concerned with the adoption of a heuristic approach. The full potential equation in conservation form is solved with the aid of a modified artificial density method, based on flux biasing. It is shown that, with the current scheme, expansion shocks are not possible.

  7. Implications of weight-based stigma and self-bias on quality of life among individuals with Schizophrenia

    PubMed Central

    Barber, Jessica; Palmese, Laura; Reutenauer, Erin L.; Grilo, Carlos; Tek, Cenk

    2011-01-01

    Obesity has been associated with significant stigma and weight-related self-bias in community and clinical studies, but these issues have not been studied among individuals with schizophrenia. A consecutive series of 70 obese individuals with schizophrenia or schizoaffective disorder underwent assessment for perceptions of weight-based stigmatization, self-directed weight-bias, negative affect, medication compliance, and quality of life. Levels of weight-based stigmatization and self-bias were compared to levels reported for non-psychiatric overweight/obese samples. Weight measures were unrelated to stigma, self-bias, affect, and quality of life. Weight-based stigmatization was lower than published levels for non-psychiatric samples, whereas levels of weight-based self-bias did not differ. After controlling for negative affect, weight-based self-bias predicted an additional 11% of the variance in the quality of life measure. Individuals with schizophrenia and schizoaffective disorder reported weight-based self-bias to the same extent as non-psychiatric samples despite reporting less weight stigma. Weight-based self-bias was associated with poorer quality of life after controlling for negative affect. PMID:21716053

  8. Molecular wires acting as quantum heat ratchets.

    PubMed

    Zhan, Fei; Li, Nianbei; Kohler, Sigmund; Hänggi, Peter

    2009-12-01

    We explore heat transfer in molecular junctions between two leads in the absence of a finite net thermal bias. The application of an unbiased time-periodic temperature modulation of the leads entails a dynamical breaking of reflection symmetry, such that a directed heat current may emerge (ratchet effect). In particular, we consider two cases of adiabatically slow driving, namely, (i) periodic temperature modulation of only one lead and (ii) temperature modulation of both leads with an ac driving that contains a second harmonic, thus, generating harmonic mixing. Both scenarios yield sizable directed heat currents, which should be detectable with present techniques. Adding a static thermal bias allows one to compute the heat current-thermal load characteristics, which includes the ratchet effect of negative thermal bias with positive-valued heat flow against the thermal bias, up to the thermal stop load. The ratchet heat flow in turn generates also an electric current. An applied electric stop voltage, yielding effective zero electric current flow, then mimics a solely heat-ratchet-induced thermopower ("ratchet Seebeck effect"), although no net thermal bias is acting. Moreover, we find that the relative phase between the two harmonics in scenario (ii) enables steering the net heat current into a direction of choice.

  9. Electric shielding films for biased TEM samples and their application to in situ electron holography.

    PubMed

    Nomura, Yuki; Yamamoto, Kazuo; Hirayama, Tsukasa; Saitoh, Koh

    2018-06-01

    We developed a novel sample preparation method for transmission electron microscopy (TEM) to suppress superfluous electric fields leaked from biased TEM samples. In this method, a thin TEM sample is first coated with an insulating amorphous aluminum oxide (AlOx) film with a thickness of about 20 nm. Then, the sample is coated with a conductive amorphous carbon film with a thickness of about 10 nm, and the film is grounded. This technique was applied to a model sample of a metal electrode/Li-ion-conductive-solid-electrolyte/metal electrode for biasing electron holography. We found that AlOx film with a thickness of 10 nm has a large withstand voltage of about 8 V and that double layers of AlOx and carbon act as a 'nano-shield' to suppress 99% of the electric fields outside of the sample. We also found an asymmetry potential distribution between high and low potential electrodes in biased solid-electrolyte, indicating different accumulation behaviors of lithium-ions (Li+) and lithium-ion vacancies (VLi-) in the biased solid-electrolyte.

  10. How many dinosaur species were there? Fossil bias and true richness estimated using a Poisson sampling model

    PubMed Central

    Starrfelt, Jostein; Liow, Lee Hsiang

    2016-01-01

    The fossil record is a rich source of information about biological diversity in the past. However, the fossil record is not only incomplete but has also inherent biases due to geological, physical, chemical and biological factors. Our knowledge of past life is also biased because of differences in academic and amateur interests and sampling efforts. As a result, not all individuals or species that lived in the past are equally likely to be discovered at any point in time or space. To reconstruct temporal dynamics of diversity using the fossil record, biased sampling must be explicitly taken into account. Here, we introduce an approach that uses the variation in the number of times each species is observed in the fossil record to estimate both sampling bias and true richness. We term our technique TRiPS (True Richness estimated using a Poisson Sampling model) and explore its robustness to violation of its assumptions via simulations. We then venture to estimate sampling bias and absolute species richness of dinosaurs in the geological stages of the Mesozoic. Using TRiPS, we estimate that 1936 (1543–2468) species of dinosaurs roamed the Earth during the Mesozoic. We also present improved estimates of species richness trajectories of the three major dinosaur clades: the sauropodomorphs, ornithischians and theropods, casting doubt on the Jurassic–Cretaceous extinction event and demonstrating that all dinosaur groups are subject to considerable sampling bias throughout the Mesozoic. PMID:26977060
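
    A simplified sketch of the idea behind TRiPS (not the authors' code, which also models per-stage sampling rates and durations): occurrence counts per species are treated as Poisson, the sampling rate is estimated from the observed, zero-truncated counts, and observed richness is then inflated by the estimated probability of being sampled at least once. The simulated richness and sampling rate below are arbitrary.

        import numpy as np
        from scipy.optimize import brentq

        rng = np.random.default_rng(5)

        # Hypothetical fossil data: true number of species and a per-species Poisson
        # sampling rate; species with zero occurrences are never observed.
        true_richness, lam_true = 300, 0.9
        counts = rng.poisson(lam_true, true_richness)
        observed = counts[counts > 0]

        # MLE of lambda for a zero-truncated Poisson: mean(observed) = lam / (1 - exp(-lam)).
        xbar = observed.mean()
        lam_hat = brentq(lambda lam: lam / (1 - np.exp(-lam)) - xbar, 1e-6, 50)

        # Probability that a species is sampled at least once, and richness estimate.
        p_detect = 1 - np.exp(-lam_hat)
        richness_hat = observed.size / p_detect
        print(f"observed species: {observed.size}, estimated true richness: {richness_hat:.0f} "
              f"(true {true_richness}), estimated sampling probability: {p_detect:.2f}")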

  11. How many dinosaur species were there? Fossil bias and true richness estimated using a Poisson sampling model.

    PubMed

    Starrfelt, Jostein; Liow, Lee Hsiang

    2016-04-05

    The fossil record is a rich source of information about biological diversity in the past. However, the fossil record is not only incomplete but has also inherent biases due to geological, physical, chemical and biological factors. Our knowledge of past life is also biased because of differences in academic and amateur interests and sampling efforts. As a result, not all individuals or species that lived in the past are equally likely to be discovered at any point in time or space. To reconstruct temporal dynamics of diversity using the fossil record, biased sampling must be explicitly taken into account. Here, we introduce an approach that uses the variation in the number of times each species is observed in the fossil record to estimate both sampling bias and true richness. We term our technique TRiPS (True Richness estimated using a Poisson Sampling model) and explore its robustness to violation of its assumptions via simulations. We then venture to estimate sampling bias and absolute species richness of dinosaurs in the geological stages of the Mesozoic. Using TRiPS, we estimate that 1936 (1543-2468) species of dinosaurs roamed the Earth during the Mesozoic. We also present improved estimates of species richness trajectories of the three major dinosaur clades: the sauropodomorphs, ornithischians and theropods, casting doubt on the Jurassic-Cretaceous extinction event and demonstrating that all dinosaur groups are subject to considerable sampling bias throughout the Mesozoic. © 2016 The Authors.

  12. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pereyra, Pedro, E-mail: pereyrapedro@gmail.com; Mendoza-Figueroa, M. G.

    Transport properties of electrons through biased double barrier semiconductor structures with finite transverse width w_y, in the presence of a channel-mixing transverse electric field E_T (along the y-axis), were studied. We solve the multichannel Schrödinger equation using the transfer matrix method and transport properties, like the conductance G and the transmission coefficients T_ij, have been evaluated as functions of the electrons' energy E and the transverse and longitudinal (bias) electric forces, f_T and f_b. We show that peak-suppression effects appear, due to the applied bias. Similarly, coherent interference of wave-guide states induced by the transverse field is obtained. We show also that the coherent interference of resonant wave-guide states gives rise to resonant conductance, which can be tuned to produce broad resonant peaks, implying operation frequencies of the order of 10 THz or larger.

  13. Influence of residual thermal stresses and geometric parameters on stress and electric fields in multilayer ceramic capacitors under electric bias

    NASA Astrophysics Data System (ADS)

    Jiang, Wu-Gui; Feng, Xi-Qiao; Nan, Ce-Wen

    2008-07-01

    The stress and electric fields in multilayer ceramic capacitors (MLCCs) under an applied electric bias were investigated by using a three-dimensional finite element model of ferroelectric ceramics. A coupled thermal-mechanical analysis was first made to calculate the residual thermal stress induced by the sintering process, and then a coupled electrical-mechanical analysis was performed to predict the total stress distribution in the MLCCs under a representative applied electric bias. The effects of the number of dielectric layers, the single layer thickness as well as the residual thermal stresses on the total stresses were all examined. The numerical results show that the residual thermal stress induced by the sintering process has a significant influence on the contribution of the total stresses and, therefore, should be taken into account in the design and evaluation of MLCC devices.

  14. Sampling Biases in MODIS and SeaWiFS Ocean Chlorophyll Data

    NASA Technical Reports Server (NTRS)

    Gregg, Watson W.; Casey, Nancy W.

    2007-01-01

    Although modern ocean color sensors, such as MODIS and SeaWiFS, are often considered global missions, in reality it takes many days, even months, to sample the ocean surface enough to provide complete global coverage. The irregular temporal sampling of ocean color sensors can produce biases in monthly and annual mean chlorophyll estimates. We quantified the biases due to sampling using data assimilation to create a "truth field", which we then sub-sampled using the observational patterns of MODIS and SeaWiFS. Monthly and annual mean chlorophyll estimates from these sub-sampled, incomplete daily fields were constructed and compared to monthly and annual means from the complete daily fields of the assimilation model, at a spatial resolution of 1.25° longitude by 0.67° latitude. The results showed that global annual mean biases were positive, reaching nearly 8% (MODIS) and >5% (SeaWiFS). For perspective, the maximum interannual variability in the SeaWiFS chlorophyll record was about 3%. Annual mean sampling biases were low (<3%) in the midlatitudes (between -40° and 40°). Low interannual variability in the global annual mean sampling biases suggested that global scale trend analyses were valid. High latitude biases were much higher than the global annual means, up to 20% as a basin annual mean, and over 80% in some months. This was the result of the high solar zenith angle exclusion in the processing algorithms. Only data where the solar zenith angle is <75° are permitted, in contrast to the assimilation, which samples regularly over the entire area and month. High solar zenith angles do not facilitate phytoplankton photosynthesis, and consequently the low chlorophyll concentrations occurring here are missed by the data sets. Ocean color sensors selectively sample in locations and times of favorable phytoplankton growth, producing overestimates of chlorophyll. The biases derived from lack of sampling in the high latitudes varied monthly, leading to artifacts in the apparent seasonal cycle from ocean color sensors. A false secondary peak in chlorophyll occurred in May-August, which resulted from the lack of sampling in the Antarctic.

  15. Stochastic density functional theory at finite temperatures

    NASA Astrophysics Data System (ADS)

    Cytter, Yael; Rabani, Eran; Neuhauser, Daniel; Baer, Roi

    2018-03-01

    Simulations in the warm dense matter regime using finite temperature Kohn-Sham density functional theory (FT-KS-DFT), while frequently used, are computationally expensive due to the partial occupation of a very large number of high-energy KS eigenstates which are obtained from subspace diagonalization. We have developed a stochastic method for applying FT-KS-DFT that overcomes the bottleneck of calculating the occupied KS orbitals by directly obtaining the density from the KS Hamiltonian. The proposed algorithm scales as O(N T^{-1}) and is compared with the high-temperature limit scaling O(N^3 T^3).

  The Effects of Finite Sampling on State Assessment Sample Requirements. NAEP Validity Studies. Working Paper Series.

    ERIC Educational Resources Information Center

    Chromy, James R.

    This study addressed statistical techniques that might ameliorate some of the sampling problems currently facing states with small populations participating in State National Assessment of Educational Progress (NAEP) assessments. The study explored how the application of finite population correction factors to the between-school component of…

  16. Sampling bias in blending validation and a different approach to homogeneity assessment.

    PubMed

    Kraemer, J; Svensson, J R; Melgaard, H

    1999-02-01

    Sampling of batches studied for validation is reported. A thief particularly suited for granules, rather than cohesive powders, was used in the study. It is shown, as has been demonstrated in the past, that traditional 1x to 3x thief sampling of a blend is biased, and that the bias decreases as the sample size increases. It is shown that taking 50 samples of tablets after blending and testing this subpopulation for normality is a discriminating manner of testing for homogeneity. As a criterion, it is better than sampling at mixer or drum stage would be even if an unbiased sampling device were available.
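
    As a rough illustration of the alternative homogeneity check described, the sketch below takes 50 simulated dosage-unit assays collected after blending and examines the subpopulation with a normality test and a relative standard deviation. The data are simulated and the decision thresholds shown are assumptions for illustration, not the paper's or any regulatory criteria.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(42)

        # Simulated potency assays (% of label claim) for 50 tablets sampled after blending.
        assays = rng.normal(loc=99.0, scale=2.0, size=50)

        w_stat, p_value = stats.shapiro(assays)          # Shapiro-Wilk normality test
        rsd = 100 * assays.std(ddof=1) / assays.mean()   # relative standard deviation

        print(f"mean = {assays.mean():.1f}%  RSD = {rsd:.2f}%")
        print(f"Shapiro-Wilk W = {w_stat:.3f}, p = {p_value:.3f}")
        # Illustrative decision rule (assumed): treat the blend as homogeneous only if
        # the assays are consistent with normality and the RSD is small.
        if p_value > 0.05 and rsd < 5.0:
            print("no evidence against homogeneity")
        else:
            print("investigate blend homogeneity")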

  17. Maximum likelihood estimation of finite mixture model for economic data

    NASA Astrophysics Data System (ADS)

    Phoong, Seuk-Yen; Ismail, Mohd Tahir

    2014-06-01

    A finite mixture model is a mixture model with a finite number of components. These models provide a natural representation of heterogeneity across a finite number of latent classes and are also known as latent class models or unsupervised learning models. Recently, fitting finite mixture models by maximum likelihood estimation has drawn considerable attention from statisticians, mainly because maximum likelihood estimation is a powerful statistical method that provides consistent estimates as the sample size increases to infinity. Maximum likelihood estimation is therefore used in the present paper to fit a finite mixture model in order to explore the relationship between nonlinear economic data. A two-component normal mixture model is fitted by maximum likelihood estimation to investigate the relationship between stock market price and rubber price for the sampled countries. The results indicate a negative relationship between rubber price and stock market price for Malaysia, Thailand, the Philippines and Indonesia.
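
    A minimal sketch of fitting a two-component bivariate normal mixture by maximum likelihood, here via the EM algorithm in scikit-learn's GaussianMixture, to synthetic "returns" standing in for stock market and rubber prices. The data, the two regimes and the negative dependence are simulated assumptions, not the paper's dataset.

        import numpy as np
        from sklearn.mixture import GaussianMixture

        rng = np.random.default_rng(8)

        # Synthetic weekly returns: two latent regimes, both with negatively related
        # stock-market and rubber-price movements (purely illustrative).
        n = 600
        calm = rng.multivariate_normal([0.001, -0.001], [[1e-4, -6e-5], [-6e-5, 1e-4]], n // 2)
        crisis = rng.multivariate_normal([-0.01, 0.01], [[9e-4, -5e-4], [-5e-4, 9e-4]], n // 2)
        returns = np.vstack([calm, crisis])      # columns: stock index, rubber price

        gm = GaussianMixture(n_components=2, covariance_type="full", random_state=0)
        gm.fit(returns)

        for k in range(2):
            cov = gm.covariances_[k]
            corr = cov[0, 1] / np.sqrt(cov[0, 0] * cov[1, 1])
            print(f"component {k}: weight={gm.weights_[k]:.2f}, "
                  f"mean={gm.means_[k].round(4)}, stock-rubber correlation={corr:.2f}")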

  18. Validation sampling can reduce bias in health care database studies: an illustration using influenza vaccination effectiveness.

    PubMed

    Nelson, Jennifer Clark; Marsh, Tracey; Lumley, Thomas; Larson, Eric B; Jackson, Lisa A; Jackson, Michael L

    2013-08-01

    Estimates of treatment effectiveness in epidemiologic studies using large observational health care databases may be biased owing to inaccurate or incomplete information on important confounders. Study methods that collect and incorporate more comprehensive confounder data on a validation cohort may reduce confounding bias. We applied two such methods, namely imputation and reweighting, to Group Health administrative data (full sample) supplemented by more detailed confounder data from the Adult Changes in Thought study (validation sample). We used influenza vaccination effectiveness (with an unexposed comparator group) as an example and evaluated each method's ability to reduce bias using the control time period before influenza circulation. Both methods reduced, but did not completely eliminate, the bias compared with traditional effectiveness estimates that do not use the validation sample confounders. Although these results support the use of validation sampling methods to improve the accuracy of comparative effectiveness findings from health care database studies, they also illustrate that the success of such methods depends on many factors, including the ability to measure important confounders in a representative and large enough validation sample, the comparability of the full sample and validation sample, and the accuracy with which the data can be imputed or reweighted using the additional validation sample information. Copyright © 2013 Elsevier Inc. All rights reserved.
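
    A minimal sketch of the reweighting idea, under invented variables and selection mechanisms rather than the Group Health / Adult Changes in Thought implementation: model the probability that a full-sample member appears in the validation sample using variables available in both samples, then weight validation-sample members by the inverse of that probability so analyses using the richer confounder set approximately represent the full cohort.

        import numpy as np
        import pandas as pd
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(2)

        # Full administrative cohort with crude covariates; richer confounders are
        # assumed to be measured only in the validation subsample.
        n = 20_000
        full = pd.DataFrame({
            "age": rng.normal(75, 6, n),
            "chronic_conditions": rng.poisson(2, n),
        })
        # Hypothetical selection: older, sicker people are less likely to be in the
        # validation study.
        p_val = 1 / (1 + np.exp(-(4 - 0.05 * full["age"] - 0.1 * full["chronic_conditions"])))
        full["in_validation"] = rng.random(n) < p_val

        # Model selection into the validation sample and form inverse-probability weights.
        sel = LogisticRegression().fit(full[["age", "chronic_conditions"]], full["in_validation"])
        p_hat = sel.predict_proba(full[["age", "chronic_conditions"]])[:, 1]
        validation = full[full["in_validation"]].copy()
        validation["weight"] = 1 / p_hat[full["in_validation"].to_numpy()]

        # Weighted analyses of the validation sample then stand in for the full cohort;
        # here we only check covariate balance as a sanity check.
        print("full-cohort mean age:      ", round(full["age"].mean(), 2))
        print("validation mean age:       ", round(validation["age"].mean(), 2))
        print("reweighted validation age: ",
              round(np.average(validation["age"], weights=validation["weight"]), 2))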

  19. Validation sampling can reduce bias in healthcare database studies: an illustration using influenza vaccination effectiveness

    PubMed Central

    Nelson, Jennifer C.; Marsh, Tracey; Lumley, Thomas; Larson, Eric B.; Jackson, Lisa A.; Jackson, Michael

    2014-01-01

    Objective Estimates of treatment effectiveness in epidemiologic studies using large observational health care databases may be biased due to inaccurate or incomplete information on important confounders. Study methods that collect and incorporate more comprehensive confounder data on a validation cohort may reduce confounding bias. Study Design and Setting We applied two such methods, imputation and reweighting, to Group Health administrative data (full sample) supplemented by more detailed confounder data from the Adult Changes in Thought study (validation sample). We used influenza vaccination effectiveness (with an unexposed comparator group) as an example and evaluated each method’s ability to reduce bias using the control time period prior to influenza circulation. Results Both methods reduced, but did not completely eliminate, the bias compared with traditional effectiveness estimates that do not utilize the validation sample confounders. Conclusion Although these results support the use of validation sampling methods to improve the accuracy of comparative effectiveness findings from healthcare database studies, they also illustrate that the success of such methods depends on many factors, including the ability to measure important confounders in a representative and large enough validation sample, the comparability of the full sample and validation sample, and the accuracy with which data can be imputed or reweighted using the additional validation sample information. PMID:23849144

  1. 45 CFR Appendix C to Part 1356 - Calculating Sample Size for NYTD Follow-Up Populations

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... Populations C Appendix C to Part 1356 Public Welfare Regulations Relating to Public Welfare (Continued) OFFICE... Follow-Up Populations 1. Using Finite Population Correction The Finite Population Correction (FPC) is applied when the sample is drawn from a population of one to 5,000 youth, because the sample is more than...
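
    As a worked illustration of why a finite population correction matters for small follow-up populations, the sketch below starts from the usual infinite-population sample size for estimating a proportion and shrinks it with the FPC. The margin of error, confidence level and the generic correction formula are standard survey-sampling defaults, not necessarily the exact values or formula prescribed in the rule.

        import math

        def sample_size_with_fpc(population, margin=0.05, confidence_z=1.96, p=0.5):
            """Sample size for estimating a proportion, with finite population correction."""
            n0 = (confidence_z ** 2) * p * (1 - p) / margin ** 2   # infinite-population size
            n_fpc = n0 / (1 + (n0 - 1) / population)               # FPC-adjusted size
            return math.ceil(n0), math.ceil(n_fpc)

        for N in (500, 2000, 5000, 50_000):
            n0, n = sample_size_with_fpc(N)
            print(f"population {N:6d}: uncorrected n = {n0}, FPC-corrected n = {n}")

    The correction is substantial for populations of a few thousand or fewer and nearly negligible for very large frames, which is consistent with applying it only to populations of one to 5,000 youth.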

  2. 45 CFR Appendix C to Part 1356 - Calculating Sample Size for NYTD Follow-Up Populations

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... Populations C Appendix C to Part 1356 Public Welfare Regulations Relating to Public Welfare (Continued) OFFICE... Follow-Up Populations 1. Using Finite Population Correction The Finite Population Correction (FPC) is applied when the sample is drawn from a population of one to 5,000 youth, because the sample is more than...

  3. 45 CFR Appendix C to Part 1356 - Calculating Sample Size for NYTD Follow-Up Populations

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... Populations C Appendix C to Part 1356 Public Welfare Regulations Relating to Public Welfare (Continued) OFFICE... Follow-Up Populations 1. Using Finite Population Correction The Finite Population Correction (FPC) is applied when the sample is drawn from a population of one to 5,000 youth, because the sample is more than...

  4. Estimating Dungeness crab (Cancer magister) abundance: Crab pots and dive transects compared

    USGS Publications Warehouse

    Taggart, S. James; O'Clair, Charles E.; Shirley, Thomas C.; Mondragon, Jennifer

    2004-01-01

    Dungeness crabs (Cancer magister) were sampled with commercial pots and counted by scuba divers on benthic transects at eight sites near Glacier Bay, Alaska. Catch per unit of effort (CPUE) from pots was compared to the density estimates from dives to evaluate the bias and power of the two techniques. Yearly sampling was conducted in two seasons: April and September, from 1992 to 2000. Male CPUE estimates from pots were significantly lower in April than in the following September; a step-wise regression demonstrated that season accounted for more of the variation in male CPUE than did temperature. In both April and September, pot sampling was significantly biased against females. When females were categorized as ovigerous and nonovigerous, it was clear that ovigerous females accounted for the majority of the bias because pots were not biased against nonovigerous females. We compared the power of pots and dive transects in detecting trends in populations and found that pots had much higher power than dive transects. Despite their low power, the dive transects were very useful for detecting bias in our pot sampling and in identifying the optimal times of year to sample so that pot bias could be avoided.

  5. Nearest neighbor density ratio estimation for large-scale applications in astronomy

    NASA Astrophysics Data System (ADS)

    Kremer, J.; Gieseke, F.; Steenstrup Pedersen, K.; Igel, C.

    2015-09-01

    In astronomical applications of machine learning, the distribution of objects used for building a model is often different from the distribution of the objects the model is later applied to. This is known as sample selection bias, which is a major challenge for statistical inference as one can no longer assume that the labeled training data are representative. To address this issue, one can re-weight the labeled training patterns to match the distribution of unlabeled data that are available already in the training phase. There are many examples in practice where this strategy yielded good results, but estimating the weights reliably from a finite sample is challenging. We consider an efficient nearest neighbor density ratio estimator that can exploit large samples to increase the accuracy of the weight estimates. To solve the problem of choosing the right neighborhood size, we propose to use cross-validation on a model selection criterion that is unbiased under covariate shift. The resulting algorithm is our method of choice for density ratio estimation when the feature space dimensionality is small and sample sizes are large. The approach is simple and, because of the model selection, robust. We empirically find that it is on a par with established kernel-based methods on relatively small regression benchmark datasets. However, when applied to large-scale photometric redshift estimation, our approach outperforms the state-of-the-art.
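
    The sketch below shows a generic k-nearest-neighbour density ratio estimate for covariate shift with made-up one-dimensional data; it is not the authors' implementation and omits their cross-validated choice of neighborhood size. For each labeled training point, the radius containing its k nearest unlabeled (target) points is compared with the number of training points in that same radius, which up-weights training regions that are over-represented in the target sample.

        import numpy as np
        from sklearn.neighbors import KDTree

        rng = np.random.default_rng(9)

        # Covariate shift in one feature: the labeled training sample and the unlabeled
        # target sample come from different distributions (purely synthetic).
        x_train = rng.normal(0.0, 1.0, (5000, 1))
        x_target = rng.normal(0.7, 1.2, (20000, 1))
        k = 50

        # Distance to the k-th nearest *target* point around each training point.
        target_tree = KDTree(x_target)
        r_k = target_tree.query(x_train, k=k)[0][:, -1]

        # Number of *training* points within that same radius (>= 1, the point itself).
        train_tree = KDTree(x_train)
        m = train_tree.query_radius(x_train, r=r_k, count_only=True)

        # Nearest-neighbour density ratio estimate p_target(x) / p_train(x).
        weights = (k / len(x_target)) / (m / len(x_train))
        weights /= weights.mean()            # normalise to mean 1
        print("weight range:", weights.min().round(2), "-", weights.max().round(2))
        print("mean weight where x > 1:", weights[x_train[:, 0] > 1].mean().round(2))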

  6. A new u-statistic with superior design sensitivity in matched observational studies.

    PubMed

    Rosenbaum, Paul R

    2011-09-01

    In an observational or nonrandomized study of treatment effects, a sensitivity analysis indicates the magnitude of bias from unmeasured covariates that would need to be present to alter the conclusions of a naïve analysis that presumes adjustments for observed covariates suffice to remove all bias. The power of sensitivity analysis is the probability that it will reject a false hypothesis about treatment effects allowing for a departure from random assignment of a specified magnitude; in particular, if this specified magnitude is "no departure" then this is the same as the power of a randomization test in a randomized experiment. A new family of u-statistics is proposed that includes Wilcoxon's signed rank statistic but also includes other statistics with substantially higher power when a sensitivity analysis is performed in an observational study. Wilcoxon's statistic has high power to detect small effects in large randomized experiments-that is, it often has good Pitman efficiency-but small effects are invariably sensitive to small unobserved biases. Members of this family of u-statistics that emphasize medium to large effects can have substantially higher power in a sensitivity analysis. For example, in one situation with 250 pair differences that are Normal with expectation 1/2 and variance 1, the power of a sensitivity analysis that uses Wilcoxon's statistic is 0.08 while the power of another member of the family of u-statistics is 0.66. The topic is examined by performing a sensitivity analysis in three observational studies, using an asymptotic measure called the design sensitivity, and by simulating power in finite samples. The three examples are drawn from epidemiology, clinical medicine, and genetic toxicology. © 2010, The International Biometric Society.

  7. Practical continuous-variable quantum key distribution without finite sampling bandwidth effects.

    PubMed

    Li, Huasheng; Wang, Chao; Huang, Peng; Huang, Duan; Wang, Tao; Zeng, Guihua

    2016-09-05

    In a practical continuous-variable quantum key distribution system, the finite sampling bandwidth of the analog-to-digital converter employed at the receiver's side may lead to inaccurate pulse peak sampling, which in turn produces errors in parameter estimation. As a result, system performance degrades and security loopholes are exposed to eavesdroppers. In this paper, we propose a novel data acquisition scheme consisting of two parts, a dynamic delay adjusting module and a statistical power feedback-control algorithm. The proposed scheme may dramatically improve the precision of pulse peak sampling and remove the finite sampling bandwidth effects. Moreover, the optimal peak sampling position of a pulse signal can be dynamically calibrated by monitoring the change in the statistical power of the sampled data. This helps to resist some practical attacks, such as the well-known local oscillator calibration attack.
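
    A toy illustration of the calibration idea, not the proposed scheme itself: sweep the candidate sampling delay within the pulse period, compute the statistical power (mean square) of the values sampled at that delay over many pulses, and take the delay that maximizes it as the peak sampling position. The pulse shape, repetition rate and ADC rate below are invented.

        import numpy as np

        rng = np.random.default_rng(4)

        fs = 1e9                      # assumed ADC sampling rate (1 GS/s)
        pulse_rate = 1e7              # assumed 10 MHz pulse repetition rate
        samples_per_pulse = int(fs / pulse_rate)
        t = np.arange(samples_per_pulse) / fs

        def acquire(n_pulses, true_peak=42e-9, width=6e-9):
            """Simulate digitised pulses with random amplitude (the signal) plus noise."""
            amp = rng.normal(1.0, 0.2, (n_pulses, 1))
            pulses = amp * np.exp(-((t - true_peak) ** 2) / (2 * width ** 2))
            return pulses + rng.normal(0, 0.05, (n_pulses, samples_per_pulse))

        pulses = acquire(2000)

        # Statistical power of the samples taken at each candidate delay index.
        power = (pulses ** 2).mean(axis=0)
        best_idx = int(np.argmax(power))
        print(f"calibrated peak sampling delay: {t[best_idx] * 1e9:.1f} ns "
              f"(statistical power {power[best_idx]:.3f})")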

  8. Efficient global biopolymer sampling with end-transfer configurational bias Monte Carlo

    NASA Astrophysics Data System (ADS)

    Arya, Gaurav; Schlick, Tamar

    2007-01-01

    We develop an "end-transfer configurational bias Monte Carlo" method for efficient thermodynamic sampling of complex biopolymers and assess its performance on a mesoscale model of chromatin (oligonucleosome) at different salt conditions compared to other Monte Carlo moves. Our method extends traditional configurational bias by deleting a repeating motif (monomer) from one end of the biopolymer and regrowing it at the opposite end using the standard Rosenbluth scheme. The method's sampling efficiency compared to local moves, pivot rotations, and standard configurational bias is assessed by parameters relating to translational, rotational, and internal degrees of freedom of the oligonucleosome. Our results show that the end-transfer method is superior in sampling every degree of freedom of the oligonucleosomes over other methods at high salt concentrations (weak electrostatics) but worse than the pivot rotations in terms of sampling internal and rotational sampling at low-to-moderate salt concentrations (strong electrostatics). Under all conditions investigated, however, the end-transfer method is several orders of magnitude more efficient than the standard configurational bias approach. This is because the characteristic sampling time of the innermost oligonucleosome motif scales quadratically with the length of the oligonucleosomes for the end-transfer method while it scales exponentially for the traditional configurational-bias method. Thus, the method we propose can significantly improve performance for global biomolecular applications, especially in condensed systems with weak nonbonded interactions and may be combined with local enhancements to improve local sampling.

  9. Nonlinear vs. linear biasing in Trp-cage folding simulations

    NASA Astrophysics Data System (ADS)

    Spiwok, Vojtěch; Oborský, Pavel; Pazúriková, Jana; Křenek, Aleš; Králová, Blanka

    2015-03-01

    Biased simulations have great potential for the study of slow processes, including protein folding. Atomic motions in molecules are nonlinear, which suggests that simulations with enhanced sampling of collective motions traced by nonlinear dimensionality reduction methods may perform better than linear ones. In this study, we compare an unbiased folding simulation of the Trp-cage miniprotein with metadynamics simulations using both linear (principal component analysis) and nonlinear (Isomap) low-dimensional embeddings as collective variables. Folding of the mini-protein was successfully simulated in a 200 ns simulation with both linear and nonlinear motion biasing. The folded state was correctly predicted as the free energy minimum in both simulations. We found that the advantage of linear motion biasing is that it can sample a larger conformational space, whereas the advantage of nonlinear motion biasing lies in slightly better resolution of the resulting free energy surface. In terms of sampling efficiency, both methods are comparable.

  10. Nonlinear vs. linear biasing in Trp-cage folding simulations.

    PubMed

    Spiwok, Vojtěch; Oborský, Pavel; Pazúriková, Jana; Křenek, Aleš; Králová, Blanka

    2015-03-21

    Biased simulations have great potential for the study of slow processes, including protein folding. Atomic motions in molecules are nonlinear, which suggests that simulations with enhanced sampling of collective motions traced by nonlinear dimensionality reduction methods may perform better than linear ones. In this study, we compare an unbiased folding simulation of the Trp-cage miniprotein with metadynamics simulations using both linear (principal component analysis) and nonlinear (Isomap) low-dimensional embeddings as collective variables. Folding of the mini-protein was successfully simulated in a 200 ns simulation with both linear and nonlinear motion biasing. The folded state was correctly predicted as the free energy minimum in both simulations. We found that the advantage of linear motion biasing is that it can sample a larger conformational space, whereas the advantage of nonlinear motion biasing lies in slightly better resolution of the resulting free energy surface. In terms of sampling efficiency, both methods are comparable.

  11. A proof of the Woodward-Lawson sampling method for a finite linear array

    NASA Technical Reports Server (NTRS)

    Somers, Gary A.

    1993-01-01

    An extension of the continuous aperture Woodward-Lawson sampling theorem has been developed for a finite linear array of equidistant identical elements with arbitrary excitations. It is shown that by sampling the array factor at a finite number of specified points in the far field, the exact array factor over all space can be efficiently reconstructed in closed form. The specified sample points lie in real space and hence are measurable provided that the interelement spacing is greater than approximately one half of a wavelength. This paper provides insight as to why the length parameter used in the sampling formulas for discrete arrays is larger than the physical span of the lattice points in contrast with the continuous aperture case where the length parameter is precisely the physical aperture length.

  12. Information Repetition in Evaluative Judgments: Easy to Monitor, Hard to Control

    ERIC Educational Resources Information Center

    Unkelbach, Christian; Fiedler, Klaus; Freytag, Peter

    2007-01-01

    The sampling approach [Fiedler, K. (2000a). "Beware of samples! A cognitive-ecological sampling approach to judgment biases." "Psychological Review, 107"(4), 659-676.] attributes judgment biases to the information given in a sample. Because people usually do not monitor the constraints of samples and do not control their judgments accordingly,…

  13. Effects of Sample Selection Bias on the Accuracy of Population Structure and Ancestry Inference

    PubMed Central

    Shringarpure, Suyash; Xing, Eric P.

    2014-01-01

    Population stratification is an important task in genetic analyses. It provides information about the ancestry of individuals and can be an important confounder in genome-wide association studies. Public genotyping projects have made a large number of datasets available for study. However, practical constraints dictate that of a geographical/ethnic population, only a small number of individuals are genotyped. The resulting data are a sample from the entire population. If the distribution of sample sizes is not representative of the populations being sampled, the accuracy of population stratification analyses of the data could be affected. We attempt to understand the effect of biased sampling on the accuracy of population structure analysis and individual ancestry recovery. We examined two commonly used methods for analyses of such datasets, ADMIXTURE and EIGENSOFT, and found that the accuracy of recovery of population structure is affected to a large extent by the sample used for analysis and how representative it is of the underlying populations. Using simulated data and real genotype data from cattle, we show that sample selection bias can affect the results of population structure analyses. We develop a mathematical framework for sample selection bias in models for population structure and also propose a correction for sample selection bias using auxiliary information about the sample. We demonstrate that such a correction is effective in practice using simulated and real data. PMID:24637351

  14. A finite parallel zone model to interpret and extend Giddings' coupling theory for the eddy-dispersion in porous chromatographic media.

    PubMed

    Desmet, Gert

    2013-11-01

    The finite length parallel zone (FPZ)-model is proposed as an alternative model for the axial- or eddy-dispersion caused by the occurrence of local velocity biases or flow heterogeneities in porous media such as those used in liquid chromatography columns. The mathematical plate height expression evolving from the model shows that the A- and C-term band broadening effects that can originate from a given velocity bias should be coupled in an exponentially decaying way instead of harmonically as proposed in Giddings' coupling theory. In the low and high velocity limits both models converge, while a 12% difference can be observed in the (practically most relevant) intermediate range of reduced velocities. Explicit expressions for the A- and C-constants appearing in the exponential decay-based plate height expression have been derived for each of the different possible velocity bias levels (single through-pore and particle level, multi-particle level and trans-column level). These expressions allow one to directly relate the band broadening originating from these different levels to the local fundamental transport parameters, hence offering the possibility to include a velocity-dependent and, if needed, retention factor-dependent transversal dispersion coefficient. Having developed the mathematics for the general case wherein a difference in retention equilibrium is established between the two parallel zones, the effect of any possible local variations in packing density and/or retention capacity on the eddy-dispersion can be explicitly accounted for as well. It is furthermore also shown that, whereas the lumped transport parameter model used in the basic variant of the FPZ-model only provides a first approximation of the true decay constant, the model can be extended by introducing a constant correction factor to correctly account for the continuous transversal dispersion transport in the velocity bias zones. Copyright © 2013 Elsevier B.V. All rights reserved.

  15. Bootstrap Estimation of Sample Statistic Bias in Structural Equation Modeling.

    ERIC Educational Resources Information Center

    Thompson, Bruce; Fan, Xitao

    This study empirically investigated bootstrap bias estimation in the area of structural equation modeling (SEM). Three correctly specified SEM models were used under four different sample size conditions. Monte Carlo experiments were carried out to generate the criteria against which bootstrap bias estimation should be judged. For SEM fit indices,…
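
    A generic bootstrap bias estimate for a sample statistic can be sketched as follows; the data and the statistic (a Pearson correlation standing in for an SEM fit index or parameter estimate) are illustrative assumptions, not the study's Monte Carlo design.

        # Generic bootstrap estimate of the bias of a sample statistic:
        # bias_hat = mean(statistic over bootstrap resamples) - statistic(sample).
        # A Pearson correlation stands in for an SEM fit index or parameter estimate.
        import numpy as np

        rng = np.random.default_rng(2)
        n = 100
        x = rng.normal(size=n)
        y = 0.5 * x + rng.normal(size=n)
        sample = np.column_stack([x, y])

        def statistic(data):
            return np.corrcoef(data[:, 0], data[:, 1])[0, 1]

        theta_hat = statistic(sample)
        boot = np.empty(2000)
        for b in range(boot.size):
            idx = rng.integers(0, n, size=n)          # resample rows with replacement
            boot[b] = statistic(sample[idx])

        bias_hat = boot.mean() - theta_hat
        print(f"estimate {theta_hat:.3f}, bootstrap bias estimate {bias_hat:+.4f}")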

  16. Network Structure and Biased Variance Estimation in Respondent Driven Sampling

    PubMed Central

    Verdery, Ashton M.; Mouw, Ted; Bauldry, Shawn; Mucha, Peter J.

    2015-01-01

    This paper explores bias in the estimation of sampling variance in Respondent Driven Sampling (RDS). Prior methodological work on RDS has focused on its problematic assumptions and the biases and inefficiencies of its estimators of the population mean. Nonetheless, researchers have given only slight attention to the topic of estimating sampling variance in RDS, despite the importance of variance estimation for the construction of confidence intervals and hypothesis tests. In this paper, we show that the estimators of RDS sampling variance rely on a critical assumption that the network is First Order Markov (FOM) with respect to the dependent variable of interest. We demonstrate, through intuitive examples, mathematical generalizations, and computational experiments that current RDS variance estimators will always underestimate the population sampling variance of RDS in empirical networks that do not conform to the FOM assumption. Analysis of 215 observed university and school networks from Facebook and Add Health indicates that the FOM assumption is violated in every empirical network we analyze, and that these violations lead to substantially biased RDS estimators of sampling variance. We propose and test two alternative variance estimators that show some promise for reducing biases, but which also illustrate the limits of estimating sampling variance with only partial information on the underlying population social network. PMID:26679927

  17. Finite-volume Atmospheric Model of the IAP/LASG (FAMIL)

    NASA Astrophysics Data System (ADS)

    Bao, Q.

    2015-12-01

    The Finite-volume Atmospheric Model of the IAP/LASG (FAMIL) is introduced in this work. FAMIL has flexible horizontal and vertical resolutions of up to 25 km and 1 Pa, respectively, and currently runs on the "Tianhe 1A&2" supercomputers. FAMIL is the atmospheric component of the third-generation Flexible Global Ocean-Atmosphere-Land climate System model (FGOALS3), which will participate in the Coupled Model Intercomparison Project Phase 6 (CMIP6). In addition to describing the dynamical core and physical parameterizations of FAMIL, this talk describes the simulated characteristics of energy and water balances, precipitation, the Asian summer monsoon, and the stratospheric circulation, and compares them with observational/reanalysis data. Finally, the model biases as well as possible solutions are discussed.

  18. Current noise generated by spin imbalance in presence of spin relaxation

    NASA Astrophysics Data System (ADS)

    Khrapai, V. S.; Nagaev, K. E.

    2017-01-01

    We calculate current (shot) noise in a metallic diffusive conductor generated by spin imbalance in the absence of a net electric current. This situation is modeled in an idealized three-terminal setup with two biased ferromagnetic leads (F-leads) and one normal lead (N-lead). Parallel magnetization of the F-leads gives rise to spin imbalance and finite shot noise at the N-lead. Finite spin relaxation results in an increase in the shot noise, which depends on the ratio of the length of the conductor (L) and the spin relaxation length (l_s). For L >> l_s the shot noise increases by a factor of two and coincides with the case of the antiparallel magnetization of the F-leads.

  19. Finite-size effects in simulations of electrolyte solutions under periodic boundary conditions

    NASA Astrophysics Data System (ADS)

    Thompson, Jeffrey; Sanchez, Isaac

    The equilibrium properties of charged systems with periodic boundary conditions may exhibit pronounced system-size dependence due to the long range of the Coulomb force. As shown by others, the leading-order finite-size correction to the Coulomb energy of a charged fluid confined to a periodic box of volume V may be derived from sum rules satisfied by the charge-charge correlations in the thermodynamic limit V → ∞. In classical systems, the relevant sum rule is the Stillinger-Lovett second-moment (or perfect screening) condition. This constraint implies that for large V, periodicity induces a negative bias of -k_B T/(2V) in the total Coulomb energy density of a homogeneous classical charged fluid of given density and temperature. We present a careful study of the impact of such finite-size effects on the calculation of solute chemical potentials from explicit-solvent molecular simulations of aqueous electrolyte solutions. National Science Foundation Graduate Research Fellowship Program, Grant No. DGE-1610403.
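
    A minimal arithmetic sketch of the quoted leading-order correction, assuming only the -k_B T/(2V) expression above; the box edge and temperature are arbitrary example values.

        # Leading-order finite-size bias quoted above: -k_B*T/(2V) in the Coulomb
        # energy density, i.e. -k_B*T/2 per periodic box, independent of box size.
        # Box edge and temperature are arbitrary example values.
        from scipy.constants import N_A, k as k_B

        T = 298.15                  # K
        L = 3.0e-9                  # box edge, m (3 nm cube)
        V = L ** 3

        bias_density = -k_B * T / (2.0 * V)       # J / m^3
        bias_per_box = bias_density * V           # = -k_B*T/2, J
        print(f"bias per box: {bias_per_box * N_A / 1000.0:.3f} kJ/mol of boxes")
        print(f"bias in energy density: {bias_density:.3e} J/m^3")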

  20. Bias-induced conformational switching of supramolecular networks of trimesic acid at the solid-liquid interface

    NASA Astrophysics Data System (ADS)

    Ubink, J.; Enache, M.; Stöhr, M.

    2018-05-01

    Using the tip of a scanning tunneling microscope, an electric field-induced reversible phase transition between two planar porous structures ("chickenwire" and "flower") of trimesic acid was accomplished at the nonanoic acid/highly oriented pyrolytic graphite interface. The chickenwire structure was exclusively observed for negative sample bias, while for positive sample bias only the more densely packed flower structure was found. We suggest that the slightly negatively charged carboxyl groups of the trimesic acid molecule are the determining factor for this observation: their adsorption behavior varies with the sample bias and is thus responsible for the switching behavior.

  1. Predicting discovery rates of genomic features.

    PubMed

    Gravel, Simon

    2014-06-01

    Successful sequencing experiments require judicious sample selection. However, this selection must often be performed on the basis of limited preliminary data. Predicting the statistical properties of the final sample based on preliminary data can be challenging, because numerous uncertain model assumptions may be involved. Here, we ask whether we can predict "omics" variation across many samples by sequencing only a fraction of them. In the infinite-genome limit, we find that a pilot study sequencing 5% of a population is sufficient to predict the number of genetic variants in the entire population within 6% of the correct value, using an estimator agnostic to demography, selection, or population structure. To reach similar accuracy in a finite genome with millions of polymorphisms, the pilot study would require ∼15% of the population. We present computationally efficient jackknife and linear programming methods that exhibit substantially less bias than the state of the art when applied to simulated data and subsampled 1000 Genomes Project data. Extrapolating based on the National Heart, Lung, and Blood Institute Exome Sequencing Project data, we predict that 7.2% of sites in the capture region would be variable in a sample of 50,000 African Americans and 8.8% in a European sample of equal size. Finally, we show how the linear programming method can also predict discovery rates of various genomic features, such as the number of transcription factor binding sites across different cell types. Copyright © 2014 by the Genetics Society of America.
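
    The jackknife and linear programming estimators of the paper are not reproduced here; as a stand-in, the sketch below applies a textbook first-order jackknife richness estimator to a simulated pilot subsample of a cohort (all sizes and frequencies are placeholder assumptions).

        # Generic first-order jackknife estimate of the number of distinct variants
        # in a cohort, extrapolated from a pilot subsample of individuals. This is a
        # textbook richness estimator, not the jackknife of the paper.
        import numpy as np

        rng = np.random.default_rng(3)
        n_individuals, n_sites = 500, 10_000
        freqs = rng.beta(0.2, 2.0, size=n_sites) * 0.05           # rare-variant frequencies
        genotypes = rng.random((n_individuals, n_sites)) < freqs  # carrier indicators

        pilot = genotypes[:25]                         # "sequence" 5% of the cohort
        counts = pilot.sum(axis=0)                     # pilot carriers per site
        s_obs = np.count_nonzero(counts)               # variants seen in the pilot
        f1 = np.count_nonzero(counts == 1)             # singletons in the pilot
        n = pilot.shape[0]

        s_jack1 = s_obs + f1 * (n - 1) / n             # first-order jackknife
        s_true = np.count_nonzero(genotypes.sum(axis=0))
        print(f"pilot: {s_obs}, jackknife: {s_jack1:.0f}, full cohort: {s_true}")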

  2. A novel measure of effect size for mediation analysis.

    PubMed

    Lachowicz, Mark J; Preacher, Kristopher J; Kelley, Ken

    2018-06-01

    Mediation analysis has become one of the most popular statistical methods in the social sciences. However, many currently available effect size measures for mediation have limitations that restrict their use to specific mediation models. In this article, we develop a measure of effect size that addresses these limitations. We show how modification of a currently existing effect size measure results in a novel effect size measure with many desirable properties. We also derive an expression for the bias of the sample estimator for the proposed effect size measure and propose an adjusted version of the estimator. We present a Monte Carlo simulation study conducted to examine the finite sampling properties of the adjusted and unadjusted estimators, which shows that the adjusted estimator is effective at recovering the true value it estimates. Finally, we demonstrate the use of the effect size measure with an empirical example. We provide freely available software so that researchers can immediately implement the methods we discuss. Our developments here extend the existing literature on effect sizes and mediation by developing a potentially useful method of communicating the magnitude of mediation. (PsycINFO Database Record (c) 2018 APA, all rights reserved).

  3. Accurate aging of juvenile salmonids using fork lengths

    USGS Publications Warehouse

    Sethi, Suresh; Gerken, Jonathon; Ashline, Joshua

    2017-01-01

    Juvenile salmon life history strategies, survival, and habitat interactions may vary by age cohort. However, aging individual juvenile fish using scale reading is time-consuming and can be error prone. Fork length data are routinely measured while sampling juvenile salmonids. We explore the performance of aging juvenile fish based solely on fork length data, using finite Gaussian mixture models to describe multimodal size distributions and estimate optimal age-discriminating length thresholds. Fork length-based ages are compared against a validation set of juvenile coho salmon, Oncorhynchus kisutch, aged by scales. Results for juvenile coho salmon indicate greater than 95% accuracy can be achieved by aging fish using length thresholds estimated from mixture models. Highest accuracy is achieved when aged fish are compared to length thresholds generated from samples from the same drainage, time of year, and habitat type (lentic versus lotic), although relatively high aging accuracy can still be achieved when thresholds are extrapolated to fish from populations in different years or drainages. Fork length-based aging thresholds are applicable for taxa for which multiple age cohorts coexist sympatrically. Where applicable, the method of aging individual fish is relatively quick to implement and can avoid ager interpretation bias common in scale-based aging.
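
    A minimal sketch of the length-threshold idea, assuming a two-component Gaussian mixture fitted to simulated fork lengths with scikit-learn; the lengths, cohort means, and component count are placeholder assumptions, not the coho data.

        # Sketch: fit a two-component Gaussian mixture to fork lengths (mm) and take
        # the length at which the most probable component changes as the age-0/age-1
        # threshold. The simulated lengths are illustrative, not coho data.
        import numpy as np
        from sklearn.mixture import GaussianMixture

        rng = np.random.default_rng(4)
        lengths = np.concatenate([rng.normal(55, 6, 400),      # age-0 cohort
                                  rng.normal(90, 9, 200)])     # age-1 cohort
        X = lengths.reshape(-1, 1)
        gmm = GaussianMixture(n_components=2, random_state=0).fit(X)

        grid = np.linspace(lengths.min(), lengths.max(), 2000).reshape(-1, 1)
        older = gmm.predict(grid) == np.argmax(gmm.means_.ravel())   # larger-mean component
        threshold = grid[np.argmax(older), 0]          # first grid point assigned to age-1
        print(f"age-discriminating length threshold: {threshold:.1f} mm")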

  4. Sources of Sampling Bias in Long-Screened Well

    EPA Science Inventory

    Results obtained from ground-water sampling in long-screened wells are often influenced by physical factors such as geologic heterogeneity and vertical hydraulic gradients. These factors often serve to bias results and increase uncertainty in the representativeness of the sample...

  5. Selection bias in population-based cancer case-control studies due to incomplete sampling frame coverage.

    PubMed

    Walsh, Matthew C; Trentham-Dietz, Amy; Gangnon, Ronald E; Nieto, F Javier; Newcomb, Polly A; Palta, Mari

    2012-06-01

    Increasing numbers of individuals are choosing to opt out of population-based sampling frames due to privacy concerns. This is especially a problem in the selection of controls for case-control studies, as the cases often arise from relatively complete population-based registries, whereas control selection requires a sampling frame. If opt out is also related to risk factors, bias can arise. We linked breast cancer cases who reported having a valid driver's license from the 2004-2008 Wisconsin women's health study (N = 2,988) with a master list of licensed drivers from the Wisconsin Department of Transportation (WDOT). This master list excludes Wisconsin drivers that requested their information not be sold by the state. Multivariate-adjusted selection probability ratios (SPR) were calculated to estimate potential bias when using this driver's license sampling frame to select controls. A total of 962 cases (32%) had opted out of the WDOT sampling frame. Cases age <40 (SPR = 0.90), income either unreported (SPR = 0.89) or greater than $50,000 (SPR = 0.94), lower parity (SPR = 0.96 per one-child decrease), and hormone use (SPR = 0.93) were significantly less likely to be covered by the WDOT sampling frame (α = 0.05 level). Our results indicate the potential for selection bias due to differential opt out between various demographic and behavioral subgroups of controls. As selection bias may differ by exposure and study base, the assessment of potential bias needs to be ongoing. SPRs can be used to predict the direction of bias when cases and controls stem from different sampling frames in population-based case-control studies.
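
    A crude, unadjusted version of a selection probability ratio can be sketched as below; the data frame, column names, and coverage rates are hypothetical, and the published SPRs were multivariate-adjusted rather than simple proportions.

        # Crude, unadjusted selection probability ratio: coverage of the sampling
        # frame in a subgroup divided by coverage in the reference group. All data,
        # names, and rates below are hypothetical placeholders.
        import numpy as np
        import pandas as pd

        rng = np.random.default_rng(5)
        age_lt_40 = rng.random(3000) < 0.15
        p_frame = np.where(age_lt_40, 0.62, 0.70)      # younger cases opt out more often
        cases = pd.DataFrame({"age_lt_40": age_lt_40,
                              "in_frame": rng.random(3000) < p_frame})

        coverage = cases.groupby("age_lt_40")["in_frame"].mean()
        spr = coverage[True] / coverage[False]
        print(f"coverage <40: {coverage[True]:.2f}, >=40: {coverage[False]:.2f}, SPR: {spr:.2f}")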

  6. Evaluation of bias and logistics in a survey of adults at increased risk for oral health decrements.

    PubMed

    Gilbert, G H; Duncan, R P; Kulley, A M; Coward, R T; Heft, M W

    1997-01-01

    Designing research to include sufficient respondents in groups at highest risk for oral health decrements can present unique challenges. Our purpose was to evaluate bias and logistics in this survey of adults at increased risk for oral health decrements. We used a telephone survey methodology that employed both listed numbers and random digit dialing to identify dentate persons 45 years old or older and to oversample blacks, poor persons, and residents of nonmetropolitan counties. At a second stage, a subsample of the respondents to the initial telephone screening was selected for further study, which consisted of a baseline in-person interview and a clinical examination. We assessed bias due to: (1) limiting the sample to households with telephones, (2) using predominantly listed numbers instead of random digit dialing, and (3) nonresponse at two stages of data collection. While this approach apparently created some biases in the sample, they were small in magnitude. Specifically, limiting the sample to households with telephones biased the sample overall toward more females, larger households, and fewer functionally impaired persons. Using predominantly listed numbers led to a modest bias toward selection of persons more likely to be younger, healthier, female, have had a recent dental visit, and reside in smaller households. Blacks who were selected randomly at a second stage were more likely to participate in baseline data gathering than their white counterparts. Comparisons of the data obtained in this survey with those from recent national surveys suggest that this methodology for sampling high-risk groups did not substantively bias the sample with respect to two important dental parameters, prevalence of edentulousness and dental care use, nor were conclusions about multivariate associations with dental care recency substantively affected. This method of sampling persons at high risk for oral health decrements resulted in only modest bias with respect to the population of interest.

  7. Junctionless Diode Enabled by Self-Bias Effect of Ion Gel in Single-Layer MoS2 Device.

    PubMed

    Khan, Muhammad Atif; Rathi, Servin; Park, Jinwoo; Lim, Dongsuk; Lee, Yoontae; Yun, Sun Jin; Youn, Doo-Hyeb; Kim, Gil-Ho

    2017-08-16

    The self-biasing effects of ion gel from source and drain electrodes on the electrical characteristics of single-layer and few-layer molybdenum disulfide (MoS2) field-effect transistors (FETs) have been studied. The self-biasing effect of ion gel is tested for two different configurations, covered and open, where the ion gel is in contact with either one or both of the source and drain electrodes, respectively. In the open configuration, the linear output characteristics of the pristine device become nonlinear and the on-off ratio drops by 3 orders of magnitude due to the increase in "off" current for both single- and few-layer MoS2 FETs. However, the covered configuration results in highly asymmetric output characteristics with a rectification of around 10^3 and an ideality factor of 1.9. This diode-like behavior has been attributed to the reduction of the Schottky barrier width by the electric field of the self-biased ion gel, which enables an efficient injection of electrons by tunneling at the metal-MoS2 interface. Finally, finite element method based simulations are carried out, and the simulated results match well in principle with the experimental analysis. These self-biased diodes can perform a crucial role in the development of high-frequency optoelectronic and valleytronic devices.

  8. The role of spinal concave–convex biases in the progression of idiopathic scoliosis

    PubMed Central

    Driscoll, Mark; Moreau, Alain; Villemure, Isabelle; Parent, Stefan

    2009-01-01

    Inadequate understanding of risk factors involved in the progression of idiopathic scoliosis restrains initial treatment to observation until the deformity shows signs of significant aggravation. The purpose of this analysis is to explore whether the concave–convex biases associated with scoliosis (local degeneration of the intervertebral discs, nucleus migration, and local increase in trabecular bone-mineral density of vertebral bodies) may be identified as progressive risk factors. Finite element models of a 26° right thoracic scoliotic spine were constructed based on experimental and clinical observations that included growth dynamics governed by mechanical stimulus. Stress distribution over the vertebral growth plates, progression of Cobb angles, and vertebral wedging were explored in models with and without the biases of concave–convex properties. The inclusion of the bias of concave–convex properties within the model both augmented the asymmetrical loading of the vertebral growth plates by up to 37% and further amplified the progression of Cobb angles and vertebral wedging by as much as 5.9° and 0.8°, respectively. Concave–convex biases are factors that influence the progression of scoliotic curves. Quantifying these parameters in a patient with scoliosis may further provide a better clinical assessment of the risk of progression. PMID:19130096

  9. Quantile regression models of animal habitat relationships

    USGS Publications Warehouse

    Cade, Brian S.

    2003-01-01

    Typically, all factors that limit an organism are not measured and included in statistical models used to investigate relationships with their environment. If important unmeasured variables interact multiplicatively with the measured variables, the statistical models often will have heterogeneous response distributions with unequal variances. Quantile regression is an approach for estimating the conditional quantiles of a response variable distribution in the linear model, providing a more complete view of possible causal relationships between variables in ecological processes. Chapter 1 introduces quantile regression and discusses the ordering characteristics, interval nature, sampling variation, weighting, and interpretation of estimates for homogeneous and heterogeneous regression models. Chapter 2 evaluates performance of quantile rankscore tests used for hypothesis testing and constructing confidence intervals for linear quantile regression estimates (0 ≤ τ ≤ 1). A permutation F test maintained better Type I errors than the Chi-square T test for models with smaller n, greater number of parameters p, and more extreme quantiles τ. Both versions of the test required weighting to maintain correct Type I errors when there was heterogeneity under the alternative model. An example application related trout densities to stream channel width:depth. Chapter 3 evaluates a drop in dispersion, F-ratio like permutation test for hypothesis testing and constructing confidence intervals for linear quantile regression estimates (0 ≤ τ ≤ 1). Chapter 4 simulates from a large (N = 10,000) finite population representing grid areas on a landscape to demonstrate various forms of hidden bias that might occur when the effect of a measured habitat variable on some animal was confounded with the effect of another unmeasured variable (spatially and not spatially structured). Depending on whether interactions of the measured habitat and unmeasured variable were negative (interference interactions) or positive (facilitation interactions), either upper (τ > 0.5) or lower (τ < 0.5) quantile regression parameters were less biased than mean rate parameters. Sampling (n = 20 - 300) simulations demonstrated that confidence intervals constructed by inverting rankscore tests provided valid coverage of these biased parameters. Quantile regression was used to estimate effects of physical habitat resources on a bivalve mussel (Macomona liliana) in a New Zealand harbor by modeling the spatial trend surface as a cubic polynomial of location coordinates.
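
    As a small illustration of the approach (not the analyses in the chapters above), the sketch below fits lower, median, and upper quantile regressions with statsmodels on data simulated with a multiplicative hidden factor, so that slopes differ across quantiles; all variable names and values are assumptions.

        # Sketch: lower, median, and upper quantile regressions of a simulated
        # ecological response on a habitat variable with statsmodels' QuantReg.
        # A multiplicative hidden factor makes the response variance grow with the
        # predictor, so slopes differ across quantiles.
        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(6)
        n = 300
        habitat = rng.uniform(0, 10, n)
        unmeasured = rng.uniform(0.2, 1.0, n)            # hidden limiting factor
        density = 2.0 * habitat * unmeasured + rng.normal(0, 0.5, n)

        X = sm.add_constant(habitat)
        for tau in (0.10, 0.50, 0.90):
            fit = sm.QuantReg(density, X).fit(q=tau)
            print(f"tau={tau:.2f}  slope={fit.params[1]:.2f}")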

  10. Analysing home-ownership of couples: the effect of selecting couples at the time of the survey.

    PubMed

    Mulder, C H

    1996-09-01

    "The analysis of events encountered by couple and family households may suffer from sample selection bias when data are restricted to couples existing at the moment of interview. The paper discusses the effect of sample selection bias on event history analyses of buying a home [in the Netherlands] by comparing analyses performed on a sample of existing couples with analyses of a more complete sample including past as well as current partner relationships. The results show that, although home-buying in relationships that have ended differs clearly from behaviour in existing relationships, sample selection bias is not alarmingly large." (SUMMARY IN FRE) excerpt

  11. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nguyen, Khoi T.; Lilly, Michael P.; Nielsen, Erik

    We report Pauli blockade in a multielectron silicon metal–oxide–semiconductor double quantum dot with an integrated charge sensor. The current is rectified up to a blockade energy of 0.18 ± 0.03 meV. The blockade energy is analogous to the singlet–triplet splitting in a two-electron double quantum dot. Built-in imbalances of tunnel rates in the MOS DQD obfuscate some edges of the bias triangles. A method to extract the bias triangles is described, and a numeric rate-equation simulation is used to understand the effect of tunneling imbalances and finite temperature on the charge stability (honeycomb) diagram, in particular the identification of missing and shifting edges. A bound on the relaxation time of the triplet-like state is also obtained from this measurement.

  12. Verification of a non-hydrostatic dynamical core using horizontally spectral element vertically finite difference method: 2-D aspects

    NASA Astrophysics Data System (ADS)

    Choi, S.-J.; Giraldo, F. X.; Kim, J.; Shin, S.

    2014-06-01

    The non-hydrostatic (NH) compressible Euler equations of dry atmosphere are solved in a simplified two dimensional (2-D) slice framework employing a spectral element method (SEM) for the horizontal discretization and a finite difference method (FDM) for the vertical discretization. The SEM uses high-order nodal basis functions associated with Lagrange polynomials based on Gauss-Lobatto-Legendre (GLL) quadrature points. The FDM employs a third-order upwind biased scheme for the vertical flux terms and a centered finite difference scheme for the vertical derivative terms and quadrature. The Euler equations used here are in a flux form based on the hydrostatic pressure vertical coordinate, which are the same as those used in the Weather Research and Forecasting (WRF) model, but a hybrid sigma-pressure vertical coordinate is implemented in this model. We verified the model by conducting widely used standard benchmark tests: the inertia-gravity wave, rising thermal bubble, density current wave, and linear hydrostatic mountain wave. The results from those tests demonstrate that the horizontally spectral element vertically finite difference model is accurate and robust. By using the 2-D slice model, we effectively show that the combined spatial discretization method of the spectral element and finite difference method in the horizontal and vertical directions, respectively, offers a viable method for the development of a NH dynamical core.
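
    For concreteness, one common third-order upwind-biased stencil for a vertical derivative (assumed here for positive advecting velocity; not necessarily the exact scheme of this model) can be checked for its order of accuracy as follows.

        # One common third-order upwind-biased stencil for d(phi)/dz with positive
        # advecting velocity (not necessarily the exact scheme used in the model):
        #   dphi/dz |_i ~ (phi[i-2] - 6*phi[i-1] + 3*phi[i] + 2*phi[i+1]) / (6*dz)
        # The loop below verifies ~3rd-order convergence on a smooth profile.
        import numpy as np

        def upwind3(phi, dz):
            d = np.full_like(phi, np.nan)
            d[2:-1] = (phi[:-3] - 6*phi[1:-2] + 3*phi[2:-1] + 2*phi[3:]) / (6*dz)
            return d

        for nz in (50, 100, 200, 400):
            z = np.linspace(0.0, 1.0, nz)
            dz = z[1] - z[0]
            phi = np.sin(2*np.pi*z)
            exact = 2*np.pi*np.cos(2*np.pi*z)
            err = np.nanmax(np.abs(upwind3(phi, dz) - exact))
            print(f"nz={nz:4d}  max error={err:.2e}")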

  13. Effect of Malmquist bias on correlation studies with IRAS data base

    NASA Technical Reports Server (NTRS)

    Verter, Frances

    1993-01-01

    The relationships between galaxy properties in the sample of Trinchieri et al. (1989) are reexamined with corrections for Malmquist bias. The linear correlations are tested and linear regressions are fit for log-log plots of L(FIR), L(H-alpha), and L(B) as well as ratios of these quantities. The linear correlations for Malmquist bias are corrected using the method of Verter (1988), in which each galaxy observation is weighted by the inverse of its sampling volume. The linear regressions are corrected for Malmquist bias by a new method invented here in which each galaxy observation is weighted by its sampling volume. The results of correlation and regressions among the sample are significantly changed in the anticipated sense that the corrected correlation confidences are lower and the corrected slopes of the linear regressions are lower. The elimination of Malmquist bias eliminates the nonlinear rise in luminosity that has caused some authors to hypothesize additional components of FIR emission.
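
    A minimal sketch of the weighting idea, assuming 1/Vmax-style weights and simulated luminosities; the quantities and weights are placeholders, not the IRAS sample.

        # Sketch of the bias correction described above: weight each galaxy by the
        # inverse of the volume within which it would still enter the flux-limited
        # sample (a 1/Vmax-style weight), then compute a weighted correlation.
        # Luminosities and Vmax values are simulated placeholders.
        import numpy as np

        rng = np.random.default_rng(7)
        n = 200
        log_LB   = rng.normal(10.0, 0.5, n)                 # log blue luminosity
        log_LFIR = 0.8 * log_LB + rng.normal(0, 0.3, n)     # log far-infrared luminosity
        v_max    = 10 ** rng.uniform(3, 6, n)               # sampling volume per galaxy
        w = 1.0 / v_max                                     # Malmquist-type weights

        def weighted_corr(x, y, w):
            xm = np.average(x, weights=w); ym = np.average(y, weights=w)
            cov = np.average((x - xm) * (y - ym), weights=w)
            return cov / np.sqrt(np.average((x - xm)**2, weights=w) *
                                 np.average((y - ym)**2, weights=w))

        print("unweighted r:", round(np.corrcoef(log_LB, log_LFIR)[0, 1], 3))
        print("1/Vmax-weighted r:", round(weighted_corr(log_LB, log_LFIR, w), 3))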

  14. Bias correction in species distribution models: pooling survey and collection data for multiple species.

    PubMed

    Fithian, William; Elith, Jane; Hastie, Trevor; Keith, David A

    2015-04-01

    Presence-only records may provide data on the distributions of rare species, but commonly suffer from large, unknown biases due to their typically haphazard collection schemes. Presence-absence or count data collected in systematic, planned surveys are more reliable but typically less abundant. We proposed a probabilistic model to allow for joint analysis of presence-only and survey data to exploit their complementary strengths. Our method pools presence-only and presence-absence data for many species and maximizes a joint likelihood, simultaneously estimating and adjusting for the sampling bias affecting the presence-only data. By assuming that the sampling bias is the same for all species, we can borrow strength across species to efficiently estimate the bias and improve our inference from presence-only data. We evaluate our model's performance on data for 36 eucalypt species in south-eastern Australia. We find that presence-only records exhibit a strong sampling bias towards the coast and towards Sydney, the largest city. Our data-pooling technique substantially improves the out-of-sample predictive performance of our model when the amount of available presence-absence data for a given species is scarce. If we have only presence-only data and no presence-absence data for a given species, but both types of data for several other species that suffer from the same spatial sampling bias, then our method can obtain an unbiased estimate of the first species' geographic range.

  15. Bias correction in species distribution models: pooling survey and collection data for multiple species

    PubMed Central

    Fithian, William; Elith, Jane; Hastie, Trevor; Keith, David A.

    2016-01-01

    Summary Presence-only records may provide data on the distributions of rare species, but commonly suffer from large, unknown biases due to their typically haphazard collection schemes. Presence–absence or count data collected in systematic, planned surveys are more reliable but typically less abundant. We proposed a probabilistic model to allow for joint analysis of presence-only and survey data to exploit their complementary strengths. Our method pools presence-only and presence–absence data for many species and maximizes a joint likelihood, simultaneously estimating and adjusting for the sampling bias affecting the presence-only data. By assuming that the sampling bias is the same for all species, we can borrow strength across species to efficiently estimate the bias and improve our inference from presence-only data. We evaluate our model's performance on data for 36 eucalypt species in south-eastern Australia. We find that presence-only records exhibit a strong sampling bias towards the coast and towards Sydney, the largest city. Our data-pooling technique substantially improves the out-of-sample predictive performance of our model when the amount of available presence–absence data for a given species is scarce. If we have only presence-only data and no presence–absence data for a given species, but both types of data for several other species that suffer from the same spatial sampling bias, then our method can obtain an unbiased estimate of the first species' geographic range. PMID:27840673

  16. A maximum pseudo-profile likelihood estimator for the Cox model under length-biased sampling

    PubMed Central

    Huang, Chiung-Yu; Qin, Jing; Follmann, Dean A.

    2012-01-01

    This paper considers semiparametric estimation of the Cox proportional hazards model for right-censored and length-biased data arising from prevalent sampling. To exploit the special structure of length-biased sampling, we propose a maximum pseudo-profile likelihood estimator, which can handle time-dependent covariates and is consistent under covariate-dependent censoring. Simulation studies show that the proposed estimator is more efficient than its competitors. A data analysis illustrates the methods and theory. PMID:23843659

  17. The role of observer bias in the North American Breeding Bird Survey

    USGS Publications Warehouse

    Faanes, C.A.; Bystrak, D.

    1981-01-01

    Ornithologists sampling breeding bird populations are subject to a number of biases in bird recognition and identification. Using Breeding Bird Survey data, these biases are examined qualitatively and quantitatively, and their effects on counts are evaluated. Differences in hearing ability and degree of expertise are the major observer biases considered. Other, more subtle influences are also discussed, including unfamiliar species, resolution, imagination, similar songs, and the attitude and condition of observers. In most cases, well-trained observers are comparable in ability and their differences contribute little beyond sampling error. However, just as hearing loss can affect results, so can an unprepared observer. These biases are important because they can reduce the credibility of any bird population sampling effort. Care is advised in choosing observers and in interpreting and using results when observers of variable competence are involved.

  18. A covariance correction that accounts for correlation estimation to improve finite-sample inference with generalized estimating equations: A study on its applicability with structured correlation matrices

    PubMed Central

    Westgate, Philip M.

    2016-01-01

    When generalized estimating equations (GEE) incorporate an unstructured working correlation matrix, the variances of regression parameter estimates can inflate due to the estimation of the correlation parameters. In previous work, an approximation for this inflation that results in a corrected version of the sandwich formula for the covariance matrix of regression parameter estimates was derived. Use of this correction for correlation structure selection also reduces the over-selection of the unstructured working correlation matrix. In this manuscript, we conduct a simulation study to demonstrate that an increase in variances of regression parameter estimates can occur when GEE incorporates structured working correlation matrices as well. Correspondingly, we show the ability of the corrected version of the sandwich formula to improve the validity of inference and correlation structure selection. We also study the relative influences of two popular corrections to a different source of bias in the empirical sandwich covariance estimator. PMID:27818539
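
    For orientation, the sketch below computes the standard, uncorrected empirical sandwich covariance for a GEE with an independence working correlation (equivalently, cluster-robust OLS) on simulated clustered data; the finite-sample correction studied above modifies this estimator and is not implemented here.

        # Standard (uncorrected) empirical sandwich covariance for a GEE with an
        # independence working correlation, i.e. cluster-robust OLS, on simulated
        # clustered data. The finite-sample correction discussed above modifies the
        # middle "meat" term and is not implemented here.
        import numpy as np

        rng = np.random.default_rng(8)
        n_clusters, m = 30, 5                            # few clusters, small size
        cluster = np.repeat(np.arange(n_clusters), m)
        x = rng.normal(size=n_clusters * m)
        u = rng.normal(size=n_clusters)[cluster]         # shared cluster effect
        y = 1.0 + 0.5 * x + u + rng.normal(size=x.size)

        X = np.column_stack([np.ones_like(x), x])
        beta = np.linalg.solve(X.T @ X, X.T @ y)
        resid = y - X @ beta

        bread = np.linalg.inv(X.T @ X)
        meat = np.zeros((2, 2))
        for g in range(n_clusters):
            Xg, rg = X[cluster == g], resid[cluster == g]
            sg = Xg.T @ rg                               # cluster score contribution
            meat += np.outer(sg, sg)
        cov_sandwich = bread @ meat @ bread
        print("robust SEs:", np.sqrt(np.diag(cov_sandwich)))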

  19. Generalized site occupancy models allowing for false positive and false negative errors

    USGS Publications Warehouse

    Royle, J. Andrew; Link, W.A.

    2006-01-01

    Site occupancy models have been developed that allow for imperfect species detection or "false negative" observations. Such models have become widely adopted in surveys of many taxa. The most fundamental assumption underlying these models is that "false positive" errors are not possible. That is, one cannot detect a species where it does not occur. However, such errors are possible in many sampling situations for a number of reasons, and even low false positive error rates can induce extreme bias in estimates of site occupancy when they are not accounted for. In this paper, we develop a model for site occupancy that allows for both false negative and false positive error rates. This model can be represented as a two-component finite mixture model and can be easily fitted using freely available software. We provide an analysis of avian survey data using the proposed model and present results of a brief simulation study evaluating the performance of the maximum-likelihood estimator and the naive estimator in the presence of false positive errors.
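
    A stripped-down version of such a two-component mixture likelihood can be written and maximized directly; the sketch below assumes a constant occupancy probability psi, a per-visit detection probability p11 at occupied sites, and a false-positive probability p10 at unoccupied sites, with simulated data (an illustration in the spirit of the model, not the authors' software).

        # Minimal two-component mixture likelihood for site occupancy with both
        # false negatives and false positives. Data and parameter values are
        # simulated placeholders.
        import numpy as np
        from scipy.optimize import minimize
        from scipy.special import expit
        from scipy.stats import binom

        rng = np.random.default_rng(9)
        n_sites, J = 300, 5
        psi, p11, p10 = 0.6, 0.7, 0.05                  # true values
        occupied = rng.random(n_sites) < psi
        y = rng.binomial(J, np.where(occupied, p11, p10))   # detections per site

        def nll(theta):
            q_psi, q11, q10 = expit(theta)              # keep probabilities in (0, 1)
            lik = q_psi * binom.pmf(y, J, q11) + (1 - q_psi) * binom.pmf(y, J, q10)
            return -np.sum(np.log(lik))

        start = np.array([0.0, 1.0, -2.0])              # start with p11 > p10 to fix labels
        fit = minimize(nll, start, method="Nelder-Mead")
        print("psi, p11, p10 estimates:", np.round(expit(fit.x), 3))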

  20. A covariance correction that accounts for correlation estimation to improve finite-sample inference with generalized estimating equations: A study on its applicability with structured correlation matrices.

    PubMed

    Westgate, Philip M

    2016-01-01

    When generalized estimating equations (GEE) incorporate an unstructured working correlation matrix, the variances of regression parameter estimates can inflate due to the estimation of the correlation parameters. In previous work, an approximation for this inflation that results in a corrected version of the sandwich formula for the covariance matrix of regression parameter estimates was derived. Use of this correction for correlation structure selection also reduces the over-selection of the unstructured working correlation matrix. In this manuscript, we conduct a simulation study to demonstrate that an increase in variances of regression parameter estimates can occur when GEE incorporates structured working correlation matrices as well. Correspondingly, we show the ability of the corrected version of the sandwich formula to improve the validity of inference and correlation structure selection. We also study the relative influences of two popular corrections to a different source of bias in the empirical sandwich covariance estimator.

  1. Gravitational lensing frequencies - Galaxy cross-sections and selection effects

    NASA Technical Reports Server (NTRS)

    Fukugita, Masataka; Turner, Edwin L.

    1991-01-01

    Four issues - (1) the best currently available data on the galaxy velocity-dispersion distribution, (2) the effects of finite core radii and potential ellipticity on lensing cross sections, (3) the predicted distribution of lens image separations compared to observational angular resolutions, and (4) the preferential inclusion of lens systems in flux limited samples - are considered in order to facilitate more realistic predictions of multiple image galaxy-quasar lensing frequencies. It is found that (1) the SIS lensing parameter F equals 0.047 ± 0.019 with almost 90 percent contributed by E and S0 galaxies, (2) observed E and S0 core radii are remarkably small, yielding a factor of less than about 2 reduction in total lensing cross sections, (3) 50 percent of galaxy-quasar lenses have image separations greater than about 1.3 arcsec, and (4) amplification bias factors are large and must be carefully taken into account. It is concluded that flat universe models excessively dominated by the cosmological constant are not favored by the small observed galaxy-quasar lensing rate.

  2. Selection within households in health surveys

    PubMed Central

    Alves, Maria Cecilia Goi Porto; Escuder, Maria Mercedes Loureiro; Claro, Rafael Moreira; da Silva, Nilza Nunes

    2014-01-01

    OBJECTIVE To compare the efficiency and accuracy of sampling designs including and excluding the sampling of individuals within sampled households in health surveys. METHODS From a population survey conducted in the Baixada Santista (lowlands) Metropolitan Area, SP, Southeastern Brazil, between 2006 and 2007, 1,000 samples were drawn for each design and estimates for people aged 18 to 59 and 18 and over were calculated for each sample. In the first design, 40 census tracts, 12 households per tract, and one person per household were sampled. In the second, no sampling within the household was performed and 40 census tracts and 6 households for the 18- to 59-year-old group and 5 or 6 for the 18-and-over age group were sampled. Precision and bias of proportion estimates for 11 indicators were assessed in the two final sets of the 1,000 selected samples with the two types of design. They were compared by means of relative measurements: coefficient of variation, bias/mean ratio, bias/standard error ratio, and relative mean square error. Comparison of costs contrasted basic cost per person, household cost, number of people, and households. RESULTS Bias was found to be negligible for both designs. A lower precision was found in the design including individual sampling within households, and the costs were higher. CONCLUSIONS The design excluding individual sampling achieved higher levels of efficiency and accuracy and, accordingly, should be the first choice for investigators. Sampling of household dwellers should be adopted when there are reasons related to the study subject that may lead to bias in individual responses if multiple dwellers answer the proposed questionnaire. PMID:24789641

  3. Sampling bias in an international internet survey of diversion programs in the criminal justice system.

    PubMed

    Hartford, Kathleen; Carey, Robert; Mendonca, James

    2007-03-01

    Despite advances in the storage and retrieval of information within health care systems, health researchers conducting surveys for evaluations still face technical barriers that may lead to sampling bias. The authors describe their experience in administering a Web-based, international survey to English-speaking countries. Identifying the sample was a multistage effort involving (a) searching for published e-mail addresses, (b) conducting Web searches for publicly funded agencies, and (c) performing literature searches, personal contacts, and extensive Internet searches for individuals. After pretesting, the survey was converted into an electronic format accessible by multiple Web browsers. Sampling bias arose from (a) system incompatibility, which did not allow potential respondents to open the survey, (b) varying institutional gate-keeping policies that "recognized" the unsolicited survey as spam, (c) culturally unique program terminology, which confused some respondents, and (d) incomplete sampling frames. Solutions are offered to the first three problems, and the authors note that sampling bias remains a crucial problem.

  4. Estimating accuracy of land-cover composition from two-stage cluster sampling

    USGS Publications Warehouse

    Stehman, S.V.; Wickham, J.D.; Fattorini, L.; Wade, T.D.; Baffetta, F.; Smith, J.H.

    2009-01-01

    Land-cover maps are often used to compute land-cover composition (i.e., the proportion or percent of area covered by each class), for each unit in a spatial partition of the region mapped. We derive design-based estimators of mean deviation (MD), mean absolute deviation (MAD), root mean square error (RMSE), and correlation (CORR) to quantify accuracy of land-cover composition for a general two-stage cluster sampling design, and for the special case of simple random sampling without replacement (SRSWOR) at each stage. The bias of the estimators for the two-stage SRSWOR design is evaluated via a simulation study. The estimators of RMSE and CORR have small bias except when sample size is small and the land-cover class is rare. The estimator of MAD is biased for both rare and common land-cover classes except when sample size is large. A general recommendation is that rare land-cover classes require large sample sizes to ensure that the accuracy estimators have small bias. © 2009 Elsevier Inc.
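
    The four accuracy summaries can be sketched for the simplified case of equal-probability sampling, where the design-based estimators reduce to simple sample summaries (the general two-stage estimators carry survey weights); the reference and map proportions below are simulated placeholders.

        # Accuracy summaries discussed above (MD, MAD, RMSE, CORR) computed from a
        # sample of spatial units, comparing map-derived with reference land-cover
        # proportions. Equal-probability sampling is assumed; data are simulated.
        import numpy as np

        rng = np.random.default_rng(10)
        n_units = 150
        reference = rng.beta(2, 8, n_units)                      # true class proportion
        mapped = np.clip(reference + rng.normal(0, 0.04, n_units), 0, 1)

        diff = mapped - reference
        md = diff.mean()                                         # mean deviation (bias)
        mad = np.abs(diff).mean()                                # mean absolute deviation
        rmse = np.sqrt((diff ** 2).mean())                       # root mean square error
        corr = np.corrcoef(mapped, reference)[0, 1]
        print(f"MD={md:+.4f}  MAD={mad:.4f}  RMSE={rmse:.4f}  CORR={corr:.3f}")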

  5. Verification and rectification of the physical analogy of simulated annealing for the solution of the traveling salesman problem.

    PubMed

    Hasegawa, M

    2011-03-01

    The aim of the present study is to elucidate how simulated annealing (SA) works in its finite-time implementation by starting from the verification of its conventional optimization scenario based on equilibrium statistical mechanics. Two experiments and one supplementary experiment, whose designs are inspired by concepts and methods developed for studies on liquids and glasses, are performed on two types of random traveling salesman problems. In the first experiment, a newly parameterized temperature schedule is introduced to simulate a quasistatic process along the scenario and a parametric study is conducted to investigate the optimization characteristics of this adaptive cooling. In the second experiment, the search trajectory of the Metropolis algorithm (constant-temperature SA) is analyzed in the landscape paradigm in the hope of drawing a precise physical analogy by comparison with the corresponding dynamics of glass-forming molecular systems. These two experiments indicate that the effectiveness of finite-time SA comes not from equilibrium sampling at low temperature but from downward interbasin dynamics occurring before equilibrium. These dynamics work most effectively at an intermediate temperature varying with the total search time and thus this effective temperature is identified using the Deborah number. To test directly the role of these relaxation dynamics in the process of cooling, a supplementary experiment is performed using another parameterized temperature schedule with a piecewise variable cooling rate and the effect of this biased cooling is examined systematically. The results show that the optimization performance is not only dependent on but also sensitive to cooling in the vicinity of the above effective temperature and that this feature is interpreted as a consequence of the presence or absence of the workable interbasin dynamics. It is confirmed for the present instances that the effectiveness of finite-time SA derives from the glassy relaxation dynamics occurring in the "landscape-influenced" temperature regime and that its naive optimization scenario should be rectified by considering the analogy with vitrification phenomena. A comprehensive guideline for the design of finite-time SA and SA-related algorithms is discussed on the basis of this rectified analogy.
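
    A bare-bones finite-time SA run of the kind analyzed above can be sketched as follows, using a geometric cooling schedule on a random Euclidean TSP instance; the adaptive and piecewise schedules of the study are not reproduced, and all parameter values are illustrative.

        # Bare-bones simulated annealing on a random Euclidean TSP instance with a
        # geometric cooling schedule; the adaptive and piecewise schedules studied
        # above are not reproduced here.
        import numpy as np

        rng = np.random.default_rng(11)
        n_cities = 60
        cities = rng.random((n_cities, 2))

        def tour_length(order):
            pts = cities[order]
            return np.sum(np.linalg.norm(pts - np.roll(pts, -1, axis=0), axis=1))

        order = rng.permutation(n_cities)
        best = cur = tour_length(order)
        T = 1.0
        for step in range(100_000):
            i, j = sorted(rng.integers(0, n_cities, size=2))
            if i == j:
                continue
            cand = order.copy()
            cand[i:j + 1] = cand[i:j + 1][::-1]          # 2-opt style segment reversal
            delta = tour_length(cand) - cur
            if delta < 0 or rng.random() < np.exp(-delta / T):
                order, cur = cand, cur + delta
                best = min(best, cur)
            T *= 0.99994                                 # geometric cooling
        print(f"final T = {T:.4f}, best tour length = {best:.3f}")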

  6. Nonlinear vs. linear biasing in Trp-cage folding simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Spiwok, Vojtěch, E-mail: spiwokv@vscht.cz; Oborský, Pavel; Králová, Blanka

    2015-03-21

    Biased simulations have great potential for the study of slow processes, including protein folding. Atomic motions in molecules are nonlinear, which suggests that simulations with enhanced sampling of collective motions traced by nonlinear dimensionality reduction methods may perform better than linear ones. In this study, we compare an unbiased folding simulation of the Trp-cage miniprotein with metadynamics simulations using both linear (principal component analysis) and nonlinear (Isomap) low-dimensional embeddings as collective variables. Folding of the miniprotein was successfully simulated in 200 ns simulations with both linear and nonlinear motion biasing. The folded state was correctly predicted as the free energy minimum in both simulations. We found that the advantage of linear motion biasing is that it can sample a larger conformational space, whereas the advantage of nonlinear motion biasing lies in slightly better resolution of the resulting free energy surface. In terms of sampling efficiency, both methods are comparable.

  7. Assessing total nitrogen in surface-water samples--precision and bias of analytical and computational methods

    USGS Publications Warehouse

    Rus, David L.; Patton, Charles J.; Mueller, David K.; Crawford, Charles G.

    2013-01-01

    The characterization of total-nitrogen (TN) concentrations is an important component of many surface-water-quality programs. However, three widely used methods for the determination of total nitrogen—(1) derived from the alkaline-persulfate digestion of whole-water samples (TN-A); (2) calculated as the sum of total Kjeldahl nitrogen and dissolved nitrate plus nitrite (TN-K); and (3) calculated as the sum of dissolved nitrogen and particulate nitrogen (TN-C)—all include inherent limitations. A digestion process is intended to convert multiple species of nitrogen that are present in the sample into one measureable species, but this process may introduce bias. TN-A results can be negatively biased in the presence of suspended sediment, and TN-K data can be positively biased in the presence of elevated nitrate because some nitrate is reduced to ammonia and is therefore counted twice in the computation of total nitrogen. Furthermore, TN-C may not be subject to bias but is comparatively imprecise. In this study, the effects of suspended-sediment and nitrate concentrations on the performance of these TN methods were assessed using synthetic samples developed in a laboratory as well as a series of stream samples. A 2007 laboratory experiment measured TN-A and TN-K in nutrient-fortified solutions that had been mixed with varying amounts of sediment-reference materials. This experiment identified a connection between suspended sediment and negative bias in TN-A and detected positive bias in TN-K in the presence of elevated nitrate. A 2009–10 synoptic-field study used samples from 77 stream-sampling sites to confirm that these biases were present in the field samples and evaluated the precision and bias of TN methods. The precision of TN-C and TN-K depended on the precision and relative amounts of the TN-component species used in their respective TN computations. Particulate nitrogen had an average variability (as determined by the relative standard deviation) of 13 percent. However, because particulate nitrogen constituted only 14 percent, on average, of TN-C, the precision of the TN-C method approached that of the method for dissolved nitrogen (2.3 percent). On the other hand, total Kjeldahl nitrogen (having a variability of 7.6 percent) constituted an average of 40 percent of TN-K, suggesting that the reduced precision of the Kjeldahl digestion may affect precision of the TN-K estimates. For most samples, the precision of TN computed as TN-C would be better (lower variability) than the precision of TN-K. In general, TN-A precision (having a variability of 2.1 percent) was superior to TN-C and TN-K methods. The laboratory experiment indicated that negative bias in TN-A was present across the entire range of sediment concentration and increased as sediment concentration increased. This suggested that reagent limitation was not the predominant cause of observed bias in TN-A. Furthermore, analyses of particulate nitrogen present in digest residues provided an almost complete accounting for the nitrogen that was underestimated by alkaline-persulfate digestion. This experiment established that, for the reference materials at least, negative bias in TN-A was caused primarily by the sequestration of some particulate nitrogen that was refractory to the digestion process. TN-K biases varied between positive and negative values in the laboratory experiment. 
Positive bias in TN-K is likely the result of the unintended reduction of a small and variable amount of nitrate to ammonia during the Kjeldahl digestion process. Negative TN-K bias may be the result of the sequestration of a portion of particulate nitrogen during the digestion process. Negative bias in TN-A was present across the entire range of suspended-sediment concentration (1 to 14,700 milligrams per liter [mg/L]) in the synoptic-field study, with relative bias being nearly as great at sediment concentrations below 10 mg/L (median of -3.5 percent) as that observed at sediment concentrations up to 750 mg/L (median of -4.4 percent). This lent support to the laboratory-experiment finding that some particulate nitrogen is sequestered during the digestion process, and demonstrated that negative TN-A bias was present in samples with very low suspended-sediment concentrations. At sediment concentrations above 750 mg/L, the negative TN-A bias became more likely and larger (median of -13.2 percent), suggesting a secondary mechanism of bias, such as reagent limitation. From a geospatial perspective, trends in TN-A bias were not explained by selected basin characteristics. Though variable, TN-K bias generally was positive in the synoptic-field study (median of 3.1 percent), probably as a result of the reduction of nitrate. Three alternative approaches for assessing TN in surface water were evaluated for their impacts on existing and future sampling programs. Replacing TN-A with TN-C would remove the bias from subsequent data, but this approach also would introduce discontinuity in historical records. Replacing TN-K with TN-C would lead to the removal of positive bias in TN-K in the presence of elevated nitrate. However, in addition to the issues that may arise from a discontinuity in the data record, this approach may not be applicable to regulatory programs that require the use of total Kjeldahl nitrogen for stream assessment. By adding TN-C to existing TN-A or TN-K analyses, historical-data continuity would be preserved and the transitional period could be used to minimize the impact of bias on data analyses. This approach, however, imposes the greatest burdens on field operations and in terms of analytical costs. The variation in these impacts on different sampling programs will challenge U.S. Geological Survey scientists attempting to establish uniform standards for TN sample collection and analytical determinations.

  8. The finite element method for micro-scale modeling of ultrasound propagation in cancellous bone.

    PubMed

    Vafaeian, B; El-Rich, M; El-Bialy, T; Adeeb, S

    2014-08-01

    Quantitative ultrasound for bone assessment is based on the correlations between ultrasonic parameters and the properties (mechanical and physical) of cancellous bone. To elucidate the correlations, understanding the physics of ultrasound in cancellous bone is demanded. Micro-scale modeling of ultrasound propagation in cancellous bone using the finite-difference time-domain (FDTD) method has been so far utilized as one of the approaches in this regard. However, the FDTD method accompanies two disadvantages: staircase sampling of cancellous bone by finite difference grids leads to generation of wave artifacts at the solid-fluid interface inside the bone; additionally, this method cannot explicitly satisfy the needed perfect-slip conditions at the interface. To overcome these disadvantages, the finite element method (FEM) is proposed in this study. Three-dimensional finite element models of six water-saturated cancellous bone samples with different bone volume were created. The values of speed of sound (SOS) and broadband ultrasound attenuation (BUA) were calculated through the finite element simulations of ultrasound propagation in each sample. Comparing the results with other experimental and simulation studies demonstrated the capabilities of the FEM for micro-scale modeling of ultrasound in water-saturated cancellous bone. Copyright © 2014 Elsevier B.V. All rights reserved.
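
    The finite element simulation itself is beyond a short sketch, but the post-processing of through-transmission signals into SOS and BUA can be illustrated; the waveforms, thickness, speeds, and attenuation slope below are synthetic stand-ins, not output of the bone models.

        # Post-processing sketch: extract SOS (time-of-flight substitution method)
        # and BUA (slope of the attenuation spectrum in a low-frequency band) from a
        # water-only reference and a through-sample signal. All values are synthetic.
        import numpy as np

        fs = 50e6                                           # sampling rate, Hz
        t = np.arange(0, 40e-6, 1 / fs)
        ref = np.exp(-((t - 15e-6) / 1e-6) ** 2) * np.sin(np.pi * 1e6 * (t - 15e-6))

        d, c_water, att = 0.01, 1480.0, 15.0                # m, m/s, dB/MHz over d
        shift = d / c_water - d / 1700.0                    # earlier arrival in "bone"
        f = np.fft.rfftfreq(t.size, 1 / fs)
        H = 10 ** (-att * (f / 1e6) / 20) * np.exp(2j * np.pi * f * shift)
        sig = np.fft.irfft(np.fft.rfft(ref) * H, n=t.size)

        dt = (np.argmax(ref) - np.argmax(sig)) / fs         # time-of-flight difference
        sos = 1.0 / (1.0 / c_water - dt / d)
        att_db = 20 * np.log10(np.abs(np.fft.rfft(ref)) / np.abs(np.fft.rfft(sig)))
        band = (f > 0.2e6) & (f < 0.6e6)
        bua = np.polyfit(f[band] / 1e6, att_db[band], 1)[0]
        print(f"SOS ~ {sos:.0f} m/s, BUA ~ {bua:.1f} dB/MHz")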

  9. Systematic bias in genomic classification due to contaminating non-neoplastic tissue in breast tumor samples.

    PubMed

    Elloumi, Fathi; Hu, Zhiyuan; Li, Yan; Parker, Joel S; Gulley, Margaret L; Amos, Keith D; Troester, Melissa A

    2011-06-30

    Genomic tests are available to predict breast cancer recurrence and to guide clinical decision making. These predictors provide recurrence risk scores along with a measure of uncertainty, usually a confidence interval. The confidence interval conveys random error and not systematic bias. Standard tumor sampling methods make this problematic, as it is common to have a substantial proportion (typically 30-50%) of a tumor sample composed of histologically benign tissue. This "normal" tissue could represent a source of non-random error or systematic bias in genomic classification. To assess the sensitivity of genomic classification to systematic error from normal contamination, we collected 55 tumor samples and paired tumor-adjacent normal tissue. Using genomic signatures from the tumor and paired normal, we evaluated how increasing normal contamination altered recurrence risk scores for various genomic predictors. Simulations of normal tissue contamination caused misclassification of tumors in all predictors evaluated, but different breast cancer predictors showed different types of vulnerability to normal tissue bias. While two predictors had unpredictable direction of bias (either higher or lower risk of relapse resulted from normal contamination), one signature showed a predictable direction of normal tissue effects. Due to this predictable direction of effect, this signature (the PAM50) was adjusted for normal tissue contamination and these corrections improved sensitivity and negative predictive value. For all three assays, quality control standards and/or appropriate bias adjustment strategies can be used to improve assay reliability. Normal tissue sampled concurrently with tumor is an important source of bias in breast genomic predictors. All genomic predictors show some sensitivity to normal tissue contamination, and ideal strategies for mitigating this bias vary depending upon the particular genes and computational methods used in the predictor.
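
    The contamination experiment can be mimicked in a toy setting: mix a tumor profile with increasing fractions of matched normal tissue and track a simple correlation-to-centroid risk score; the profiles, centroid, and score below are synthetic placeholders, not the PAM50 or any other assay.

        # Toy version of the contamination experiment: mix a tumor expression profile
        # with increasing fractions of matched normal tissue and track a simple
        # correlation-to-centroid risk score. All data are synthetic placeholders.
        import numpy as np

        rng = np.random.default_rng(12)
        n_genes = 500
        high_risk_centroid = rng.normal(0, 1, n_genes)
        tumor = high_risk_centroid + rng.normal(0, 0.5, n_genes)   # high-risk-like tumor
        normal = rng.normal(0, 1, n_genes)                         # adjacent normal tissue

        for frac in (0.0, 0.2, 0.4, 0.6):
            mixed = (1 - frac) * tumor + frac * normal
            score = np.corrcoef(mixed, high_risk_centroid)[0, 1]
            print(f"normal fraction {frac:.1f}: risk score {score:.2f}")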

  10. A review of cognitive biases in youth depression: attention, interpretation and memory.

    PubMed

    Platt, Belinda; Waters, Allison M; Schulte-Koerne, Gerd; Engelmann, Lina; Salemink, Elske

    2017-04-01

    Depression is one of the most common mental health problems in childhood and adolescence. Although data consistently show it is associated with self-reported negative cognitive styles, less is known about the mechanisms underlying this relationship. Cognitive biases in attention, interpretation and memory represent plausible mechanisms and are known to characterise adult depression. We provide the first structured review of studies investigating the nature and causal role of cognitive biases in youth depression. Key questions are (i) do cognitive biases characterise youth depression? (ii) are cognitive biases a vulnerability factor for youth depression? and (iii) do cognitive biases play a causal role in youth depression? We find consistent evidence for positive associations between attention and interpretation biases and youth depression. Stronger biases in youth with an elevated risk of depression support cognitive-vulnerability models. Preliminary evidence from cognitive bias modification paradigms supports a causal role of attention and interpretation biases in youth depression but these paradigms require testing in clinical samples before they can be considered treatment tools. Studies of memory biases in youth samples have produced mixed findings and none have investigated the causal role of memory bias. We identify numerous areas for future research in this emerging field.

  11. Finite element analysis of the upsetting of a 5056 aluminum alloy sample with consideration of its microstructure

    NASA Astrophysics Data System (ADS)

    Voronin, S. V.; Chaplygin, K. K.

    2017-12-01

    Computer simulations of the upsetting of finite element models (FEMs) of an isotropic 5056 aluminum alloy sample and of a 5056 aluminum alloy sample with its microstructure taken into account are carried out. The stress and strain distribution patterns at different process stages are obtained. The strain required to deform the FEMs of the 5056 alloy samples is determined. The influence of the material microstructure on the stress-strain behavior and on the technological parameters is demonstrated.

  12. Measurement and Modeling of Blocking Contacts for Cadmium Telluride Gamma Ray Detectors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Beck, Patrick R.

    2010-01-07

    Gamma ray detectors are important in national security applications, medicine, and astronomy. Semiconductor materials with high density and atomic number, such as Cadmium Telluride (CdTe), offer a small device footprint, but their performance is limited by noise at room temperature; however, improved device design can decrease detector noise by reducing leakage current. This thesis characterizes and models two unique Schottky devices: one with an argon ion sputter etch before Schottky contact deposition and one without. Analysis of current versus voltage characteristics shows that thermionic emission alone does not describe these devices. This analysis points to reverse bias generation current or leakage through an inhomogeneous barrier. Modeling the devices in reverse bias with thermionic field emission and a leaky Schottky barrier yields good agreement with measurements. Also, numerical modeling with a finite-element physics-based simulator suggests that reverse bias current is a combination of thermionic emission and generation. This thesis proposes further experiments to determine the correct model for reverse bias conduction. Understanding conduction mechanisms in these devices will help develop more reproducible contacts, reduce leakage current, and ultimately improve detector performance.
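
    To make the competing conduction mechanisms concrete, the sketch below evaluates a textbook reverse-bias current model combining thermionic emission (Richardson form) with a fixed depletion-region generation term. The barrier height, Richardson constant, intrinsic density, depletion width, and lifetime are illustrative placeholders, not fitted CdTe values, and the bias dependence of the depletion width is ignored for simplicity; this is not the thesis's calibrated model.
```python
import numpy as np

k_B = 8.617e-5   # Boltzmann constant [eV/K]
q = 1.602e-19    # elementary charge [C]

def reverse_current_density(V, T=300.0, phi_b=0.8, A_star=12.0,
                            n_i=1e6, W_cm=0.05, tau=1e-6):
    """Illustrative reverse-bias current density [A/cm^2] for a Schottky contact:
    thermionic emission plus depletion-region generation.
    phi_b [eV], A_star [A/cm^2/K^2], n_i [cm^-3], W_cm [cm], tau [s] are placeholders."""
    V = np.asarray(V, dtype=float)                       # reverse bias entered as V < 0
    j_te = A_star * T**2 * np.exp(-phi_b / (k_B * T)) * (np.exp(V / (k_B * T)) - 1.0)
    j_gen = -q * n_i * W_cm / (2.0 * tau)                # generation term, fixed W for simplicity
    return j_te + j_gen

print(reverse_current_density([-0.5, -5.0, -50.0]))
```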

  13. Micro-scale finite element modeling of ultrasound propagation in aluminum trabecular bone-mimicking phantoms: A comparison between numerical simulation and experimental results.

    PubMed

    Vafaeian, B; Le, L H; Tran, T N H T; El-Rich, M; El-Bialy, T; Adeeb, S

    2016-05-01

    The present study investigated the accuracy of micro-scale finite element modeling for simulating broadband ultrasound propagation in water-saturated trabecular bone-mimicking phantoms. To this end, five commercially manufactured aluminum foam samples as trabecular bone-mimicking phantoms were utilized for ultrasonic immersion through-transmission experiments. Based on micro-computed tomography images of the same physical samples, three-dimensional high-resolution computational samples were generated to be implemented in the micro-scale finite element models. The finite element models employed the standard Galerkin finite element method (FEM) in time domain to simulate the ultrasonic experiments. The numerical simulations did not include energy dissipative mechanisms of ultrasonic attenuation; however, they expectedly simulated reflection, refraction, scattering, and wave mode conversion. The accuracy of the finite element simulations was evaluated by comparing the simulated ultrasonic attenuation and velocity with the experimental data. The maximum and the average relative errors between the experimental and simulated attenuation coefficients in the frequency range of 0.6-1.4 MHz were 17% and 6%, respectively. Moreover, the simulations closely predicted the time-of-flight based velocities and the phase velocities of ultrasound with maximum errors of 20 m/s and 11 m/s, respectively. The results of this study strongly suggest that micro-scale finite element modeling can effectively simulate broadband ultrasound propagation in water-saturated trabecular bone-mimicking structures. Copyright © 2016 Elsevier B.V. All rights reserved.
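
    For reference, phase velocities in through-transmission setups such as this are commonly obtained from the unwrapped phase difference between sample and reference spectra. The sketch below shows that standard substitution relation; the signal arrays are hypothetical and the sign convention of the phase difference depends on the FFT definition, so it should be checked against a known reference before use.
```python
import numpy as np

def phase_velocity(ref_sig, smp_sig, fs, d, v_water=1482.0):
    """Phase velocity vs. frequency by the substitution method.

    ref_sig : waveform through water only, smp_sig : waveform through water + sample,
    fs : sampling rate [Hz], d : sample thickness [m], v_water : water sound speed [m/s].
    """
    freqs = np.fft.rfftfreq(len(ref_sig), 1.0 / fs)
    # Unwrapped phase difference between sample and reference spectra.
    dphi = (np.unwrap(np.angle(np.fft.rfft(smp_sig))) -
            np.unwrap(np.angle(np.fft.rfft(ref_sig))))
    with np.errstate(divide="ignore", invalid="ignore"):
        # 1/v_sample(f) = 1/v_water - dphi(f) / (2*pi*f*d)
        v_phase = 1.0 / (1.0 / v_water - dphi / (2.0 * np.pi * freqs * d))
    return freqs, v_phase
```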

  14. Guidelines for VCCT-Based Interlaminar Fatigue and Progressive Failure Finite Element Analysis

    NASA Technical Reports Server (NTRS)

    Deobald, Lyle R.; Mabson, Gerald E.; Engelstad, Steve; Prabhakar, M.; Gurvich, Mark; Seneviratne, Waruna; Perera, Shenal; O'Brien, T. Kevin; Murri, Gretchen; Ratcliffe, James

    2017-01-01

    This document is intended to detail the theoretical basis, equations, references and data that are necessary to enhance the functionality of commercially available Finite Element codes, with the objective of providing functionality better suited to the aerospace industry in the area of composite structural analysis. The specific area of focus will be improvements to composite interlaminar fatigue and progressive interlaminar failure. Suggestions are biased towards codes that perform interlaminar Linear Elastic Fracture Mechanics (LEFM) using Virtual Crack Closure Technique (VCCT)-based algorithms [1,2]. Not all aspects of the science associated with composite interlaminar crack growth are fully developed, and the codes developed to predict this mode of failure must be programmed with sufficient flexibility to accommodate new functional relationships as the science matures.
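
    For orientation, the core VCCT computation referenced above reduces, for a two-dimensional four-node crack-tip element, to products of crack-tip nodal forces and relative displacements one element behind the tip. The sketch below shows that standard formula; the numerical values and consistent-unit assumption are illustrative only.
```python
def vcct_mode_I_II(Fy_tip, Fx_tip, dv_behind, du_behind, delta_a, b):
    """Two-dimensional VCCT estimates of strain energy release rates.

    Fy_tip, Fx_tip       : normal / shear nodal forces at the crack tip
    dv_behind, du_behind : relative opening / sliding displacements one element behind the tip
    delta_a, b           : crack-tip element length and out-of-plane width
    """
    G_I = (Fy_tip * dv_behind) / (2.0 * delta_a * b)
    G_II = (Fx_tip * du_behind) / (2.0 * delta_a * b)
    return G_I, G_II

# Illustrative numbers only (consistent units assumed, e.g. N and mm give N/mm):
print(vcct_mode_I_II(Fy_tip=120.0, Fx_tip=35.0, dv_behind=0.004, du_behind=0.001,
                     delta_a=0.5, b=25.0))
```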

  15. Impact of Acoustic Radiation Force Excitation Geometry on Shear Wave Dispersion and Attenuation Estimates.

    PubMed

    Lipman, Samantha L; Rouze, Ned C; Palmeri, Mark L; Nightingale, Kathryn R

    2018-04-01

    Shear wave elasticity imaging (SWEI) characterizes the mechanical properties of human tissues to differentiate healthy from diseased tissue. Commercial scanners tend to reconstruct shear wave speeds for a region of interest using time-of-flight methods reporting a single shear wave speed (or elastic modulus) to the end user under the assumptions that tissue is elastic and shear wave speeds are not dependent on the frequency content of the shear waves. Human tissues, however, are known to be viscoelastic, resulting in dispersion and attenuation. Shear wave spectroscopy and spectral methods have been previously reported in the literature to quantify shear wave dispersion and attenuation, commonly making an assumption that the acoustic radiation force excitation acts as a cylindrical source with a known geometric shear wave amplitude decay. This work quantifies the bias in shear dispersion and attenuation estimates associated with making this cylindrical wave assumption when applied to shear wave sources with finite depth extents, as commonly occurs with realistic focal geometries, in elastic and viscoelastic media. Bias is quantified using analytically derived shear wave data and shear wave data generated using finite-element method models. Shear wave dispersion and attenuation bias (up to 15% for dispersion and 41% for attenuation) is greater for more tightly focused acoustic radiation force sources with smaller depths of field relative to their lateral extent (height-to-width ratios <16). Dispersion and attenuation errors associated with assuming a cylindrical geometric shear wave decay in SWEI can be appreciable and should be considered when analyzing the viscoelastic properties of tissues with acoustic radiation force source distributions with limited depths of field. Copyright © 2018 World Federation for Ultrasound in Medicine and Biology. Published by Elsevier Inc. All rights reserved.
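
    To illustrate the cylindrical-wave assumption being tested above, a common attenuation estimate compensates peak shear-wave amplitudes for geometric spreading proportional to 1/sqrt(r) and fits an exponential decay by log-linear least squares. The amplitude data below are synthetic, and this generic fit is not the authors' analytic or finite-element pipeline.
```python
import numpy as np

# Hypothetical peak shear-wave amplitudes at increasing lateral positions r [m].
rng = np.random.default_rng(1)
r = np.linspace(2e-3, 12e-3, 11)
alpha_true = 120.0                                     # Np/m, used only to build synthetic data
amp = r**-0.5 * np.exp(-alpha_true * r) * (1.0 + 0.02 * rng.standard_normal(r.size))

# Cylindrical-wave assumption: A(r) = A0 * r^(-1/2) * exp(-alpha * r)
# => log(A) + 0.5*log(r) = log(A0) - alpha*r, which is linear in r.
slope, intercept = np.polyfit(r, np.log(amp) + 0.5 * np.log(r), 1)
print(f"estimated shear attenuation: {-slope:.1f} Np/m (true value {alpha_true})")
```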

  16. Estimation and correction of visibility bias in aerial surveys of wintering ducks

    USGS Publications Warehouse

    Pearse, A.T.; Gerard, P.D.; Dinsmore, S.J.; Kaminski, R.M.; Reinecke, K.J.

    2008-01-01

    Incomplete detection of all individuals leading to negative bias in abundance estimates is a pervasive source of error in aerial surveys of wildlife, and correcting that bias is a critical step in improving surveys. We conducted experiments using duck decoys as surrogates for live ducks to estimate bias associated with surveys of wintering ducks in Mississippi, USA. We found detection of decoy groups was related to wetland cover type (open vs. forested), group size (1-100 decoys), and interaction of these variables. Observers who detected decoy groups reported counts that averaged 78% of the decoys actually present, and this counting bias was not influenced by either covariate cited above. We integrated this sightability model into estimation procedures for our sample surveys with weight adjustments derived from probabilities of group detection (estimated by logistic regression) and count bias. To estimate variances of abundance estimates, we used bootstrap resampling of transects included in aerial surveys and data from the bias-correction experiment. When we implemented bias correction procedures on data from a field survey conducted in January 2004, we found bias-corrected estimates of abundance increased 36-42%, and associated standard errors increased 38-55%, depending on species or group estimated. We deemed our method successful for integrating correction of visibility bias in an existing sample survey design for wintering ducks in Mississippi, and we believe this procedure could be implemented in a variety of sampling problems for other locations and species.
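
    A simplified sketch of the style of correction described: observed group counts are divided by a detection probability (here a placeholder logistic model in cover type and group size) and by the average counting bias, and transects are bootstrapped for a standard error. The data frame, column names, coefficients, and the exact placement of the 0.78 factor are illustrative, not the authors' estimation procedure.
```python
import numpy as np
import pandas as pd

# Hypothetical survey data: one row per detected duck group.
obs = pd.DataFrame({
    "transect": [1, 1, 2, 2, 3, 3, 4],
    "count":    [12, 40, 5, 80, 22, 9, 55],
    "forested": [0, 0, 1, 1, 0, 1, 0],
})

coef = np.array([0.5, 0.02, -1.0])   # placeholder logistic coefficients from a decoy experiment
count_bias = 0.78                    # observers reported ~78% of decoys actually present

def detection_prob(group_size, forested):
    eta = coef[0] + coef[1] * group_size + coef[2] * forested
    return 1.0 / (1.0 + np.exp(-eta))

def corrected_total(df):
    p = detection_prob(df["count"].to_numpy(), df["forested"].to_numpy())
    return np.sum(df["count"].to_numpy() / (p * count_bias))

# Bootstrap transects to approximate the standard error of the corrected abundance.
transects = obs["transect"].unique()
rng = np.random.default_rng(0)
boot = [corrected_total(pd.concat([obs[obs["transect"] == t]
                                   for t in rng.choice(transects, transects.size)]))
        for _ in range(500)]
print(f"abundance ~ {corrected_total(obs):.0f}  (bootstrap SE {np.std(boot):.0f})")
```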

  17. Respondent-Driven Sampling: An Assessment of Current Methodology.

    PubMed

    Gile, Krista J; Handcock, Mark S

    2010-08-01

    Respondent-Driven Sampling (RDS) employs a variant of a link-tracing network sampling strategy to collect data from hard-to-reach populations. By tracing the links in the underlying social network, the process exploits the social structure to expand the sample and reduce its dependence on the initial (convenience) sample. The current estimators of population averages make strong assumptions in order to treat the data as a probability sample. We evaluate three critical sensitivities of the estimators: to bias induced by the initial sample, to uncontrollable features of respondent behavior, and to the without-replacement structure of sampling. Our analysis indicates: (1) that the convenience sample of seeds can induce bias, and the number of sample waves typically used in RDS is likely insufficient for the type of nodal mixing required to obtain the reputed asymptotic unbiasedness; (2) that preferential referral behavior by respondents leads to bias; (3) that when a substantial fraction of the target population is sampled the current estimators can have substantial bias. This paper sounds a cautionary note for the users of RDS. While current RDS methodology is powerful and clever, the favorable statistical properties claimed for the current estimates are shown to be heavily dependent on often unrealistic assumptions. We recommend ways to improve the methodology.
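
    For context, one widely used RDS estimator weights each respondent by the inverse of their reported network degree (the Volz-Heckathorn estimator); the sketch below computes it for made-up data. The paper's point is precisely that such estimators rest on strong assumptions about seed selection, referral behavior, and with-replacement sampling.
```python
import numpy as np

def volz_heckathorn(y, degree):
    """Inverse-degree-weighted RDS estimate of a population proportion or mean.

    y      : outcome per respondent (e.g., 0/1 status)
    degree : self-reported network degree per respondent
    """
    y = np.asarray(y, float)
    w = 1.0 / np.asarray(degree, float)
    return np.sum(w * y) / np.sum(w)

# Hypothetical respondent-driven sample: outcomes and reported degrees.
print(volz_heckathorn(y=[1, 0, 0, 1, 1, 0, 0, 0],
                      degree=[3, 10, 25, 2, 4, 15, 30, 8]))
```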

  18. Implicit and explicit weight bias in a national sample of 4,732 medical students: the medical student CHANGES study.

    PubMed

    Phelan, Sean M; Dovidio, John F; Puhl, Rebecca M; Burgess, Diana J; Nelson, David B; Yeazel, Mark W; Hardeman, Rachel; Perry, Sylvia; van Ryn, Michelle

    2014-04-01

    To examine the magnitude of explicit and implicit weight biases compared to biases against other groups; and identify student factors predicting bias in a large national sample of medical students. A web-based survey was completed by 4,732 1st year medical students from 49 medical schools as part of a longitudinal study of medical education. The survey included a validated measure of implicit weight bias, the implicit association test, and 2 measures of explicit bias: a feeling thermometer and the anti-fat attitudes test. A majority of students exhibited implicit (74%) and explicit (67%) weight bias. Implicit weight bias scores were comparable to reported bias against racial minorities. Explicit attitudes were more negative toward obese people than toward racial minorities, gays, lesbians, and poor people. In multivariate regression models, implicit and explicit weight bias was predicted by lower BMI, male sex, and non-Black race. Either implicit or explicit bias was also predicted by age, SES, country of birth, and specialty choice. Implicit and explicit weight bias is common among 1st year medical students, and varies across student factors. Future research should assess implications of biases and test interventions to reduce their impact. Copyright © 2013 The Obesity Society.

  19. When and Why Is Religious Attendance Associated With Antigay Bias and Gay Rights Opposition? A Justification-Suppression Model Approach.

    PubMed

    Hoffarth, Mark Romeo; Hodson, Gordon; Molnar, Danielle S

    2017-04-24

    Even in relatively tolerant countries, antigay bias remains socially divisive, despite being widely viewed as violating social norms of tolerance. From a Justification-Suppression Model (JSM) framework, social norms may generally suppress antigay bias in tolerant countries, yet be "released" by religious justifications among those who resist gay rights progress. Across large, nationally representative US samples (Study 1) and international samples (Study 2, representing a total of 97 different countries), over 215,000 participants, and various indicators of antigay bias (e.g., dislike, moral condemnation, opposing gay rights), individual differences in religious attendance were uniquely associated with greater antigay bias, over and above religious fundamentalism, political ideology, and religious denomination. Moreover, in 4 of 6 multilevel models, religious attendance was associated with antigay bias in countries with greater gay rights recognition, but was unrelated to antigay bias in countries with lower gay rights recognition (Study 2). In Study 3, Google searches for a religious justification ("love the sinner hate the sin") coincided temporally with gay-rights relevant searches. In U.S. (Study 4) and Canadian (Study 5) samples, much of the association between religious attendance and antigay bias was explained by "sinner-sin" religious justification, with religious attendance not associated with antigay bias when respondents reported relatively low familiarity with this justification (Study 5). These findings suggest that social divisions on homosexuality in relatively tolerant social contexts may be in large part due to religious justifications for antigay bias (consistent with the JSM), with important implications for decreasing bias. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  20. Managing Bias in Palliative Care: Professional Hazards in Goals of Care Discussions at the End of Life.

    PubMed

    Callaghan, Katharine A; Fanning, Joseph B

    2018-02-01

    In the setting of end-of-life care, biases can interfere with patient articulation of goals and hinder provision of patient-centered care. No studies have addressed clinician bias or bias management specific to goals of care discussions at the end of life. To identify and determine the prevalence of palliative care clinician biases and bias management strategies in end-of-life goals of care discussions. A semistructured interview guide with relevant domains was developed to facilitate data collection. Participants were asked directly to identify biases and bias management strategies applicable to this setting. Two researchers developed a codebook to identify themes using a 25% transcript sample through an iterative process based on grounded theory. Inter-rater reliability, evaluated using Cohen's κ, was 0.83, indicating near-perfect agreement between coders. The data approached saturation. A purposive sample of 20 palliative care clinicians in Middle Tennessee participated in interviews. The 20 clinicians interviewed identified 16 biases and 11 bias management strategies. The most frequently mentioned bias was a bias against aggressive treatment (n = 9), described as a clinician's assumption that most interventions at the end of life are not beneficial. The most frequently mentioned bias management strategy was self-recognition of bias (n = 17), described as acknowledging that bias is present. This is the first study identifying palliative care clinicians' biases and bias management strategies in end-of-life goals of care discussions.

  1. Estimation after classification using lot quality assurance sampling: corrections for curtailed sampling with application to evaluating polio vaccination campaigns.

    PubMed

    Olives, Casey; Valadez, Joseph J; Pagano, Marcello

    2014-03-01

    To assess the bias incurred when curtailment of Lot Quality Assurance Sampling (LQAS) is ignored, to present unbiased estimators, to consider the impact of cluster sampling by simulation and to apply our method to published polio immunization data from Nigeria. We present estimators of coverage when using two kinds of curtailed LQAS strategies: semicurtailed and curtailed. We study the proposed estimators with independent and clustered data using three field-tested LQAS designs for assessing polio vaccination coverage, with samples of size 60 and decision rules of 9, 21 and 33, and compare them to biased maximum likelihood estimators. Lastly, we present estimates of polio vaccination coverage from previously published data in 20 local government authorities (LGAs) from five Nigerian states. Simulations illustrate substantial bias if one ignores the curtailed sampling design. The proposed estimators show no bias. Clustering does not affect the bias of these estimators. Across simulations, standard errors show signs of inflation as clustering increases. Neither sampling strategy nor LQAS design influences estimates of polio vaccination coverage in 20 Nigerian LGAs. When coverage is low, semicurtailed LQAS strategies considerably reduce the sample size required to make a decision. Curtailed LQAS designs further reduce the sample size when coverage is high. Results presented dispel the misconception that curtailed LQAS data are unsuitable for estimation. These findings augment the utility of LQAS as a tool for monitoring vaccination efforts by demonstrating that unbiased estimation using curtailed designs is not only possible but that these designs also reduce the sample size. © 2014 John Wiley & Sons Ltd.
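
    A small simulation along the lines described shows why the naive proportion is biased when curtailment is ignored: sampling stops as soon as the classification is determined, and the usual successes/observed estimator is then compared with the true coverage. The n = 60, d = 33 design and the stopping logic below are illustrative, not the authors' exact estimators.
```python
import numpy as np

def curtailed_lqas(p_true, n=60, d=33, rng=None):
    """One curtailed LQAS run: stop as soon as acceptance (successes >= d) or
    rejection (failures > n - d) is determined. Returns the naive coverage estimate."""
    if rng is None:
        rng = np.random.default_rng()
    successes = failures = 0
    while successes < d and failures <= n - d:
        if rng.random() < p_true:
            successes += 1
        else:
            failures += 1
    return successes / (successes + failures)

rng = np.random.default_rng(2)
for p_true in (0.5, 0.7, 0.9):
    est = [curtailed_lqas(p_true, rng=rng) for _ in range(20000)]
    print(f"true coverage {p_true:.2f} -> mean naive estimate {np.mean(est):.3f}")
```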

  2. Are extreme events (statistically) special? (Invited)

    NASA Astrophysics Data System (ADS)

    Main, I. G.; Naylor, M.; Greenhough, J.; Touati, S.; Bell, A. F.; McCloskey, J.

    2009-12-01

    We address the generic problem of testing for scale-invariance in extreme events, i.e. are the biggest events in a population simply a scaled model of those of smaller size, or are they in some way different? Are large earthquakes for example ‘characteristic’, do they ‘know’ how big they will be before the event nucleates, or is the size of the event determined only in the avalanche-like process of rupture? In either case what are the implications for estimates of time-dependent seismic hazard? One way of testing for departures from scale invariance is to examine the frequency-size statistics, commonly used as a bench mark in a number of applications in Earth and Environmental sciences. Using frequency data however introduces a number of problems in data analysis. The inevitably small number of data points for extreme events and more generally the non-Gaussian statistical properties strongly affect the validity of prior assumptions about the nature of uncertainties in the data. The simple use of traditional least squares (still common in the literature) introduces an inherent bias to the best fit result. We show first that the sampled frequency in finite real and synthetic data sets (the latter based on the Epidemic-Type Aftershock Sequence model) converge to a central limit only very slowly due to temporal correlations in the data. A specific correction for temporal correlations enables an estimate of convergence properties to be mapped non-linearly on to a Gaussian one. Uncertainties closely follow a Poisson distribution of errors across the whole range of seismic moment for typical catalogue sizes. In this sense the confidence limits are scale-invariant. A systematic sample bias effect due to counting whole numbers in a finite catalogue makes a ‘characteristic’-looking type extreme event distribution a likely outcome of an underlying scale-invariant probability distribution. This highlights the tendency of ‘eyeball’ fits to unconsciously (but wrongly in this case) assume Gaussian errors. We develop methods to correct for these effects, and show that the current best fit maximum likelihood regression model for the global frequency-moment distribution in the digital era is a power law, i.e. mega-earthquakes continue to follow the Gutenberg-Richter trend of smaller earthquakes with no (as yet) observable cut-off or characteristic extreme event. The results may also have implications for the interpretation of other time-limited geophysical time series that exhibit power-law scaling.
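
    For reference, the maximum likelihood fit mentioned for the Gutenberg-Richter frequency-magnitude law is commonly the Aki estimator; the sketch below applies it to a synthetic catalogue above a completeness magnitude. The catalogue is made up and the estimator shown is the standard textbook form, not necessarily the authors' exact regression.
```python
import numpy as np

def b_value_mle(magnitudes, m_c, dm=0.1):
    """Aki/Utsu maximum likelihood b-value for magnitudes >= completeness m_c,
    with the usual half-bin correction for magnitudes binned to width dm."""
    m = np.asarray(magnitudes, float)
    m = m[m >= m_c]
    return np.log10(np.e) / (m.mean() - (m_c - dm / 2.0))

# Synthetic Gutenberg-Richter catalogue with b = 1.0 above m_c = 2.0 (continuous magnitudes).
rng = np.random.default_rng(3)
mags = 2.0 + rng.exponential(scale=np.log10(np.e) / 1.0, size=5000)
print(f"estimated b-value: {b_value_mle(mags, m_c=2.0, dm=0.0):.2f}")
```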

  3. Testing for scale-invariance in extreme events, with application to earthquake occurrence

    NASA Astrophysics Data System (ADS)

    Main, I.; Naylor, M.; Greenhough, J.; Touati, S.; Bell, A.; McCloskey, J.

    2009-04-01

    We address the generic problem of testing for scale-invariance in extreme events, i.e. are the biggest events in a population simply a scaled model of those of smaller size, or are they in some way different? Are large earthquakes for example ‘characteristic', do they ‘know' how big they will be before the event nucleates, or is the size of the event determined only in the avalanche-like process of rupture? In either case what are the implications for estimates of time-dependent seismic hazard? One way of testing for departures from scale invariance is to examine the frequency-size statistics, commonly used as a bench mark in a number of applications in Earth and Environmental sciences. Using frequency data however introduces a number of problems in data analysis. The inevitably small number of data points for extreme events and more generally the non-Gaussian statistical properties strongly affect the validity of prior assumptions about the nature of uncertainties in the data. The simple use of traditional least squares (still common in the literature) introduces an inherent bias to the best fit result. We show first that the sampled frequency in finite real and synthetic data sets (the latter based on the Epidemic-Type Aftershock Sequence model) converge to a central limit only very slowly due to temporal correlations in the data. A specific correction for temporal correlations enables an estimate of convergence properties to be mapped non-linearly on to a Gaussian one. Uncertainties closely follow a Poisson distribution of errors across the whole range of seismic moment for typical catalogue sizes. In this sense the confidence limits are scale-invariant. A systematic sample bias effect due to counting whole numbers in a finite catalogue makes a ‘characteristic'-looking type extreme event distribution a likely outcome of an underlying scale-invariant probability distribution. This highlights the tendency of ‘eyeball' fits unconsciously (but wrongly in this case) to assume Gaussian errors. We develop methods to correct for these effects, and show that the current best fit maximum likelihood regression model for the global frequency-moment distribution in the digital era is a power law, i.e. mega-earthquakes continue to follow the Gutenberg-Richter trend of smaller earthquakes with no (as yet) observable cut-off or characteristic extreme event. The results may also have implications for the interpretation of other time-limited geophysical time series that exhibit power-law scaling.

  4. State-dependent biasing method for importance sampling in the weighted stochastic simulation algorithm.

    PubMed

    Roh, Min K; Gillespie, Dan T; Petzold, Linda R

    2010-11-07

    The weighted stochastic simulation algorithm (wSSA) was developed by Kuwahara and Mura [J. Chem. Phys. 129, 165101 (2008)] to efficiently estimate the probabilities of rare events in discrete stochastic systems. The wSSA uses importance sampling to enhance the statistical accuracy in the estimation of the probability of the rare event. The original algorithm biases the reaction selection step with a fixed importance sampling parameter. In this paper, we introduce a novel method where the biasing parameter is state-dependent. The new method features improved accuracy, efficiency, and robustness.
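
    To make the importance sampling idea concrete, the sketch below runs a weighted SSA on a simple production-degradation system and estimates the probability of reaching a high population threshold before a time horizon: reactions are selected from biased propensities while a likelihood-ratio weight keeps the estimator unbiased. The rate constants, biasing factors, and threshold are illustrative and fixed, not state-dependent as in the paper.
```python
import numpy as np

rng = np.random.default_rng(4)
k_prod, k_deg = 1.0, 0.025         # illustrative rate constants (mean population ~ 40)
x0, threshold, t_end = 40, 65, 10.0
gamma = np.array([1.3, 1 / 1.3])   # biasing factors favoring the production reaction

def weighted_ssa_run():
    x, t, w = x0, 0.0, 1.0
    while True:
        a = np.array([k_prod, k_deg * x])          # true propensities
        a0 = a.sum()
        t += rng.exponential(1.0 / a0)             # time step uses the TRUE total propensity
        if t > t_end:
            return 0.0                             # horizon reached without the rare event
        b = a * gamma
        probs = b / b.sum()
        j = rng.choice(2, p=probs)                 # reaction chosen from BIASED probabilities
        w *= (a[j] / a0) / probs[j]                # likelihood-ratio weight correction
        x += 1 if j == 0 else -1
        if x >= threshold:
            return w                               # rare event reached: contribute weight

estimates = [weighted_ssa_run() for _ in range(20000)]
print(f"P(reach {threshold} before t={t_end}) ~ {np.mean(estimates):.3e}")
```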

  5. Effect of finite particle number sampling on baryon number fluctuations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Steinheimer, Jan; Koch, Volker

    The effects of finite particle number sampling on the net baryon number cumulants, extracted from fluid dynamical simulations, are studied. The commonly used finite particle number sampling procedure introduces an additional Poissonian (or multinomial if global baryon number conservation is enforced) contribution which increases the extracted moments of the baryon number distribution. If this procedure is applied to a fluctuating fluid dynamics framework, one severely overestimates the actual cumulants. We show that the sampling of so-called test particles suppresses the additional contribution to the moments by at least one power of the number of test particles. We demonstrate this method in a numerical fluid dynamics simulation that includes the effects of spinodal decomposition due to a first-order phase transition. Furthermore, in the limit where antibaryons can be ignored, we derive analytic formulas which capture exactly the effect of particle sampling on the baryon number cumulants. These formulas may be used to test the various numerical particle sampling algorithms.

  6. Effect of finite particle number sampling on baryon number fluctuations

    DOE PAGES

    Steinheimer, Jan; Koch, Volker

    2017-09-28

    The effects of finite particle number sampling on the net baryon number cumulants, extracted from fluid dynamical simulations, are studied. The commonly used finite particle number sampling procedure introduces an additional Poissonian (or multinomial if global baryon number conservation is enforced) contribution which increases the extracted moments of the baryon number distribution. If this procedure is applied to a fluctuating fluid dynamics framework, one severely overestimates the actual cumulants. We show that the sampling of so-called test particles suppresses the additional contribution to the moments by at least one power of the number of test particles. We demonstrate this method in a numerical fluid dynamics simulation that includes the effects of spinodal decomposition due to a first-order phase transition. Furthermore, in the limit where antibaryons can be ignored, we derive analytic formulas which capture exactly the effect of particle sampling on the baryon number cumulants. These formulas may be used to test the various numerical particle sampling algorithms.
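
    The effect described can be reproduced with a few lines of Monte Carlo: an event-by-event "fluid" baryon number B is drawn from some distribution, particles are sampled as Poisson with mean N_test*B, and the reconstructed variance picks up an extra <B>/N_test on top of Var(B), which the test particles suppress. The distribution chosen for B below is arbitrary and not from the paper.
```python
import numpy as np

rng = np.random.default_rng(5)
n_events = 200_000
# Arbitrary event-by-event net baryon number (kept non-negative here for simplicity).
B = rng.gamma(shape=20.0, scale=1.0, size=n_events)       # mean 20, variance 20

print(f"fluid:          mean {B.mean():.2f}, variance {B.var():.2f}")
for n_test in (1, 10, 100):
    # Sample N_test * B particles per event, then divide the counts by N_test.
    B_sampled = rng.poisson(n_test * B) / n_test
    # Expected variance: Var(B) + <B>/N_test, so the Poisson term shrinks as 1/N_test.
    print(f"N_test = {n_test:3d}: mean {B_sampled.mean():.2f}, variance {B_sampled.var():.2f}")
```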

  7. Publication Bias in Psychology: A Diagnosis Based on the Correlation between Effect Size and Sample Size

    PubMed Central

    Kühberger, Anton; Fritz, Astrid; Scherndl, Thomas

    2014-01-01

    Background The p value obtained from a significance test provides no information about the magnitude or importance of the underlying phenomenon. Therefore, additional reporting of effect size is often recommended. Effect sizes are theoretically independent from sample size. Yet this may not hold true empirically: non-independence could indicate publication bias. Methods We investigate whether effect size is independent from sample size in psychological research. We randomly sampled 1,000 psychological articles from all areas of psychological research. We extracted p values, effect sizes, and sample sizes of all empirical papers, and calculated the correlation between effect size and sample size, and investigated the distribution of p values. Results We found a negative correlation of r = −.45 [95% CI: −.53; −.35] between effect size and sample size. In addition, we found an inordinately high number of p values just passing the boundary of significance. Additional data showed that neither implicit nor explicit power analysis could account for this pattern of findings. Conclusion The negative correlation between effect size and sample size, and the biased distribution of p values indicate pervasive publication bias in the entire field of psychology. PMID:25192357

  8. Publication bias in psychology: a diagnosis based on the correlation between effect size and sample size.

    PubMed

    Kühberger, Anton; Fritz, Astrid; Scherndl, Thomas

    2014-01-01

    The p value obtained from a significance test provides no information about the magnitude or importance of the underlying phenomenon. Therefore, additional reporting of effect size is often recommended. Effect sizes are theoretically independent from sample size. Yet this may not hold true empirically: non-independence could indicate publication bias. We investigate whether effect size is independent from sample size in psychological research. We randomly sampled 1,000 psychological articles from all areas of psychological research. We extracted p values, effect sizes, and sample sizes of all empirical papers, and calculated the correlation between effect size and sample size, and investigated the distribution of p values. We found a negative correlation of r = -.45 [95% CI: -.53; -.35] between effect size and sample size. In addition, we found an inordinately high number of p values just passing the boundary of significance. Additional data showed that neither implicit nor explicit power analysis could account for this pattern of findings. The negative correlation between effect size and sample size, and the biased distribution of p values indicate pervasive publication bias in the entire field of psychology.
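
    The diagnostic itself is straightforward to reproduce: correlate effect size with sample size across studies and inspect the p-value distribution near the significance boundary. The synthetic study-level data below are placeholders (and carry no built-in correlation), so the printed values only illustrate the mechanics, not the paper's result.
```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)
n_studies = 1000
n = rng.integers(20, 400, n_studies)        # hypothetical per-study total sample sizes
d = rng.normal(0.3, 0.2, n_studies)         # hypothetical standardized effect sizes

# A negative rank correlation between |effect size| and sample size is the warning sign.
rho, p = stats.spearmanr(np.abs(d), n)
print(f"Spearman correlation(|d|, n) = {rho:.2f} (p = {p:.3g})")

# Two-sample t-test p-values implied by each (d, n) pair (equal group sizes n/2 assumed);
# a pile-up just below 0.05 is the second symptom examined in the paper.
t = d * np.sqrt(n / 4.0)
pvals = 2.0 * stats.t.sf(np.abs(t), df=n - 2)
print(np.histogram(pvals, bins=[0, 0.01, 0.05, 0.10, 1.0])[0])
```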

  9. Comparing State SAT Scores: Problems, Biases, and Corrections.

    ERIC Educational Resources Information Center

    Gohmann, Stephen F.

    1988-01-01

    One method to correct for selection bias in comparing Scholastic Aptitude Test (SAT) scores among states is presented, which is a modification of J. J. Heckman's Selection Bias Correction (1976, 1979). Empirical results suggest that sample selection bias is present in SAT score regressions. (SLD)
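
    A compact sketch of the classic Heckman two-step correction that such modifications build on: a probit selection model (e.g., whether a student takes the SAT) yields an inverse Mills ratio, which is then added as a regressor in the outcome equation. The simulated data and variable names are illustrative only, and this is the textbook two-step procedure rather than the specific modification presented in the article.
```python
import numpy as np
import statsmodels.api as sm
from scipy.stats import norm

rng = np.random.default_rng(7)
n = 5000
z = rng.normal(size=(n, 2))                        # selection-equation covariates
x = z[:, [0]]                                      # outcome-equation covariate
u, e = rng.multivariate_normal([0, 0], [[1, 0.6], [0.6, 1]], n).T   # correlated errors

take_test = (0.3 + z @ np.array([0.8, 0.5]) + u > 0).astype(int)    # selection indicator
score = 1.0 + 2.0 * x[:, 0] + e                                     # latent outcome
observed = take_test == 1

# Step 1: probit of selection, then the inverse Mills ratio.
Zc = sm.add_constant(z)
probit = sm.Probit(take_test, Zc).fit(disp=0)
xb = Zc @ probit.params
imr = norm.pdf(xb) / norm.cdf(xb)

# Step 2: OLS on the selected sample with the inverse Mills ratio as an extra regressor.
Xc = sm.add_constant(np.column_stack([x[observed, 0], imr[observed]]))
ols = sm.OLS(score[observed], Xc).fit()
print(ols.params)      # slope on x should be close to 2.0 after correction
```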

  10. Estimating the occupancy of spotted owl habitat areas by sampling and adjusting for bias

    Treesearch

    David L. Azuma; James A. Baldwin; Barry R. Noon

    1990-01-01

    A basic sampling scheme is proposed to estimate the proportion of sampled units (Spotted Owl Habitat Areas (SOHAs) or randomly sampled 1000-acre polygon areas (RSAs)) occupied by spotted owl pairs. A bias adjustment for the possibility of missing a pair given its presence on a SOHA or RSA is suggested. The sampling scheme is based on a fixed number of visits to a...

  11. Quality of volatile organic compound data from groundwater and surface water for the National Water-Quality Assessment Program, October 1996–December 2008

    USGS Publications Warehouse

    Bender, David A.; Zogorski, John S.; Mueller, David K.; Rose, Donna L.; Martin, Jeffrey D.; Brenner, Cassandra K.

    2011-01-01

    This report describes the quality of volatile organic compound (VOC) data collected from October 1996 to December 2008 from groundwater and surface-water sites for the U.S. Geological Survey's National Water-Quality Assessment (NAWQA) Program. The VOC data described were collected for three NAWQA site types: (1) domestic and public-supply wells, (2) monitoring wells, and (3) surface-water sites. Contamination bias, based on the 90-percent upper confidence limit (UCL) for the 90th percentile of concentrations in field blanks, was determined for VOC samples from the three site types. A way to express this bias is that there is 90-percent confidence that this amount of contamination would be exceeded in no more than 10 percent of all samples (including environmental samples) that were collected, processed, shipped, and analyzed in the same manner as the blank samples. This report also describes how important native water rinsing may be in decreasing carryover contamination, which could be affecting field blanks. The VOCs can be classified into four contamination categories on the basis of the 90-percent upper confidence limit (90-percent UCL) concentration distribution in field blanks. Contamination category 1 includes compounds that were not detected in any field blanks. Contamination category 2 includes VOCs that have a 90-percent UCL concentration distribution in field blanks that is about an order of magnitude lower than the concentration distribution of the environmental samples. Contamination category 3 includes VOCs that have a 90-percent UCL concentration distribution in field blanks that is within an order of magnitude of the distribution in environmental samples. Contamination category 4 includes VOCs that have a 90-percent UCL concentration distribution in field blanks that is at least an order of magnitude larger than the concentration distribution of the environmental samples. Fifty-four of the 87 VOCs analyzed in samples from domestic and public-supply wells were not detected in field blanks (contamination category 1), and 33 VOC were detected in field blanks. Ten of the 33 VOCs had a 90-percent UCL concentration distribution in field blanks that was at least an order of magnitude lower than the concentration distribution in environmental samples (contamination category 2). These 10 VOCs may have had some contamination bias associated with the environmental samples, but the potential contamination bias was negligible in comparison to the environmental data; therefore, the field blanks were assumed to be representative of the sources of contamination bias affecting the environmental samples for these 10 VOCs. Seven VOCs had a 90-percent UCL concentration distribution of the field blanks that was within an order of magnitude of the concentration distribution of the environmental samples (contamination category 3). Sixteen VOCs had a 90-percent UCL concentration distribution in the field blanks that was at least an order of magnitude greater than the concentration distribution of the environmental samples (contamination category 4). Field blanks for these 16 VOCs appear to be nonrepresentative of the sources of contamination bias affecting the environmental samples because of the larger concentration distributions (and sometimes higher frequency of detection) in field blanks than in environmental samples. Forty-three of the 87 VOCs analyzed in samples from monitoring wells were not detected in field blanks (contamination category 1), and 44 VOCs were detected in field blanks. 
Eight of the 44 VOCs had a 90-percent UCL concentration distribution in field blanks that was at least an order of magnitude lower than concentrations in environmental samples (contamination category 2). These eight VOCs may have had some contamination bias associated with the environmental samples, but the potential contamination bias was negligible in comparison to the environmental data; therefore, the field blanks were assumed to be representative. Seven VOCs had a 90-percent UCL concentration distribution in field blanks that was of the same order of magnitude as the concentration distribution of the environmental samples (contamination category 3). Twenty-nine VOCs had a 90-percent UCL concentration distribution in the field blanks that was an order of magnitude greater than the distribution of the environmental samples (contamination category 4). Field blanks for these 29 VOCs appear to be nonrepresentative of the sources of contamination bias to the environmental samples. Fifty-four of the 87 VOCs analyzed in surface-water samples were not detected in field blanks (category 1), and 33 VOC were detected in field blanks. Sixteen of the 33 VOCs had a 90-percent UCL concentration distribution in field blanks that was at least an order of magnitude lower than the concentration distribution in environmental samples (contamination category 2). These 16 VOCs may have had some contamination bias associated with the environmental samples, but the potential contamination bias was negligible in comparison to the environmental data; therefore, the field blanks were assumed to be representative. Ten VOCs had a 90-percent UCL concentration distribution in field blanks that was similar to the concentration distribution of environmental samples (contamination category 3). Seven VOCs had a 90-percent UCL concentration distribution in the field blanks that was greater than the concentration distribution in environmental samples (contamination category 4). Field-blank samples for these seven VOCs appear to be nonrepresentative of the sources of contamination bias to the environmental samples. The relation between the detection of a compound in field blanks and the detection in subsequent environmental samples appears to be minimal. The median minimum percent effectiveness of native water rinsing is about 79 percent for the 19 VOCs detected in more than 5 percent of field blanks from all three site types. The minimum percent effectiveness of native water rinsing (10 percent) was for toluene in surface-water samples, likely because of the large detection frequency of toluene in surface-water samples (about 79 percent) and in the associated field-blank samples (46.5 percent). The VOCs that were not detected in field blanks (contamination category 1) from the three site types can be considered free of contamination bias, and various interpretations for environmental samples, such as VOC detection frequency at multiple assessment levels and comparisons of concentrations to benchmarks, are not limited for these VOCs. A censoring level for making comparisons at different assessment levels among environmental samples could be applied to concentrations of 9 VOCs in samples from domestic and public-supply wells, 16 VOCs in samples from monitoring wells, and 9 VOCs in surface-water samples to account for potential low-level contamination bias associated with these selected VOCs. 
Bracketing the potential contamination by comparing the detection and concentration statistics with no censoring applied to the potential for contamination bias on the basis of the 90-percent UCL for the 90th-percentile concentrations in field blanks may be useful when comparisons to benchmarks are done in a study.
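
    For readers who want the mechanics behind a 90-percent UCL of the 90th-percentile blank concentration, one common nonparametric (binomial order-statistic) approach is sketched below. It is an assumption that this matches the report's exact computation, and the blank concentrations used are made up.
```python
import numpy as np
from scipy.stats import binom

def upper_conf_limit_percentile(x, pct=0.90, conf=0.90):
    """Nonparametric upper confidence limit for the pct-quantile of x.

    Returns the smallest order statistic X_(k) with P(X_(k) >= q_pct) >= conf,
    using the fact that the number of observations below q_pct is Binomial(n, pct)."""
    x = np.sort(np.asarray(x, float))
    n = x.size
    k = int(binom.ppf(conf, n, pct)) + 1      # 1-based rank of the order statistic
    if k > n:
        raise ValueError("sample too small for the requested percentile/confidence")
    return x[k - 1]

# Hypothetical field-blank concentrations (ug/L), mostly non-detects reported as 0.
blanks = np.array([0, 0, 0, 0, 0, 0.01, 0, 0.02, 0, 0, 0.05, 0, 0, 0.01, 0, 0,
                   0, 0.03, 0, 0, 0, 0, 0.02, 0, 0, 0, 0, 0.08, 0, 0])
print(upper_conf_limit_percentile(blanks))
```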

  12. On utilizing alternating current-flow field effect transistor for flexibly manipulating particles in microfluidics and nanofluidics

    PubMed Central

    Liu, Weiyu; Shao, Jinyou; Ren, Yukun; Liu, Jiangwei; Tao, Ye; Jiang, Hongyuan; Ding, Yucheng

    2016-01-01

    By imposing a biased gate voltage to a center metal strip, arbitrary symmetry breaking in induced-charge electroosmotic flow occurs on the surface of this planar gate electrode, a phenomenon termed as AC-flow field effect transistor (AC-FFET). In this work, the potential of AC-FFET with a shiftable flow stagnation line to flexibly manipulate micro-nano particle samples in both a static and continuous flow condition is demonstrated via theoretical analysis and experimental validation. The effect of finite Debye length of induced double-layer and applied field frequency on the manipulating flexibility factor for static condition is investigated, which indicates AC-FFET turns out to be more effective for achieving a position-controllable concentrating of target nanoparticle samples in nanofluidics compared to the previous trial in microfluidics. Besides, a continuous microfluidics-based particle concentrator/director is developed to deal with incoming analytes in dynamic condition, which exploits a design of tandem electrode configuration to consecutively flow focus and divert incoming particle samples to a desired downstream branch channel, as prerequisite for a following biochemical analysis. Our physical demonstrations with AC-FFET prove valuable for innovative designs of flexible electrokinetic frameworks, which can be conveniently integrated with other microfluidic or nanofluidic components into a complete lab-on-chip diagnostic platform due to a simple electrode structure. PMID:27190570

  13. On utilizing alternating current-flow field effect transistor for flexibly manipulating particles in microfluidics and nanofluidics.

    PubMed

    Liu, Weiyu; Shao, Jinyou; Ren, Yukun; Liu, Jiangwei; Tao, Ye; Jiang, Hongyuan; Ding, Yucheng

    2016-05-01

    By imposing a biased gate voltage to a center metal strip, arbitrary symmetry breaking in induced-charge electroosmotic flow occurs on the surface of this planar gate electrode, a phenomenon termed as AC-flow field effect transistor (AC-FFET). In this work, the potential of AC-FFET with a shiftable flow stagnation line to flexibly manipulate micro-nano particle samples in both a static and continuous flow condition is demonstrated via theoretical analysis and experimental validation. The effect of finite Debye length of induced double-layer and applied field frequency on the manipulating flexibility factor for static condition is investigated, which indicates AC-FFET turns out to be more effective for achieving a position-controllable concentrating of target nanoparticle samples in nanofluidics compared to the previous trial in microfluidics. Besides, a continuous microfluidics-based particle concentrator/director is developed to deal with incoming analytes in dynamic condition, which exploits a design of tandem electrode configuration to consecutively flow focus and divert incoming particle samples to a desired downstream branch channel, as prerequisite for a following biochemical analysis. Our physical demonstrations with AC-FFET prove valuable for innovative designs of flexible electrokinetic frameworks, which can be conveniently integrated with other microfluidic or nanofluidic components into a complete lab-on-chip diagnostic platform due to a simple electrode structure.

  14. Variational Approach to Enhanced Sampling and Free Energy Calculations

    NASA Astrophysics Data System (ADS)

    Valsson, Omar; Parrinello, Michele

    2014-08-01

    The ability of widely used sampling methods, such as molecular dynamics or Monte Carlo simulations, to explore complex free energy landscapes is severely hampered by the presence of kinetic bottlenecks. A large number of solutions have been proposed to alleviate this problem. Many are based on the introduction of a bias potential which is a function of a small number of collective variables. However constructing such a bias is not simple. Here we introduce a functional of the bias potential and an associated variational principle. The bias that minimizes the functional relates in a simple way to the free energy surface. This variational principle can be turned into a practical, efficient, and flexible sampling method. A number of numerical examples are presented which include the determination of a three-dimensional free energy surface. We argue that, beside being numerically advantageous, our variational approach provides a convenient and novel standpoint for looking at the sampling problem.
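
    For reference, the variational functional introduced in this work takes, up to notational conventions that should be checked against the paper, the form
$$
\Omega[V] \;=\; \frac{1}{\beta}\,\log\frac{\int \mathrm{d}s\, e^{-\beta\left[F(s)+V(s)\right]}}{\int \mathrm{d}s\, e^{-\beta F(s)}} \;+\; \int \mathrm{d}s\, p(s)\,V(s),
$$
    with $\beta = 1/k_\mathrm{B}T$, $F(s)$ the free energy as a function of the collective variables $s$, and $p(s)$ a chosen target distribution; the minimizing bias then satisfies $V(s) = -F(s) - \tfrac{1}{\beta}\log p(s)$ up to an additive constant, which is the "simple relation" to the free energy surface mentioned in the abstract.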

  15. Computational Dynamics of Metal-Carbon Interface-- Key to Controllable Nanotube Growth

    DTIC Science & Technology

    2013-11-13

    functionalization. 20, 21 On a finite-size particle, e.g., of radius R, the carbon nucleus has to accommodate mean curvature ~ 1/R by incorporating pentagonal...with diameter. Its length-adjusting effect is not obvious at similar conditions. Yet as the precursor size increases, the bias energy also...enhance the effect of the force. Our mathematical abstraction may not precisely govern the behaviors of tubes that are not nucleated simultaneously or

  16. Growth of Finiteness in the Third Year of Life: Replication and Predictive Validity

    ERIC Educational Resources Information Center

    Hadley, Pamela A.; Rispoli, Matthew; Holt, Janet K.; Fitzgerald, Colleen; Bahnsen, Alison

    2014-01-01

    Purpose: The authors of this study investigated the validity of tense and agreement productivity (TAP) scoring in diverse sentence frames obtained during conversational language sampling as an alternative measure of finiteness for use with young children. Method: Longitudinal language samples were used to model TAP growth from 21 to 30 months of…

  17. Characteristics of bias-based harassment incidents reported by a national sample of U.S. adolescents.

    PubMed

    Jones, Lisa M; Mitchell, Kimberly J; Turner, Heather A; Ybarra, Michele L

    2018-06-01

    Using a national sample of youth from the U.S., this paper examines incidents of bias-based harassment by peers that include language about victims' perceived sexual orientation, race/ethnicity, religion, weight or height, or intelligence. Telephone interviews were conducted with youth who were 10-20 years old (n = 791). One in six youth (17%) reported at least one experience with bias-based harassment in the past year. Bias language was a part of over half (52%) of all harassment incidents experienced by youth. Perpetrators of bias-based harassment were similar demographically to perpetrators of non-biased harassment. However, bias-based incidents were more likely to involve multiple perpetrators, longer timeframes and multiple harassment episodes. Even controlling for these related characteristics, the use of bias language in incidents of peer harassment resulted in significantly greater odds that youth felt sad as a result of the victimization, skipped school, avoided school activities, and lost friends, compared to non-biased harassment incidents. Copyright © 2018 The Foundation for Professionals in Services for Adolescents. Published by Elsevier Ltd. All rights reserved.

  18. BIASES IN CASTNET FILTER PACK RESULTS ASSOCIATED WITH SAMPLING PROTOCOL

    EPA Science Inventory

    In the current study, single filter weekly (w) results are compared with weekly results aggregated from day and night (dn) weekly samples. Comparisons of the two sampling protocols for all major constituents (SO₄²⁻, NO₃⁻, NH₄⁺, HNO₃, and SO₂) show median bias (MB) of < 5 nmol m⁻³...

  19. A sequential sampling account of response bias and speed-accuracy tradeoffs in a conflict detection task.

    PubMed

    Vuckovic, Anita; Kwantes, Peter J; Humphreys, Michael; Neal, Andrew

    2014-03-01

    Signal Detection Theory (SDT; Green & Swets, 1966) is a popular tool for understanding decision making. However, it does not account for the time taken to make a decision, nor why response bias might change over time. Sequential sampling models provide a way of accounting for speed-accuracy trade-offs and response bias shifts. In this study, we test the validity of a sequential sampling model of conflict detection in a simulated air traffic control task by assessing whether two of its key parameters respond to experimental manipulations in a theoretically consistent way. Through experimental instructions, we manipulated participants' response bias and the relative speed or accuracy of their responses. The sequential sampling model was able to replicate the trends in the conflict responses as well as response time across all conditions. Consistent with our predictions, manipulating response bias was associated primarily with changes in the model's Criterion parameter, whereas manipulating speed-accuracy instructions was associated with changes in the Threshold parameter. The success of the model in replicating the human data suggests we can use the parameters of the model to gain an insight into the underlying response bias and speed-accuracy preferences common to dynamic decision-making tasks. © 2013 American Psychological Association
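
    A generic random-walk (sequential sampling) simulation of the kind referred to is sketched below: evidence accumulates with some drift until it hits an upper ("conflict") or lower ("no conflict") boundary, a starting-point offset plays the role of response bias, and the boundary separation controls the speed-accuracy tradeoff. The parameter names and values are illustrative, not the authors' fitted model.
```python
import numpy as np

rng = np.random.default_rng(8)

def accumulator_trial(drift, threshold=1.0, start_bias=0.0, noise=1.0, dt=0.01):
    """One random-walk trial. Returns (response, reaction_time):
    response 1 = upper boundary ('conflict'), 0 = lower boundary ('no conflict')."""
    x, t = start_bias * threshold, 0.0
    while abs(x) < threshold:
        x += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
        t += dt
    return int(x >= threshold), t

def summarize(drift, **kw):
    trials = [accumulator_trial(drift, **kw) for _ in range(2000)]
    resp = np.mean([r for r, _ in trials])
    rt = np.mean([t for _, t in trials])
    return f"P(respond 'conflict') = {resp:.2f}, mean RT = {rt:.2f}s"

print("neutral:        ", summarize(drift=0.5))
print("liberal bias:   ", summarize(drift=0.5, start_bias=0.4))   # shifted starting point
print("speed emphasis: ", summarize(drift=0.5, threshold=0.5))    # lower boundary: faster, noisier
```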

  20. Addressing small sample size bias in multiple-biomarker trials: Inclusion of biomarker-negative patients and Firth correction.

    PubMed

    Habermehl, Christina; Benner, Axel; Kopp-Schneider, Annette

    2018-03-01

    In recent years, numerous approaches for biomarker-based clinical trials have been developed. One of these developments is multiple-biomarker trials, which aim to investigate multiple biomarkers simultaneously in independent subtrials. For low-prevalence biomarkers, small sample sizes within the subtrials have to be expected, as well as many biomarker-negative patients at the screening stage. The small sample sizes may make it unfeasible to analyze the subtrials individually. This imposes the need to develop new approaches for the analysis of such trials. With an expected large group of biomarker-negative patients, it seems reasonable to explore options to benefit from including them in such trials. We consider advantages and disadvantages of the inclusion of biomarker-negative patients in a multiple-biomarker trial with a survival endpoint. We discuss design options that include biomarker-negative patients in the study and address the issue of small sample size bias in such trials. We carry out a simulation study for a design where biomarker-negative patients are kept in the study and are treated with standard of care. We compare three different analysis approaches based on the Cox model to examine if the inclusion of biomarker-negative patients can provide a benefit with respect to bias and variance of the treatment effect estimates. We apply the Firth correction to reduce the small sample size bias. The results of the simulation study suggest that for small sample situations, the Firth correction should be applied to adjust for the small sample size bias. In addition to the Firth penalty, the inclusion of biomarker-negative patients in the analysis can lead to further but small improvements in bias and standard deviation of the estimates. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
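
    Firth's correction penalizes the likelihood with the Jeffreys prior, 0.5*log|I(beta)|. To keep the example short and self-contained, the sketch below applies it to logistic regression by direct numerical maximization; the paper uses the analogous penalization for the Cox model, so this illustrates the principle rather than their exact analysis, and the nearly separable toy data are invented.
```python
import numpy as np
from scipy.optimize import minimize

def firth_logistic(X, y):
    """Firth-penalized logistic regression: maximize loglik + 0.5*log|X' W X|."""
    X = np.column_stack([np.ones(len(y)), X])          # add intercept

    def neg_penalized_loglik(beta):
        p = 1.0 / (1.0 + np.exp(-(X @ beta)))
        p = np.clip(p, 1e-12, 1 - 1e-12)
        loglik = np.sum(y * np.log(p) + (1 - y) * np.log1p(-p))
        W = p * (1.0 - p)
        _, logdet = np.linalg.slogdet((X.T * W) @ X)   # Fisher information X' W X
        return -(loglik + 0.5 * logdet)

    return minimize(neg_penalized_loglik, np.zeros(X.shape[1]), method="BFGS").x

# Small, nearly separable sample where plain ML estimates diverge but Firth stays finite.
rng = np.random.default_rng(9)
X = rng.normal(size=(25, 1))
y = (X[:, 0] + 0.3 * rng.normal(size=25) > 0).astype(float)
print(firth_logistic(X, y))
```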

  1. Precision pointing compensation for DSN antennas with optical distance measuring sensors

    NASA Technical Reports Server (NTRS)

    Scheid, R. E.

    1989-01-01

    The pointing control loops of Deep Space Network (DSN) antennas do not account for unmodeled deflections of the primary and secondary reflectors. As a result, structural distortions due to unpredictable environmental loads can result in uncompensated boresight shifts which degrade pointing accuracy. The design proposed here can provide real-time bias commands to the pointing control system to compensate for environmental effects on pointing performance. The bias commands can be computed in real time from optically measured deflections at a number of points on the primary and secondary reflectors. Computer simulations with a reduced-order finite-element model of a DSN antenna validate the concept and lead to a proposed design by which a ten-to-one reduction in pointing uncertainty can be achieved under nominal uncertainty conditions.

  2. Biased three-intensity decoy-state scheme on the measurement-device-independent quantum key distribution using heralded single-photon sources.

    PubMed

    Zhang, Chun-Hui; Zhang, Chun-Mei; Guo, Guang-Can; Wang, Qin

    2018-02-19

    At present, most measurement-device-independent quantum key distribution (MDI-QKD) implementations are based on weak coherent sources and are limited in transmission distance under realistic experimental conditions, e.g., when finite-size-key effects are considered. Hence, in this paper, we propose a new biased decoy-state scheme using heralded single-photon sources for three-intensity MDI-QKD, where we prepare the decoy pulses only in the X basis and adopt both collective constraints and joint parameter estimation techniques. Compared with former schemes using WCS or HSPS, after implementing full parameter optimization, our scheme gives a distinctly reduced quantum bit error rate in the X basis and thus shows excellent performance, especially when the data size is relatively small.

  3. Onsite Gaseous Centrifuge Enrichment Plant UF6 Cylinder Destructive Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Anheier, Norman C.; Cannon, Bret D.; Qiao, Hong

    2012-07-17

    The IAEA safeguards approach for gaseous centrifuge enrichment plants (GCEPs) includes measurements of gross, partial, and bias defects in a statistical sampling plan. These safeguard methods consist principally of mass and enrichment nondestructive assay (NDA) verification. Destructive assay (DA) samples are collected from a limited number of cylinders for high precision offsite mass spectrometer analysis. DA is typically used to quantify bias defects in the GCEP material balance. Under current safeguards measures, the operator collects a DA sample from a sample tap following homogenization. The sample is collected in a small UF6 sample bottle, then sealed and shipped under IAEA chain of custody to an offsite analytical laboratory. Current practice is expensive and resource intensive. We propose a new and novel approach for performing onsite gaseous UF6 DA analysis that provides rapid and accurate assessment of enrichment bias defects. DA samples are collected using a custom sampling device attached to a conventional sample tap. A few micrograms of gaseous UF6 is chemically adsorbed onto a sampling coupon in a matter of minutes. The collected DA sample is then analyzed onsite using Laser Ablation Absorption Ratio Spectrometry-Destructive Assay (LAARS-DA). DA results are determined in a matter of minutes at sufficient accuracy to support reliable bias defect conclusions, while greatly reducing DA sample volume, analysis time, and cost.

  4. Biases in Total Precipitable Water Vapor Climatologies from Atmospheric Infrared Sounder and Advanced Microwave Scanning Radiometer

    NASA Technical Reports Server (NTRS)

    Fetzer, Eric J.; Lambrigtsen, Bjorn H.; Eldering, Annmarie; Aumann, Hartmut H.; Chahine, Moustafa T.

    2006-01-01

    We examine differences in total precipitable water vapor (PWV) from the Atmospheric Infrared Sounder (AIRS) and the Advanced Microwave Scanning Radiometer (AMSR-E) experiments sharing the Aqua spacecraft platform. Both systems provide estimates of PWV over water surfaces. We compare AIRS and AMSR-E PWV to constrain AIRS retrieval uncertainties as functions of AIRS retrieved infrared cloud fraction. PWV differences between the two instruments vary only weakly with infrared cloud fraction up to about 70%. Maps of AIRS-AMSR-E PWV differences vary with location and season. Observational biases, when both instruments observe identical scenes, are generally less than 5%. Exceptions are in cold air outbreaks where AIRS is biased moist by 10-20% or 10-60% (depending on retrieval processing) and at high latitudes in winter where AIRS is dry by 5-10%. Sampling biases, from different sampling characteristics of AIRS and AMSR-E, vary in sign and magnitude. AIRS sampling is dry by up to 30% in most high-latitude regions but moist by 5-15% in subtropical stratus cloud belts. Over the northwest Pacific, AIRS samples conditions more moist than AMSR-E by a much as 60%. We hypothesize that both wet and dry sampling biases are due to the effects of clouds on the AIRS retrieval methodology. The sign and magnitude of these biases depend upon the types of cloud present and on the relationship between clouds and PWV. These results for PWV imply that climatologies of height-resolved water vapor from AIRS must take into consideration local meteorological processes affecting AIRS sampling.

  5. Sampling of temporal networks: Methods and biases

    NASA Astrophysics Data System (ADS)

    Rocha, Luis E. C.; Masuda, Naoki; Holme, Petter

    2017-11-01

    Temporal networks have been increasingly used to model a diversity of systems that evolve in time; for example, human contact structures over which dynamic processes such as epidemics take place. A fundamental aspect of real-life networks is that they are sampled within temporal and spatial frames. Furthermore, one might wish to subsample networks to reduce their size for better visualization or to perform computationally intensive simulations. The sampling method may affect the network structure and thus caution is necessary to generalize results based on samples. In this paper, we study four sampling strategies applied to a variety of real-life temporal networks. We quantify the biases generated by each sampling strategy on a number of relevant statistics such as link activity, temporal paths and epidemic spread. We find that some biases are common in a variety of networks and statistics, but one strategy, uniform sampling of nodes, shows improved performance in most scenarios. Given the particularities of temporal network data and the variety of network structures, we recommend that the choice of sampling methods be problem-oriented to minimize the potential biases for the specific research questions at hand. Our results help researchers to better design network data collection protocols and to understand the limitations of sampled temporal network data.
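
    The uniform node sampling strategy highlighted above is easy to sketch for a temporal network stored as a list of timestamped contacts; the edge-list layout and the retention fraction below are illustrative assumptions, not the study's actual pipeline.

      import random

      def uniform_node_sample(events, keep_fraction=0.5, seed=0):
          """Keep a uniformly random subset of nodes and retain only those
          timestamped contacts whose two endpoints were both kept."""
          rng = random.Random(seed)
          nodes = sorted({n for i, j, _ in events for n in (i, j)})
          kept = set(rng.sample(nodes, int(keep_fraction * len(nodes))))
          return [(i, j, t) for i, j, t in events if i in kept and j in kept]

      # toy contact sequence: (node, node, timestamp)
      events = [(1, 2, 0), (2, 3, 1), (1, 3, 2), (3, 4, 3), (4, 5, 4)]
      print(uniform_node_sample(events, keep_fraction=0.6))

    Link activity, temporal paths, or epidemic simulations can then be recomputed on the subsampled event list and compared against the full network to quantify the bias introduced by the scheme.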

  6. Assessment of cognitive bias in decision-making and leadership styles among critical care nurses: a mixed methods study.

    PubMed

    Lean Keng, Soon; AlQudah, Hani Nawaf Ibrahim

    2017-02-01

    The aim was to raise awareness of critical care nurses' cognitive bias in decision-making, its relationship with leadership styles, and its impact on care delivery. The relationship between critical care nurses' decision-making and leadership styles in hospitals has been widely studied, but the influence of cognitive bias on decision-making and leadership styles in critical care environments remains poorly understood, particularly in Jordan. The study used a two-phase mixed methods sequential explanatory design with grounded theory, set in the critical care unit of Prince Hamza Hospital, Jordan. Participant sampling was by convenience in Phase 1 (quantitative, n = 96) and purposive in Phase 2 (qualitative, n = 20). A pilot-tested quantitative survey of 96 critical care nurses was conducted in 2012, followed by qualitative in-depth interviews, informed by the quantitative results, with 20 critical care nurses in 2013. Quantitative data were analysed with descriptive statistics and simple linear regression; qualitative data were analysed thematically (constant comparison). Quantitative findings: correlations were found between rationality and cognitive bias, rationality and task-oriented leadership styles, cognitive bias and democratic communication styles, and cognitive bias and task-oriented leadership styles. Qualitative findings: 'being competent', 'organizational structures', 'feeling self-confident' and 'being supported' in the work environment were identified as key factors influencing critical care nurses' cognitive bias in decision-making and leadership styles, with a two-way impact (strengthening and weakening) of cognitive bias in decision-making and leadership styles on critical care nurses' practice performance. There is a need to heighten critical care nurses' consciousness of cognitive bias in decision-making and leadership styles and its impact, and to develop organization-level strategies to increase non-biased decision-making. © 2016 John Wiley & Sons Ltd.

  7. Randomized controlled trial of attention bias modification in a racially diverse, socially anxious, alcohol dependent sample.

    PubMed

    Clerkin, Elise M; Magee, Joshua C; Wells, Tony T; Beard, Courtney; Barnett, Nancy P

    2016-12-01

    Attention biases may be an important treatment target for both alcohol dependence and social anxiety. This is the first ABM trial to investigate two (vs. one) targets of attention bias within a sample with co-occurring symptoms of social anxiety and alcohol dependence. Additionally, we used trial-level bias scores (TL-BS) to capture the phenomenon of attention bias in a more ecologically valid, dynamic way compared to traditional attention bias scores. Adult participants (N = 86; 41% Female; 52% African American; 40% White) with elevated social anxiety symptoms and alcohol dependence were randomly assigned to an 8-session training condition in this 2 (Social Anxiety ABM vs. Social Anxiety Control) by 2 (Alcohol ABM vs. Alcohol Control) design. Symptoms of social anxiety, alcohol dependence, and attention bias were assessed across time. Multilevel models estimated the trajectories for each measure within individuals, and tested whether these trajectories differed according to the randomized training conditions. Across time, there were significant or trending decreases in all attention TL-BS parameters (but not traditional attention bias scores) and most symptom measures. However, there were no significant differences in the trajectories of change between any ABM and control conditions for any symptom measures. These findings add to previous evidence questioning the robustness of ABM and point to the need to extend the effects of ABM to samples that are racially diverse and/or have co-occurring psychopathology. The results also illustrate the potential importance of calculating trial-level attention bias scores rather than only including traditional bias scores. Copyright © 2016 Elsevier Ltd. All rights reserved.
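
    Trial-level bias scores are typically built by pairing each incongruent dot-probe trial with a nearby congruent trial and taking the reaction-time difference; the sketch below follows that general idea with a nearest-in-order pairing rule and a few summary statistics, all of which are simplifying assumptions rather than the authors' scoring code.

      import numpy as np

      def trial_level_bias_scores(congruent, rt):
          """Pair each incongruent trial with the nearest congruent trial and
          return incongruent-minus-congruent RT differences (ms); positive
          values suggest attention toward threat on that trial pair."""
          congruent = np.asarray(congruent, dtype=bool)
          rt = np.asarray(rt, dtype=float)
          order = np.arange(len(rt))
          cong_idx = order[congruent]
          scores = []
          for i in order[~congruent]:
              j = cong_idx[np.argmin(np.abs(cong_idx - i))]
              scores.append(rt[i] - rt[j])
          return np.array(scores)

      # toy data: congruency flag (True = probe replaces threat cue) and RTs in ms
      tlbs = trial_level_bias_scores(
          congruent=[True, False, True, False, True, False, True, False],
          rt=[520, 560, 500, 480, 530, 555, 510, 498])
      print("mean toward:", tlbs[tlbs > 0].mean(),
            "mean away:", tlbs[tlbs < 0].mean(),
            "variability:", np.abs(np.diff(tlbs)).mean())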

  8. Randomized Controlled Trial of Attention Bias Modification in a Racially Diverse, Socially Anxious, Alcohol Dependent Sample

    PubMed Central

    Clerkin, Elise M.; Magee, Joshua C.; Wells, Tony T.; Beard, Courtney; Barnett, Nancy P.

    2016-01-01

    Objective: Attention biases may be an important treatment target for both alcohol dependence and social anxiety. This is the first ABM trial to investigate two (vs. one) targets of attention bias within a sample with co-occurring symptoms of social anxiety and alcohol dependence. Additionally, we used trial-level bias scores (TL-BS) to capture the phenomenon of attention bias in a more ecologically valid, dynamic way compared to traditional attention bias scores. Method: Adult participants (N=86; 41% Female; 52% African American; 40% White) with elevated social anxiety symptoms and alcohol dependence were randomly assigned to an 8-session training condition in this 2 (Social Anxiety ABM vs. Social Anxiety Control) by 2 (Alcohol ABM vs. Alcohol Control) design. Symptoms of social anxiety, alcohol dependence, and attention bias were assessed across time. Results: Multilevel models estimated the trajectories for each measure within individuals, and tested whether these trajectories differed according to the randomized training conditions. Across time, there were significant or trending decreases in all attention TL-BS parameters (but not traditional attention bias scores) and most symptom measures. However, there were no significant differences in the trajectories of change between any ABM and control conditions for any symptom measures. Conclusions: These findings add to previous evidence questioning the robustness of ABM and point to the need to extend the effects of ABM to samples that are racially diverse and/or have co-occurring psychopathology. The results also illustrate the potential importance of calculating trial-level attention bias scores rather than only including traditional bias scores. PMID:27591918

  9. A “Scientific Diversity” Intervention to Reduce Gender Bias in a Sample of Life Scientists

    PubMed Central

    Moss-Racusin, Corinne A.; van der Toorn, Jojanneke; Dovidio, John F.; Brescoll, Victoria L.; Graham, Mark J.; Handelsman, Jo

    2016-01-01

    Mounting experimental evidence suggests that subtle gender biases favoring men contribute to the underrepresentation of women in science, technology, engineering, and mathematics (STEM), including many subfields of the life sciences. However, there are relatively few evaluations of diversity interventions designed to reduce gender biases within the STEM community. Because gender biases distort the meritocratic evaluation and advancement of students, interventions targeting instructors’ biases are particularly needed. We evaluated one such intervention, a workshop called “Scientific Diversity” that was consistent with an established framework guiding the development of diversity interventions designed to reduce biases and was administered to a sample of life science instructors (N = 126) at several sessions of the National Academies Summer Institute for Undergraduate Education held nationwide. Evidence emerged indicating the efficacy of the “Scientific Diversity” workshop, such that participants were more aware of gender bias, expressed less gender bias, and were more willing to engage in actions to reduce gender bias 2 weeks after participating in the intervention compared with 2 weeks before the intervention. Implications for diversity interventions aimed at reducing gender bias and broadening the participation of women in the life sciences are discussed. PMID:27496360

  10. Forest inventory and stratified estimation: a cautionary note

    Treesearch

    John Coulston

    2008-01-01

    The Forest Inventory and Analysis (FIA) Program uses stratified estimation techniques to produce estimates of forest attributes. Stratification must be unbiased and stratification procedures should be examined to identify any potential bias. This note explains simple techniques for identifying potential bias, discriminating between sample bias and stratification bias,...

  11. Finite-key analysis for quantum key distribution with weak coherent pulses based on Bernoulli sampling

    NASA Astrophysics Data System (ADS)

    Kawakami, Shun; Sasaki, Toshihiko; Koashi, Masato

    2017-07-01

    An essential step in quantum key distribution is the estimation of parameters related to the leaked amount of information, which is usually done by sampling of the communication data. When the data size is finite, the final key rate depends on how the estimation process handles statistical fluctuations. Many of the present security analyses are based on the method with simple random sampling, where hypergeometric distribution or its known bounds are used for the estimation. Here we propose a concise method based on Bernoulli sampling, which is related to binomial distribution. Our method is suitable for the Bennett-Brassard 1984 (BB84) protocol with weak coherent pulses [C. H. Bennett and G. Brassard, Proceedings of the IEEE Conference on Computers, Systems and Signal Processing (IEEE, New York, 1984), Vol. 175], reducing the number of estimated parameters to achieve a higher key generation rate compared to the method with simple random sampling. We also apply the method to prove the security of the differential-quadrature-phase-shift (DQPS) protocol in the finite-key regime. The result indicates that the advantage of the DQPS protocol over the phase-encoding BB84 protocol in terms of the key rate, which was previously confirmed in the asymptotic regime, persists in the finite-key regime.
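
    The practical difference between simple random sampling (hypergeometric statistics) and Bernoulli sampling (binomial statistics) can be illustrated numerically; the toy counts below and the use of plain tail probabilities are illustrative assumptions and not the finite-key security proof itself.

      from scipy.stats import binom, hypergeom

      # Toy setting: N transmitted signals, K of which carry (unknown) errors.
      # Simple random sampling tests a fixed-size subset of n signals, so the
      # number of observed errors is hypergeometric; Bernoulli sampling tests
      # each signal independently with probability p, so it is binomial.
      N, K, n, p = 10_000, 300, 2_000, 0.2
      k = 75  # errors observed in the test set

      print("simple random sampling  P(X <= k) =", hypergeom.cdf(k, N, K, n))
      print("Bernoulli sampling      P(X <= k) =", binom.cdf(k, K, p))

    Both sampling models have the same mean number of observed errors here (K*n/N = K*p); the simpler binomial form is part of what makes the Bernoulli-sampling estimation described above more concise.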

  12. Bias Corrections for Regional Estimates of the Time-averaged Geomagnetic Field

    NASA Astrophysics Data System (ADS)

    Constable, C.; Johnson, C. L.

    2009-05-01

    We assess two sources of bias in the time-averaged geomagnetic field (TAF) and paleosecular variation (PSV): inadequate temporal sampling, and the use of unit vectors in deriving temporal averages of the regional geomagnetic field. To address the first question, temporal sampling, we use statistical resampling of existing data sets to minimize and correct for bias arising from uneven temporal sampling in studies of the time-averaged geomagnetic field (TAF) and its paleosecular variation (PSV). The techniques are illustrated using data derived from Hawaiian lava flows spanning 0-5 Ma: directional observations are an updated version of a previously published compilation of paleomagnetic directional data centered on ±20° latitude by Lawrence et al. (2006); intensity data are drawn from Tauxe & Yamazaki (2007). We conclude that poor temporal sampling can produce biased estimates of TAF and PSV, and resampling to an appropriate statistical distribution of ages reduces this bias. We suggest that similar resampling should be attempted as a bias correction for all regional paleomagnetic data to be used in TAF and PSV modeling. The second potential source of bias is the use of directional data in place of full vector data to estimate the average field. This is investigated for the full vector subset of the updated Hawaiian data set. Lawrence, K.P., C.G. Constable, and C.L. Johnson, 2006, Geochem. Geophys. Geosyst., 7, Q07007, DOI 10.1029/2005GC001181. Tauxe, L., & Yamazaki, 2007, Treatise on Geophysics, 5, Geomagnetism, Elsevier, Amsterdam, Chapter 13, p. 509.
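
    The temporal resampling idea can be sketched as a weighted bootstrap that draws flows so that the resampled age histogram approaches a chosen target distribution; the uniform-in-age target, bin count, and toy data below are assumptions made only for illustration.

      import numpy as np

      def resample_to_uniform_ages(ages, values, n_draws=1000, n_bins=10, seed=0):
          """Bootstrap site-level values with weights inversely proportional to
          how densely their age bin is sampled, so the resampled ages are
          roughly uniform over the sampled age range."""
          rng = np.random.default_rng(seed)
          ages = np.asarray(ages, float)
          values = np.asarray(values, float)
          counts, edges = np.histogram(ages, bins=n_bins)
          bin_of = np.clip(np.digitize(ages, edges[1:-1]), 0, n_bins - 1)
          weights = 1.0 / counts[bin_of]
          weights /= weights.sum()
          idx = rng.choice(len(ages), size=n_draws, replace=True, p=weights)
          return values[idx].mean(), values[idx].std(ddof=1)

      # toy example: inclination data clustered at young ages
      rng = np.random.default_rng(1)
      ages = np.r_[rng.uniform(0, 1, 80), rng.uniform(1, 5, 20)]   # Ma
      incl = 35 + 5 * rng.standard_normal(100)                     # degrees
      print(resample_to_uniform_ages(ages, incl))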

  13. Influence of growth conditions on exchange bias of NiMn-based spin valves

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wienecke, Anja; Kruppe, Rahel; Rissing, Lutz

    2015-05-07

    As shown in previous investigations, a correlation between a NiMn-based spin valve's thermal stability and its inherent exchange bias exists, even if the blocking temperature of the antiferromagnet is clearly above the heating temperature and the reason for thermal degradation is mainly diffusion and not the loss of exchange bias. Samples with high exchange bias are thermally more stable than samples with low exchange bias. Those structures promoting a high exchange bias are seemingly the same suppressing thermally induced diffusion processes (A. Wienecke and L. Rissing, “Relationship between thermal stability and layer-stack/structure of NiMn-based GMR systems,” in IEEE Transaction onmore » Magnetic Conference (EMSA 2014)). Many investigations were carried out on the influence of the sputtering parameters as well as the layer thickness on the magnetoresistive effect. The influence of these parameters on the exchange bias and the sample's thermal stability, respectively, was hardly taken into account. The investigation described here concentrates on the last named issue. The focus lies on the influence of the sputtering parameters and layer thickness of the “starting layers” in the stack and the layers forming the (synthetic) antiferromagnet. This paper includes a guideline for the evaluated sputtering conditions and layer thicknesses to realize a high exchange bias and presumably good thermal stability for NiMn-based spin valves with a synthetic antiferromagnet.« less

  14. Methodological approaches in analysing observational data: A practical example on how to address clustering and selection bias.

    PubMed

    Trutschel, Diana; Palm, Rebecca; Holle, Bernhard; Simon, Michael

    2017-11-01

    Because not every scientific question on effectiveness can be answered with randomised controlled trials, research methods that minimise bias in observational studies are required. Two major concerns influence the internal validity of effect estimates: selection bias and clustering. Hence, to reduce the bias of the effect estimates, more sophisticated statistical methods are needed. The aim is to introduce statistical approaches such as propensity score matching and mixed models into a representative real-world analysis; the implementation in the statistical software R is presented so that the results can be reproduced. We perform a two-level analytic strategy to address the problems of bias and clustering: (i) generalised models with different abilities to adjust for dependencies are used to analyse binary data and (ii) the genetic matching and covariate adjustment methods are used to adjust for selection bias. Hence, we analyse the data from two population samples, the sample produced by the matching method and the full sample. The different analysis methods in this article present different results but still point in the same direction. In our example, the estimate of the probability of receiving a case conference is higher in the treatment group than in the control group. Both strategies, genetic matching and covariate adjustment, have their limitations but complement each other to provide the whole picture. The statistical approaches were feasible for reducing bias but were nevertheless limited by the sample used. For each study and obtained sample, the pros and cons of the different methods have to be weighed. Copyright © 2017 The Author(s). Published by Elsevier Ltd. All rights reserved.
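
    The paper's workflow is implemented in R (genetic matching plus generalised models for clustered binary data); the Python sketch below is only a rough analogue under simplifying assumptions: logistic-regression propensity scores, greedy nearest-neighbour matching with replacement instead of genetic matching, a GEE with exchangeable working correlation standing in for the mixed model, simulated data, and hypothetical column names.

      import numpy as np
      import pandas as pd
      import statsmodels.api as sm
      import statsmodels.formula.api as smf
      from sklearn.linear_model import LogisticRegression
      from sklearn.neighbors import NearestNeighbors

      # Simulated data with hypothetical columns: a binary treatment, a binary
      # outcome, two confounders, and a facility identifier inducing clustering.
      rng = np.random.default_rng(1)
      n = 600
      df = pd.DataFrame({"age": rng.normal(80, 7, n),
                         "care_level": rng.integers(1, 4, n),
                         "facility": rng.integers(0, 30, n)})
      p_treat = 1 / (1 + np.exp(-(0.03 * (df.age - 80) + 0.3 * df.care_level - 0.5)))
      df["treated"] = rng.binomial(1, p_treat)
      p_out = 1 / (1 + np.exp(-(0.5 * df.treated + 0.02 * (df.age - 80) - 1.0)))
      df["outcome"] = rng.binomial(1, p_out)

      # (i) Propensity scores and greedy 1:1 nearest-neighbour matching on the score.
      ps = LogisticRegression(max_iter=1000).fit(df[["age", "care_level"]], df.treated)
      df["pscore"] = ps.predict_proba(df[["age", "care_level"]])[:, 1]
      treated, control = df[df.treated == 1], df[df.treated == 0]
      _, idx = NearestNeighbors(n_neighbors=1).fit(control[["pscore"]]).kneighbors(treated[["pscore"]])
      matched = pd.concat([treated, control.iloc[idx.ravel()]])

      # (ii) Outcome model that acknowledges clustering by facility.
      fit = smf.gee("outcome ~ treated + age + care_level", groups="facility",
                    data=matched, family=sm.families.Binomial(),
                    cov_struct=sm.cov_struct.Exchangeable()).fit()
      print(fit.summary())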

  15. Physical Validation of TRMM TMI and PR Monthly Rain Products Over Oklahoma

    NASA Technical Reports Server (NTRS)

    Fisher, Brad L.

    2004-01-01

    The Tropical Rainfall Measuring Mission (TRMM) provides monthly rainfall estimates using data collected by the TRMM satellite. These estimates cover a substantial fraction of the earth's surface. The physical validation of TRMM estimates involves corroborating the accuracy of spaceborne estimates of areal rainfall by inferring errors and biases from ground-based rain estimates. The TRMM error budget consists of two major sources of error: retrieval and sampling. Sampling errors are intrinsic to the process of estimating monthly rainfall and occur because the satellite extrapolates monthly rainfall from a small subset of measurements collected only during satellite overpasses. Retrieval errors, on the other hand, are related to the process of collecting measurements while the satellite is overhead. One of the big challenges confronting the TRMM validation effort is how to best estimate these two main components of the TRMM error budget, which are not easily decoupled. This four-year study computed bulk sampling and retrieval errors for the TRMM microwave imager (TMI) and the precipitation radar (PR) by applying a technique that sub-samples gauge data at TRMM overpass times. Gridded monthly rain estimates are then computed from the monthly bulk statistics of the collected samples, providing a sensor-dependent gauge rain estimate that is assumed to include a TRMM equivalent sampling error. The sub-sampled gauge rain estimates are then used in conjunction with the monthly satellite and gauge (without sub-sampling) estimates to decouple retrieval and sampling errors. The computed mean sampling errors for the TMI and PR were 5.9% and 7.7%, respectively, in good agreement with theoretical predictions. The PR year-to-year retrieval biases exceeded corresponding TMI biases, but it was found that these differences were partially due to negative TMI biases during cold months and positive TMI biases during warm months.
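
    The gauge sub-sampling technique reduces to a simple decomposition once monthly accumulations are in hand: sampling error is the gap between the full gauge accumulation and the gauge accumulation rebuilt from overpass times only, and retrieval error is the gap between the satellite estimate and that sub-sampled gauge estimate. The hourly arrays, overpass schedule, and percentage definitions below are illustrative assumptions.

      import numpy as np

      def decompose_errors(gauge_rate, overpass_idx, satellite_monthly, hours=720):
          """Split the satellite-gauge discrepancy into sampling and retrieval parts.
          gauge_rate: hourly gauge rain rates for one month (mm/h)
          overpass_idx: indices of hours with a satellite overpass
          satellite_monthly: satellite monthly rainfall estimate (mm)"""
          gauge_full = gauge_rate.sum()                          # full gauge total (mm)
          gauge_sub = gauge_rate[overpass_idx].mean() * hours    # gauge seen only at overpasses
          sampling_pct = 100 * (gauge_sub - gauge_full) / gauge_full
          retrieval_pct = 100 * (satellite_monthly - gauge_sub) / gauge_sub
          return sampling_pct, retrieval_pct

      rng = np.random.default_rng(7)
      rates = rng.exponential(1.0, 720) * (rng.random(720) < 0.1)  # intermittent rain (mm/h)
      overpasses = np.arange(0, 720, 16)                           # roughly 1-2 overpasses per day
      print(decompose_errors(rates, overpasses, satellite_monthly=0.95 * rates.sum()))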

  16. The second Southern African Bird Atlas Project: Causes and consequences of geographical sampling bias.

    PubMed

    Hugo, Sanet; Altwegg, Res

    2017-09-01

    Using the Southern African Bird Atlas Project (SABAP2) as a case study, we examine the possible determinants of spatial bias in volunteer sampling effort and how well such biased data represent environmental gradients across the area covered by the atlas. For each province in South Africa, we used generalized linear mixed models to determine the combination of variables that explain spatial variation in sampling effort (number of visits per 5' × 5' grid cell, or "pentad"). The explanatory variables were distance to major road and exceptional birding locations or "sampling hubs," percentage cover of protected, urban, and cultivated area, and the climate variables mean annual precipitation, winter temperatures, and summer temperatures. Further, we used the climate variables and plant biomes to define subsets of pentads representing environmental zones across South Africa, Lesotho, and Swaziland. For each environmental zone, we quantified sampling intensity, and we assessed sampling completeness with species accumulation curves fitted to the asymptotic Lomolino model. Sampling effort was highest close to sampling hubs, major roads, urban areas, and protected areas. Cultivated area and the climate variables were less important. Further, environmental zones were not evenly represented by current data, and the zones varied in the amount of sampling required to represent the species that are present. SABAP2 volunteers' preferences in birding locations cause spatial bias in the dataset that should be taken into account when analyzing these data. Large parts of South Africa remain underrepresented, which may restrict the kind of ecological questions that may be addressed. However, sampling bias may be improved by directing volunteers toward undersampled regions while taking into account volunteer preferences.
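
    Sampling completeness per zone can be gauged by fitting an asymptotic model to the species accumulation curve. The sketch below uses one common parameterization of the Lomolino model, S(n) = Asym / (1 + slope^log(xmid / n)), with synthetic accumulation data; both the functional form chosen and the data are assumptions for illustration, not the SABAP2 analysis.

      import numpy as np
      from scipy.optimize import curve_fit

      def lomolino(n, asym, xmid, slope):
          """One common form of the asymptotic Lomolino accumulation model."""
          return asym / (1.0 + slope ** np.log(xmid / n))

      # synthetic accumulation data: species recorded after n checklists in a zone
      rng = np.random.default_rng(3)
      n_lists = np.arange(1.0, 201.0)
      richness = lomolino(n_lists, 280, 25, 3) + rng.normal(0, 3, n_lists.size)

      params, _ = curve_fit(lomolino, n_lists, richness, p0=[300, 30, 2],
                            bounds=([1, 0.1, 1.01], [1e4, 1e3, 100]))
      asym, xmid, slope = params
      print(f"estimated asymptotic richness: {asym:.0f} species")
      print(f"estimated completeness after 200 lists: {lomolino(200, *params) / asym:.0%}")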

  17. Running Performance, VO2max, and Running Economy: The Widespread Issue of Endogenous Selection Bias.

    PubMed

    Borgen, Nicolai T

    2018-05-01

    Studies in sport and exercise medicine routinely use samples of highly trained individuals in order to understand what characterizes elite endurance performance, such as running economy and maximal oxygen uptake (VO2max). However, it is not well understood in the literature that using such samples most certainly leads to biased findings and accordingly potentially erroneous conclusions because of endogenous selection bias. In this paper, I review the current literature on running economy and VO2max, and discuss the literature in light of endogenous selection bias. I demonstrate that the results in a large part of the literature may be misleading, and provide some practical suggestions as to how future studies may alleviate endogenous selection bias.
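
    The core argument is a collider problem and can be demonstrated with a tiny simulation: if VO2max and running economy contribute independently to performance, then restricting the sample to elite performers induces a spurious negative correlation between them. The effect sizes and the selection cutoff below are arbitrary illustrative choices.

      import numpy as np

      rng = np.random.default_rng(42)
      n = 100_000
      vo2max = rng.normal(0, 1, n)       # standardized VO2max
      economy = rng.normal(0, 1, n)      # standardized running economy, independent of VO2max
      performance = vo2max + economy + rng.normal(0, 0.5, n)

      # selecting only highly trained runners conditions on the collider "performance"
      elite = performance > np.quantile(performance, 0.99)

      print("full population r(VO2max, economy): %.2f" % np.corrcoef(vo2max, economy)[0, 1])
      print("elite sample    r(VO2max, economy): %.2f" % np.corrcoef(vo2max[elite], economy[elite])[0, 1])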

  18. Adaptive enhanced sampling with a path-variable for the simulation of protein folding and aggregation

    NASA Astrophysics Data System (ADS)

    Peter, Emanuel K.

    2017-12-01

    In this article, we present a novel adaptive enhanced sampling molecular dynamics (MD) method for the accelerated simulation of protein folding and aggregation. We introduce a path-variable L based on the un-biased momenta p and displacements dq for the definition of the bias s applied to the system and derive 3 algorithms: general adaptive bias MD, adaptive path-sampling, and a hybrid method which combines the first 2 methodologies. Through the analysis of the correlations between the bias and the un-biased gradient in the system, we find that the hybrid methodology leads to an improved force correlation and acceleration in the sampling of the phase space. We apply our method on SPC/E water, where we find a conservation of the average water structure. We then use our method to sample dialanine and the folding of TrpCage, where we find a good agreement with simulation data reported in the literature. Finally, we apply our methodologies on the initial stages of aggregation of a hexamer of Alzheimer's amyloid β fragment 25-35 (Aβ 25-35) and find that transitions within the hexameric aggregate are dominated by entropic barriers, while we speculate that especially the conformation entropy plays a major role in the formation of the fibril as a rate limiting factor.

  19. Adaptive enhanced sampling with a path-variable for the simulation of protein folding and aggregation.

    PubMed

    Peter, Emanuel K

    2017-12-07

    In this article, we present a novel adaptive enhanced sampling molecular dynamics (MD) method for the accelerated simulation of protein folding and aggregation. We introduce a path-variable L based on the un-biased momenta p and displacements dq for the definition of the bias s applied to the system and derive 3 algorithms: general adaptive bias MD, adaptive path-sampling, and a hybrid method which combines the first 2 methodologies. Through the analysis of the correlations between the bias and the un-biased gradient in the system, we find that the hybrid methodology leads to an improved force correlation and acceleration in the sampling of the phase space. We apply our method on SPC/E water, where we find a conservation of the average water structure. We then use our method to sample dialanine and the folding of TrpCage, where we find a good agreement with simulation data reported in the literature. Finally, we apply our methodologies on the initial stages of aggregation of a hexamer of Alzheimer's amyloid β fragment 25-35 (Aβ 25-35) and find that transitions within the hexameric aggregate are dominated by entropic barriers, while we speculate that especially the conformation entropy plays a major role in the formation of the fibril as a rate limiting factor.

  20. Modification of cognitive biases related to posttraumatic stress: A systematic review and research agenda.

    PubMed

    Woud, Marcella L; Verwoerd, Johan; Krans, Julie

    2017-06-01

    Cognitive models of Posttraumatic Stress Disorder (PTSD) postulate that cognitive biases in attention, interpretation, and memory represent key factors involved in the onset and maintenance of PTSD. Developments in experimental research demonstrate that it may be possible to manipulate such biases by means of Cognitive Bias Modification (CBM). In the present paper, we summarize studies assessing cognitive biases in posttraumatic stress to serve as a theoretical and methodological background. However, our main aim was to provide an overview of the scientific literature on CBM in (analogue) posttraumatic stress. Results of our systematic literature review showed that most CBM studies targeted attentional and interpretation biases (attention: five studies; interpretation: three studies), and one study modified memory biases. Overall, results showed that CBM can indeed modify cognitive biases and affect (analog) trauma symptoms in a training-congruent manner. Interpretation bias procedures seemed effective in analog samples, and memory bias training showed preliminary success in a clinical PTSD sample. Studies of attention bias modification provided more mixed results. This heterogeneous picture may be explained by differences in the type of population or variations in the CBM procedure. Therefore, we sketched a detailed research agenda targeting the challenges for CBM in posttraumatic stress. Copyright © 2017 Elsevier Ltd. All rights reserved.

  1. Optimal weighting in fNL constraints from large scale structure in an idealised case

    NASA Astrophysics Data System (ADS)

    Slosar, Anže

    2009-03-01

    We consider the problem of optimal weighting of tracers of structure for the purpose of constraining the non-Gaussianity parameter fNL. We work within the Fisher matrix formalism expanded around a fiducial model with fNL = 0 and make several simplifying assumptions. By slicing a general sample into infinitely many samples with different biases, we derive the analytic expression for the relevant Fisher matrix element. We next consider weighting schemes that construct two effective samples from a single sample of tracers with a continuously varying bias. We show that a particularly simple ansatz for weighting functions can recover all information about fNL in the initial sample that is recoverable using a given bias observable, and that simple division into two equal samples is considerably suboptimal when sampling of modes is good, but only marginally suboptimal in the limit where Poisson errors dominate.

  2. Using Data-Dependent Priors to Mitigate Small Sample Bias in Latent Growth Models: A Discussion and Illustration Using Mplus

    ERIC Educational Resources Information Center

    McNeish, Daniel M.

    2016-01-01

    Mixed-effects models (MEMs) and latent growth models (LGMs) are often considered interchangeable save the discipline-specific nomenclature. Software implementations of these models, however, are not interchangeable, particularly with small sample sizes. Restricted maximum likelihood estimation that mitigates small sample bias in MEMs has not been…

  3. Psychophysics of Remembering: To Bias or Not to Bias?

    ERIC Educational Resources Information Center

    White, K. Geoffrey; Wixted, John T.

    2010-01-01

    Delayed matching to sample is typically a two-alternative forced-choice procedure with two sample stimuli. In this task the effects of varying the probability of reinforcers for correct choices and the resulting receiver operating characteristic are symmetrical. A version of the task where a sample is present on some trials and absent on others is…

  4. State-Space Modeling of Dynamic Psychological Processes via the Kalman Smoother Algorithm: Rationale, Finite Sample Properties, and Applications

    ERIC Educational Resources Information Center

    Song, Hairong; Ferrer, Emilio

    2009-01-01

    This article presents a state-space modeling (SSM) technique for fitting process factor analysis models directly to raw data. The Kalman smoother, in combination with the expectation-maximization algorithm, is used to obtain maximum likelihood parameter estimates. To examine the finite sample properties of the estimates in SSM when common factors are involved, a…

  5. Source Biases in Magnetotelluric Transfer Functions due to Pc3/Pc4 (~10-100 s) Geomagnetic Activity at Mid-Latitudes

    NASA Astrophysics Data System (ADS)

    Murphy, B. S.; Egbert, G. D.

    2017-12-01

    Discussion of possible bias in magnetotelluric (MT) transfer functions due to the finite spatial scale of external source fields has largely focused on long periods (>1000 s), where skin depths are large, and high latitudes (>60° N), where sources are dominated by narrow electrojets. However, a significant fraction (~15%) of the ~1000 EarthScope USArray apparent resistivity and phase curves exhibit nonphysical "humps" over a narrow period range (typically between 25-60 s) that are suggestive of narrow-band source effects. Maps of locations in the US where these biases are seen support this conclusion: they mostly occur in places where the Earth is highly resistive, such as cratonic regions, where skin depths are largest and hence where susceptibility to bias from short-wavelength sources would be greatest. We have analyzed EarthScope MT time series using cross-phase techniques developed in the space physics community to measure the period of local field line resonances associated with geomagnetic pulsations (Pc's). In most cases the biases occur near the periods of field line resonance determined from this analysis, suggesting that at mid-latitude (~30°-50° N) Pc's can bias the time-averaged MT transfer functions. Because Pc's have short meridional wavelengths (hundreds of km), even at these relatively short periods the plane-wave assumption of the MT technique may be violated, at least in resistive domains with large skin depths. It is unclear if these biases (generally small) are problematic for MT data inversion, but their presence in the transfer functions is already a useful zeroth-order indicator of resistive regions of the Earth.

  6. Sampling for area estimation: A comparison of full-frame sampling with the sample segment approach

    NASA Technical Reports Server (NTRS)

    Hixson, M.; Bauer, M. E.; Davis, B. J. (Principal Investigator)

    1979-01-01

    The author has identified the following significant results. Full-frame classifications of wheat and non-wheat for eighty counties in Kansas were repetitively sampled to simulate alternative sampling plans. Evaluation of four sampling schemes involving different numbers of samples and different sizes of sampling units shows that the precision of the wheat estimates increased as the segment size decreased and the number of segments increased. Although the average bias associated with the various sampling schemes was not significantly different, the maximum absolute bias was directly related to the size of the sampling unit.

  7. A computer graphics program for general finite element analyses

    NASA Technical Reports Server (NTRS)

    Thornton, E. A.; Sawyer, L. M.

    1978-01-01

    Documentation for a computer graphics program for displays from general finite element analyses is presented. A general description of display options and detailed user instructions are given. Several plots made in structural, thermal and fluid finite element analyses are included to illustrate program options. Sample data files are given to illustrate use of the program.

  8. Further validation of the MMPI-2 and MMPI-2-RF Response Bias Scale: findings from disability and criminal forensic settings.

    PubMed

    Wygant, Dustin B; Sellbom, Martin; Gervais, Roger O; Ben-Porath, Yossef S; Stafford, Kathleen P; Freeman, David B; Heilbronner, Robert L

    2010-12-01

    The present study extends the validation of the Minnesota Multiphasic Personality Inventory-2 (MMPI-2) and the Minnesota Multiphasic Personality Inventory-2 Restructured Form (MMPI-2-RF) Response Bias Scale (RBS; R. O. Gervais, Y. S. Ben-Porath, D. B. Wygant, & P. Green, 2007) in separate forensic samples composed of disability claimants and criminal defendants. Using cognitive symptom validity tests as response bias indicators, the RBS exhibited large effect sizes (Cohen's ds = 1.24 and 1.48) in detecting cognitive response bias in the disability and criminal forensic samples, respectively. The scale also added incremental prediction to the traditional MMPI-2 and the MMPI-2-RF overreporting validity scales in the disability sample and exhibited excellent specificity with acceptable sensitivity at cutoffs ranging from 90T to 120T. The results of this study indicate that the RBS can add uniquely to the existing MMPI-2 and MMPI-2-RF validity scales in detecting symptom exaggeration associated with cognitive response bias.

  9. Bias due to Preanalytical Dilution of Rodent Serum for Biochemical Analysis on the Siemens Dimension Xpand Plus

    PubMed Central

    Johns, Jennifer L.; Moorhead, Kaitlin A.; Hu, Jing; Moorhead, Roberta C.

    2018-01-01

    Clinical pathology testing of rodents is often challenging due to insufficient sample volume. One solution in clinical veterinary and exploratory research environments is dilution of samples prior to analysis. However, published information on the impact of preanalytical sample dilution on rodent biochemical data is incomplete. The objective of this study was to evaluate the effects of preanalytical sample dilution on biochemical analysis of mouse and rat serum samples utilizing the Siemens Dimension Xpand Plus. Rats were obtained from end of study research projects. Mice were obtained from sentinel testing programs. For both, whole blood was collected via terminal cardiocentesis into empty tubes and serum was harvested. Biochemical parameters were measured on fresh and thawed frozen samples run straight and at dilution factors 2–10. Dilutions were performed manually, utilizing either ultrapure water or enzyme diluent per manufacturer recommendations. All diluted samples were generated directly from the undiluted sample. Preanalytical dilution caused clinically unacceptable bias in most analytes at dilution factors four and above. Dilution-induced bias in total calcium, creatinine, total bilirubin, and uric acid was considered unacceptable with any degree of dilution, based on the more conservative of two definitions of acceptability. Dilution often caused electrolyte values to fall below assay range precluding evaluation of bias. Dilution-induced bias occurred in most biochemical parameters to varying degrees and may render dilution unacceptable in the exploratory research and clinical veterinary environments. Additionally, differences between results obtained at different dilution factors may confound statistical comparisons in research settings. Comparison of data obtained at a single dilution factor is highly recommended. PMID:29497614
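
    The bias examined here amounts to comparing each dilution-corrected result with the undiluted measurement; a minimal sketch of that calculation, with made-up creatinine values, is shown below.

      def dilution_bias_percent(neat_value, diluted_value, dilution_factor):
          """Percent bias of a dilution-corrected result relative to the neat sample."""
          corrected = diluted_value * dilution_factor
          return 100.0 * (corrected - neat_value) / neat_value

      # hypothetical creatinine results (mg/dL) for one rat serum sample
      neat = 0.60
      for factor, measured in [(2, 0.29), (4, 0.13), (10, 0.048)]:
          print(f"1:{factor} dilution -> bias {dilution_bias_percent(neat, measured, factor):+.1f}%")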

  10. Mixed Model Association with Family-Biased Case-Control Ascertainment.

    PubMed

    Hayeck, Tristan J; Loh, Po-Ru; Pollack, Samuela; Gusev, Alexander; Patterson, Nick; Zaitlen, Noah A; Price, Alkes L

    2017-01-05

    Mixed models have become the tool of choice for genetic association studies; however, standard mixed model methods may be poorly calibrated or underpowered under family sampling bias and/or case-control ascertainment. Previously, we introduced a liability threshold-based mixed model association statistic (LTMLM) to address case-control ascertainment in unrelated samples. Here, we consider family-biased case-control ascertainment, where case and control subjects are ascertained non-randomly with respect to family relatedness. Previous work has shown that this type of ascertainment can severely bias heritability estimates; we show here that it also impacts mixed model association statistics. We introduce a family-based association statistic (LT-Fam) that is robust to this problem. Similar to LTMLM, LT-Fam is computed from posterior mean liabilities (PML) under a liability threshold model; however, LT-Fam uses published narrow-sense heritability estimates to avoid the problem of biased heritability estimation, enabling correct calibration. In simulations with family-biased case-control ascertainment, LT-Fam was correctly calibrated (average χ2 = 1.00-1.02 for null SNPs), whereas the Armitage trend test (ATT), standard mixed model association (MLM), and case-control retrospective association test (CARAT) were mis-calibrated (e.g., average χ2 = 0.50-1.22 for MLM, 0.89-2.65 for CARAT). LT-Fam also attained higher power than other methods in some settings. In 1,259 type 2 diabetes-affected case subjects and 5,765 control subjects from the CARe cohort, downsampled to induce family-biased ascertainment, LT-Fam was correctly calibrated whereas ATT, MLM, and CARAT were again mis-calibrated. Our results highlight the importance of modeling family sampling bias in case-control datasets with related samples. Copyright © 2017 American Society of Human Genetics. Published by Elsevier Inc. All rights reserved.

  11. Accounting for animal movement in estimation of resource selection functions: sampling and data analysis.

    PubMed

    Forester, James D; Im, Hae Kyung; Rathouz, Paul J

    2009-12-01

    Patterns of resource selection by animal populations emerge as a result of the behavior of many individuals. Statistical models that describe these population-level patterns of habitat use can miss important interactions between individual animals and characteristics of their local environment; however, identifying these interactions is difficult. One approach to this problem is to incorporate models of individual movement into resource selection models. To do this, we propose a model for step selection functions (SSF) that is composed of a resource-independent movement kernel and a resource selection function (RSF). We show that standard case-control logistic regression may be used to fit the SSF; however, the sampling scheme used to generate control points (i.e., the definition of availability) must be accommodated. We used three sampling schemes to analyze simulated movement data and found that ignoring sampling and the resource-independent movement kernel yielded biased estimates of selection. The level of bias depended on the method used to generate control locations, the strength of selection, and the spatial scale of the resource map. Using empirical or parametric methods to sample control locations produced biased estimates under stronger selection; however, we show that the addition of a distance function to the analysis substantially reduced that bias. Assuming a uniform availability within a fixed buffer yielded strongly biased selection estimates that could be corrected by including the distance function but remained inefficient relative to the empirical and parametric sampling methods. As a case study, we used location data collected from elk in Yellowstone National Park, USA, to show that selection and bias may be temporally variable. Because under constant selection the amount of bias depends on the scale at which a resource is distributed in the landscape, we suggest that distance always be included as a covariate in SSF analyses. This approach to modeling resource selection is easily implemented using common statistical tools and promises to provide deeper insight into the movement ecology of animals.
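
    The SSF fitting strategy described above can be approximated with conditional (case-control) logistic regression, with step length entered as a covariate to stand in for the resource-independent movement kernel. The sketch below simulates matched used/control steps and fits them with statsmodels' ConditionalLogit; the simulated data, the number of control steps per stratum, and the covariates are all assumptions for illustration, not the authors' analysis.

      import numpy as np
      import pandas as pd
      from statsmodels.discrete.conditional_models import ConditionalLogit

      rng = np.random.default_rng(0)
      rows = []
      for stratum in range(300):                      # one stratum per observed step
          # one used step plus 10 control steps drawn from a step-length distribution
          step_len = rng.exponential(100, size=11)    # metres
          resource = rng.normal(0, 1, size=11)        # habitat covariate at the endpoint
          # choose the used step with a conditional-logit draw favouring high
          # resource values and short steps (Gumbel-max trick)
          utility = 0.8 * resource - 0.01 * step_len
          used = np.zeros(11, dtype=int)
          used[np.argmax(utility + rng.gumbel(size=11))] = 1
          rows.append(pd.DataFrame({"stratum": stratum, "used": used,
                                    "resource": resource, "step_len": step_len}))
      df = pd.concat(rows, ignore_index=True)

      # Conditional logistic regression within strata; the step-length coefficient
      # approximates the movement kernel, the resource coefficient the selection.
      model = ConditionalLogit(df["used"], df[["resource", "step_len"]],
                               groups=df["stratum"])
      print(model.fit().summary())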

  12. Expectancy bias in anxious samples

    PubMed Central

    Cabeleira, Cindy M.; Steinman, Shari A.; Burgess, Melissa M.; Bucks, Romola S.; MacLeod, Colin; Melo, Wilson; Teachman, Bethany A.

    2014-01-01

    While it is well documented that anxious individuals have negative expectations about the future, it is unclear what cognitive processes give rise to this expectancy bias. Two studies are reported that use the Expectancy Task, which is designed to assess expectancy bias and illuminate its basis. This task presents individuals with valenced scenarios (Positive Valence, Negative Valence, or Conflicting Valence), and then evaluates their tendency to expect subsequent future positive relative to negative events. The Expectancy Task was used with low and high trait anxious (Study 1: N = 32) and anxiety sensitive (Study 2: N = 138) individuals. Results suggest that in the context of physical concerns, both high anxious samples display a less positive expectancy bias. In the context of social concerns, high trait anxious individuals display a negative expectancy bias only when negatively valenced information was previously presented. Overall, this suggests that anxious individuals display a less positive expectancy bias, and that the processes that give rise to this bias may vary by type of situation (e.g., social or physical) or anxiety difficulty. PMID:24798678

  13. Reducing inherent biases introduced during DNA viral metagenome analyses of municipal wastewater

    EPA Science Inventory

    Metagenomics is a powerful tool for characterizing viral composition within environmental samples, but sample and molecular processing steps can bias the estimation of viral community structure. The objective of this study is to understand the inherent variability introduced when...

  14. Rater Perceptions of Bias Using the Multiple Mini-Interview Format: A Qualitative Study

    ERIC Educational Resources Information Center

    Alweis, Richard L.; Fitzpatrick, Caroline; Donato, Anthony A.

    2015-01-01

    Introduction: The Multiple Mini-Interview (MMI) format appears to mitigate individual rater biases. However, the format itself may introduce structural systematic bias, favoring extroverted personality types. This study aimed to gain a better understanding of these biases from the perspective of the interviewer. Methods: A sample of MMI…

  15. On the Exploitation of Sensitivity Derivatives for Improving Sampling Methods

    NASA Technical Reports Server (NTRS)

    Cao, Yanzhao; Hussaini, M. Yousuff; Zang, Thomas A.

    2003-01-01

    Many application codes, such as finite-element structural analyses and computational fluid dynamics codes, are capable of producing many sensitivity derivatives at a small fraction of the cost of the underlying analysis. This paper describes a simple variance reduction method that exploits such inexpensive sensitivity derivatives to increase the accuracy of sampling methods. Three examples, including a finite-element structural analysis of an aircraft wing, are provided that illustrate an order of magnitude improvement in accuracy for both Monte Carlo and stratified sampling schemes.
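
    One standard way to use cheap sensitivity derivatives is as a control variate: the first-order Taylor expansion of the output around the input mean has a known expectation, so subtracting it removes much of the Monte Carlo variance. The toy response function and Gaussian inputs below are illustrative stand-ins for the structural and fluid examples in the paper.

      import numpy as np

      def f(x):                        # stand-in for an expensive model output
          return np.sin(x[0]) + 0.5 * x[1] ** 2 + 0.1 * x[0] * x[1]

      def grad_f(x):                   # cheap sensitivity derivatives (analytic here)
          return np.array([np.cos(x[0]) + 0.1 * x[1], x[1] + 0.1 * x[0]])

      rng = np.random.default_rng(0)
      mu = np.array([0.3, 1.0])        # mean of independent Gaussian inputs
      sigma = np.array([0.2, 0.1])
      n = 2_000
      samples = mu + sigma * rng.normal(size=(n, 2))

      y = np.array([f(x) for x in samples])
      # Control variate: the linearization around mu has expectation f(mu) exactly.
      lin = np.array([f(mu) + grad_f(mu) @ (x - mu) for x in samples])
      y_cv = y - (lin - f(mu))         # same mean as y, much smaller variance

      print("plain Monte Carlo   :", y.mean(), "+/-", y.std(ddof=1) / np.sqrt(n))
      print("derivative-enhanced :", y_cv.mean(), "+/-", y_cv.std(ddof=1) / np.sqrt(n))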

  16. The impact of non-response bias due to sampling in public health studies: A comparison of voluntary versus mandatory recruitment in a Dutch national survey on adolescent health.

    PubMed

    Cheung, Kei Long; Ten Klooster, Peter M; Smit, Cees; de Vries, Hein; Pieterse, Marcel E

    2017-03-23

    In public health monitoring of young people it is critical to understand the effects of selective non-response, in particular when a controversial topic is involved like substance abuse or sexual behaviour. Research that is dependent upon voluntary subject participation is particularly vulnerable to sampling bias. As respondents whose participation is hardest to elicit on a voluntary basis are also more likely to report risk behaviour, this potentially leads to underestimation of risk factor prevalence. Inviting adolescents to participate in a home-sent postal survey is a typical voluntary recruitment strategy with high non-response, as opposed to mandatory participation during school time. This study examines the extent to which prevalence estimates of adolescent health-related characteristics are biased due to different sampling methods, and whether this also biases within-subject analyses. Cross-sectional datasets collected in 2011 in Twente and IJsselland, two similar and adjacent regions in the Netherlands, were used. In total, 9360 youngsters in a mandatory sample (Twente) and 1952 youngsters in a voluntary sample (IJsselland) participated in the study. To test whether the samples differed on health-related variables, we conducted both univariate and multivariable logistic regression analyses controlling for any demographic difference between the samples. Additional multivariable logistic regressions were conducted to examine moderating effects of sampling method on associations between health-related variables. As expected, females, older individuals, as well as individuals with higher education levels, were over-represented in the voluntary sample, compared to the mandatory sample. Respondents in the voluntary sample tended to smoke less, consume less alcohol (ever, lifetime, and past four weeks), have better mental health, have better subjective health status, have more positive school experiences and have less sexual intercourse than respondents in the mandatory sample. No moderating effects were found for sampling method on associations between variables. This is one of first studies to provide strong evidence that voluntary recruitment may lead to a strong non-response bias in health-related prevalence estimates in adolescents, as compared to mandatory recruitment. The resulting underestimation in prevalence of health behaviours and well-being measures appeared large, up to a four-fold lower proportion for self-reported alcohol consumption. Correlations between variables, though, appeared to be insensitive to sampling bias.

  17. Associations among selective attention, memory bias, cognitive errors and symptoms of anxiety in youth.

    PubMed

    Watts, Sarah E; Weems, Carl F

    2006-12-01

    The purpose of this study was to examine the linkages among selective attention, memory bias, cognitive errors, and anxiety problems by testing a model of the interrelations among these cognitive variables and childhood anxiety disorder symptoms. A community sample of 81 youth (38 females and 43 males) aged 9-17 years and their parents completed measures of the child's anxiety disorder symptoms. Youth completed assessments measuring selective attention, memory bias, and cognitive errors. Results indicated that selective attention, memory bias, and cognitive errors were each correlated with childhood anxiety problems and provide support for a cognitive model of anxiety which posits that these three biases are associated with childhood anxiety problems. Only limited support for significant interrelations among selective attention, memory bias, and cognitive errors was found. Finally, results point towards an effective strategy for moving the assessment of selective attention to younger and community samples of youth.

  18. Growing cell-phone population and noncoverage bias in traditional random digit dial telephone health surveys.

    PubMed

    Lee, Sunghee; Brick, J Michael; Brown, E Richard; Grant, David

    2010-08-01

    Examine the effect of including cell-phone numbers in a traditional landline random digit dial (RDD) telephone survey. The 2007 California Health Interview Survey (CHIS). CHIS 2007 is an RDD telephone survey supplementing a landline sample in California with a sample of cell-only (CO) adults. We examined the degree of bias due to exclusion of CO populations and compared a series of demographic and health-related characteristics by telephone usage. When adjusted for noncoverage in the landline sample through weighting, the potential noncoverage bias due to excluding CO adults in landline telephone surveys is diminished. Both CO adults and adults who have both landline and cell phones but mostly use cell phones appear different from other telephone usage groups. Controlling for demographic differences did not attenuate the significant distinctiveness of cell-mostly adults. While careful weighting can mitigate noncoverage bias in landline telephone surveys, the rapid growth of cell-phone population and their distinctive characteristics suggest it is important to include a cell-phone sample. Moreover, the threat of noncoverage bias in telephone health survey estimates could mislead policy makers with possibly serious consequences for their ability to address important health policy issues.

  19. Estimation and modeling of electrofishing capture efficiency for fishes in wadeable warmwater streams

    USGS Publications Warehouse

    Price, A.; Peterson, James T.

    2010-01-01

    Stream fish managers often use fish sample data to inform management decisions affecting fish populations. Fish sample data, however, can be biased by the same factors affecting fish populations. To minimize the effect of sample biases on decision making, biologists need information on the effectiveness of fish sampling methods. We evaluated single-pass backpack electrofishing and seining combined with electrofishing by following a dual-gear, mark–recapture approach in 61 blocknetted sample units within first- to third-order streams. We also estimated fish movement out of unblocked units during sampling. Capture efficiency and fish abundances were modeled for 50 fish species by use of conditional multinomial capture–recapture models. The best-approximating models indicated that capture efficiencies were generally low and differed among species groups based on family or genus. Efficiencies of single-pass electrofishing and seining combined with electrofishing were greatest for Catostomidae and lowest for Ictaluridae. Fish body length and stream habitat characteristics (mean cross-sectional area, wood density, mean current velocity, and turbidity) also were related to capture efficiency of both methods, but the effects differed among species groups. We estimated that, on average, 23% of fish left the unblocked sample units, but net movement varied among species. Our results suggest that (1) common warmwater stream fish sampling methods have low capture efficiency and (2) failure to adjust for incomplete capture may bias estimates of fish abundance. We suggest that managers minimize bias from incomplete capture by adjusting data for site- and species-specific capture efficiency and by choosing sampling gear that provide estimates with minimal bias and variance. Furthermore, if block nets are not used, we recommend that managers adjust the data based on unconditional capture efficiency.

  20. Enhanced conformational sampling using replica exchange with concurrent solute scaling and hamiltonian biasing realized in one dimension.

    PubMed

    Yang, Mingjun; Huang, Jing; MacKerell, Alexander D

    2015-06-09

    Replica exchange (REX) is a powerful computational tool for overcoming the quasi-ergodic sampling problem of complex molecular systems. Recently, several multidimensional extensions of this method have been developed to realize exchanges in both temperature and biasing potential space or the use of multiple biasing potentials to improve sampling efficiency. However, increased computational cost due to the multidimensionality of exchanges becomes challenging for use on complex systems under explicit solvent conditions. In this study, we develop a one-dimensional (1D) REX algorithm to concurrently combine the advantages of overall enhanced sampling from Hamiltonian solute scaling and the specific enhancement of collective variables using Hamiltonian biasing potentials. In the present Hamiltonian replica exchange method, termed HREST-BP, Hamiltonian solute scaling is applied to the solute subsystem, and its interactions with the environment to enhance overall conformational transitions and biasing potentials are added along selected collective variables associated with specific conformational transitions, thereby balancing the sampling of different hierarchical degrees of freedom. The two enhanced sampling approaches are implemented concurrently allowing for the use of a small number of replicas (e.g., 6 to 8) in 1D, thus greatly reducing the computational cost in complex system simulations. The present method is applied to conformational sampling of two nitrogen-linked glycans (N-glycans) found on the HIV gp120 envelope protein. Considering the general importance of the conformational sampling problem, HREST-BP represents an efficient procedure for the study of complex saccharides, and, more generally, the method is anticipated to be of general utility for the conformational sampling in a wide range of macromolecular systems.

  1. Validation of abundance estimates from mark–recapture and removal techniques for rainbow trout captured by electrofishing in small streams

    USGS Publications Warehouse

    Rosenberger, Amanda E.; Dunham, Jason B.

    2005-01-01

    Estimation of fish abundance in streams using the removal model or the Lincoln-Petersen mark-recapture model is a common practice in fisheries. These models produce misleading results if their assumptions are violated. We evaluated the assumptions of these two models via electrofishing of rainbow trout Oncorhynchus mykiss in central Idaho streams. For one-, two-, three-, and four-pass sampling effort in closed sites, we evaluated the influences of fish size and habitat characteristics on sampling efficiency and the accuracy of removal abundance estimates. We also examined the use of models to generate unbiased estimates of fish abundance through adjustment of total catch or biased removal estimates. Our results suggested that the assumptions of the mark-recapture model were satisfied and that abundance estimates based on this approach were unbiased. In contrast, the removal model assumptions were not met. Decreasing sampling efficiencies over removal passes resulted in underestimated population sizes and overestimates of sampling efficiency. This bias decreased, but was not eliminated, with increased sampling effort. Biased removal estimates based on different levels of effort were highly correlated with each other but were less correlated with unbiased mark-recapture estimates. Stream size decreased sampling efficiency, and stream size and instream wood increased the negative bias of removal estimates. We found that reliable estimates of population abundance could be obtained from models of sampling efficiency for different levels of effort. Validation of abundance estimates requires extra attention to routine sampling considerations but can help fisheries biologists avoid pitfalls associated with biased data and facilitate standardized comparisons among studies that employ different sampling methods.
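
    In their simplest closed forms the two abundance models compared here reduce to one-line estimators: the two-pass removal estimate N = C1^2 / (C1 - C2) and the Chapman-adjusted Lincoln-Petersen estimate N = (M + 1)(C + 1)/(R + 1) - 1. The sketch below evaluates both on made-up electrofishing counts; it is not the multi-pass maximum-likelihood machinery used in the study.

      def removal_two_pass(c1, c2):
          """Two-pass removal estimate (Seber-Le Cren form); requires c1 > c2."""
          n_hat = c1 ** 2 / (c1 - c2)
          p_hat = 1 - c2 / c1            # implied per-pass capture probability
          return n_hat, p_hat

      def lincoln_petersen_chapman(marked, caught, recaptured):
          """Chapman-adjusted Lincoln-Petersen mark-recapture estimate."""
          return (marked + 1) * (caught + 1) / (recaptured + 1) - 1

      # hypothetical electrofishing counts for one closed site
      print("removal estimate:", removal_two_pass(c1=60, c2=35))
      print("mark-recapture estimate:", lincoln_petersen_chapman(marked=60, caught=55, recaptured=30))

    Comparing the two estimates on the same site, as the study does with real data, is what reveals the downward bias of the removal model when capture efficiency declines across passes.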

  2. Uncertainty Estimation for the Determination of Ni, Pb and Al in Natural Water Samples by SPE-ICP-OES

    NASA Astrophysics Data System (ADS)

    Ghorbani, A.; Farahani, M. Mahmoodi; Rabbani, M.; Aflaki, F.; Waqifhosain, Syed

    2008-01-01

    In this paper we propose an uncertainty estimate for the analytical results obtained from the determination of Ni, Pb and Al by solid-phase extraction and inductively coupled plasma optical emission spectrometry (SPE-ICP-OES). The procedure is based on the retention of the analytes as 8-hydroxyquinoline (8-HQ) complexes on a mini column of XAD-4 resin and subsequent elution with nitric acid. The influence of various analytical parameters, including the amount of solid phase, pH, elution factors (concentration and volume of the eluting solution), volume of sample solution, and amount of ligand, on the extraction efficiency of the analytes was investigated. To estimate the uncertainty of the analytical results, we propose assessing trueness using spiked samples. Two types of bias are calculated in the assessment of trueness: a proportional bias and a constant bias. We applied a nested design to calculate the proportional bias and the Youden method to calculate the constant bias. The proportional bias is estimated from spiked samples: the concentration found is plotted against the concentration added, and the slope of the standard addition curve is an estimate of the method recovery. The estimated average recovery of the method in Karaj river water is (1.004 ± 0.0085) for Ni, (0.999 ± 0.010) for Pb and (0.987 ± 0.008) for Al.
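
    The proportional-bias (recovery) estimate described above is just the slope of found versus added concentration in spiked samples; the sketch below fits that slope by ordinary least squares on made-up data, which is a simplification of the nested design used in the paper.

      import numpy as np

      # hypothetical spiked-sample results: added and found Ni concentrations (µg/L)
      added = np.array([0.0, 10.0, 20.0, 40.0, 80.0])
      found = np.array([1.2, 11.3, 21.0, 41.5, 81.6])

      slope, intercept = np.polyfit(added, found, 1)
      print(f"estimated recovery (proportional bias): {slope:.3f}")
      # The intercept mixes the sample's native analyte content with any constant
      # bias; separating the two is what the Youden (varying sample amount) method does.
      print(f"intercept: {intercept:.3f}")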

  3. Association between attention bias to threat and anxiety symptoms in children and adolescents.

    PubMed

    Abend, Rany; de Voogd, Leone; Salemink, Elske; Wiers, Reinout W; Pérez-Edgar, Koraly; Fitzgerald, Amanda; White, Lauren K; Salum, Giovanni A; He, Jie; Silverman, Wendy K; Pettit, Jeremy W; Pine, Daniel S; Bar-Haim, Yair

    2018-03-01

    Considerable research links threat-related attention biases to anxiety symptoms in adults, whereas extant findings on threat biases in youth are limited and mixed. Inconsistent findings may arise due to substantial methodological variability and limited sample sizes, emphasizing the need for systematic research on large samples. The aim of this report is to examine the association between threat bias and pediatric anxiety symptoms using standardized measures in a large, international, multi-site youth sample. A total of 1,291 children and adolescents from seven research sites worldwide completed a standardized attention bias assessment task (the dot-probe task) and a child anxiety symptoms measure (the Screen for Child Anxiety Related Emotional Disorders). Using a dimensional approach to symptomatology, we conducted regression analyses predicting overall, and disorder-specific, anxiety symptom severity based on threat bias scores. Threat bias correlated positively with overall anxiety symptom severity (β = 0.078, P = .004). Furthermore, threat bias was positively associated specifically with social anxiety (β = 0.072, P = .008) and school phobia (β = 0.076, P = .006) symptom severity, but not with panic, generalized anxiety, or separation anxiety symptoms. These associations were not moderated by age or gender. These findings indicate associations between threat bias and pediatric anxiety symptoms, and suggest that vigilance to external threats manifests more prominently in symptoms of social anxiety and school phobia, regardless of age and gender. These findings point to the role of attention bias to threat in anxiety, with implications for translational clinical research. The significance of applying standardized methods in multi-site collaborations for overcoming challenges inherent to clinical research is discussed. © 2017 Wiley Periodicals, Inc.

  4. Quantitative imaging biomarkers: Effect of sample size and bias on confidence interval coverage.

    PubMed

    Obuchowski, Nancy A; Bullen, Jennifer

    2017-01-01

    Introduction: Quantitative imaging biomarkers (QIBs) are being increasingly used in medical practice and clinical trials. An essential first step in the adoption of a quantitative imaging biomarker is the characterization of its technical performance, i.e. precision and bias, through one or more performance studies. Then, given the technical performance, a confidence interval for a new patient's true biomarker value can be constructed. Estimating bias and precision can be problematic because rarely are both estimated in the same study, precision studies are usually quite small, and bias cannot be measured when there is no reference standard. Methods: A Monte Carlo simulation study was conducted to assess factors affecting nominal coverage of confidence intervals for a new patient's quantitative imaging biomarker measurement and for change in the quantitative imaging biomarker over time. Factors considered include sample size for estimating bias and precision, effect of fixed and non-proportional bias, clustered data, and absence of a reference standard. Results: Technical performance studies of a quantitative imaging biomarker should include at least 35 test-retest subjects to estimate precision and 65 cases to estimate bias. Confidence intervals for a new patient's quantitative imaging biomarker measurement constructed under the no-bias assumption provide nominal coverage as long as the fixed bias is <12%. For confidence intervals of the true change over time, linearity must hold and the slope of the regression of the measurements vs. true values should be between 0.95 and 1.05. The regression slope can be assessed adequately as long as fixed multiples of the measurand can be generated. Even small non-proportional bias greatly reduces confidence interval coverage. Multiple lesions in the same subject can be treated as independent when estimating precision. Conclusion: Technical performance studies of quantitative imaging biomarkers require moderate sample sizes in order to provide robust estimates of bias and precision for constructing confidence intervals for new patients. Assumptions of linearity and non-proportional bias should be assessed thoroughly.
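
    A hedged sketch of the generic interval construction the abstract refers to: given a within-subject SD from a test-retest (precision) study and, optionally, an estimated fixed bias, form a 95% interval for the new patient's true value. The function name and numbers are illustrative, not the authors' formulas.

        def qib_interval(measurement, within_subject_sd, fixed_bias=0.0, z=1.96):
            """95% interval for a new patient's true biomarker value, given a precision
            estimate from a test-retest study and (optionally) an estimated fixed bias;
            under the no-bias assumption, leave fixed_bias = 0."""
            center = measurement - fixed_bias
            return center - z * within_subject_sd, center + z * within_subject_sd

        print(qib_interval(measurement=12.0, within_subject_sd=0.8))                 # no-bias assumption
        print(qib_interval(measurement=12.0, within_subject_sd=0.8, fixed_bias=0.5))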

  5. A remark on the theory of measuring thermal diffusivity by the modified Angstrom's method [in lunar samples]

    NASA Technical Reports Server (NTRS)

    Horai, K.-I.

    1981-01-01

    A theory of the measurement of the thermal diffusivity of a sample by the modified Angstrom method is developed for the case in which radiative heat loss from the end surface of the sample is not negligible, and applied to measurements performed on lunar samples. Formulas allowing sample thermal diffusivity to be determined from the amplitude decay and phase lag of a temperature wave traveling through the sample are derived for a flat disk sample for which only heat loss from the end surface is important, and a sample of finite diameter and length for which heat loss through the end and side surfaces must be considered. It is noted that in the case of a flat disk, measurements at a single angular frequency of the temperature wave are sufficient, while the sample of finite diameter and length requires measurements at two discrete angular frequencies. Comparison of the values of the thermal diffusivities of two lunar samples of dimensions approximately 1 x 1 x 2 cm derived by the present methods and by the Angstrom theory for a finite bar reveals them to differ by not more than 5%, and indicates that more refined data are required as the measurement theory becomes more complicated.
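
    For orientation, the textbook Angstrom relation underlying such measurements computes the diffusivity from the amplitude decay and phase lag of the temperature wave between two points. The sketch below states only that basic relation, not the paper's extended formulas for end-surface radiative loss; the numbers are illustrative.

        import math

        def angstrom_diffusivity(distance, omega, amplitude_ratio, phase_lag):
            """alpha = omega * L**2 / (2 * phase_lag * ln(A1/A2)).
            distance        : spacing between the two temperature sensors (m)
            omega           : angular frequency of the temperature wave (rad/s)
            amplitude_ratio : A1/A2 > 1, amplitude decay over the spacing
            phase_lag       : phase difference over the spacing (rad)"""
            return omega * distance ** 2 / (2.0 * phase_lag * math.log(amplitude_ratio))

        # Illustrative numbers only: a 10 mm sensor spacing and a 10 min wave period.
        print(angstrom_diffusivity(0.01, 2 * math.pi / 600.0, 3.0, 1.1))   # ~4e-7 m^2/s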

  6. Adaptive enhanced sampling by force-biasing using neural networks

    NASA Astrophysics Data System (ADS)

    Guo, Ashley Z.; Sevgen, Emre; Sidky, Hythem; Whitmer, Jonathan K.; Hubbell, Jeffrey A.; de Pablo, Juan J.

    2018-04-01

    A machine learning assisted method is presented for molecular simulation of systems with rugged free energy landscapes. The method is general and can be combined with other advanced sampling techniques. In the particular implementation proposed here, it is illustrated in the context of an adaptive biasing force approach where, rather than relying on discrete force estimates, one can resort to a self-regularizing artificial neural network to generate continuous, estimated generalized forces. By doing so, the proposed approach addresses several shortcomings common to adaptive biasing force and other algorithms. Specifically, the neural network enables (1) smooth estimates of generalized forces in sparsely sampled regions, (2) force estimates in previously unexplored regions, and (3) continuous force estimates with which to bias the simulation, as opposed to biases generated at specific points of a discrete grid. The usefulness of the method is illustrated with three different examples, chosen to highlight the wide range of applicability of the underlying concepts. In all three cases, the new method is found to enhance considerably the underlying traditional adaptive biasing force approach. The method is also found to provide improvements over previous implementations of neural network assisted algorithms.
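
    A toy illustration of the central idea, assuming nothing about the authors' implementation: regress noisy, grid-binned mean-force estimates on the collective variable with a small neural network, so that the biasing force is smooth and defined between (and beyond) the grid points.

        import numpy as np

        rng = np.random.default_rng(0)
        cv = np.linspace(-np.pi, np.pi, 80)[:, None]                     # collective-variable grid
        noisy_force = np.sin(cv) + 0.3 * rng.standard_normal(cv.shape)   # noisy binned force estimates

        # One hidden layer, tanh activation, plain gradient descent on the MSE.
        n_hidden = 20
        W1 = 0.5 * rng.standard_normal((1, n_hidden)); b1 = np.zeros(n_hidden)
        W2 = 0.5 * rng.standard_normal((n_hidden, 1)); b2 = np.zeros(1)
        lr = 0.05
        for _ in range(5000):
            h = np.tanh(cv @ W1 + b1)
            pred = h @ W2 + b2
            g_pred = 2.0 * (pred - noisy_force) / len(cv)        # gradient of the MSE wrt pred
            gW2 = h.T @ g_pred; gb2 = g_pred.sum(axis=0)
            gz = (g_pred @ W2.T) * (1.0 - h ** 2)                # backprop through tanh
            gW1 = cv.T @ gz; gb1 = gz.sum(axis=0)
            W1 -= lr * gW1; b1 -= lr * gb1; W2 -= lr * gW2; b2 -= lr * gb2

        def smooth_force(x):
            """Continuous force estimate, usable even between or outside the sampled bins."""
            return (np.tanh(np.atleast_2d(x) @ W1 + b1) @ W2 + b2).item()

        print(smooth_force(0.0), np.sin(0.0))   # network estimate vs. underlying value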

  7. Differences in Preschool Children's Conceptual Strategies When Thinking about Animate Entities and Artifacts.

    ERIC Educational Resources Information Center

    Blanchet, Nicole; Dunham, Philip J.; Dunham, Frances

    2001-01-01

    Preschoolers viewed stimulus sets comprised of a sample picture and three types of matches and were asked to choose a match that "went with" each sample. Children's choices indicated that a shift occurs between 3 and 4 years of age from a taxonomic bias to a thematic bias. Animate sample stimuli enhanced children's tendency to adopt…

  8. Design of the sample cell in near-field surface-enhanced Raman scattering by finite difference time domain method

    NASA Astrophysics Data System (ADS)

    Li, Yaqin; Jian, Guoshu; Wu, Shifa

    2006-11-01

    The rational design of the sample cell may improve the sensitivity of surface-enhanced Raman scattering (SERS) detection to a high degree. Finite difference time domain (FDTD) simulations of the configuration of an Ag film with Ag particles, illuminated by a plane wave and by an evanescent wave, are performed to provide physical insight for the design of the sample cell. Numerical solutions indicate that the sample cell can provide more "hot spots", and that massive field intensity enhancement occurs in these "hot spots". More information on the nanometer-scale character of the sample can be obtained because of the gradient-field Raman (GFR) effect of the evanescent wave.
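
    As a minimal illustration of the FDTD update scheme behind such simulations (a 1D free-space Yee loop with a Gaussian pulse source, not a model of the Ag film/particle geometry, metals, or evanescent excitation):

        import numpy as np

        nz, nsteps, src = 200, 500, 100
        ex, hy = np.zeros(nz), np.zeros(nz)      # staggered E and H fields, normalized units
        for t in range(nsteps):
            ex[1:] += 0.5 * (hy[:-1] - hy[1:])                  # update E from the curl of H
            ex[src] += np.exp(-0.5 * ((t - 40) / 12.0) ** 2)    # soft Gaussian pulse source
            hy[:-1] += 0.5 * (ex[:-1] - ex[1:])                 # update H from the curl of E
        # Grid ends are simply reflecting here; real simulations use absorbing boundaries.
        print(f"peak |E| on the grid after {nsteps} steps: {np.abs(ex).max():.3f}")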

  9. Experimental measurement of the plasma conductivity of Z93 and Z93P thermal control paint

    NASA Technical Reports Server (NTRS)

    Hillard, G. Barry

    1993-01-01

    Two samples each of Z93 and Z93P thermal control paint were exposed to a simulated space environment in a plasma chamber. The samples were biased through a series of voltages ranging from -200 volts to +300 volts, and the electron and ion currents were measured. By comparing the currents to those of pure metal samples of the same size and shape, the conductivity of the samples was calculated. Measured conductivity was dependent on the bias potential in all cases. For Z93P, conductivity was approximately constant over much of the bias range, and we find a value of 0.5 micro-mhos per square meter for both electron and ion current. For Z93, the dependence on bias was much more pronounced, but the conductivity was approximately one order of magnitude larger. In addition to presenting these results, this report documents all of the experimental data as well as the statistical analyses performed.

  10. Nonequilibrium Kondo effect by the equilibrium numerical renormalization group method: The hybrid Anderson model subject to a finite spin bias

    NASA Astrophysics Data System (ADS)

    Fang, Tie-Feng; Guo, Ai-Min; Sun, Qing-Feng

    2018-06-01

    We investigate Kondo correlations in a quantum dot with normal and superconducting electrodes, where a spin bias voltage is applied across the device and the local interaction U is either attractive or repulsive. When the spin current is blockaded in the large-gap regime, this nonequilibrium strongly correlated problem maps into an equilibrium model solvable by the numerical renormalization group method. The Kondo spectra with characteristic splitting due to the nonequilibrium spin accumulation are thus obtained at high precision. It is shown that while the bias-induced decoherence of the spin Kondo effect is partially compensated by the superconductivity, the charge Kondo effect is enhanced out of equilibrium and undergoes an additional splitting by the superconducting proximity effect, yielding four Kondo peaks in the local spectral density. In the charge Kondo regime, we find a universal scaling of charge conductance in this hybrid device under different spin biases. The universal conductance as a function of the coupling to the superconducting lead is peaked at, and hence directly measures, the Kondo temperature. Our results are of direct relevance to recent experiments realizing a negative-U charge Kondo effect in hybrid oxide quantum dots [Nat. Commun. 8, 395 (2017), 10.1038/s41467-017-00495-7].

  11. Steady-state and quench-dependent relaxation of a quantum dot coupled to one-dimensional leads

    NASA Astrophysics Data System (ADS)

    Nuss, Martin; Ganahl, Martin; Evertz, Hans Gerd; Arrigoni, Enrico; von der Linden, Wolfgang

    2013-07-01

    We study the time evolution and steady state of the charge current in a single-impurity Anderson model, using matrix product states techniques. A nonequilibrium situation is imposed by applying a bias voltage across one-dimensional tight-binding leads. Focusing on particle-hole symmetry, we extract current-voltage characteristics from universal low-bias up to high-bias regimes, where band effects start to play a dominant role. We discuss three quenches, which after strongly quench-dependent transients yield the same steady-state current. Among these quenches we identify those favorable for extracting steady-state observables. The period of short-time oscillations is shown to compare well to real-time renormalization group results for a simpler model of spinless fermions. We find indications that many-body effects play an important role at high-bias voltage and finite bandwidth of the metallic leads. The growth of entanglement entropy after a certain time scale ∝ Δ^-1 is the major limiting factor for calculating the time evolution. We show that the magnitude of the steady-state current positively correlates with entanglement entropy. The role of high-energy states for the steady-state current is explored by considering a damping term in the time evolution.

  12. Threat-Related Attention Bias Variability and Posttraumatic Stress.

    PubMed

    Naim, Reut; Abend, Rany; Wald, Ilan; Eldar, Sharon; Levi, Ofir; Fruchter, Eyal; Ginat, Karen; Halpern, Pinchas; Sipos, Maurice L; Adler, Amy B; Bliese, Paul D; Quartana, Phillip J; Pine, Daniel S; Bar-Haim, Yair

    2015-12-01

    Threat monitoring facilitates survival by allowing one to efficiently and accurately detect potential threats. Traumatic events can disrupt healthy threat monitoring, inducing biased and unstable threat-related attention deployment. Recent research suggests that greater attention bias variability, that is, attention fluctuations alternating toward and away from threat, occurs in participants with PTSD relative to healthy comparison subjects who were either exposed or not exposed to traumatic events. The current study extends findings on attention bias variability in PTSD. Previous measurement of attention bias variability was refined by employing a moving average technique. Analyses were conducted across seven independent data sets; in each, data on attention bias variability were collected by using variants of the dot-probe task. Trauma-related and anxiety symptoms were evaluated across samples by using structured psychiatric interviews and widely used self-report questionnaires, as specified for each sample. Analyses revealed consistent evidence of greater attention bias variability in patients with PTSD following various types of traumatic events than in healthy participants, participants with social anxiety disorder, and participants with acute stress disorder. Moreover, threat-related, and not positive, attention bias variability was correlated with PTSD severity. These findings carry possibilities for using attention bias variability as a specific cognitive marker of PTSD and for tailoring protocols for attention bias modification for this disorder.

  13. Directional asymmetry of pelvic vestiges in threespine stickleback.

    PubMed

    Bell, Michael A; Khalef, Victoria; Travis, Matthew P

    2007-03-15

    Extensive reduction of the size and complexity of the pelvic skeleton (i.e., pelvic reduction) has evolved repeatedly in Gasterosteus aculeatus. Asymmetrical pelvic vestiges tend to be larger on the left side (i.e., left biased) in populations studied previously. Loss of Pitx1 expression is associated with pelvic reduction in G. aculeatus, and pelvic reduction maps to the Pitx1 locus. Pitx1 knockouts in mice have reduced hind limbs, but the left limb is larger. Thus left-biased directional asymmetry of stickleback pelvic vestiges may indicate the involvement of Pitx1 in pelvic reduction. We examined 6,356 specimens from 27 Cook Inlet populations of G. aculeatus with extensive pelvic reduction. Samples from 20 populations exhibit the left bias in asymmetrical pelvic vestiges expected if Pitx1 is involved, and three have a slight, non-significant left bias. However, samples from three populations have a significant right bias, and one large sample from another population has equal frequencies of specimens with larger vestiges on the left or right side. A sample of fossil threespine stickleback also has significantly left-biased pelvic vestiges. These results suggest that silencing of Pitx1 or the developmental pathway in which it functions in the pelvis is the usual cause of pelvic reduction in most Cook Inlet populations of G. aculeatus, and that it caused pelvic reduction at least 10 million years ago in a stickleback population. A different developmental genetic mechanism is implicated for three populations with right-biased pelvic vestiges and for the population without directional asymmetry. © 2006 Wiley-Liss, Inc.

  14. Rough Sets and Stomped Normal Distribution for Simultaneous Segmentation and Bias Field Correction in Brain MR Images.

    PubMed

    Banerjee, Abhirup; Maji, Pradipta

    2015-12-01

    The segmentation of brain MR images into different tissue classes is an important task for automatic image analysis techniques, particularly due to the presence of intensity inhomogeneity artifacts in MR images. In this regard, this paper presents a novel approach for simultaneous segmentation and bias field correction in brain MR images. It judiciously integrates the concept of rough sets and the merit of a novel probability distribution, called the stomped normal (SN) distribution. The intensity distribution of a tissue class is represented by an SN distribution, where each tissue class consists of a crisp lower approximation and a probabilistic boundary region. The intensity distribution of the brain MR image is modeled as a mixture of a finite number of SN distributions and one uniform distribution. The proposed method incorporates both the expectation-maximization and hidden Markov random field frameworks to provide an accurate and robust segmentation. The performance of the proposed approach, along with a comparison with related methods, is demonstrated on a set of synthetic and real brain MR images for different bias fields and noise levels.

  15. Jackknife Estimation of Sampling Variance of Ratio Estimators in Complex Samples: Bias and the Coefficient of Variation. Research Report. ETS RR-06-19

    ERIC Educational Resources Information Center

    Oranje, Andreas

    2006-01-01

    A multitude of methods has been proposed to estimate the sampling variance of ratio estimates in complex samples (Wolter, 1985). Hansen and Tepping (1985) studied some of those variance estimators and found that a high coefficient of variation (CV) of the denominator of a ratio estimate is indicative of a biased estimate of the standard error of a…
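
    For a simple (non-complex-survey) ratio estimate R = sum(y)/sum(x), the delete-one jackknife variance discussed in this report takes only a few lines; in the complex-sample setting, whole primary sampling units would be dropped instead of single observations. Illustrative data only.

        import numpy as np

        def jackknife_ratio_variance(y, x):
            """Delete-one jackknife variance of the ratio estimate R = sum(y) / sum(x)."""
            y, x = np.asarray(y, float), np.asarray(x, float)
            n = len(y)
            r_loo = (y.sum() - y) / (x.sum() - x)                  # leave-one-out ratios
            return (n - 1) / n * ((r_loo - r_loo.mean()) ** 2).sum()

        rng = np.random.default_rng(1)
        x = rng.uniform(5.0, 15.0, size=50)
        y = 2.0 * x + rng.normal(0.0, 1.0, size=50)
        print(y.sum() / x.sum(), jackknife_ratio_variance(y, x))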

  16. Constructing a multidimensional free energy surface like a spider weaving a web.

    PubMed

    Chen, Changjun

    2017-10-15

    A complete free energy surface in the collective variable space provides important information on the reaction mechanisms of the molecules, but sufficient sampling of the collective variable space is not easy: the space expands quickly with the number of collective variables. To solve the problem, many methods utilize artificial biasing potentials to flatten out the original free energy surface of the molecule in the simulation. Their performance is sensitive to the definition of the biasing potential: a fast-growing biasing potential accelerates the sampling but decreases the accuracy of the free energy result, whereas a slow-growing biasing potential gives an optimized result but needs more simulation time. In this article, we propose an alternative method. It adds the biasing potential to a representative point of the molecule in the collective variable space to improve the conformational sampling, and the free energy surface is calculated from the free energy gradient in the constrained simulation, not given by the negative of the biasing potential as in previous methods. The presented method therefore does not require the biasing potential to remove all the barriers and basins on the free energy surface exactly. Practical applications show that the method in this work is able to produce accurate free energy surfaces for different molecules in a short time period. The free energy errors are small in the cases of various biasing potentials. © 2017 Wiley Periodicals, Inc.
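
    A hedged 1D sketch of the reconstruction step described above: the free energy profile is obtained by integrating the mean-force gradient estimated at constrained points, rather than by negating an accumulated biasing potential. The gradient values below are a stand-in, not simulation output.

        import numpy as np

        cv_grid = np.linspace(-np.pi, np.pi, 61)
        mean_gradient = np.sin(cv_grid)     # stand-in for dA/dx measured in constrained runs

        # Trapezoidal cumulative integration of the mean-force gradient.
        increments = 0.5 * (mean_gradient[1:] + mean_gradient[:-1]) * np.diff(cv_grid)
        free_energy = np.concatenate(([0.0], np.cumsum(increments)))
        free_energy -= free_energy.min()    # shift so the global minimum sits at zero
        print(free_energy[:5])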

  17. A cautionary note on substituting spatial subunits for repeated temporal sampling in studies of site occupancy

    USGS Publications Warehouse

    Kendall, William L.; White, Gary C.

    2009-01-01

    1. Assessing the probability that a given site is occupied by a species of interest is important to resource managers, as well as metapopulation or landscape ecologists. Managers require accurate estimates of the state of the system, in order to make informed decisions. Models that yield estimates of occupancy, while accounting for imperfect detection, have proven useful by removing a potentially important source of bias. To account for detection probability, multiple independent searches per site for the species are required, under the assumption that the species is available for detection during each search of an occupied site. 2. We demonstrate that when multiple samples per site are defined by searching different locations within a site, absence of the species from a subset of these spatial subunits induces estimation bias when locations are exhaustively assessed or sampled without replacement. 3. We further demonstrate that this bias can be removed by choosing sampling locations with replacement, or if the species is highly mobile over a short period of time. 4. Resampling an existing data set does not mitigate bias due to exhaustive assessment of locations or sampling without replacement. 5. Synthesis and applications. Selecting sampling locations for presence/absence surveys with replacement is practical in most cases. Such an adjustment to field methods will prevent one source of bias, and therefore produce more robust statistical inferences about species occupancy. This will in turn permit managers to make resource decisions based on better knowledge of the state of the system.

  18. Eight Year Climatologies from Observational (AIRS) and Model (MERRA) Data

    NASA Technical Reports Server (NTRS)

    Hearty, Thomas; Savtchenko, Andrey; Won, Young-In; Theobalk, Mike; Vollmer, Bruce; Manning, Evan; Smith, Peter; Ostrenga, Dana; Leptoukh, Greg

    2010-01-01

    We examine climatologies derived from eight years of temperature, water vapor, cloud, and trace gas observations made by the Atmospheric Infrared Sounder (AIRS) instrument flying on the Aqua satellite and compare them to similar climatologies constructed with data from a global assimilation model, the Modern Era Retrospective-Analysis for Research and Applications (MERRA). We use the AIRS climatologies to examine anomalies and trends in the AIRS data record. Since sampling can be an issue for infrared satellites in low earth orbit, we also use the MERRA data to examine the AIRS sampling biases. By sampling the MERRA data at the AIRS space-time locations both with and without the AIRS quality control, we estimate the sampling bias of the AIRS climatology and the atmospheric conditions where AIRS has a lower sampling rate. While the AIRS temperature and water vapor sampling biases are small at low latitudes, they can be more than a few degrees in temperature or 10 percent in water vapor at higher latitudes. The largest sampling biases are over desert. The AIRS and MERRA data are available from the Goddard Earth Sciences Data and Information Services Center (GES DISC). The AIRS climatologies we used are available for analysis with the GIOVANNI data exploration tool. (see http://disc.gsfc.nasa.gov).
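
    The sampling-bias diagnostic described above amounts to comparing the model mean conditioned on the instrument's good retrievals with the full model mean. A toy sketch with invented numbers (not MERRA or AIRS data):

        import numpy as np

        rng = np.random.default_rng(2)
        model_temp = 250.0 + 10.0 * rng.standard_normal(10_000)    # stand-in for the complete model field
        # Pretend the retrieval preferentially fails in cold/cloudy scenes.
        rejected = (model_temp + 5.0 * rng.standard_normal(10_000)) < 245.0
        sampling_bias = model_temp[~rejected].mean() - model_temp.mean()
        print(f"estimated sampling bias: {sampling_bias:+.2f} K")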

  19. Attention Bias toward Threat in Pediatric Anxiety Disorders

    ERIC Educational Resources Information Center

    Roy, Amy Krain; Vasa, Roma A.; Bruck, Maggie; Mogg, Karin; Bradley, Brendan P.; Sweeney, Michael; Bergman, R. Lindsey; McClure-Tone, Erin B.; Pine, Daniel S.

    2008-01-01

    Attention bias towards threat faces is examined for a large sample of anxiety-disordered youths using a visual probe task. The results showed that anxious youths displayed a selective bias towards threat, attributed to perturbations in the neural mechanisms that control vigilance.

  20. Origin of tensile strength of a woven sample cut in bias directions

    PubMed Central

    Pan, Ning; Kovar, Radko; Dolatabadi, Mehdi Kamali; Wang, Ping; Zhang, Diantang; Sun, Ying; Chen, Li

    2015-01-01

    Textile fabrics are highly anisotropic, so that their mechanical properties including strengths are a function of direction. An extreme case is when a woven fabric sample is cut in such a way that the bias angle, and hence the tension loading direction, is around 45° relative to the principal directions. Then, once loaded, no yarn in the sample is held at both ends, so the yarns have to build up their internal tension entirely via yarn–yarn friction at the interlacing points. The overall fabric strength in such a sample is a result of contributions from the yarns being pulled out and those broken during the process, and thus becomes a function of the bias direction angle θ, sample width W and length L, along with other factors known to affect fabric strength tested in principal directions. Furthermore, when the major parameters of such a bias sample, e.g. the sample width W, change, not only do the resultant strengths differ, but the strength-generating mechanisms (or failure types) also vary. This is an interesting problem and is analysed in this study. More specifically, the issues examined in this paper include the exact mechanisms and details of how each interlacing point imparts the frictional constraint for a yarn to acquire tension to the level of its strength when neither yarn end is actively held by the testing grips; the theoretical expression of the critical yarn length for a yarn to be able to break rather than be pulled out, as a function of the related factors; and the general relations between the tensile strength of such a bias sample and its structural properties. At the end, theoretical predictions are compared with our experimental data. PMID:26064655

  1. Racial and Ethnic Bias in Test Construction. Final Report.

    ERIC Educational Resources Information Center

    Green, Donald Ross

    To determine if tryout samples typically used for item selection contribute to test bias against minority groups, item analyses were made of the California Achievement Tests using seven subgroups of the standardization sample: Northern White Suburban, Northern Black Urban, Southern White Suburban, Southern Black Rural, Southern White Rural,…

  2. Racial and Ethnic Bias in Test Construction.

    ERIC Educational Resources Information Center

    Green, Donald Ross

    To determine if tryout samples typically used for item selection contribute to test bias against minority groups, item analyses were made of the California Achievement Tests using seven sub-groups of the standardization sample: Northern White Suburban, Northern Black Urban, Southern White Suburban, Southern Black Rural, Southern White Rural,…

  3. Investigation of Particle Sampling Bias in the Shear Flow Field Downstream of a Backward Facing Step

    NASA Technical Reports Server (NTRS)

    Meyers, James F.; Kjelgaard, Scott O.; Hepner, Timothy E.

    1990-01-01

    The flow field about a backward facing step was investigated to determine the characteristics of particle sampling bias in the various flow phenomena. The investigation used the calculation of the velocity:data rate correlation coefficient as a measure of statistical dependence and thus the degree of velocity bias. While the investigation found negligible dependence within the free stream region, increased dependence was found within the boundary and shear layers. Full classic correction techniques over-compensated the data since the dependence was weak, even in the boundary layer and shear regions. The paper emphasizes the necessity to determine the degree of particle sampling bias for each measurement ensemble and not use generalized assumptions to correct the data. Further, it recommends the calculation of the velocity:data rate correlation coefficient become a standard statistical calculation in the analysis of all laser velocimeter data.

  4. Current Fluctuations in a Semiconductor Quantum Dot with Large Energy Spacing

    NASA Astrophysics Data System (ADS)

    Jeong, Heejun

    2014-12-01

    We report on measurements of the current noise properties of electron tunneling through a split-gate GaAs quantum dot with large energy level spacing and a small number of electrons. The shot noise is fully Poissonian or suppressed in the Coulomb-blockaded regime, while it is enhanced to super-Poissonian values when an excited energy level is involved at finite source-drain bias. The results can be explained by multiple Poissonian processes through multilevel sequential tunneling.
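
    The Poissonian/super-Poissonian classification used above is conventionally expressed through the Fano factor; a minimal sketch with illustrative numbers, not the measured device values:

        E_CHARGE = 1.602176634e-19   # elementary charge, C

        def fano_factor(current_noise_density, current):
            """F = S_I / (2 e I): ~1 Poissonian, <1 suppressed, >1 super-Poissonian."""
            return current_noise_density / (2.0 * E_CHARGE * abs(current))

        print(fano_factor(current_noise_density=6.4e-28, current=1.0e-9))   # ~2.0, super-Poissonian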

  5. Electron transport in electrically biased inverse parabolic double-barrier structure

    NASA Astrophysics Data System (ADS)

    Bati, M.; Sakiroglu, S.; Sokmen, I.

    2016-05-01

    A theoretical study of resonant tunneling is carried out for an inverse parabolic double-barrier structure subjected to an external electric field. The tunneling transmission coefficient and the density of states are analyzed using the non-equilibrium Green’s function approach based on the finite difference method. It is found that the resonant peak of the transmission coefficient, which is unity for the symmetrical case, is reduced under the applied electric field and depends strongly on the variation of the structure parameters.

  6. Antisite disorder induced spin glass and exchange bias effect in Nd2NiMnO6 epitaxial thin film

    NASA Astrophysics Data System (ADS)

    Singh, Amit Kumar; Chauhan, Samta; Chandra, Ramesh

    2017-03-01

    We report the observation of the exchange bias effect and spin glass behaviour at low temperature in a ferromagnetic Nd2NiMnO6 epitaxial thin film. Along with the ferromagnetic transition at ˜194 K, an additional transition is observed at lower temperature (˜55 K) as seen from M-T curves of the sample. A shift in the ac susceptibility peak with frequency has been observed at low temperature, which is a signature of a glassy phase within the sample. The detailed investigation of the memory effect and time dependent magnetic relaxation measurements reveals the presence of a spin glass phase in the Nd2NiMnO6 thin film. The exchange bias effect observed at low temperature in the sample has been associated with an antisite disorder induced spin glass phase, which results in a ferromagnetic/spin glass interface at low temperature. The exchange bias behaviour has been further confirmed by performing cooling field and temperature dependence of exchange bias along with training effect measurements.

  7. Estimating the price elasticity of beer: meta-analysis of data with heterogeneity, dependence, and publication bias.

    PubMed

    Nelson, Jon P

    2014-01-01

    Precise estimates of price elasticities are important for alcohol tax policy. Using meta-analysis, this paper corrects average beer elasticities for heterogeneity, dependence, and publication selection bias. A sample of 191 estimates is obtained from 114 primary studies. Simple and weighted means are reported. Dependence is addressed by restricting the number of estimates per study, using author-restricted samples, and including author-specific variables. Publication bias is addressed using a funnel graph, trim-and-fill, and Egger's intercept model. Heterogeneity and selection bias are examined jointly in meta-regressions containing moderator variables for econometric methodology, primary data, and precision of estimates. Results for fixed- and random-effects regressions are reported. Country-specific effects and sample time periods are unimportant, but several methodology variables help explain the dispersion of estimates. In models that correct for selection bias and heterogeneity, the average beer price elasticity is about -0.20, which is less elastic by 50% compared to values commonly used in alcohol tax policy simulations. Copyright © 2013 Elsevier B.V. All rights reserved.
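
    One of the publication-bias tools named above, Egger's intercept test, is a regression of the standardized effect on precision; a sketch with simulated elasticities (not the paper's 191 estimates) in which a small-study selection effect is deliberately injected:

        import numpy as np

        rng = np.random.default_rng(3)
        se = rng.uniform(0.03, 0.25, size=60)              # standard errors of the estimates
        elasticity = -0.20 + rng.normal(0.0, se)           # true effect -0.20 plus sampling error
        elasticity -= 0.8 * se                             # inject a small-study (selection) effect

        # Egger regression: standardized effect (estimate/SE) on precision (1/SE).
        slope, intercept = np.polyfit(1.0 / se, elasticity / se, 1)
        print(f"Egger intercept: {intercept:.2f}   (far from zero -> funnel asymmetry)")
        print(f"slope (selection-adjusted effect): {slope:.3f}")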

  8. Forms of attrition in a longitudinal study of religion and health in older adults and implications for sample bias

    PubMed Central

    Hayward, R. David; Krause, Neal

    2014-01-01

    The use of longitudinal designs in the field of religion and health makes it important to understand how attrition bias may affect findings in this area. This study examines attrition in a 4-wave, 8-year study of older adults. Attrition resulted in a sample biased towards more educated and more religiously-involved individuals. Conditional linear growth curve models found that trajectories of change for some variables differed among attrition categories. Ineligibles had worsening depression, declining control, and declining attendance. Mortality was associated with worsening religious coping styles. Refusers experienced worsening depression. Nevertheless, there was no evidence of bias in the key religion and health results. PMID:25257794

  9. Forms of Attrition in a Longitudinal Study of Religion and Health in Older Adults and Implications for Sample Bias.

    PubMed

    Hayward, R David; Krause, Neal

    2016-02-01

    The use of longitudinal designs in the field of religion and health makes it important to understand how attrition bias may affect findings in this area. This study examines attrition in a 4-wave, 8-year study of older adults. Attrition resulted in a sample biased toward more educated and more religiously involved individuals. Conditional linear growth curve models found that trajectories of change for some variables differed among attrition categories. Ineligibles had worsening depression, declining control, and declining attendance. Mortality was associated with worsening religious coping styles. Refusers experienced worsening depression. Nevertheless, there was no evidence of bias in the key religion and health results.

  10. Survey Response-Related Biases in Contingent Valuation: Concepts, Remedies, and Empirical Application to Valuing Aquatic Plant Management

    Treesearch

    Mark L. Messonnier; John C. Bergstrom; Chrisopher M. Cornwell; R. Jeff Teasley; H. Ken Cordell

    2000-01-01

    Simple nonresponse and selection biases that may occur in survey research such as contingent valuation applications are discussed and tested. Correction mechanisms for these types of biases are demonstrated. Results indicate the importance of testing and correcting for unit and item nonresponse bias in contingent valuation survey data. When sample nonresponse and...

  11. Bias correction in the realized stochastic volatility model for daily volatility on the Tokyo Stock Exchange

    NASA Astrophysics Data System (ADS)

    Takaishi, Tetsuya

    2018-06-01

    The realized stochastic volatility model has been introduced to estimate more accurate volatility by using both daily returns and realized volatility. The main advantage of the model is that no special bias-correction factor for the realized volatility is required a priori. Instead, the model introduces a bias-correction parameter responsible for the bias hidden in realized volatility. We empirically investigate the bias-correction parameter for realized volatilities calculated at various sampling frequencies for six stocks on the Tokyo Stock Exchange, and then show that the dynamic behavior of the bias-correction parameter as a function of sampling frequency is qualitatively similar to that of the Hansen-Lunde bias-correction factor although their values are substantially different. Under the stochastic diffusion assumption of the return dynamics, we investigate the accuracy of estimated volatilities by examining the standardized returns. We find that while the moments of the standardized returns from low-frequency realized volatilities are consistent with the expectation from the Gaussian variables, the deviation from the expectation becomes considerably large at high frequencies. This indicates that the realized stochastic volatility model itself cannot completely remove bias at high frequencies.
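
    A hedged sketch of the diagnostic described above: build realized variance from intraday returns and check whether daily returns standardized by its square root have roughly Gaussian moments. The simulated returns omit microstructure noise, so they do not reproduce the high-frequency bias the paper discusses.

        import numpy as np

        rng = np.random.default_rng(4)
        n_days, n_intraday = 500, 78                       # e.g. 5-minute returns per trading day
        day_sigma = 0.01 * np.exp(0.3 * rng.standard_normal(n_days))          # stochastic volatility
        r = day_sigma[:, None] / np.sqrt(n_intraday) * rng.standard_normal((n_days, n_intraday))

        rv = (r ** 2).sum(axis=1)                          # realized variance, one value per day
        z = r.sum(axis=1) / np.sqrt(rv)                    # daily returns standardized by sqrt(RV)
        kurt = ((z - z.mean()) ** 4).mean() / z.var() ** 2
        print(f"variance of standardized returns: {z.var():.3f}  (Gaussian expectation: 1)")
        print(f"kurtosis of standardized returns: {kurt:.3f}  (Gaussian expectation: 3)")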

  12. The moderating role of cognitive biases on the relationship between negative affective states and psychotic-like experiences in non-clinical adults.

    PubMed

    Prochwicz, Katarzyna; Kłosowska, Joanna

    2018-04-13

    Negative emotions and cognitive biases are important factors underlying psychotic symptoms and psychotic-like experiences (PLEs); however, it is not clear whether these factors interact when they influence psychotic phenomena. The aim of our study was to investigate whether psychosis-related cognitive biases moderate the relationship between negative affective states, i.e. anxiety and depression, and psychotic-like experiences. The study sample contains 251 participants who have never been diagnosed with psychiatric disorders. Anxiety, depression, cognitive biases, and psychotic-like experiences were assessed with self-report questionnaires. A moderation analysis was performed to examine the relationship between the study variables. The analyses revealed that the link between anxiety and positive PLEs is moderated by External Attribution bias, whereas the relationship between depression and positive PLEs is moderated by Attention to Threat bias. Attributional bias was also found to moderate the association between depression and negative subclinical symptoms; Jumping to Conclusions bias served as a moderator in the link between anxiety and depression and negative PLEs. Further studies in clinical samples are required to verify the moderating role of individual cognitive biases on the relationship between negative emotional states and full-blown psychotic symptoms. Copyright © 2018 Elsevier B.V. All rights reserved.

  13. Predicting financial market crashes using ghost singularities.

    PubMed

    Smug, Damian; Ashwin, Peter; Sornette, Didier

    2018-01-01

    We analyse the behaviour of a non-linear model of coupled stock and bond prices exhibiting periodically collapsing bubbles. By using the formalism of dynamical system theory, we explain what drives the bubbles and how foreshocks or aftershocks are generated. A dynamical phase space representation of that system coupled with standard multiplicative noise rationalises the log-periodic power law singularity pattern documented in many historical financial bubbles. The notion of 'ghosts of finite-time singularities' is introduced and used to estimate the end of an evolving bubble, using finite-time singularities of an approximate normal form near the bifurcation point. We test the forecasting skill of this method on different stochastic price realisations and compare with Monte Carlo simulations of the full system. Remarkably, the approximate normal form is significantly more precise and less biased. Moreover, the method of ghosts of singularities is less sensitive to the noise realisation, thus providing more robust forecasts.

  14. Using time-dependent density functional theory in real time for calculating electronic transport

    NASA Astrophysics Data System (ADS)

    Schaffhauser, Philipp; Kümmel, Stephan

    2016-01-01

    We present a scheme for calculating electronic transport within the propagation approach to time-dependent density functional theory. Our scheme is based on solving the time-dependent Kohn-Sham equations on grids in real space and real time for a finite system. We use absorbing and antiabsorbing boundaries for simulating the coupling to a source and a drain. The boundaries are designed to minimize the effects of quantum-mechanical reflections and electrical polarization build-up, which are the major obstacles when calculating transport by applying an external bias to a finite system. We show that the scheme can readily be applied to real molecules by calculating the current through a conjugated molecule as a function of time. By comparing to literature results for the conjugated molecule and to analytic results for a one-dimensional model system we demonstrate the reliability of the concept.

  15. Predicting financial market crashes using ghost singularities

    PubMed Central

    2018-01-01

    We analyse the behaviour of a non-linear model of coupled stock and bond prices exhibiting periodically collapsing bubbles. By using the formalism of dynamical system theory, we explain what drives the bubbles and how foreshocks or aftershocks are generated. A dynamical phase space representation of that system coupled with standard multiplicative noise rationalises the log-periodic power law singularity pattern documented in many historical financial bubbles. The notion of ‘ghosts of finite-time singularities’ is introduced and used to estimate the end of an evolving bubble, using finite-time singularities of an approximate normal form near the bifurcation point. We test the forecasting skill of this method on different stochastic price realisations and compare with Monte Carlo simulations of the full system. Remarkably, the approximate normal form is significantly more precise and less biased. Moreover, the method of ghosts of singularities is less sensitive to the noise realisation, thus providing more robust forecasts. PMID:29596485

  16. Potential Reporting Bias in Neuroimaging Studies of Sex Differences.

    PubMed

    David, Sean P; Naudet, Florian; Laude, Jennifer; Radua, Joaquim; Fusar-Poli, Paolo; Chu, Isabella; Stefanick, Marcia L; Ioannidis, John P A

    2018-04-17

    Numerous functional magnetic resonance imaging (fMRI) studies have reported sex differences. To empirically evaluate for evidence of excessive significance bias in this literature, we searched for published fMRI studies of human brain to evaluate sex differences, regardless of the topic investigated, in Medline and Scopus over 10 years. We analyzed the prevalence of conclusions in favor of sex differences and the correlation between study sample sizes and number of significant foci identified. In the absence of bias, larger studies (better powered) should identify a larger number of significant foci. Across 179 papers, median sample size was n = 32 (interquartile range 23-47.5). A median of 5 foci related to sex differences were reported (interquartile range, 2-9.5). Few articles (n = 2) had titles focused on no differences or on similarities (n = 3) between sexes. Overall, 158 papers (88%) reached "positive" conclusions in their abstract and presented some foci related to sex differences. There was no statistically significant relationship between sample size and the number of foci (-0.048% increase for every 10 participants, p = 0.63). The extremely high prevalence of "positive" results and the lack of the expected relationship between sample size and the number of discovered foci reflect probable reporting bias and excess significance bias in this literature.

  17. Ensemble-Biased Metadynamics: A Molecular Simulation Method to Sample Experimental Distributions

    PubMed Central

    Marinelli, Fabrizio; Faraldo-Gómez, José D.

    2015-01-01

    We introduce an enhanced-sampling method for molecular dynamics (MD) simulations referred to as ensemble-biased metadynamics (EBMetaD). The method biases a conventional MD simulation to sample a molecular ensemble that is consistent with one or more probability distributions known a priori, e.g., experimental intramolecular distance distributions obtained by double electron-electron resonance or other spectroscopic techniques. To this end, EBMetaD adds an adaptive biasing potential throughout the simulation that discourages sampling of configurations inconsistent with the target probability distributions. The bias introduced is the minimum necessary to fulfill the target distributions, i.e., EBMetaD satisfies the maximum-entropy principle. Unlike other methods, EBMetaD does not require multiple simulation replicas or the introduction of Lagrange multipliers, and is therefore computationally efficient and straightforward in practice. We demonstrate the performance and accuracy of the method for a model system as well as for spin-labeled T4 lysozyme in explicit water, and show how EBMetaD reproduces three double electron-electron resonance distance distributions concurrently within a few tens of nanoseconds of simulation time. EBMetaD is integrated in the open-source PLUMED plug-in (www.plumed-code.org), and can be therefore readily used with multiple MD engines. PMID:26083917

  18. The evolution of phenotypes and genetic parameters under preferential mating

    PubMed Central

    Roff, Derek A; Fairbairn, Daphne J

    2014-01-01

    This article extends and adds more realism to Lande's analytical model for evolution under mate choice by using individual-based simulations in which females sample a finite number of males and the genetic architecture of the preference and preferred trait evolves. The simulations show that the equilibrium heritabilities of the preference and preferred trait and the genetic correlation between them (rG), depend critically on aspects of the mating system (the preference function, mode of mate choice, choosiness, and number of potential mates sampled), the presence or absence of natural selection on the preferred trait, and the initial genetic parameters. Under some parameter combinations, preferential mating increased the heritability of the preferred trait, providing a possible resolution for the lek paradox. The Kirkpatrick–Barton approximation for rG proved to be biased downward, but the realized genetic correlations were also low, generally <0.2. Such low values of rG indicate that coevolution of the preference and preferred trait is likely to be very slow and subject to significant stochastic variation. Lande's model accurately predicted the incidence of runaway selection in the simulations, except where preferences were relative and the preferred trait was subject to natural selection. In these cases, runaways were over- or underestimated, depending on the number of males sampled. We conclude that rapid coevolution of preferences and preferred traits is unlikely in natural populations, but that the parameter combinations most conducive to it are most likely to occur in lekking species. PMID:25077025

  19. Big Data and Large Sample Size: A Cautionary Note on the Potential for Bias

    PubMed Central

    Chambers, David A.; Glasgow, Russell E.

    2014-01-01

    A number of commentaries have suggested that large studies are more reliable than smaller studies and there is a growing interest in the analysis of “big data” that integrates information from many thousands of persons and/or different data sources. We consider a variety of biases that are likely in the era of big data, including sampling error, measurement error, multiple comparisons errors, aggregation error, and errors associated with the systematic exclusion of information. Using examples from epidemiology, health services research, studies on determinants of health, and clinical trials, we conclude that it is necessary to exercise greater caution to be sure that big sample size does not lead to big inferential errors. Despite the advantages of big studies, large sample size can magnify the bias associated with error resulting from sampling or study design. PMID:25043853

  20. Uses and biases of volunteer water quality data

    USGS Publications Warehouse

    Loperfido, J.V.; Beyer, P.; Just, C.L.; Schnoor, J.L.

    2010-01-01

    State water quality monitoring has been augmented by volunteer monitoring programs throughout the United States. Although a significant effort has been put forth by volunteers, questions remain as to whether volunteer data are accurate and can be used by regulators. In this study, typical volunteer water quality measurements from laboratory and environmental samples in Iowa were analyzed for error and bias. Volunteer measurements of nitrate+nitrite were significantly lower (about 2-fold) than concentrations determined via standard methods in both laboratory-prepared and environmental samples. Total reactive phosphorus concentrations analyzed by volunteers were similar to measurements determined via standard methods in laboratory-prepared samples and environmental samples, but were statistically lower than the actual concentration in four of the five laboratory-prepared samples. Volunteer water quality measurements were successful in identifying and classifying most of the waters which violate United States Environmental Protection Agency recommended water quality criteria for total nitrogen (66%) and for total phosphorus (52%) with the accuracy improving when accounting for error and biases in the volunteer data. An understanding of the error and bias in volunteer water quality measurements can allow regulators to incorporate volunteer water quality data into total maximum daily load planning or state water quality reporting. © 2010 American Chemical Society.

  1. Sampling bias in climate-conflict research

    NASA Astrophysics Data System (ADS)

    Adams, Courtland; Ide, Tobias; Barnett, Jon; Detges, Adrien

    2018-03-01

    Critics have argued that the evidence of an association between climate change and conflict is flawed because the research relies on a dependent variable sampling strategy [1-4]. Similarly, it has been hypothesized that convenience of access biases the sample of cases studied (the 'streetlight effect' [5]). This also gives rise to claims that the climate-conflict literature stigmatizes some places as being more 'naturally' violent [6-8]. Yet there has been no proof of such sampling patterns. Here we test whether climate-conflict research is based on such a biased sample through a systematic review of the literature. We demonstrate that research on climate change and violent conflict suffers from a streetlight effect. Further, studies which focus on a small number of cases in particular are strongly informed by cases where there has been conflict, do not sample on the independent variables (climate impact or risk), and hence tend to find some association between these two variables. These biases mean that research on climate change and conflict primarily focuses on a few accessible regions, overstates the links between both phenomena and cannot explain peaceful outcomes from climate change. This could result in maladaptive responses in those places that are stigmatized as being inherently more prone to climate-induced violence.

  2. Bias, Confounding, and Interaction: Lions and Tigers, and Bears, Oh My!

    PubMed

    Vetter, Thomas R; Mascha, Edward J

    2017-09-01

    Epidemiologists seek to make a valid inference about the causal effect between an exposure and a disease in a specific population, using representative sample data from a specific population. Clinical researchers likewise seek to make a valid inference about the association between an intervention and outcome(s) in a specific population, based upon their randomly collected, representative sample data. Both do so by using the available data about the sample variable to make a valid estimate about its corresponding or underlying, but unknown population parameter. Random error in an experiment can be due to the natural, periodic fluctuation or variation in the accuracy or precision of virtually any data sampling technique or health measurement tool or scale. In a clinical research study, random error can be due to not only innate human variability but also purely chance. Systematic error in an experiment arises from an innate flaw in the data sampling technique or measurement instrument. In the clinical research setting, systematic error is more commonly referred to as systematic bias. The most commonly encountered types of bias in anesthesia, perioperative, critical care, and pain medicine research include recall bias, observational bias (Hawthorne effect), attrition bias, misclassification or informational bias, and selection bias. A confounding variable is a factor associated with both the exposure of interest and the outcome of interest. A confounding variable (confounding factor or confounder) is a variable that correlates (positively or negatively) with both the exposure and outcome. Confounding is typically not an issue in a randomized trial because the randomized groups are sufficiently balanced on all potential confounding variables, both observed and nonobserved. However, confounding can be a major problem with any observational (nonrandomized) study. Ignoring confounding in an observational study will often result in a "distorted" or incorrect estimate of the association or treatment effect. Interaction among variables, also known as effect modification, exists when the effect of 1 explanatory variable on the outcome depends on the particular level or value of another explanatory variable. Bias and confounding are common potential explanations for statistically significant associations between exposure and outcome when the true relationship is noncausal. Understanding interactions is vital to proper interpretation of treatment effects. These complex concepts should be consistently and appropriately considered whenever one is not only designing but also analyzing and interpreting data from a randomized trial or observational study.
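
    The interaction (effect-modification) idea in the passage above is usually made concrete with a product term in a regression model; a toy sketch with simulated data, not an example from the article:

        import numpy as np

        rng = np.random.default_rng(5)
        n = 1000
        exposure = rng.integers(0, 2, n)                   # e.g. treated vs. untreated
        modifier = rng.integers(0, 2, n)                   # e.g. a patient characteristic
        outcome = 1.0 * exposure + 0.5 * modifier + 1.5 * exposure * modifier + rng.normal(0, 1, n)

        # OLS with an intercept, main effects, and the exposure-by-modifier product term.
        X = np.column_stack([np.ones(n), exposure, modifier, exposure * modifier])
        beta, *_ = np.linalg.lstsq(X, outcome, rcond=None)
        print(f"exposure effect when modifier = 0: {beta[1]:.2f}")
        print(f"exposure effect when modifier = 1: {beta[1] + beta[3]:.2f}")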

  3. Evaluation of ACCMIP ozone simulations and ozonesonde sampling biases using a satellite-based multi-constituent chemical reanalysis

    NASA Astrophysics Data System (ADS)

    Miyazaki, Kazuyuki; Bowman, Kevin

    2017-07-01

    The Atmospheric Chemistry Climate Model Intercomparison Project (ACCMIP) ensemble ozone simulations for the present day from the 2000 decade simulation results are evaluated by a state-of-the-art multi-constituent atmospheric chemical reanalysis that ingests multiple satellite data including the Tropospheric Emission Spectrometer (TES), the Microwave Limb Sounder (MLS), the Ozone Monitoring Instrument (OMI), and the Measurement of Pollution in the Troposphere (MOPITT) for 2005-2009. Validation of the chemical reanalysis against global ozonesondes shows good agreement throughout the free troposphere and lower stratosphere for both seasonal and year-to-year variations, with an annual mean bias of less than 0.9 ppb in the middle and upper troposphere at the tropics and mid-latitudes. The reanalysis provides comprehensive spatiotemporal evaluation of chemistry-model performance that complements direct ozonesonde comparisons, which are shown to suffer from significant sampling bias. The reanalysis reveals that the ACCMIP ensemble mean overestimates ozone in the northern extratropics by 6-11 ppb while underestimating by up to 18 ppb in the southern tropics over the Atlantic in the lower troposphere. Most models underestimate the spatial variability of the annual mean lower tropospheric concentrations in the extratropics of both hemispheres by up to 70 %. The ensemble mean also overestimates the seasonal amplitude by 25-70 % in the northern extratropics and overestimates the inter-hemispheric gradient by about 30 % in the lower and middle troposphere. A part of the discrepancies can be attributed to the 5-year reanalysis data for the decadal model simulations. However, these differences are less evident with the current sonde network. To estimate ozonesonde sampling biases, we computed model bias separately for global coverage and the ozonesonde network. The ozonesonde sampling bias in the evaluated model bias for the seasonal mean concentration relative to global coverage is 40-50 % over the western Pacific and east Indian Ocean and reaches 110 % over the equatorial Americas and up to 80 % for the global tropics. In contrast, the ozonesonde sampling bias is typically smaller than 30 % for the Arctic regions in the lower and middle troposphere. These systematic biases have implications for ozone radiative forcing and the response of chemistry to climate that can be further quantified as the satellite observational record extends to multiple decades.

  4. The Impact of Selection, Gene Conversion, and Biased Sampling on the Assessment of Microbial Demography.

    PubMed

    Lapierre, Marguerite; Blin, Camille; Lambert, Amaury; Achaz, Guillaume; Rocha, Eduardo P C

    2016-07-01

    Recent studies have linked demographic changes and epidemiological patterns in bacterial populations using coalescent-based approaches. We identified 26 studies using skyline plots and found that 21 inferred overall population expansion. This surprising result led us to analyze the impact of natural selection, recombination (gene conversion), and sampling biases on demographic inference using skyline plots and site frequency spectra (SFS). Forward simulations based on biologically relevant parameters from Escherichia coli populations showed that theoretical arguments on the detrimental impact of recombination and especially natural selection on the reconstructed genealogies cannot be ignored in practice. In fact, both processes systematically lead to spurious interpretations of population expansion in skyline plots (and in SFS for selection). Weak purifying selection, and especially positive selection, had important effects on skyline plots, showing patterns akin to those of population expansions. State-of-the-art techniques to remove recombination further amplified these biases. We simulated three common sampling biases in microbiological research: uniform, clustered, and mixed sampling. Alone, or together with recombination and selection, they further mislead demographic inferences producing almost any possible skyline shape or SFS. Interestingly, sampling sub-populations also affected skyline plots and SFS, because the coalescent rates of populations and their sub-populations had different distributions. This study suggests that extreme caution is needed to infer demographic changes solely based on reconstructed genealogies. We suggest that the development of novel sampling strategies and the joint analyses of diverse population genetic methods are strictly necessary to estimate demographic changes in populations where selection, recombination, and biased sampling are present. © The Author 2016. Published by Oxford University Press on behalf of the Society for Molecular Biology and Evolution.
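
    The site frequency spectrum referred to above is simple to compute from a matrix of ancestral/derived calls; a sketch with random genotypes standing in for real sequence data:

        import numpy as np

        rng = np.random.default_rng(6)
        n_samples, n_sites = 20, 500
        genotypes = (rng.random((n_samples, n_sites)) < 0.15).astype(int)   # 0 ancestral, 1 derived

        counts = genotypes.sum(axis=0)
        segregating = counts[(counts > 0) & (counts < n_samples)]           # drop fixed sites
        sfs = np.bincount(segregating, minlength=n_samples)[1:n_samples]
        print(sfs)   # entry i-1 = number of sites where the derived allele appears in i sequences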

  5. The CogBIAS longitudinal study protocol: cognitive and genetic factors influencing psychological functioning in adolescence.

    PubMed

    Booth, Charlotte; Songco, Annabel; Parsons, Sam; Heathcote, Lauren; Vincent, John; Keers, Robert; Fox, Elaine

    2017-12-29

    Optimal psychological development is dependent upon a complex interplay between individual and situational factors. Investigating the development of these factors in adolescence will help to improve understanding of emotional vulnerability and resilience. The CogBIAS longitudinal study (CogBIAS-L-S) aims to combine cognitive and genetic approaches to investigate risk and protective factors associated with the development of mood and impulsivity-related outcomes in an adolescent sample. CogBIAS-L-S is a three-wave longitudinal study of typically developing adolescents conducted over 4 years, with data collection at age 12, 14 and 16. At each wave participants will undergo multiple assessments including a range of selective cognitive processing tasks (e.g. attention bias, interpretation bias, memory bias) and psychological self-report measures (e.g. anxiety, depression, resilience). Saliva samples will also be collected at the baseline assessment for genetic analyses. Multilevel statistical analyses will be performed to investigate the developmental trajectory of cognitive biases on psychological functioning, as well as the influence of genetic moderation on these relationships. CogBIAS-L-S represents the first longitudinal study to assess multiple cognitive biases across adolescent development and the largest study of its kind to collect genetic data. It therefore provides a unique opportunity to understand how genes and the environment influence the development and maintenance of cognitive biases and provide insight into risk and protective factors that may be key targets for intervention.

  6. Quality of evidence revealing subtle gender biases in science is in the eye of the beholder.

    PubMed

    Handley, Ian M; Brown, Elizabeth R; Moss-Racusin, Corinne A; Smith, Jessi L

    2015-10-27

    Scientists are trained to evaluate and interpret evidence without bias or subjectivity. Thus, growing evidence revealing a gender bias against women-or favoring men-within science, technology, engineering, and mathematics (STEM) settings is provocative and raises questions about the extent to which gender bias may contribute to women's underrepresentation within STEM fields. To the extent that research illustrating gender bias in STEM is viewed as convincing, the culture of science can begin to address the bias. However, are men and women equally receptive to this type of experimental evidence? This question was tested with three randomized, double-blind experiments-two involving samples from the general public (n = 205 and 303, respectively) and one involving a sample of university STEM and non-STEM faculty (n = 205). In all experiments, participants read an actual journal abstract reporting gender bias in a STEM context (or an altered abstract reporting no gender bias in experiment 3) and evaluated the overall quality of the research. Results across experiments showed that men evaluate the gender-bias research less favorably than women, and, of concern, this gender difference was especially prominent among STEM faculty (experiment 2). These results suggest a relative reluctance among men, especially faculty men within STEM, to accept evidence of gender biases in STEM. This finding is problematic because broadening the participation of underrepresented people in STEM, including women, necessarily requires a widespread willingness (particularly by those in the majority) to acknowledge that bias exists before transformation is possible.

  7. MAPPING THE GAS TURBULENCE IN THE COMA CLUSTER: PREDICTIONS FOR ASTRO-H

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    ZuHone, J. A.; Markevitch, M.; Zhuravleva, I.

    2016-02-01

    Astro-H will be able for the first time to map gas velocities and detect turbulence in galaxy clusters. One of the best targets for turbulence studies is the Coma cluster, due to its proximity, absence of a cool core, and lack of a central active galactic nucleus. To determine what constraints Astro-H will be able to place on the Coma velocity field, we construct simulated maps of the projected gas velocity and compute the second-order structure function, an analog of the velocity power spectrum. We vary the injection scale, dissipation scale, slope, and normalization of the turbulent power spectrum, and apply measurement errors and finite sampling to the velocity field. We find that even with sparse coverage of the cluster, Astro-H will be able to measure the Mach number and the injection scale of the turbulent power spectrum—the quantities determining the energy flux down the turbulent cascade and the diffusion rate for everything that is advected by the gas (metals, cosmic rays, etc.). Astro-H will not be sensitive to the dissipation scale or the slope of the power spectrum in its inertial range, unless they are outside physically motivated intervals. We give the expected confidence intervals for the injection scale and the normalization of the power spectrum for a number of possible pointing configurations, combining the structure function and velocity dispersion data. Importantly, we also determine that measurement errors on the line shift will bias the velocity structure function upward, and show how to correct this bias.

  8. Mapping the Gas Turbulence in the Coma Cluster: Predictions for Astro-H

    NASA Technical Reports Server (NTRS)

    ZuHone, J. A.; Markevitch, M.; Zhuravleva, I.

    2016-01-01

    Astro-H will be able for the first time to map gas velocities and detect turbulence in galaxy clusters. One of the best targets for turbulence studies is the Coma cluster, due to its proximity, absence of a cool core, and lack of a central active galactic nucleus. To determine what constraints Astro-H will be able to place on the Coma velocity field, we construct simulated maps of the projected gas velocity and compute the second-order structure function, an analog of the velocity power spectrum. We vary the injection scale, dissipation scale, slope, and normalization of the turbulent power spectrum, and apply measurement errors and finite sampling to the velocity field. We find that even with sparse coverage of the cluster, Astro-H will be able to measure the Mach number and the injection scale of the turbulent power spectrum-the quantities determining the energy flux down the turbulent cascade and the diffusion rate for everything that is advected by the gas (metals, cosmic rays, etc.). Astro-H will not be sensitive to the dissipation scale or the slope of the power spectrum in its inertial range, unless they are outside physically motivated intervals. We give the expected confidence intervals for the injection scale and the normalization of the power spectrum for a number of possible pointing configurations, combining the structure function and velocity dispersion data. Importantly, we also determine that measurement errors on the line shift will bias the velocity structure function upward, and show how to correct this bias.
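
    The second-order structure function used in both records above can be illustrated with a short Monte Carlo sketch; the velocity map, pixel scale, noise level, and binning are assumptions, not the authors' simulation setup.

      import numpy as np

      def structure_function_2nd(vz, pixel_kpc, nbins=20, npairs=200_000, seed=1):
          """Estimate SF2(r) = <|v(x+r) - v(x)|^2> from a 2D line-of-sight
          velocity map by sampling random pixel pairs and binning by separation."""
          rng = np.random.default_rng(seed)
          ny, nx = vz.shape
          i1, j1 = rng.integers(0, ny, npairs), rng.integers(0, nx, npairs)
          i2, j2 = rng.integers(0, ny, npairs), rng.integers(0, nx, npairs)
          dv2 = (vz[i1, j1] - vz[i2, j2]) ** 2
          r = np.hypot(i1 - i2, j1 - j2) * pixel_kpc
          bins = np.linspace(0, r.max(), nbins + 1)
          idx = np.digitize(r, bins) - 1
          sf2 = np.array([dv2[idx == k].mean() if np.any(idx == k) else np.nan
                          for k in range(nbins)])
          return 0.5 * (bins[:-1] + bins[1:]), sf2

      # Toy example: a random velocity map with line-shift measurement errors added.
      rng = np.random.default_rng(0)
      vmap = 200 * rng.standard_normal((128, 128))    # km/s, purely illustrative
      sigma_err = 50.0                                # measurement error per pixel (km/s)
      noisy = vmap + sigma_err * rng.standard_normal(vmap.shape)
      r, sf2_noisy = structure_function_2nd(noisy, pixel_kpc=10.0)
      # Independent errors add ~2*sigma_err**2 to SF2 at all separations, biasing it
      # upward (cf. the abstract); subtracting that constant is one simple correction.
      sf2_corrected = sf2_noisy - 2 * sigma_err ** 2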

  9. A novel ultra high-throughput 16S rRNA gene amplicon sequencing library preparation method for the Illumina HiSeq platform.

    PubMed

    de Muinck, Eric J; Trosvik, Pål; Gilfillan, Gregor D; Hov, Johannes R; Sundaram, Arvind Y M

    2017-07-06

    Advances in sequencing technologies and bioinformatics have made the analysis of microbial communities almost routine. Nonetheless, the need remains to improve on the techniques used for gathering such data, including increasing throughput while lowering cost and benchmarking the techniques so that potential sources of bias can be better characterized. We present a triple-index amplicon sequencing strategy to sequence large numbers of samples at significantly lower cost and in a shorter timeframe compared to existing methods. The design employs a two-stage PCR protocol, incorporating three barcodes to each sample, with the possibility to add a fourth index. It also includes heterogeneity spacers to overcome low complexity issues faced when sequencing amplicons on Illumina platforms. The library preparation method was extensively benchmarked through analysis of a mock community in order to assess biases introduced by sample indexing, number of PCR cycles, and template concentration. We further evaluated the method through re-sequencing of a standardized environmental sample. Finally, we evaluated our protocol on a set of fecal samples from a small cohort of healthy adults, demonstrating good performance in a realistic experimental setting. Between-sample variation was mainly related to batch effects, such as DNA extraction, while sample indexing was also a significant source of bias. PCR cycle number strongly influenced chimera formation and affected relative abundance estimates of species with high GC content. Libraries were sequenced using the Illumina HiSeq and MiSeq platforms to demonstrate that this protocol is highly scalable to sequence thousands of samples at a very low cost. Here, we provide the most comprehensive study of performance and bias inherent to a 16S rRNA gene amplicon sequencing method to date. Triple-indexing greatly reduces the number of long custom DNA oligos required for library preparation, while the inclusion of variable length heterogeneity spacers minimizes the need for PhiX spike-in. This design results in a significant cost reduction of highly multiplexed amplicon sequencing. The biases we characterize highlight the need for highly standardized protocols. Reassuringly, we find that the biological signal is a far stronger structuring factor than the various sources of bias.

  10. Bias in Student Survey Findings from Active Parental Consent Procedures

    ERIC Educational Resources Information Center

    Shaw, Thérèse; Cross, Donna; Thomas, Laura T.; Zubrick, Stephen R.

    2015-01-01

    Increasingly, researchers are required to obtain active (explicit) parental consent prior to surveying children and adolescents in schools. This study assessed the potential bias present in a sample of actively consented students, and in the estimates of associations between variables obtained from this sample. Students (n = 3496) from 36…

  11. Single-Receiver GPS Phase Bias Resolution

    NASA Technical Reports Server (NTRS)

    Bertiger, William I.; Haines, Bruce J.; Weiss, Jan P.; Harvey, Nathaniel E.

    2010-01-01

    Existing software has been modified to yield the benefits of integer fixed double-differenced GPS-phased ambiguities when processing data from a single GPS receiver with no access to any other GPS receiver data. When the double-differenced combination of phase biases can be fixed reliably, a significant improvement in solution accuracy is obtained. This innovation uses a large global set of GPS receivers (40 to 80 receivers) to solve for the GPS satellite orbits and clocks (along with any other parameters). In this process, integer ambiguities are fixed and information on the ambiguity constraints is saved. For each GPS transmitter/receiver pair, the process saves the arc start and stop times, the wide-lane average value for the arc, the standard deviation of the wide lane, and the dual-frequency phase bias after bias fixing for the arc. The second step of the process uses the orbit and clock information, the bias information from the global solution, and only data from the single receiver to resolve double-differenced phase combinations. It is called "resolved" instead of "fixed" because constraints are introduced into the problem with a finite data weight to better account for possible errors. A receiver in orbit has much shorter continuous passes of data than a receiver fixed to the Earth. The method has parameters to account for this. In particular, differences in drifting wide-lane values must be handled differently. The first step of the process is automated, using two JPL software sets, Longarc and Gipsy-Oasis. The resulting orbit/clock and bias information files are posted on anonymous ftp for use by any licensed Gipsy-Oasis user. The second step is implemented in the Gipsy-Oasis executable, gd2p.pl, which automates the entire process, including fetching the information from anonymous ftp

  12. High Accuracy Evaluation of the Finite Fourier Transform Using Sampled Data

    NASA Technical Reports Server (NTRS)

    Morelli, Eugene A.

    1997-01-01

    Many system identification and signal processing procedures can be done advantageously in the frequency domain. A required preliminary step for this approach is the transformation of sampled time domain data into the frequency domain. The analytical tool used for this transformation is the finite Fourier transform. Inaccuracy in the transformation can degrade system identification and signal processing results. This work presents a method for evaluating the finite Fourier transform using cubic interpolation of sampled time domain data for high accuracy, and the chirp Zeta-transform for arbitrary frequency resolution. The accuracy of the technique is demonstrated in example cases where the transformation can be evaluated analytically. Arbitrary frequency resolution is shown to be important for capturing details of the data in the frequency domain. The technique is demonstrated using flight test data from a longitudinal maneuver of the F-18 High Alpha Research Vehicle.
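
    A minimal sketch of the idea described above, assuming a cubic-spline fit of the sampled data followed by numerical evaluation of the finite Fourier transform integral at arbitrary (non-FFT) frequencies; it is an illustration of the concept, not Morelli's implementation.

      import numpy as np
      from scipy.interpolate import CubicSpline

      def finite_fourier_transform(t, x, freqs_hz, oversample=20):
          """Approximate X(f) = integral_0^T x(t) exp(-2*pi*i*f*t) dt by cubic-spline
          interpolating the samples onto a fine grid and applying the trapezoid rule.
          freqs_hz may be any frequencies, giving arbitrary frequency resolution."""
          spline = CubicSpline(t, x)
          tf = np.linspace(t[0], t[-1], oversample * len(t))
          xf = spline(tf)
          return np.array([np.trapz(xf * np.exp(-2j * np.pi * f * tf), tf)
                           for f in freqs_hz])

      # Example: a 1.3 Hz sinusoid sampled at 50 Hz, transformed on a 0.01 Hz grid.
      t = np.arange(0, 10, 0.02)
      x = np.sin(2 * np.pi * 1.3 * t)
      freqs = np.linspace(0.5, 2.5, 201)
      X = finite_fourier_transform(t, x, freqs)
      print(freqs[np.argmax(np.abs(X))])   # peak near 1.3 Hz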

  13. Large exchange bias effect in NiFe2O4/CoO nanocomposites

    NASA Astrophysics Data System (ADS)

    Mohan, Rajendra; Prasad Ghosh, Mritunjoy; Mukherjee, Samrat

    2018-03-01

    In this work, we report the exchange bias effect of NiFe2O4/CoO nanocomposites, synthesized via the chemical co-precipitation method. Four samples of different particle size ranging from 4 nm to 31 nm were prepared with the annealing temperature varying from 200 °C to 800 °C. X-ray diffraction analysis of all the samples confirmed the presence of the cubic spinel phase of nickel ferrite along with the CoO phase without trace of any impurity. Sizes of the particles were studied from transmission electron micrographs and were found to be in agreement with those estimated from X-ray diffraction. Field cooled (FC) hysteresis loops at 5 K revealed an exchange bias (HE) of 2.2 kOe for the sample heated at 200 °C which decreased with the increase of particle size. Exchange bias expectedly vanished at 300 K due to high thermal energy (kBT) and low effective surface anisotropy. M-T curves revealed a blocking temperature of 135 K for the sample with the smaller particle size.

  14. Bias-Corrected Estimation of Noncentrality Parameters of Covariance Structure Models

    ERIC Educational Resources Information Center

    Raykov, Tenko

    2005-01-01

    A bias-corrected estimator of noncentrality parameters of covariance structure models is discussed. The approach represents an application of the bootstrap methodology for purposes of bias correction, and utilizes the relation between average of resample conventional noncentrality parameter estimates and their sample counterpart. The…

  15. Estimating the Entropy of Binary Time Series: Methodology, Some Theory and a Simulation Study

    NASA Astrophysics Data System (ADS)

    Gao, Yun; Kontoyiannis, Ioannis; Bienenstock, Elie

    2008-06-01

    Partly motivated by entropy-estimation problems in neuroscience, we present a detailed and extensive comparison between some of the most popular and effective entropy estimation methods used in practice: The plug-in method, four different estimators based on the Lempel-Ziv (LZ) family of data compression algorithms, an estimator based on the Context-Tree Weighting (CTW) method, and the renewal entropy estimator. METHODOLOGY: Three new entropy estimators are introduced; two new LZ-based estimators, and the “renewal entropy estimator,” which is tailored to data generated by a binary renewal process. For two of the four LZ-based estimators, a bootstrap procedure is described for evaluating their standard error, and a practical rule of thumb is heuristically derived for selecting the values of their parameters in practice. THEORY: We prove that, unlike their earlier versions, the two new LZ-based estimators are universally consistent, that is, they converge to the entropy rate for every finite-valued, stationary and ergodic process. An effective method is derived for the accurate approximation of the entropy rate of a finite-state hidden Markov model (HMM) with known distribution. Heuristic calculations are presented and approximate formulas are derived for evaluating the bias and the standard error of each estimator. SIMULATION: All estimators are applied to a wide range of data generated by numerous different processes with varying degrees of dependence and memory. The main conclusions drawn from these experiments include: (i) For all estimators considered, the main source of error is the bias. (ii) The CTW method is repeatedly and consistently seen to provide the most accurate results. (iii) The performance of the LZ-based estimators is often comparable to that of the plug-in method. (iv) The main drawback of the plug-in method is its computational inefficiency; with small word-lengths it fails to detect longer-range structure in the data, and with longer word-lengths the empirical distribution is severely undersampled, leading to large biases.
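
    As a concrete reference point for the comparisons above, here is a minimal plug-in (maximum-likelihood) entropy-rate estimator for binary time series; the word lengths and data are illustrative, and the downward drift of the estimates for long words is exactly the undersampling bias noted in the abstract.

      import numpy as np
      from collections import Counter

      def plugin_entropy_rate(bits, word_len):
          """Plug-in estimate of the entropy rate (bits per symbol): empirical entropy
          of non-overlapping words of length word_len, divided by word_len."""
          words = [tuple(bits[i:i + word_len])
                   for i in range(0, len(bits) - word_len + 1, word_len)]
          counts = np.array(list(Counter(words).values()), dtype=float)
          p = counts / counts.sum()
          return -np.sum(p * np.log2(p)) / word_len

      rng = np.random.default_rng(0)
      bits = rng.integers(0, 2, 100_000)   # i.i.d. fair bits: true rate = 1 bit/symbol
      for L in (2, 5, 10, 20):
          print(L, round(plugin_entropy_rate(bits, L), 3))
      # Estimates fall well below 1 as the word length grows: the bias of the plug-in
      # method when the empirical word distribution is severely undersampled.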

  16. Enhanced Conformational Sampling in Molecular Dynamics Simulations of Solvated Peptides: Fragment-Based Local Elevation Umbrella Sampling.

    PubMed

    Hansen, Halvor S; Daura, Xavier; Hünenberger, Philippe H

    2010-09-14

    A new method, fragment-based local elevation umbrella sampling (FB-LEUS), is proposed to enhance the conformational sampling in explicit-solvent molecular dynamics (MD) simulations of solvated polymers. The method is derived from the local elevation umbrella sampling (LEUS) method [Hansen and Hünenberger, J. Comput. Chem. 2010, 31, 1-23], which combines the local elevation (LE) conformational searching and the umbrella sampling (US) conformational sampling approaches into a single scheme. In LEUS, an initial (relatively short) LE build-up (searching) phase is used to construct an optimized (grid-based) biasing potential within a subspace of conformationally relevant degrees of freedom, which is then frozen and used in a (comparatively longer) US sampling phase. This combination dramatically enhances the sampling power of MD simulations but, due to computational and memory costs, is only applicable to relevant subspaces of low dimensionalities. As an attempt to expand the scope of the LEUS approach to solvated polymers with more than a few relevant degrees of freedom, the FB-LEUS scheme involves an US sampling phase that relies on a superposition of low-dimensionality biasing potentials optimized using LEUS at the fragment level. The feasibility of this approach is tested using polyalanine (poly-Ala) and polyvaline (poly-Val) oligopeptides. Two-dimensional biasing potentials are preoptimized at the monopeptide level, and subsequently applied to all dihedral-angle pairs within oligopeptides of 4, 6, 8, or 10 residues. Two types of fragment-based biasing potentials are distinguished: (i) the basin-filling (BF) potentials act so as to "fill" free-energy basins up to a prescribed free-energy level above the global minimum; (ii) the valley-digging (VD) potentials act so as to "dig" valleys between the (four) free-energy minima of the two-dimensional maps, preserving barriers (relative to linearly interpolated free-energy changes) of a prescribed magnitude. The application of these biasing potentials may lead to an impressive enhancement of the searching power (volume of conformational space visited in a given amount of simulation time). However, this increase is largely offset by a deterioration of the statistical efficiency (representativeness of the biased ensemble in terms of the conformational distribution appropriate for the physical ensemble). As a result, it appears difficult to engineer FB-LEUS schemes representing a significant improvement over plain MD, at least for the systems considered here.

  17. Thermal mirror spectrometry: An experimental investigation of optical glasses

    NASA Astrophysics Data System (ADS)

    Zanuto, V. S.; Herculano, L. S.; Baesso, M. L.; Lukasievicz, G. V. B.; Jacinto, C.; Malacarne, L. C.; Astrath, N. G. C.

    2013-03-01

    The Thermal mirror technique relies on measuring laser-induced nanoscale surface deformation of a solid sample. The amplitude of the effect is directly dependent on the optical absorption and linear thermal expansion coefficients, and the time evolution depends on the heat diffusion properties of the sample. Measurement of transient signals provides direct access to thermal, optical and mechanical properties of the material. The theoretical models describing this effect can be formulated for very weakly absorbing and for absorbing materials. In addition, the theories describing the effect apply to semi-infinite and finite samples. In this work, we apply the Thermal mirror technique to measure physical properties of optical glasses. The semi-infinite and finite models are used to investigate very weakly absorbing glasses. The thickness limit for which the semi-infinite model retrieves the correct values of the thermal diffusivity and amplitude of the transient is obtained using the finite description. This procedure is also employed on absorbing glasses, and the semi-infinite Beer-Lambert law model is used to analyze the experimental data. The experimental data show the need to use the finite model for samples with very low bulk absorption coefficients and thicknesses L < 1.5 mm. This analysis helped to establish limit values of thickness for which the semi-infinite model for absorbing materials could be used, L > 1.0 mm in this case. In addition, the physical properties of the samples were calculated and absolute values derived.

  18. Improving inference for aerial surveys of bears: The importance of assumptions and the cost of unnecessary complexity.

    PubMed

    Schmidt, Joshua H; Wilson, Tammy L; Thompson, William L; Reynolds, Joel H

    2017-07-01

    Obtaining useful estimates of wildlife abundance or density requires thoughtful attention to potential sources of bias and precision, and it is widely understood that addressing incomplete detection is critical to appropriate inference. When the underlying assumptions of sampling approaches are violated, both increased bias and reduced precision of the population estimator may result. Bear (Ursus spp.) populations can be difficult to sample and are often monitored using mark-recapture distance sampling (MRDS) methods, although obtaining adequate sample sizes can be cost prohibitive. With the goal of improving inference, we examined the underlying methodological assumptions and estimator efficiency of three datasets collected under an MRDS protocol designed specifically for bears. We analyzed these data using MRDS, conventional distance sampling (CDS), and open-distance sampling approaches to evaluate the apparent bias-precision tradeoff relative to the assumptions inherent under each approach. We also evaluated the incorporation of informative priors on detection parameters within a Bayesian context. We found that the CDS estimator had low apparent bias and was more efficient than the more complex MRDS estimator. When combined with informative priors on the detection process, precision was increased by >50% compared to the MRDS approach with little apparent bias. In addition, open-distance sampling models revealed a serious violation of the assumption that all bears were available to be sampled. Inference is directly related to the underlying assumptions of the survey design and the analytical tools employed. We show that for aerial surveys of bears, avoidance of unnecessary model complexity, use of prior information, and the application of open population models can be used to greatly improve estimator performance and simplify field protocols. Although we focused on distance sampling-based aerial surveys for bears, the general concepts we addressed apply to a variety of wildlife survey contexts.
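
    For concreteness, a toy conventional distance sampling (CDS) estimator with a half-normal detection function is sketched below; the simulated distances, truncation width, and transect effort are invented, and the sketch ignores the mark-recapture component of MRDS.

      import numpy as np
      from scipy.optimize import minimize_scalar
      from scipy.stats import norm

      def cds_density(distances_km, w_km, effort_km):
          """Conventional distance sampling with a half-normal detection function
          g(x) = exp(-x^2 / (2 sigma^2)) truncated at w. Returns animals per km^2."""
          x = np.asarray(distances_km, dtype=float)

          def effective_half_width(sigma):
              # integral_0^w g(u) du for the half-normal detection function
              return sigma * np.sqrt(2 * np.pi) * (norm.cdf(w_km / sigma) - 0.5)

          def negloglik(log_sigma):
              sigma = np.exp(log_sigma)
              return -np.sum(-x**2 / (2 * sigma**2) - np.log(effective_half_width(sigma)))

          res = minimize_scalar(negloglik, bounds=(-5, 5), method="bounded")
          mu = effective_half_width(np.exp(res.x))      # effective strip half-width
          return len(x) / (2 * mu * effort_km)

      rng = np.random.default_rng(0)
      d = np.abs(0.4 * rng.standard_normal(500))        # simulated detection distances (km)
      d = d[d < 1.0]
      print(cds_density(d, w_km=1.0, effort_km=400.0), "animals per km^2 (illustrative)")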

  19. Accounting for sampling error when inferring population synchrony from time-series data: a Bayesian state-space modelling approach with applications.

    PubMed

    Santin-Janin, Hugues; Hugueny, Bernard; Aubry, Philippe; Fouchet, David; Gimenez, Olivier; Pontier, Dominique

    2014-01-01

    Data collected to inform time variations in natural population size are tainted by sampling error. Ignoring sampling error in population dynamics models induces bias in parameter estimators, e.g., density-dependence. In particular, when sampling errors are independent among populations, the classical estimator of the synchrony strength (zero-lag correlation) is biased downward. However, this bias is rarely taken into account in synchrony studies although it may lead to overemphasizing the role of intrinsic factors (e.g., dispersal) with respect to extrinsic factors (the Moran effect) in generating population synchrony as well as to underestimating the extinction risk of a metapopulation. The aim of this paper was first to illustrate the extent of the bias that can be encountered in empirical studies when sampling error is neglected. Second, we presented a space-state modelling approach that explicitly accounts for sampling error when quantifying population synchrony. Third, we exemplify our approach with datasets for which sampling variance (i) has been previously estimated, and (ii) has to be jointly estimated with population synchrony. Finally, we compared our results to those of a standard approach neglecting sampling variance. We showed that ignoring sampling variance can mask a synchrony pattern whatever its true value and that the common practice of averaging few replicates of population size estimates poorly performed at decreasing the bias of the classical estimator of the synchrony strength. The state-space model used in this study provides a flexible way of accurately quantifying the strength of synchrony patterns from most population size data encountered in field studies, including over-dispersed count data. We provided a user-friendly R-program and a tutorial example to encourage further studies aiming at quantifying the strength of population synchrony to account for uncertainty in population size estimates.

  20. Accounting for Sampling Error When Inferring Population Synchrony from Time-Series Data: A Bayesian State-Space Modelling Approach with Applications

    PubMed Central

    Santin-Janin, Hugues; Hugueny, Bernard; Aubry, Philippe; Fouchet, David; Gimenez, Olivier; Pontier, Dominique

    2014-01-01

    Background Data collected to inform time variations in natural population size are tainted by sampling error. Ignoring sampling error in population dynamics models induces bias in parameter estimators, e.g., density-dependence. In particular, when sampling errors are independent among populations, the classical estimator of the synchrony strength (zero-lag correlation) is biased downward. However, this bias is rarely taken into account in synchrony studies although it may lead to overemphasizing the role of intrinsic factors (e.g., dispersal) with respect to extrinsic factors (the Moran effect) in generating population synchrony as well as to underestimating the extinction risk of a metapopulation. Methodology/Principal findings The aim of this paper was first to illustrate the extent of the bias that can be encountered in empirical studies when sampling error is neglected. Second, we presented a space-state modelling approach that explicitly accounts for sampling error when quantifying population synchrony. Third, we exemplify our approach with datasets for which sampling variance (i) has been previously estimated, and (ii) has to be jointly estimated with population synchrony. Finally, we compared our results to those of a standard approach neglecting sampling variance. We showed that ignoring sampling variance can mask a synchrony pattern whatever its true value and that the common practice of averaging few replicates of population size estimates poorly performed at decreasing the bias of the classical estimator of the synchrony strength. Conclusion/Significance The state-space model used in this study provides a flexible way of accurately quantifying the strength of synchrony patterns from most population size data encountered in field studies, including over-dispersed count data. We provided a user-friendly R-program and a tutorial example to encourage further studies aiming at quantifying the strength of population synchrony to account for uncertainty in population size estimates. PMID:24489839
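
    A small simulation, independent of the authors' state-space model, illustrates the downward bias of the zero-lag correlation when independent sampling error is added to two perfectly synchronous abundance series; all numbers are illustrative.

      import numpy as np

      rng = np.random.default_rng(0)
      T = 200
      common = rng.standard_normal(T)            # shared (Moran-type) fluctuation
      true_a = 100 + 20 * common                 # two populations tracking it exactly,
      true_b = 80 + 15 * common                  # so the true zero-lag correlation is 1

      sigma_obs = 15.0                           # independent sampling error per survey
      obs_a = true_a + sigma_obs * rng.standard_normal(T)
      obs_b = true_b + sigma_obs * rng.standard_normal(T)

      print("true synchrony :", np.corrcoef(true_a, true_b)[0, 1])   # 1.0
      print("naive estimate :", np.corrcoef(obs_a, obs_b)[0, 1])     # attenuated, ~0.55
      # A state-space model that estimates the sampling variance jointly with the
      # process can recover the undistorted synchrony (see abstract).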

  1. Evidence of sex-bias in gene expression in the brain transcriptome of two populations of rainbow trout (Oncorhynchus mykiss) with divergent life histories.

    PubMed

    Hale, Matthew C; McKinney, Garrett J; Thrower, Frank P; Nichols, Krista M

    2018-01-01

    Sex-bias in gene expression is a mechanism that can generate phenotypic variance between the sexes; however, relatively little is known about how patterns of sex-bias vary during development, and how variable sex-bias is between different populations. To that end, we measured sex-bias in gene expression in the brain transcriptome of rainbow trout (Oncorhynchus mykiss) during the first two years of development. Our sampling spanned from the fry stage through to when O. mykiss either migrate to the ocean or remain resident and undergo sexual maturation. Samples came from two F1 lines: one from migratory steelhead trout and one from resident rainbow trout. All samples were reared in a common garden environment and RNA sequencing (RNA-seq) was used to estimate patterns of gene expression. A total of 1,716 (4.6% of total) genes showed evidence of sex-bias in gene expression in at least one time point. The majority (96.7%) of sex-biased genes were differentially expressed during the second year of development, indicating that patterns of sex-bias in expression are tied to key developmental events, such as migration and sexual maturation. Mapping of differentially expressed genes to the O. mykiss genome revealed that the X chromosome is enriched for female upregulated genes, and this may indicate a lack of dosage compensation in rainbow trout. There were many more sex-biased genes in the migratory line than the resident line, suggesting differences in patterns of gene expression in the brain between populations subjected to different forces of selection. Overall, our results suggest that there is considerable variation in the extent and identity of genes exhibiting sex-bias during the first two years of life. These differentially expressed genes may be connected to developmental differences between the sexes, and/or between adopting a resident or migratory life history.

  2. Risk of bias reporting in the recent animal focal cerebral ischaemia literature.

    PubMed

    Bahor, Zsanett; Liao, Jing; Macleod, Malcolm R; Bannach-Brown, Alexandra; McCann, Sarah K; Wever, Kimberley E; Thomas, James; Ottavi, Thomas; Howells, David W; Rice, Andrew; Ananiadou, Sophia; Sena, Emily

    2017-10-15

    Findings from in vivo research may be less reliable where studies do not report measures to reduce risks of bias. The experimental stroke community has been at the forefront of implementing changes to improve reporting, but it is not known whether these efforts are associated with continuous improvements. Our aims here were firstly to validate an automated tool to assess risks of bias in published works, and secondly to assess the reporting of measures taken to reduce the risk of bias within recent literature for two experimental models of stroke. We developed and used text analytic approaches to automatically ascertain reporting of measures to reduce risk of bias from full-text articles describing animal experiments inducing middle cerebral artery occlusion (MCAO) or modelling lacunar stroke. Compared with previous assessments, there were improvements in the reporting of measures taken to reduce risks of bias in the MCAO literature but not in the lacunar stroke literature. Accuracy of automated annotation of risk of bias in the MCAO literature was 86% (randomization), 94% (blinding) and 100% (sample size calculation); and in the lacunar stroke literature accuracy was 67% (randomization), 91% (blinding) and 96% (sample size calculation). There remains substantial opportunity for improvement in the reporting of animal research modelling stroke, particularly in the lacunar stroke literature. Further, automated tools perform sufficiently well to identify whether studies report blinded assessment of outcome, but improvements are required in the tools to ascertain whether randomization and a sample size calculation were reported. © 2017 The Author(s).
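
    The automated annotation of reporting items can be caricatured with a simple keyword pass; the regular expressions below are illustrative assumptions, not the text-mining models the authors validated.

      import re

      # Toy patterns signalling that a risk-of-bias item was reported in the methods text.
      PATTERNS = {
          "randomization": re.compile(r"\brandom(ly|ised|ized|isation|ization)\b", re.I),
          "blinding": re.compile(r"\bblind(ed|ing)?\b|\bmasked\b", re.I),
          "sample size calculation": re.compile(
              r"\b(sample size|power) (calculation|analysis)\b", re.I),
      }

      def annotate_risk_of_bias(full_text):
          """Return {item: True/False} indicating whether each reporting item is mentioned."""
          return {item: bool(p.search(full_text)) for item, p in PATTERNS.items()}

      methods = ("Animals were randomly allocated to MCAO or sham surgery. "
                 "Outcome assessment was performed by a blinded investigator.")
      print(annotate_risk_of_bias(methods))
      # {'randomization': True, 'blinding': True, 'sample size calculation': False}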

  3. Information processing biases concurrently and prospectively predict depressive symptoms in adolescents: Evidence from a self-referent encoding task.

    PubMed

    Connolly, Samantha L; Abramson, Lyn Y; Alloy, Lauren B

    2016-01-01

    Negative information processing biases have been hypothesised to serve as precursors for the development of depression. The current study examined negative self-referent information processing and depressive symptoms in a community sample of adolescents (N = 291, mean age at baseline = 12.34 ± 0.61 years, 53% female, 47.4% African-American, 49.5% Caucasian and 3.1% Biracial). Participants completed a computerised self-referent encoding task (SRET) and a measure of depressive symptoms at baseline and completed an additional measure of depressive symptoms nine months later. Several negative information processing biases on the SRET were associated with concurrent depressive symptoms and predicted increases in depressive symptoms at follow-up. Findings partially support the hypothesis that negative information processing biases are associated with depressive symptoms in a nonclinical sample of adolescents, and provide preliminary evidence that these biases prospectively predict increases in depressive symptoms.

  4. Implicit Social Biases in People with Autism

    PubMed Central

    Birmingham, Elina; Stanley, Damian; Nair, Remya; Adolphs, Ralph

    2015-01-01

    Implicit social biases are ubiquitous and are known to influence social behavior. A core diagnostic criterion of Autism Spectrum Disorder (ASD) is abnormal social behavior. Here we investigated the extent to which individuals with ASD might show a specific attenuation of implicit social biases, using the Implicit Association Test (IAT) across Social (gender, race) and Nonsocial (flowers/insect, shoes) categories. High-functioning adults with ASD showed intact but reduced IAT effects relative to healthy controls. Importantly, we observed no selective attenuation of implicit social (vs. nonsocial) biases in our ASD population. To extend these results, we collected data from a large online sample of the general population, and explored correlations between autistic traits and IAT effects. No associations were found between autistic traits and IAT effects for any of the categories tested in our online sample. Taken together, these results suggest that implicit social biases, as measured by the IAT, are largely intact in ASD. PMID:26386014

  5. A machine learning model with human cognitive biases capable of learning from small and biased datasets.

    PubMed

    Taniguchi, Hidetaka; Sato, Hiroshi; Shirakawa, Tomohiro

    2018-05-09

    Human learners can generalize a new concept from a small number of samples. In contrast, conventional machine learning methods require large amounts of data to address the same types of problems. Humans have cognitive biases that promote fast learning. Here, we developed a method to reduce the gap between human beings and machines in this type of inference by utilizing cognitive biases. We implemented a human cognitive model into machine learning algorithms and compared their performance with the currently most popular methods, naïve Bayes, support vector machine, neural networks, logistic regression and random forests. We focused on the task of spam classification, which has been studied for a long time in the field of machine learning and often requires a large amount of data to obtain high accuracy. Our models achieved superior performance with small and biased samples in comparison with other representative machine learning methods.

  6. On the Reliability of Source Time Functions Estimated Using Empirical Green's Function Methods

    NASA Astrophysics Data System (ADS)

    Gallegos, A. C.; Xie, J.; Suarez Salas, L.

    2017-12-01

    The Empirical Green's Function (EGF) method (Hartzell, 1978) has been widely used to extract source time functions (STFs). In this method, seismograms generated by collocated events with different magnitudes are deconvolved. Under a fundamental assumption that the STF of the small event is a delta function, the deconvolved Relative Source Time Function (RSTF) yields the large event's STF. While this assumption can be empirically justified by examination of differences in event size and frequency content of the seismograms, there can be a lack of rigorous justification of the assumption. In practice, a small event might have a finite duration when the RSTF is retrieved and interpreted as the large event STF with a bias. In this study, we rigorously analyze this bias using synthetic waveforms generated by convolving a realistic Green's function waveform with pairs of finite-duration triangular or parabolic STFs. The RSTFs are found using a time-domain based matrix deconvolution. We find when the STFs of smaller events are finite, the RSTFs are a series of narrow non-physical spikes. Interpreting these RSTFs as a series of high-frequency source radiations would be very misleading. The only reliable and unambiguous information we can retrieve from these RSTFs is the difference in durations and the moment ratio of the two STFs. We can apply a Tikhonov smoothing to obtain a single-pulse RSTF, but its duration is dependent on the choice of weighting, which may be subjective. We then test the Multi-Channel Deconvolution (MCD) method (Plourde & Bostock, 2017) which assumes that both STFs have finite durations to be solved for. A concern about the MCD method is that the number of unknown parameters is larger, which would tend to make the problem rank-deficient. Because the kernel matrix is dependent on the STFs to be solved for under a positivity constraint, we can only estimate the rank-deficiency with a semi-empirical approach. Based on the results so far, we find that the rank-deficiency makes it improbable to solve for both STFs. To solve for the larger STF we need to assume the shape of the small STF to be known a priori. Thus, the reliability of the estimated large STF depends on the difference between the assumed and true shapes of the small STF. We will show how the reliability varies with realistic scenarios.
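
    A minimal sketch of time-domain, positivity-constrained deconvolution of the kind discussed above, using a convolution matrix and non-negative least squares; the synthetic wavelet and source time functions are assumptions for illustration, not the study's data.

      import numpy as np
      from scipy.optimize import nnls

      def conv_matrix(g, n):
          """Matrix A whose columns are shifted copies of g, so A @ r == np.convolve(g, r)."""
          A = np.zeros((len(g) + n - 1, n))
          for j in range(n):
              A[j:j + len(g), j] = g
          return A

      t = np.arange(0, 4, 0.05)
      green = np.exp(-t) * np.sin(8 * t)                         # stand-in Green's function
      stf_small = np.ones(4) / 4                                 # finite-duration small-event STF
      stf_large = np.convolve(np.ones(12), np.ones(12)) / 144.0  # smoother large-event STF

      small_rec = np.convolve(green, stf_small)                  # "empirical Green's function"
      large_rec = np.convolve(green, stf_large)                  # large-event record

      # Deconvolve the small-event record from the large-event record with positivity.
      n_rstf = 60
      A = conv_matrix(small_rec, n_rstf)
      b = np.zeros(A.shape[0])
      b[:len(large_rec)] = large_rec
      rstf, _ = nnls(A, b)
      # Because stf_small is not a delta function, rstf comes out as a spiky, distorted
      # version of stf_large rather than the true large-event STF (cf. the abstract).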

  7. Finite element and analytical solutions for van der Pauw and four-point probe correction factors when multiple non-ideal measurement conditions coexist

    NASA Astrophysics Data System (ADS)

    Reveil, Mardochee; Sorg, Victoria C.; Cheng, Emily R.; Ezzyat, Taha; Clancy, Paulette; Thompson, Michael O.

    2017-09-01

    This paper presents an extensive collection of calculated correction factors that account for the combined effects of a wide range of non-ideal conditions often encountered in realistic four-point probe and van der Pauw experiments. In this context, "non-ideal conditions" refer to conditions that deviate from the assumptions on sample and probe characteristics made in the development of these two techniques. We examine the combined effects of contact size and sample thickness on van der Pauw measurements. In the four-point probe configuration, we examine the combined effects of varying the sample's lateral dimensions, probe placement, and sample thickness. We derive an analytical expression to calculate correction factors that account, simultaneously, for finite sample size and asymmetric probe placement in four-point probe experiments. We provide experimental validation of the analytical solution via four-point probe measurements on a thin film rectangular sample with arbitrary probe placement. The finite sample size effect is very significant in four-point probe measurements (especially for a narrow sample) and asymmetric probe placement only worsens such effects. The contribution of conduction in multilayer samples is also studied and found to be substantial; hence, we provide a map of the necessary correction factors. This library of correction factors will enable the design of resistivity measurements with improved accuracy and reproducibility over a wide range of experimental conditions.

  8. Finite element and analytical solutions for van der Pauw and four-point probe correction factors when multiple non-ideal measurement conditions coexist.

    PubMed

    Reveil, Mardochee; Sorg, Victoria C; Cheng, Emily R; Ezzyat, Taha; Clancy, Paulette; Thompson, Michael O

    2017-09-01

    This paper presents an extensive collection of calculated correction factors that account for the combined effects of a wide range of non-ideal conditions often encountered in realistic four-point probe and van der Pauw experiments. In this context, "non-ideal conditions" refer to conditions that deviate from the assumptions on sample and probe characteristics made in the development of these two techniques. We examine the combined effects of contact size and sample thickness on van der Pauw measurements. In the four-point probe configuration, we examine the combined effects of varying the sample's lateral dimensions, probe placement, and sample thickness. We derive an analytical expression to calculate correction factors that account, simultaneously, for finite sample size and asymmetric probe placement in four-point probe experiments. We provide experimental validation of the analytical solution via four-point probe measurements on a thin film rectangular sample with arbitrary probe placement. The finite sample size effect is very significant in four-point probe measurements (especially for a narrow sample) and asymmetric probe placement only worsens such effects. The contribution of conduction in multilayer samples is also studied and found to be substantial; hence, we provide a map of the necessary correction factors. This library of correction factors will enable the design of resistivity measurements with improved accuracy and reproducibility over a wide range of experimental conditions.
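
    For orientation, the sketch below shows the standard four-point probe sheet-resistance relation with a user-supplied geometric correction factor of the kind tabulated in the paper; the specific correction value is a placeholder, not one of the authors' results.

      import math

      def sheet_resistance(voltage_V, current_A, correction_factor=1.0):
          """Four-point probe sheet resistance (ohm/sq): R_s = (pi / ln 2) * V / I for an
          ideal infinite thin sheet, scaled by a geometric correction factor accounting
          for finite sample size, probe placement, thickness, and so on."""
          return (math.pi / math.log(2)) * (voltage_V / current_A) * correction_factor

      def resistivity(sheet_res_ohm_sq, thickness_m):
          """Bulk resistivity (ohm*m) of a uniform film of known thickness."""
          return sheet_res_ohm_sq * thickness_m

      # Placeholder correction factor for a narrow sample with off-center probes; the
      # actual value would come from tables such as the paper's library or from
      # finite element modeling.
      Rs = sheet_resistance(voltage_V=1.2e-3, current_A=1.0e-3, correction_factor=0.87)
      print(Rs, "ohm/sq;", resistivity(Rs, thickness_m=200e-9), "ohm*m")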

  9. Broadband/Wideband Magnetoelectric Response

    DOE PAGES

    Park, Chee-Sung; Priya, Shashank

    2012-01-01

    A broadband/wideband magnetoelectric (ME) composite offers new opportunities for sensing wide ranges of both DC and AC magnetic fields. The broadband/wideband behavior is characterized by flat ME response over a given AC frequency range and DC magnetic bias. The structure proposed in this study operates in the longitudinal-transversal (L-T) mode. In this paper, we provide information on (i) how to design broadband/wideband ME sensors and (ii) how to control the magnitude of ME response over a desired frequency and DC bias regime. A systematic study was conducted to identify the factors affecting the broadband/wideband behavior by developing experimental models and validating them against the predictions made through finite element modeling. A working prototype of the sensor with flat bands for both DC and AC magnetic field conditions was successfully obtained. These results are quite promising for practical applications such as current probe, low-frequency magnetic field sensing, and ME energy harvester.

  10. Rational group decision making: A random field Ising model at T = 0

    NASA Astrophysics Data System (ADS)

    Galam, Serge

    1997-02-01

    A modified version of a finite random field Ising ferromagnetic model in an external magnetic field at zero temperature is presented to describe group decision making. Fields may have a non-zero average. A postulate of minimum inter-individual conflicts is assumed. Interactions then produce a group polarization along one particular choice, which is, however, randomly selected. A small external social pressure is shown to have a drastic effect on the polarization. Individual biases related to personal backgrounds, cultural values and past experiences are introduced via quenched local competing fields. They are shown to be instrumental in generating a larger spectrum of collective new choices beyond the initial ones. In particular, compromise is found to result from the existence of individual competing biases. Conflict is shown to weaken group polarization. The model yields new psychosociological insights about consensus and compromise in groups.
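
    A minimal zero-temperature random-field Ising sketch in the spirit of the model described above; the coupling, field strengths, and update rule are illustrative assumptions rather than the paper's exact formulation.

      import numpy as np

      def group_decision(n=50, J=1.0, h_ext=0.05, bias_std=0.5, sweeps=20, seed=0):
          """T = 0 dynamics of a fully connected random-field Ising model: each agent
          aligns with its local field (mean-field coupling to the group, a small
          external "social pressure", and a quenched personal bias)."""
          rng = np.random.default_rng(seed)
          s = rng.choice([-1, 1], size=n)               # initial opinions
          h_i = bias_std * rng.standard_normal(n)       # quenched individual biases
          for _ in range(sweeps):
              for i in rng.permutation(n):
                  local = J * (s.sum() - s[i]) / (n - 1) + h_ext + h_i[i]
                  s[i] = 1 if local >= 0 else -1        # follow the local field at T = 0
          return s

      s = group_decision()
      print("mean group opinion:", s.mean())
      # With bias_std = 0 the group fully polarizes (|mean| = 1); quenched individual
      # biases weaken the polarization and can yield compromise-like outcomes.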

  11. Nonequilibrium theory of tunneling into a localized state in a superconductor

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Martin, Ivar; Mozyrsky, Dmitry

    2014-09-01

    A single static magnetic impurity in a fully gapped superconductor leads to the formation of an intragap quasiparticle bound state. At temperatures much below the superconducting transition, the energy relaxation and spin dephasing of the state are expected to be exponentially suppressed. The presence of such a state can be detected in electron tunneling experiments as a pair of conductance peaks at positive and negative biases. Here we show that, for an arbitrarily weak tunneling strength, the peaks have to be symmetric with respect to the applied bias. This is in contrast to the standard result in which the tunneling conductance is proportional to the local (in general, particle-hole asymmetric) density of states. The asymmetry can be recovered if one allows for either a finite density of impurity states, or if impurities are coupled to another, nonsuperconducting, equilibrium bath.

  12. Adaptive control and noise suppression by a variable-gain gradient algorithm

    NASA Technical Reports Server (NTRS)

    Merhav, S. J.; Mehta, R. S.

    1987-01-01

    An adaptive control system based on normalized LMS filters is investigated. The finite impulse response of the nonparametric controller is adaptively estimated using a given reference model. Specifically, the following issues are addressed: the stability of the closed loop system is analyzed and heuristically established. Next, the adaptation process is studied for piecewise constant plant parameters. It is shown that by introducing a variable gain in the gradient algorithm, a substantial reduction in the LMS adaptation rate can be achieved. Finally, process noise at the plant output generally causes a biased estimate of the controller. By introducing a noise suppression scheme, this bias can be substantially reduced and the response of the adapted system becomes very close to that of the reference model. Extensive computer simulations validate these assertions and demonstrate that the system can rapidly adapt to random jumps in plant parameters.
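
    A compact sketch of a normalized LMS identification loop with a simple variable-gain schedule; the gain rule and the toy plant are invented for illustration and are not the controller developed in the paper.

      import numpy as np

      def nlms_identify(x, d, n_taps=16, mu0=0.5, eps=1e-6):
          """Identify an FIR model of d from input x with a normalized LMS filter whose
          gain shrinks as the smoothed squared error settles (variable-gain schedule)."""
          w = np.zeros(n_taps)
          buf = np.zeros(n_taps)
          err_smooth = 1.0
          for k in range(len(x)):
              buf = np.roll(buf, 1)
              buf[0] = x[k]
              e = d[k] - w @ buf
              err_smooth = 0.99 * err_smooth + 0.01 * e * e
              mu = mu0 * err_smooth / (err_smooth + 0.1)   # smaller gain once error is small
              w += mu * e * buf / (eps + buf @ buf)        # normalized LMS update
          return w

      rng = np.random.default_rng(0)
      x = rng.standard_normal(5000)
      plant = np.array([0.6, -0.3, 0.1])                   # unknown FIR "plant"
      d = np.convolve(x, plant)[:len(x)] + 0.01 * rng.standard_normal(len(x))
      print(np.round(nlms_identify(x, d)[:3], 2))          # approximately [0.6, -0.3, 0.1]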

  13. Difference between blocking and Néel temperatures in the exchange biased Fe3O4/CoO system.

    PubMed

    van der Zaag, P J; Ijiri, Y; Borchers, J A; Feiner, L F; Wolf, R M; Gaines, J M; Erwin, R W; Verheijen, M A

    2000-06-26

    The blocking temperature T(B) has been determined as a function of the antiferromagnetic layer thickness in the Fe3O4/CoO exchange biased system. For CoO layers thinner than 50 Å, T(B) is reduced below the Néel temperature T(N) of bulk CoO (291 K), independent of crystallographic orientation or film substrate (α-Al2O3, SrTiO3, and MgO). Neutron diffraction studies show that T(B) does not track the CoO ordering temperature and, hence, that this reduction in T(B) does not arise from finite-size scaling. Instead, the ordering temperature of the CoO layers is enhanced above the bulk T(N) for layer thicknesses approximately less than or equal to 100 Å due to the proximity of magnetic Fe3O4 layers.

  14. Conservative Tests under Satisficing Models of Publication Bias.

    PubMed

    McCrary, Justin; Christensen, Garret; Fanelli, Daniele

    2016-01-01

    Publication bias leads consumers of research to observe a selected sample of statistical estimates calculated by producers of research. We calculate critical values for statistical significance that could help to adjust after the fact for the distortions created by this selection effect, assuming that the only source of publication bias is file drawer bias. These adjusted critical values are easy to calculate and differ from unadjusted critical values by approximately 50%; rather than rejecting a null hypothesis when the t-ratio exceeds 2, the analysis suggests rejecting a null hypothesis when the t-ratio exceeds 3. Samples of published social science research indicate that on average, across research fields, approximately 30% of published t-statistics fall between the standard and adjusted cutoffs.

  15. Conservative Tests under Satisficing Models of Publication Bias

    PubMed Central

    McCrary, Justin; Christensen, Garret; Fanelli, Daniele

    2016-01-01

    Publication bias leads consumers of research to observe a selected sample of statistical estimates calculated by producers of research. We calculate critical values for statistical significance that could help to adjust after the fact for the distortions created by this selection effect, assuming that the only source of publication bias is file drawer bias. These adjusted critical values are easy to calculate and differ from unadjusted critical values by approximately 50%—rather than rejecting a null hypothesis when the t-ratio exceeds 2, the analysis suggests rejecting a null hypothesis when the t-ratio exceeds 3. Samples of published social science research indicate that on average, across research fields, approximately 30% of published t-statistics fall between the standard and adjusted cutoffs. PMID:26901834
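
    A tiny worked illustration of applying the adjusted cutoff described above to a collection of published t-ratios; the t-values are made up.

      # Hypothetical published absolute t-ratios.
      t_stats = [1.7, 2.1, 2.4, 2.6, 2.9, 3.3, 4.0, 5.2]

      standard = [t for t in t_stats if t > 2]   # conventional cutoff
      adjusted = [t for t in t_stats if t > 3]   # cutoff adjusted for file drawer bias

      print(len(standard), "significant at t > 2;", len(adjusted), "still significant at t > 3")
      # Results with 2 < t < 3 correspond to the roughly 30% of published t-statistics
      # that the abstract reports falling between the standard and adjusted cutoffs.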

  16. Brief Communication: Buoyancy-Induced Differences in Soot Morphology

    NASA Technical Reports Server (NTRS)

    Ku, Jerry C.; Griffin, Devon W.; Greenberg, Paul S.; Roma, John

    1995-01-01

    Reduction or elimination of buoyancy in flames affects the dominant mechanisms driving heat transfer, burning rates and flame shape. The absence of buoyancy produces longer residence times for soot formation, clustering and oxidation. In addition, soot pathlines are strongly affected in microgravity. We recently conducted the first experiments comparing soot morphology in normal and reduced-gravity laminar gas jet diffusion flames. Thermophoretic sampling is a relatively new but well-established technique for studying the morphology of soot primaries and aggregates. Although there have been some questions about biasing that may be induced due to sampling, recent analysis by Rosner et al. showed that the sample is not biased when the system under study is operating in the continuum limit. Furthermore, even if the sampling is preferentially biased to larger aggregates, the size-invariant premise of fractal analysis should produce a correct fractal dimension.

  17. Exploratory Studies of Bias in Achievement Tests.

    ERIC Educational Resources Information Center

    Green, Donald Ross; Draper, John F.

    This paper considers the question of bias in group administered academic achievement tests, bias which is inherent in the instruments themselves. A body of data on the test of performance of three disadvantaged minority groups--northern, urban black; southern, rural black; and, southwestern, Mexican-Americans--as tryout samples in contrast to…

  18. Estimates of External Validity Bias When Impact Evaluations Select Sites Nonrandomly

    ERIC Educational Resources Information Center

    Bell, Stephen H.; Olsen, Robert B.; Orr, Larry L.; Stuart, Elizabeth A.

    2016-01-01

    Evaluations of educational programs or interventions are typically conducted in nonrandomly selected samples of schools or districts. Recent research has shown that nonrandom site selection can yield biased impact estimates. To estimate the external validity bias from nonrandom site selection, we combine lists of school districts that were…

  19. A "Scientific Diversity" Intervention to Reduce Gender Bias in a Sample of Life Scientists

    ERIC Educational Resources Information Center

    Moss-Racusin, Corinne A.; van der Toorn, Jojanneke; Dovidio, John F.; Brescoll, Victoria L.; Graham, Mark J.; Handelsman, Jo

    2016-01-01

    Mounting experimental evidence suggests that subtle gender biases favoring men contribute to the underrepresentation of women in science, technology, engineering, and mathematics (STEM), including many subfields of the life sciences. However, there are relatively few evaluations of diversity interventions designed to reduce gender biases within…

  20. Internal Standards: A Source of Analytical Bias For Volatile Organic Analyte Determinations

    EPA Science Inventory

    The use of internal standards in the determination of volatile organic compounds as described in SW-846 Method 8260C introduces a potential for bias in results once the internal standards (ISTDs) are added to a sample for analysis. The bias is relative to the dissimilarity betw...

  1. Measuring coverage in MNCH: design, implementation, and interpretation challenges associated with tracking vaccination coverage using household surveys.

    PubMed

    Cutts, Felicity T; Izurieta, Hector S; Rhoda, Dale A

    2013-01-01

    Vaccination coverage is an important public health indicator that is measured using administrative reports and/or surveys. The measurement of vaccination coverage in low- and middle-income countries using surveys is susceptible to numerous challenges. These challenges include selection bias and information bias, which cannot be solved by increasing the sample size, and the precision of the coverage estimate, which is determined by the survey sample size and sampling method. Selection bias can result from an inaccurate sampling frame or inappropriate field procedures and, since populations likely to be missed in a vaccination coverage survey are also likely to be missed by vaccination teams, most often inflates coverage estimates. Importantly, the large multi-purpose household surveys that are often used to measure vaccination coverage have invested substantial effort to reduce selection bias. Information bias occurs when a child's vaccination status is misclassified due to mistakes on his or her vaccination record, in data transcription, in the way survey questions are presented, or in the guardian's recall of vaccination for children without a written record. There has been substantial reliance on the guardian's recall in recent surveys, and, worryingly, information bias may become more likely in the future as immunization schedules become more complex and variable. Finally, some surveys assess immunity directly using serological assays. Sero-surveys are important for assessing public health risk, but currently are unable to validate coverage estimates directly. To improve vaccination coverage estimates based on surveys, we recommend that recording tools and practices should be improved and that surveys should incorporate best practices for design, implementation, and analysis.

  2. Simultaneous Nanoscale Surface Charge and Topographical Mapping.

    PubMed

    Perry, David; Al Botros, Rehab; Momotenko, Dmitry; Kinnear, Sophie L; Unwin, Patrick R

    2015-07-28

    Nanopipettes are playing an increasingly prominent role in nanoscience, for sizing, sequencing, delivery, detection, and mapping interfacial properties. Herein, the question of how to best resolve topography and surface charge effects when using a nanopipette as a probe for mapping in scanning ion conductance microscopy (SICM) is addressed. It is shown that, when a bias modulated (BM) SICM scheme is used, it is possible to map the topography faithfully, while also allowing surface charge to be estimated. This is achieved by applying zero net bias between the electrode in the SICM tip and the one in bulk solution for topographical mapping, with just a small harmonic perturbation of the potential to create an AC current for tip positioning. Then, a net bias is applied, whereupon the ion conductance current becomes sensitive to surface charge. Practically this is optimally implemented in a hopping-cyclic voltammetry mode where the probe is approached at zero net bias at a series of pixels across the surface to reach a defined separation, and then a triangular potential waveform is applied and the current response is recorded. Underpinned with theoretical analysis, including finite element modeling of the DC and AC components of the ionic current flowing through the nanopipette tip, the powerful capabilities of this approach are demonstrated with the probing of interfacial acid-base equilibria and high resolution imaging of surface charge heterogeneities, simultaneously with topography, on modified substrates.

  3. Effects of interpretation training on hostile attribution bias and reactivity to interpersonal insult.

    PubMed

    Hawkins, Kirsten A; Cougle, Jesse R

    2013-09-01

    Research suggests that individuals high in anger have a bias for attributing hostile intentions to ambiguous situations. The current study tested whether this interpretation bias can be altered to influence anger reactivity to an interpersonal insult using a single-session cognitive bias modification program. One hundred thirty-five undergraduate students were randomized to receive positive training, negative training, or a control condition. Anger reactivity to insult was then assessed. Positive training led to significantly greater increases in positive interpretation bias relative to the negative group, though these increases were only marginally greater than in the control group. Negative training led to increased negative interpretation bias relative to the other groups. During the insult, participants in the positive condition reported less anger than those in the control condition. Observers rated participants in the positive condition as less irritated than those in the negative condition and more amused than those in the other two conditions. Though mediation of effects via bias modification was not demonstrated, within the positive condition, posttraining interpretation bias was correlated with self-reported anger, suggesting that positive training reduced anger reactivity by influencing interpretation biases. Findings suggest that positive interpretation training may be a promising treatment for reducing anger. However, the current study was conducted with a non-treatment-seeking student sample; further research with a treatment-seeking sample with problematic anger is necessary. Copyright © 2013. Published by Elsevier Ltd.

  4. Large biases in regression-based constituent flux estimates: causes and diagnostic tools

    USGS Publications Warehouse

    Hirsch, Robert M.

    2014-01-01

    It has been documented in the literature that, in some cases, widely used regression-based models can produce severely biased estimates of long-term mean river fluxes of various constituents. These models, estimated using sample values of concentration, discharge, and date, are used to compute estimated fluxes for a multiyear period at a daily time step. This study compares results of the LOADEST seven-parameter model, LOADEST five-parameter model, and the Weighted Regressions on Time, Discharge, and Season (WRTDS) model using subsampling of six very large datasets to better understand this bias problem. This analysis considers sample datasets for dissolved nitrate and total phosphorus. The results show that LOADEST-7 and LOADEST-5, although they often produce very nearly unbiased results, can produce highly biased results. This study identifies three conditions that can give rise to these severe biases: (1) lack of fit of the log of concentration vs. log discharge relationship, (2) substantial differences in the shape of this relationship across seasons, and (3) severely heteroscedastic residuals. The WRTDS model is more resistant to the bias problem than the LOADEST models but is not immune to it. Understanding the causes of the bias problem is crucial to selecting an appropriate method for flux computations. Diagnostic tools for identifying the potential for bias problems are introduced, and strategies for resolving bias problems are described.
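
    A minimal, hypothetical sketch of this bias mechanism is given below. It is not the LOADEST or WRTDS code; it simply fits a straight-line log-concentration versus log-discharge rating curve (with Duan's smearing retransformation) to sparsely sampled synthetic data whose true relationship is curved, illustrating how lack of fit, condition (1) above, can bias the estimated long-term mean flux. All variable names and parameter values are invented.

      import numpy as np

      rng = np.random.default_rng(0)

      # Synthetic "daily" record: log-discharge with a seasonal cycle plus noise
      n_days = 3650
      logQ = 1.0 + 0.8 * np.sin(2 * np.pi * np.arange(n_days) / 365) + rng.normal(0, 0.5, n_days)
      Q = np.exp(logQ)

      # True concentration follows a *curved* log C - log Q relationship, so a
      # straight-line log-log rating curve is misspecified (lack of fit).
      logC = 0.5 + 0.3 * logQ - 0.25 * logQ**2 + rng.normal(0, 0.3, n_days)
      C = np.exp(logC)
      true_flux = np.mean(C * Q)                     # "true" long-term mean flux

      # Sparse grab samples (roughly monthly) used to fit the rating curve
      idx = rng.choice(n_days, size=120, replace=False)
      X = np.column_stack([np.ones(idx.size), logQ[idx]])
      beta, *_ = np.linalg.lstsq(X, np.log(C[idx]), rcond=None)
      resid = np.log(C[idx]) - X @ beta
      smear = np.mean(np.exp(resid))                 # Duan's smearing retransformation factor

      # Predict concentration for every day and average to a mean flux
      C_hat = np.exp(beta[0] + beta[1] * logQ) * smear
      est_flux = np.mean(C_hat * Q)

      print(f"true mean flux     : {true_flux:8.2f}")
      print(f"estimated mean flux: {est_flux:8.2f}")
      print(f"relative bias      : {100 * (est_flux - true_flux) / true_flux:6.1f} %")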

  5. Errors in causal inference: an organizational schema for systematic error and random error.

    PubMed

    Suzuki, Etsuji; Tsuda, Toshihide; Mitsuhashi, Toshiharu; Mansournia, Mohammad Ali; Yamamoto, Eiji

    2016-11-01

    To provide an organizational schema for systematic error and random error in estimating causal measures, aimed at clarifying the concept of errors from the perspective of causal inference. We propose to divide systematic error into structural error and analytic error. With regard to random error, our schema shows its four major sources: nondeterministic counterfactuals, sampling variability, a mechanism that generates exposure events, and measurement variability. Structural error is defined from the perspective of counterfactual reasoning and divided into nonexchangeability bias (which comprises confounding bias and selection bias) and measurement bias. Directed acyclic graphs are useful to illustrate this kind of error. Nonexchangeability bias implies a lack of "exchangeability" between the selected exposed and unexposed groups. A lack of exchangeability is not a primary concern of measurement bias, justifying its separation from confounding bias and selection bias. Many forms of analytic error result from the small-sample properties of the estimator used and vanish asymptotically. Analytic error also results from wrong (misspecified) statistical models and inappropriate statistical methods. Our organizational schema is helpful for understanding the relationship between systematic error and random error from a previously less investigated aspect, enabling us to better understand the relationship between accuracy, validity, and precision. Copyright © 2016 Elsevier Inc. All rights reserved.

  6. Item Calibration Samples and the Stability of Achievement Estimates and System Rankings: Another Look at the PISA Model

    ERIC Educational Resources Information Center

    Rutkowski, Leslie; Rutkowski, David; Zhou, Yan

    2016-01-01

    Using an empirically-based simulation study, we show that typically used methods of choosing an item calibration sample have significant impacts on achievement bias and system rankings. We examine whether recent PISA accommodations, especially for lower performing participants, can mitigate some of this bias. Our findings indicate that standard…

  7. Estimation and applications of size-biased distributions in forestry

    Treesearch

    Jeffrey H. Gove

    2003-01-01

    Size-biased distributions arise naturally in several contexts in forestry and ecology. Simple power relationships (e.g. basal area and diameter at breast height) between variables are one such area of interest arising from a modelling perspective. Another, probability proportional to size (PPS) sampling, is found in the most widely used methods for sampling standing or...

  8. The Evaluation of Bias of the Weighted Random Effects Model Estimators. Research Report. ETS RR-11-13

    ERIC Educational Resources Information Center

    Jia, Yue; Stokes, Lynne; Harris, Ian; Wang, Yan

    2011-01-01

    Estimation of parameters of random effects models from samples collected via complex multistage designs is considered. One way to reduce estimation bias due to unequal probabilities of selection is to incorporate sampling weights. Many researchers have proposed various weighting methods (Korn & Graubard, 2003; Pfeffermann, Skinner,…

  9. Bias and Precision of Measures of Association for a Fixed-Effect Multivariate Analysis of Variance Model

    ERIC Educational Resources Information Center

    Kim, Soyoung; Olejnik, Stephen

    2005-01-01

    The sampling distributions of five popular measures of association with and without two bias adjusting methods were examined for the single factor fixed-effects multivariate analysis of variance model. The number of groups, sample sizes, number of outcomes, and the strength of association were manipulated. The results indicate that all five…

  10. Improved Healing of Large, Osseous, Segmental Defects by Reverse Dynamization: Evaluation in a Sheep Model

    DTIC Science & Technology

    2017-12-01

    reverse dynamization. This was supplemented by finite element analysis and the use of a strain gauge. This aim was successfully completed, with the...testing deformation results for model validation. Development of a Finite Element (FE) model was conducted through ANSYS 16 to help characterize...Fixators were characterized through mechanical testing by sawbone and ovine cadaver tibiae samples, and data was used to validate a finite element

  11. Is there gender bias in nursing research?

    PubMed

    Polit, Denise F; Beck, Cheryl Tatano

    2008-10-01

    Using data from a consecutive sample of 259 studies published in four leading nursing research journals in 2005-2006, we examined whether nurse researchers favor females as study participants. On average, 75.3% of study participants were female, and 38% of studies had all-female samples. The bias favoring female participants was statistically significant and persistent. The bias was observed regardless of funding source, methodological features, and other participant and researcher characteristics, with one exception: studies that had male investigators had more sex-balanced samples. When designing studies, nurse researchers need to pay close attention to who will benefit from their research and to whether they are leaving out a specific group about which there is a gap in knowledge. (c) 2008 Wiley Periodicals, Inc.

  12. Correction of bias in belt transect studies of immotile objects

    USGS Publications Warehouse

    Anderson, D.R.; Pospahala, R.S.

    1970-01-01

    Unless a correction is made, population estimates derived from a sample of belt transects will be biased if a fraction of the individuals on the sample transects are not counted. An approach, useful for correcting this bias when sampling immotile populations using transects of a fixed width, is presented. The method assumes that a searcher's ability to find objects near the center of the transect is nearly perfect. The method utilizes a mathematical equation, estimated from the data, to represent the searcher's inability to find all objects at increasing distances from the center of the transect. An example of the analysis of data, formation of the equation, and application is presented using waterfowl nesting data collected in Colorado.
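
    As a rough illustration of the correction idea (not the particular equation form fitted by Anderson and Pospahala), the sketch below simulates a fixed-width belt transect in which detectability falls off with perpendicular distance from the centerline, fits a half-normal detection function by maximum likelihood, and uses the estimated effective strip half-width to correct the naive density estimate. All parameter values are hypothetical.

      import numpy as np
      from scipy.optimize import minimize_scalar
      from scipy.stats import halfnorm

      rng = np.random.default_rng(1)

      # Belt transect of half-width W and length L; objects near the centerline are
      # (almost) always found, detection falls off with perpendicular distance x.
      W, L, true_density = 50.0, 20_000.0, 0.002        # m, m, objects per square metre
      n_objects = rng.poisson(true_density * 2 * W * L)
      x = rng.uniform(0, W, n_objects)                  # perpendicular distances
      sigma_true = 20.0
      detected = rng.random(n_objects) < np.exp(-x**2 / (2 * sigma_true**2))
      x_obs = x[detected]

      # Naive estimate pretends every object inside the belt was counted (biased low)
      naive_density = x_obs.size / (2 * W * L)

      # Fit a half-normal detection function g(x) = exp(-x^2 / (2 sigma^2)), truncated
      # at W, by maximum likelihood, then convert to an effective strip half-width.
      def neg_loglik(sigma):
          return -np.sum(halfnorm.logpdf(x_obs, scale=sigma)
                         - np.log(halfnorm.cdf(W, scale=sigma)))

      sigma_hat = minimize_scalar(neg_loglik, bounds=(1.0, 200.0), method="bounded").x
      mu_hat = sigma_hat * np.sqrt(np.pi / 2) * halfnorm.cdf(W, scale=sigma_hat)
      corrected_density = x_obs.size / (2 * mu_hat * L)

      print(f"true density       : {true_density:.5f}")
      print(f"naive estimate     : {naive_density:.5f}")
      print(f"corrected estimate : {corrected_density:.5f}")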

  13. Surface sampling techniques for 3D object inspection

    NASA Astrophysics Data System (ADS)

    Shih, Chihhsiong S.; Gerhardt, Lester A.

    1995-03-01

    While the uniform sampling method is quite popular for pointwise measurement of manufactured parts, this paper proposes three novel sampling strategies which emphasize 3D non-uniform inspection capability. They are: (a) the adaptive sampling, (b) the local adjustment sampling, and (c) the finite element centroid sampling techniques. The adaptive sampling strategy is based on a recursive surface subdivision process. Two different approaches are described for this adaptive sampling strategy. One uses triangle patches while the other uses rectangle patches. Several real world objects were tested using these two algorithms. Preliminary results show that sample points are distributed more closely around edges, corners, and vertices as desired for many classes of objects. Adaptive sampling using triangle patches is shown to generally perform better than both uniform and adaptive sampling using rectangle patches. The local adjustment sampling strategy uses a set of predefined starting points and then finds the local optimum position of each nodal point. This method approximates the object by moving the points toward object edges and corners. In a hybrid approach, uniform and non-uniform point sets, first preprocessed by the adaptive sampling algorithm on a real-world object, were then tested using the local adjustment sampling method. The results show that the initial point sets, when preprocessed by adaptive sampling using triangle patches, are moved the least distance by the subsequently applied local adjustment method, again showing the superiority of this approach. The finite element sampling technique samples the centroids of the surface triangle meshes produced from the finite element method. The performance of this algorithm was compared to that of the adaptive sampling using triangular patches. The adaptive sampling with triangular patches was once again shown to be better on different classes of objects.

  14. On sampling biases arising from insufficient bottle flushing

    NASA Astrophysics Data System (ADS)

    Codispoti, L. A.; Paver, C. R.

    2016-02-01

    Collection of representative water samples using carousel bottles is important for accurately determining biological and chemical gradients. The development of more technologically advanced instrumentation and sampling apparatus causes sampling packages to increase in size and "soak times" to decrease, increasing the probability that insufficient bottle flushing will produce biased results. Qualitative evidence from various expeditions suggests that insufficient flushing may be a problem. Here we report on multiple field experiments that were conducted to better quantify the errors that can arise from insufficient bottle flushing. Our experiments suggest that soak times of more than 2 minutes are sometimes required to collect a representative sample.

  15. Quality of major ion and total dissolved solids data from groundwater sampled by the National Water-Quality Assessment Program, 1992–2010

    USGS Publications Warehouse

    Gross, Eliza L.; Lindsey, Bruce D.; Rupert, Michael G.

    2012-01-01

    Field blank samples help determine the frequency and magnitude of contamination bias, and replicate samples help determine the sampling variability (error) of measured analyte concentrations. Quality control data were evaluated for calcium, magnesium, sodium, potassium, chloride, sulfate, fluoride, silica, and total dissolved solids. A 99-percent upper confidence limit is calculated from field blanks to assess the potential for contamination bias. For magnesium, potassium, chloride, sulfate, and fluoride, potential contamination in more than 95 percent of environmental samples is less than or equal to the common maximum reporting level. Contamination bias has little effect on measured concentrations greater than 4.74 mg/L (milligrams per liter) for calcium, 14.98 mg/L for silica, 4.9 mg/L for sodium, and 120 mg/L for total dissolved solids. Estimates of sampling variability are calculated for high and low ranges of concentration for major ions and total dissolved solids. Examples showing the calculation of confidence intervals and how to determine whether measured differences between two water samples are significant are presented.
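
    The arithmetic behind these variability estimates is straightforward; the sketch below shows one common convention for turning sequential-replicate pairs into per-pair and pooled relative standard deviations. The concentrations are invented, and the actual report groups pairs by concentration range and may use a slightly different pooling formula.

      import numpy as np

      # Hypothetical sequential-replicate pairs (mg/L) for one analyte; each row is
      # an environmental sample and its replicate.
      pairs = np.array([
          [24.1, 24.6],
          [38.0, 37.4],
          [ 5.2,  5.1],
          [61.3, 62.0],
          [12.8, 13.1],
      ])

      means = pairs.mean(axis=1)
      # The standard deviation of a duplicate pair reduces to |difference| / sqrt(2)
      sds = np.abs(pairs[:, 0] - pairs[:, 1]) / np.sqrt(2)

      rsd = 100 * sds / means                        # relative standard deviation, percent
      pooled_rsd = 100 * np.sqrt(np.mean((sds / means) ** 2))

      for m, r in zip(means, rsd):
          print(f"pair mean {m:6.1f} mg/L   RSD {r:4.1f} %")
      print(f"pooled RSD: {pooled_rsd:.1f} %")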

  16. Unconstrained Enhanced Sampling for Free Energy Calculations of Biomolecules: A Review

    PubMed Central

    Miao, Yinglong; McCammon, J. Andrew

    2016-01-01

    Free energy calculations are central to understanding the structure, dynamics and function of biomolecules. Yet insufficient sampling of biomolecular configurations is often regarded as one of the main sources of error. Many enhanced sampling techniques have been developed to address this issue. Notably, enhanced sampling methods based on biasing collective variables (CVs), including the widely used umbrella sampling, adaptive biasing force and metadynamics, have been discussed in a recent excellent review (Abrams and Bussi, Entropy, 2014). Here, we aim to review enhanced sampling methods that do not require predefined system-dependent CVs for biomolecular simulations and as such do not suffer from the hidden energy barrier problem as encountered in the CV-biasing methods. These methods include, but are not limited to, replica exchange/parallel tempering, self-guided molecular/Langevin dynamics, essential energy space random walk and accelerated molecular dynamics. While it is overwhelming to describe all details of each method, we provide a summary of the methods along with the applications and offer our perspectives. We conclude with challenges and prospects of the unconstrained enhanced sampling methods for accurate biomolecular free energy calculations. PMID:27453631

  17. Unconstrained Enhanced Sampling for Free Energy Calculations of Biomolecules: A Review.

    PubMed

    Miao, Yinglong; McCammon, J Andrew

    Free energy calculations are central to understanding the structure, dynamics and function of biomolecules. Yet insufficient sampling of biomolecular configurations is often regarded as one of the main sources of error. Many enhanced sampling techniques have been developed to address this issue. Notably, enhanced sampling methods based on biasing collective variables (CVs), including the widely used umbrella sampling, adaptive biasing force and metadynamics, have been discussed in a recent excellent review (Abrams and Bussi, Entropy, 2014). Here, we aim to review enhanced sampling methods that do not require predefined system-dependent CVs for biomolecular simulations and as such do not suffer from the hidden energy barrier problem as encountered in the CV-biasing methods. These methods include, but are not limited to, replica exchange/parallel tempering, self-guided molecular/Langevin dynamics, essential energy space random walk and accelerated molecular dynamics. While it is overwhelming to describe all details of each method, we provide a summary of the methods along with the applications and offer our perspectives. We conclude with challenges and prospects of the unconstrained enhanced sampling methods for accurate biomolecular free energy calculations.

  18. Real space mapping of oxygen vacancy diffusion and electrochemical transformations by hysteretic current reversal curve measurements

    DOEpatents

    Kalinin, Sergei V.; Balke, Nina; Borisevich, Albina Y.; Jesse, Stephen; Maksymovych, Petro; Kim, Yunseok; Strelcov, Evgheni

    2014-06-10

    An excitation voltage biases an ionic conducting material sample over a nanoscale grid. The bias sweeps a modulated voltage with increasing maximal amplitudes. A current response is measured at grid locations. Current response reversal curves are mapped over maximal amplitudes of the bias cycles. Reversal curves are averaged over the grid for each bias cycle and mapped over maximal bias amplitudes for each bias cycle. Average reversal curve areas are mapped over maximal amplitudes of the bias cycles. Thresholds are determined for onset and ending of electrochemical activity. A predetermined number of bias sweeps may vary in frequency where each sweep has a constant number of cycles and reversal response curves may indicate ionic diffusion kinetics.

  19. Monitoring landscape metrics by point sampling: accuracy in estimating Shannon's diversity and edge density.

    PubMed

    Ramezani, Habib; Holm, Sören; Allard, Anna; Ståhl, Göran

    2010-05-01

    Environmental monitoring of landscapes is of increasing interest. To quantify landscape patterns, a number of metrics are used, of which Shannon's diversity, edge length, and density are studied here. As an alternative to complete mapping, point sampling was applied to estimate the metrics for already mapped landscapes selected from the National Inventory of Landscapes in Sweden (NILS). Monte-Carlo simulation was applied to study the performance of different designs. Random and systematic samplings were applied for four sample sizes and five buffer widths. The latter feature was relevant for edge length, since length was estimated through the number of points falling in buffer areas around edges. In addition, two landscape complexities were tested by applying two classification schemes with seven or 20 land cover classes to the NILS data. As expected, the root mean square error (RMSE) of the estimators decreased with increasing sample size. The estimators of both metrics were slightly biased, but the bias of Shannon's diversity estimator was shown to decrease when sample size increased. In the edge length case, an increasing buffer width resulted in larger bias due to the increased impact of boundary conditions; this effect was shown to be independent of sample size. However, we also developed adjusted estimators that eliminate the bias of the edge length estimator. The rates of decrease of RMSE with increasing sample size and buffer width were quantified by a regression model. Finally, indicative cost-accuracy relationships were derived showing that point sampling could be a competitive alternative to complete wall-to-wall mapping.
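
    A stripped-down Monte Carlo sketch of the point-sampling idea for Shannon's diversity follows. It uses an invented categorical grid rather than NILS data, omits the edge-length/buffer part, and simply shows how the plug-in estimator's bias and RMSE shrink as the number of sample points grows.

      import numpy as np

      rng = np.random.default_rng(2)

      # Hypothetical mapped landscape: a categorical grid with 7 land cover classes
      landscape = rng.choice(7, size=(500, 500),
                             p=[0.35, 0.25, 0.15, 0.10, 0.07, 0.05, 0.03])

      def shannon(counts):
          p = counts / counts.sum()
          p = p[p > 0]
          return -np.sum(p * np.log(p))

      true_H = shannon(np.bincount(landscape.ravel(), minlength=7))

      # Point sampling: estimate class proportions from randomly placed points and
      # plug them into the Shannon formula (a slightly biased plug-in estimator).
      for n_points in (50, 200, 1000):
          estimates = []
          for _ in range(500):                       # Monte Carlo repetitions
              rows = rng.integers(0, 500, n_points)
              cols = rng.integers(0, 500, n_points)
              estimates.append(shannon(np.bincount(landscape[rows, cols], minlength=7)))
          estimates = np.array(estimates)
          bias = estimates.mean() - true_H
          rmse = np.sqrt(np.mean((estimates - true_H) ** 2))
          print(f"n = {n_points:4d}   bias = {bias:+.4f}   RMSE = {rmse:.4f}")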

  20. Response Rates and Response Bias for 50 Surveys of Pediatricians

    PubMed Central

    Cull, William L; O'Connor, Karen G; Sharp, Sanford; Tang, Suk-fong S

    2005-01-01

    Research Objective To track response rates across time for surveys of pediatricians, to explore whether response bias is present for these surveys, and to examine whether response bias increases with lower response rates. Data Source/Study Setting A total of 63,473 cases were gathered from 50 different surveys of pediatricians conducted by the American Academy of Pediatrics (AAP) since 1994. Thirty-one surveys targeted active U.S. members of the AAP, six targeted pediatric residents, and the remaining 13 targeted AAP-member and nonmember pediatric subspecialists. Information for the full target samples, including nonrespondents, was collected using administrative databases of the AAP and the American Board of Pediatrics. Study Design To assess bias for each survey, age, gender, location, and AAP membership type were compared for respondents and the full target sample. Correlational analyses were conducted to examine whether surveys with lower response rates had increasing levels of response bias. Principal Findings Response rates to the 50 surveys examined declined significantly across survey years (1994–2002). Response rates ranged from 52 to 81 percent with an average of 68 percent. Comparisons between respondents and the full target samples showed the respondent group to be younger, to have more females, and to have fewer specialty-fellow members. Response bias was not apparent for pediatricians' geographical location. The average response bias, however, was fairly small for all factors: age (0.45 years younger), gender (1.4 percentage points more females), and membership type (1.1 percentage points fewer specialty-fellow members). Gender response bias was found to be inversely associated with survey response rates (r=−0.38). Even for the surveys with the lowest response rates, the amount of response bias never exceeded 5 percentage points for gender, 3 years for age, or 3 percent for membership type. Conclusions While response biases favoring women, young physicians, and nonspecialty-fellow members were found across the 52–81 percent response rates examined in this study, the amount of bias was minimal for the factors that could be tested. At least for surveys of pediatricians, more attention should be devoted by investigators to assessments of response bias rather than relying on response rates as a proxy of response bias. PMID:15663710

  1. Attention bias in adults with anorexia nervosa, obsessive-compulsive disorder, and social anxiety disorder

    PubMed Central

    Schneier, Franklin R.; Kimeldorf, Marcia B.; Choo, Tse; Steinglass, Joanna E.; Wall, Melanie; Fyer, Abby J.; Simpson, H. Blair

    2016-01-01

    Background Attention bias to threat (selective attention toward threatening stimuli) has been frequently found in anxiety disorder samples, but its distribution both within and beyond this category is unclear. Attention bias has been studied extensively in social anxiety disorder (SAD) but relatively little in obsessive compulsive disorder (OCD), historically considered an anxiety disorder, or anorexia nervosa (AN), which is often characterized by interpersonal as well as body image/eating fears. Methods Medication-free adults with SAD (n=43), OCD (n=50), or AN (n=30), and healthy control volunteers (HC, n=74) were evaluated for attention bias with an established dot probe task presenting images of angry and neutral faces. Additional outcomes included attention bias variability (ABV), which summarizes fluctuation in attention between vigilance and avoidance, and has been reported to have superior reliability. We hypothesized that attention bias would be elevated in SAD and associated with SAD severity. Results Attention bias in each disorder did not differ from HC, but within the SAD group attention bias correlated significantly with severity of social avoidance. ABV was significantly lower in OCD versus HC, and it correlated positively with severity of OCD symptoms within the OCD group. Conclusions Findings do not support differences from HC in attention bias to threat faces for SAD, OCD, or AN. Within the SAD sample, the association of attention bias with severity of social avoidance is consistent with evidence that attention bias moderates development of social withdrawal. The association of ABV with OCD diagnosis and severity is novel and deserves further study. PMID:27174402

  2. Quality of evidence revealing subtle gender biases in science is in the eye of the beholder

    PubMed Central

    Handley, Ian M.; Brown, Elizabeth R.; Moss-Racusin, Corinne A.; Smith, Jessi L.

    2015-01-01

    Scientists are trained to evaluate and interpret evidence without bias or subjectivity. Thus, growing evidence revealing a gender bias against women—or favoring men—within science, technology, engineering, and mathematics (STEM) settings is provocative and raises questions about the extent to which gender bias may contribute to women’s underrepresentation within STEM fields. To the extent that research illustrating gender bias in STEM is viewed as convincing, the culture of science can begin to address the bias. However, are men and women equally receptive to this type of experimental evidence? This question was tested with three randomized, double-blind experiments—two involving samples from the general public (n = 205 and 303, respectively) and one involving a sample of university STEM and non-STEM faculty (n = 205). In all experiments, participants read an actual journal abstract reporting gender bias in a STEM context (or an altered abstract reporting no gender bias in experiment 3) and evaluated the overall quality of the research. Results across experiments showed that men evaluate the gender-bias research less favorably than women, and, of concern, this gender difference was especially prominent among STEM faculty (experiment 2). These results suggest a relative reluctance among men, especially faculty men within STEM, to accept evidence of gender biases in STEM. This finding is problematic because broadening the participation of underrepresented people in STEM, including women, necessarily requires a widespread willingness (particularly by those in the majority) to acknowledge that bias exists before transformation is possible. PMID:26460001

  3. Standardized mean differences cause funnel plot distortion in publication bias assessments.

    PubMed

    Zwetsloot, Peter-Paul; Van Der Naald, Mira; Sena, Emily S; Howells, David W; IntHout, Joanna; De Groot, Joris Ah; Chamuleau, Steven Aj; MacLeod, Malcolm R; Wever, Kimberley E

    2017-09-08

    Meta-analyses are increasingly used for synthesis of evidence from biomedical research, and often include an assessment of publication bias based on visual or analytical detection of asymmetry in funnel plots. We studied the influence of different normalisation approaches, sample size and intervention effects on funnel plot asymmetry, using empirical datasets and illustrative simulations. We found that funnel plots of the Standardized Mean Difference (SMD) plotted against the standard error (SE) are susceptible to distortion, leading to overestimation of the existence and extent of publication bias. Distortion was more severe when the primary studies had a small sample size and when an intervention effect was present. We show that using the Normalised Mean Difference measure as effect size (when possible), or plotting the SMD against a sample size-based precision estimate, are more reliable alternatives. We conclude that funnel plots using the SMD in combination with the SE are unsuitable for publication bias assessments and can lead to false-positive results.
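
    A small simulation in the spirit of this result (not the authors' code or datasets) is sketched below. Many small two-arm studies with a genuine effect and no publication bias are generated; because the standard error of the SMD contains the SMD itself, effect size and SE end up correlated, which is precisely the artificial funnel asymmetry described above, whereas a sample-size-based precision axis largely removes it.

      import numpy as np

      rng = np.random.default_rng(3)

      true_smd = 0.8            # genuine intervention effect; no publication bias simulated
      n_studies = 300

      d_vals, se_vals, n_vals = [], [], []
      for _ in range(n_studies):
          n = rng.integers(5, 30)                        # small primary studies, n per group
          ctrl = rng.normal(0.0, 1.0, n)
          trt = rng.normal(true_smd, 1.0, n)
          sp = np.sqrt((ctrl.var(ddof=1) + trt.var(ddof=1)) / 2)   # pooled SD (equal n)
          d = (trt.mean() - ctrl.mean()) / sp
          se = np.sqrt(2 / n + d**2 / (4 * n))           # usual large-sample SE of the SMD
          d_vals.append(d)
          se_vals.append(se)
          n_vals.append(n)

      d_vals, se_vals, n_vals = map(np.array, (d_vals, se_vals, n_vals))

      # The SE formula contains d, so larger observed effects get larger SEs and the
      # d-versus-SE funnel tilts even though no publication bias was simulated.
      print("corr(d, SE)        :", round(np.corrcoef(d_vals, se_vals)[0, 1], 2))
      # A precision axis based only on sample size largely removes that artefact.
      print("corr(d, 1/sqrt(2n)):", round(np.corrcoef(d_vals, 1 / np.sqrt(2 * n_vals))[0, 1], 2))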

  4. Standardized mean differences cause funnel plot distortion in publication bias assessments

    PubMed Central

    Van Der Naald, Mira; Sena, Emily S; Howells, David W; IntHout, Joanna; De Groot, Joris AH; Chamuleau, Steven AJ; MacLeod, Malcolm R

    2017-01-01

    Meta-analyses are increasingly used for synthesis of evidence from biomedical research, and often include an assessment of publication bias based on visual or analytical detection of asymmetry in funnel plots. We studied the influence of different normalisation approaches, sample size and intervention effects on funnel plot asymmetry, using empirical datasets and illustrative simulations. We found that funnel plots of the Standardized Mean Difference (SMD) plotted against the standard error (SE) are susceptible to distortion, leading to overestimation of the existence and extent of publication bias. Distortion was more severe when the primary studies had a small sample size and when an intervention effect was present. We show that using the Normalised Mean Difference measure as effect size (when possible), or plotting the SMD against a sample size-based precision estimate, are more reliable alternatives. We conclude that funnel plots using the SMD in combination with the SE are unsuitable for publication bias assessments and can lead to false-positive results. PMID:28884685

  5. Influence of item distribution pattern and abundance on efficiency of benthic core sampling

    USGS Publications Warehouse

    Behney, Adam C.; O'Shaughnessy, Ryan; Eichholz, Michael W.; Stafford, Joshua D.

    2014-01-01

    Core sampling is a commonly used method to estimate benthic item density, but little information exists about factors influencing the accuracy and time-efficiency of this method. We simulated core sampling in a Geographic Information System framework by generating points (benthic items) and polygons (core samplers) to assess how sample size (number of core samples), core sampler size (cm²), distribution of benthic items, and item density affected the bias and precision of estimates of density, the detection probability of items, and the time-costs. When items were distributed randomly versus clumped, bias decreased and precision increased with increasing sample size and increased slightly with increasing core sampler size. Bias and precision were only affected by benthic item density at very low values (500–1,000 items/m²). Detection probability (the probability of capturing ≥ 1 item in a core sample if it is available for sampling) was substantially greater when items were distributed randomly as opposed to clumped. Taking more small-diameter core samples was always more time-efficient than taking fewer large-diameter samples. We are unable to present a single, optimal sample size, but provide information for researchers and managers to derive optimal sample sizes dependent on their research goals and environmental conditions.
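
    The sketch below mimics the study's setup in plain Python rather than a GIS: benthic items are laid out either at random or in clusters, circular cores are thrown at the plot, and the bias and coefficient of variation of the resulting density estimates are tallied for different numbers of cores. All parameter values are hypothetical and edge effects are ignored.

      import numpy as np

      rng = np.random.default_rng(4)

      AREA = 100.0               # square plot, 100 x 100 (arbitrary units)
      CORE_RADIUS = 2.0          # circular "core sampler"
      TRUE_DENSITY = 0.2         # benthic items per unit area

      def simulate(clumped, n_cores, reps=200):
          estimates = []
          for _ in range(reps):
              if clumped:
                  # Simple cluster process: Poisson parents, 20 clustered offspring each
                  n_parents = rng.poisson(TRUE_DENSITY * AREA**2 / 20)
                  parents = rng.uniform(0, AREA, (n_parents, 2))
                  pts = np.vstack([p + rng.normal(0, 2.0, (20, 2)) for p in parents])
              else:
                  pts = rng.uniform(0, AREA, (rng.poisson(TRUE_DENSITY * AREA**2), 2))
              centers = rng.uniform(CORE_RADIUS, AREA - CORE_RADIUS, (n_cores, 2))
              counts = [np.sum(np.hypot(*(pts - c).T) <= CORE_RADIUS) for c in centers]
              estimates.append(np.mean(counts) / (np.pi * CORE_RADIUS**2))
          estimates = np.array(estimates)
          return estimates.mean(), estimates.std() / estimates.mean()

      for clumped in (False, True):
          for n_cores in (5, 20, 50):
              mean_est, cv = simulate(clumped, n_cores)
              label = "clumped" if clumped else "random "
              print(f"{label}  cores = {n_cores:3d}  mean density = {mean_est:.3f}  CV = {cv:.2f}")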

  6. Development of theoretical approach for describing electronic properties of hetero-interface systems under applied bias voltage.

    PubMed

    Iida, Kenji; Noda, Masashi; Nobusada, Katsuyuki

    2017-02-28

    We have developed a theoretical approach for describing the electronic properties of hetero-interface systems under an applied electrode bias. The finite-temperature density functional theory is employed for controlling the chemical potential in their interfacial region, and thereby the electronic charge of the system is obtained. The electric field generated by the electronic charging is described as a saw-tooth-like electrostatic potential. Because of the continuum approximation of dielectrics sandwiched between electrodes, we treat dielectrics with thicknesses in a wide range from a few nanometers to more than several meters. Furthermore, the approach is implemented in our original computational program named grid-based coupled electron and electromagnetic field dynamics (GCEED), facilitating its application to nanostructures. Thus, the approach is capable of comprehensively revealing electronic structure changes in hetero-interface systems with an applied bias that are practically useful for experimental studies. We calculate the electronic structure of a SiO2-graphene-boron nitride (BN) system in which an electrode bias is applied between the graphene layer and an electrode attached on the SiO2 film. The electronic energy barrier between graphene and BN is varied with an applied bias, and the energy variation depends on the thickness of the BN film. This is because the density of states of graphene is so low that the graphene layer cannot fully screen the electric field generated by the electrodes. We have demonstrated that the electronic properties of hetero-interface systems are well controlled by the combination of the electronic charging and the generated electric field.

  7. Sampling Versus Filtering in Large-Eddy Simulations

    NASA Technical Reports Server (NTRS)

    Debliquy, O.; Knaepen, B.; Carati, D.; Wray, A. A.

    2004-01-01

    A LES formalism in which the filter operator is replaced by a sampling operator is proposed. The unknown quantities that appear in the LES equations originate only from inadequate resolution (discretization errors). The resulting viewpoint seems to make a link between finite difference approaches and finite element methods. Sampling operators are shown to commute with nonlinearities and to be purely projective. Moreover, their use allows an unambiguous definition of the LES numerical grid. The price to pay is that sampling never commutes with spatial derivatives and the commutation errors must be modeled. It is shown that models for the discretization errors may be treated using the dynamic procedure. Preliminary results, using the Smagorinsky model, are very encouraging.
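
    The two algebraic properties mentioned here are easy to check numerically. The toy sketch below (not from the paper) uses a periodic 1-D grid, a sampling operator that keeps every fourth point, and a centered finite difference; sampling commutes exactly with a pointwise nonlinearity but not with the derivative, and that commutation error is what must be modeled.

      import numpy as np

      # Fine periodic grid and a smooth field u(x)
      N_fine, k = 256, 4                     # the coarse grid keeps every 4th point
      x = np.linspace(0, 2 * np.pi, N_fine, endpoint=False)
      u = np.sin(x) + 0.3 * np.sin(5 * x)

      def sample(v):                         # the "sampling operator": pure injection
          return v[::k]

      def ddx(v, L=2 * np.pi):               # centered finite difference, periodic
          h = L / v.size
          return (np.roll(v, -1) - np.roll(v, 1)) / (2 * h)

      # Sampling commutes with pointwise nonlinearities ...
      print(np.max(np.abs(sample(u**2) - sample(u)**2)))        # exactly zero

      # ... but not with spatial differentiation: this commutation error is the
      # term that must be modeled in a sampling-based LES formalism.
      print(np.max(np.abs(sample(ddx(u)) - ddx(sample(u)))))    # clearly nonzero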

  8. Anticipation or ascertainment bias in schizophrenia? Penrose's familial mental illness sample

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bassett, A.S.; Husted, J.

    Several studies have observed anticipation (earlier age at onset [AAO] in successive generations) in familial schizophrenia. However, whether true anticipation or ascertainment bias is the principal originating mechanism remains unclear. In 1944 L.S. Penrose collected AAO data on a large, representative sample of familial mental illness, using a broad ascertainment strategy. These data allowed examination of anticipation and ascertainment biases in five two-generation samples of affected relative pairs. The median intergenerational difference (MID) in AAO was used to assess anticipation. Results showed significant anticipation in parent-offspring pairs with schizophrenia (n = 137 pairs; MID 15 years; P = .0001) and in a positive control sample with Huntington disease (n = 11; P = .01). Broadening the diagnosis of the schizophrenia sample suggested anticipation of severity of illness. However, other analyses provided evidence for ascertainment bias, especially in later-AAO parents, in parent-offspring pairs. Aunt/uncle-niece/nephew schizophrenia pairs showed anticipation (n = 111; P = .0001), but the MID was 8 years and aunts/uncles had earlier median AAO than parents. Anticipation effects were greatest in pairs with late-AAO parents but remained significant in a subgroup of schizophrenia pairs with early parental AAO (n = 31; P = .03). A small control sample of other diseases had MID of 5 years but no significant anticipation (n = 9; F = .38). These results suggest that, although ascertainment-bias effects were observed in parent-offspring pairs, true anticipation appears to be inherent in the transmission of familial schizophrenia. The findings support investigations of unstable mutations and other mechanisms that may contribute to true anticipation in schizophrenia. 37 refs., 2 tabs.

  9. Personality, Attentional Biases towards Emotional Faces and Symptoms of Mental Disorders in an Adolescent Sample.

    PubMed

    O'Leary-Barrett, Maeve; Pihl, Robert O; Artiges, Eric; Banaschewski, Tobias; Bokde, Arun L W; Büchel, Christian; Flor, Herta; Frouin, Vincent; Garavan, Hugh; Heinz, Andreas; Ittermann, Bernd; Mann, Karl; Paillère-Martinot, Marie-Laure; Nees, Frauke; Paus, Tomas; Pausova, Zdenka; Poustka, Luise; Rietschel, Marcella; Robbins, Trevor W; Smolka, Michael N; Ströhle, Andreas; Schumann, Gunter; Conrod, Patricia J

    2015-01-01

    To investigate the role of personality factors and attentional biases towards emotional faces, in establishing concurrent and prospective risk for mental disorder diagnosis in adolescence. Data were obtained as part of the IMAGEN study, conducted across 8 European sites, with a community sample of 2257 adolescents. At 14 years, participants completed an emotional variant of the dot-probe task, as well as two personality measures, namely the Substance Use Risk Profile Scale and the revised NEO Personality Inventory. At 14 and 16 years, participants and their parents were interviewed to determine symptoms of mental disorders. Personality traits were general and specific risk indicators for mental disorders at 14 years. Increased specificity was obtained when investigating the likelihood of mental disorders over a 2-year period, with the Substance Use Risk Profile Scale showing incremental validity over the NEO Personality Inventory. Attentional biases to emotional faces did not characterise or predict mental disorders examined in the current sample. Personality traits can indicate concurrent and prospective risk for mental disorders in a community youth sample, and identify at-risk youth beyond the impact of baseline symptoms. This study does not support the hypothesis that attentional biases mediate the relationship between personality and psychopathology in a community sample. Task and sample characteristics that contribute to differing results among studies are discussed.

  10. Personality, Attentional Biases towards Emotional Faces and Symptoms of Mental Disorders in an Adolescent Sample

    PubMed Central

    O’Leary-Barrett, Maeve; Pihl, Robert O.; Artiges, Eric; Banaschewski, Tobias; Bokde, Arun L. W.; Büchel, Christian; Flor, Herta; Frouin, Vincent; Garavan, Hugh; Heinz, Andreas; Ittermann, Bernd; Mann, Karl; Paillère-Martinot, Marie-Laure; Nees, Frauke; Paus, Tomas; Pausova, Zdenka; Poustka, Luise; Rietschel, Marcella; Robbins, Trevor W.; Smolka, Michael N.; Ströhle, Andreas; Schumann, Gunter; Conrod, Patricia J.

    2015-01-01

    Objective To investigate the role of personality factors and attentional biases towards emotional faces, in establishing concurrent and prospective risk for mental disorder diagnosis in adolescence. Method Data were obtained as part of the IMAGEN study, conducted across 8 European sites, with a community sample of 2257 adolescents. At 14 years, participants completed an emotional variant of the dot-probe task, as well as two personality measures, namely the Substance Use Risk Profile Scale and the revised NEO Personality Inventory. At 14 and 16 years, participants and their parents were interviewed to determine symptoms of mental disorders. Results Personality traits were general and specific risk indicators for mental disorders at 14 years. Increased specificity was obtained when investigating the likelihood of mental disorders over a 2-year period, with the Substance Use Risk Profile Scale showing incremental validity over the NEO Personality Inventory. Attentional biases to emotional faces did not characterise or predict mental disorders examined in the current sample. Discussion Personality traits can indicate concurrent and prospective risk for mental disorders in a community youth sample, and identify at-risk youth beyond the impact of baseline symptoms. This study does not support the hypothesis that attentional biases mediate the relationship between personality and psychopathology in a community sample. Task and sample characteristics that contribute to differing results among studies are discussed. PMID:26046352

  11. Practical guidance on characterizing availability in resource selection functions under a use-availability design

    USGS Publications Warehouse

    Northrup, Joseph M.; Hooten, Mevin B.; Anderson, Charles R.; Wittemyer, George

    2013-01-01

    Habitat selection is a fundamental aspect of animal ecology, the understanding of which is critical to management and conservation. Global positioning system data from animals allow fine-scale assessments of habitat selection and typically are analyzed in a use-availability framework, whereby animal locations are contrasted with random locations (the availability sample). Although most use-availability methods are in fact spatial point process models, they often are fit using logistic regression. This framework offers numerous methodological challenges, for which the literature provides little guidance. Specifically, the size and spatial extent of the availability sample influence coefficient estimates, potentially causing interpretational bias. We examined the influence of availability on statistical inference through simulations and analysis of serially correlated mule deer GPS data. Bias in estimates arose from incorrectly assessing and sampling the spatial extent of availability. Spatial autocorrelation in covariates, which is common for landscape characteristics, exacerbated the error in availability sampling, leading to increased bias. These results have strong implications for habitat selection analyses using GPS data, which are increasingly prevalent in the literature. We recommend researchers assess the sensitivity of their results to their availability sample and, where bias is likely, take care with interpretations and use cross validation to assess robustness.
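
    A toy version of this sensitivity can be sketched with a use-availability logistic regression on a single hypothetical covariate (this is not the mule deer analysis, and the selection strength and extents are invented). Drawing the availability sample from an extent smaller or larger than what the animal truly had available visibly distorts the estimated selection coefficient.

      import numpy as np
      from sklearn.linear_model import LogisticRegression

      rng = np.random.default_rng(6)

      beta_true = 2.0                  # strength of selection for covariate x
      n_used, n_avail = 500, 5000

      # "Used" locations: the animal selects habitat within its true available
      # range [0, 1] with relative intensity exp(beta * x) (rejection sampling).
      x_cand = rng.uniform(0, 1, 50_000)
      keep = rng.random(x_cand.size) < np.exp(beta_true * (x_cand - 1))
      x_used = rng.choice(x_cand[keep], n_used, replace=False)

      # The analyst draws availability points from [0, extent]; only extent = 1
      # matches what the animal actually had available in this simulation.
      for extent in (0.5, 1.0, 2.0, 4.0):
          x_avail = rng.uniform(0, extent, n_avail)
          X = np.concatenate([x_used, x_avail]).reshape(-1, 1)
          y = np.concatenate([np.ones(n_used), np.zeros(n_avail)])
          fit = LogisticRegression(C=1e6, max_iter=5000).fit(X, y)
          print(f"availability extent {extent:3.1f}: estimated coefficient = {fit.coef_[0, 0]:6.2f}")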

  12. Sampling considerations for disease surveillance in wildlife populations

    USGS Publications Warehouse

    Nusser, S.M.; Clark, W.R.; Otis, D.L.; Huang, L.

    2008-01-01

    Disease surveillance in wildlife populations involves detecting the presence of a disease, characterizing its prevalence and spread, and subsequent monitoring. A probability sample of animals selected from the population and corresponding estimators of disease prevalence and detection provide estimates with quantifiable statistical properties, but this approach is rarely used. Although wildlife scientists often assume probability sampling and random disease distributions to calculate sample sizes, convenience samples (i.e., samples of readily available animals) are typically used, and disease distributions are rarely random. We demonstrate how landscape-based simulation can be used to explore properties of estimators from convenience samples in relation to probability samples. We used simulation methods to model what is known about the habitat preferences of the wildlife population, the disease distribution, and the potential biases of the convenience-sample approach. Using chronic wasting disease in free-ranging deer (Odocoileus virginianus) as a simple illustration, we show that using probability sample designs with appropriate estimators provides unbiased surveillance parameter estimates but that the selection bias and coverage errors associated with convenience samples can lead to biased and misleading results. We also suggest practical alternatives to convenience samples that mix probability and convenience sampling. For example, a sample of land areas can be selected using a probability design that oversamples areas with larger animal populations, followed by harvesting of individual animals within sampled areas using a convenience sampling method.
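
    The core contrast can be seen in a few lines of simulation, a deliberately crude, hypothetical stand-in for the landscape-based simulations described here: when disease is clustered in the stratum that a convenience sample under-visits, the convenience estimate of prevalence is biased while an equal-probability sample is not.

      import numpy as np

      rng = np.random.default_rng(8)

      # Hypothetical population: two habitat strata with different deer numbers and
      # different disease prevalence (the disease is clustered, not random).
      deer_per_stratum = np.array([8000, 2000])      # stratum 0 = remote, 1 = near roads
      prevalence = np.array([0.06, 0.01])
      infected = [rng.random(n) < p for n, p in zip(deer_per_stratum, prevalence)]
      true_prev = sum(arr.sum() for arr in infected) / deer_per_stratum.sum()

      n_sample = 400

      # Probability sample: every deer has the same inclusion probability.
      pool = np.concatenate(infected)
      prob_est = rng.choice(pool, n_sample, replace=False).mean()

      # Convenience sample: 80% of animals come from the accessible roadside stratum.
      n_road = int(0.8 * n_sample)
      conv = np.concatenate([
          rng.choice(infected[1], n_road, replace=False),
          rng.choice(infected[0], n_sample - n_road, replace=False),
      ])
      conv_est = conv.mean()

      print(f"true prevalence          : {true_prev:.3f}")
      print(f"probability-sample est.  : {prob_est:.3f}")
      print(f"convenience-sample est.  : {conv_est:.3f}")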

  13. Strain analysis and microstructural evolution characteristic of neoproterozoic rocks associations of Wadi El Falek, centre Eastern Desert, Egypt

    NASA Astrophysics Data System (ADS)

    Kassem, Osama M. K.; Rahim, Said H. Abd El; Nashar, El Said R. El

    2012-09-01

    The estimation of finite strain in rocks is fundamental to a meaningful understanding of deformational processes and products on all scales, from microscopic fabric development to regional structural analyses. The Rf/φ and Fry methods were applied to feldspar porphyroclasts and mafic grains from 5 granite, 1 metavolcanic, 3 metasedimentary, and 1 granodiorite samples in the Wadi El Falek region. Finite-strain data show a high to moderate degree of deformation of the granitic to metavolcano-sedimentary samples; axial ratios in the XZ section range from 1.60 to 4.10 for the Rf/φ method and from 2.80 to 4.90 for the Fry method. Furthermore, the short axes are subvertical and associated with a subhorizontal foliation. We conclude that finite strain in the deformed granite rocks is of the same order of magnitude as that from the metavolcano-sedimentary rocks. Furthermore, contacts formed during intrusion of the plutons, along with some faults in the Wadi El Falek area, under brittle to semi-ductile deformation conditions. In this case, finite strain accumulated during superimposed deformation on the already assembled nappe structure, indicating that the nappe contacts formed during the accumulation of finite strain.

  14. Prediction and standard error estimation for a finite universe total when a stratum is not sampled

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wright, T.

    1994-01-01

    In the context of a universe of trucks operating in the United States in 1990, this paper presents statistical methodology for estimating a finite universe total on a second occasion when a part of the universe is sampled and the remainder of the universe is not sampled. Prediction is used to compensate for the lack of data from the unsampled portion of the universe. The sample is assumed to be a subsample of an earlier sample where stratification is used on both occasions before sample selection. Accounting for births and deaths in the universe between the two points in time, the detailed sampling plan, estimator, standard error, and optimal sample allocation are presented with a focus on the second occasion. If prior auxiliary information is available, the methodology is also applicable to a first occasion.

  15. Generalized Redistribute-to-the-Right Algorithm: Application to the Analysis of Censored Cost Data

    PubMed Central

    CHEN, SHUAI; ZHAO, HONGWEI

    2013-01-01

    Medical cost estimation is a challenging task when censoring of data is present. Although researchers have proposed methods for estimating mean costs, these are often derived from theory and are not always easy to understand. We provide an alternative method, based on a replace-from-the-right algorithm, for estimating mean costs more efficiently. We show that our estimator is equivalent to an existing one that is based on the inverse probability weighting principle and semiparametric efficiency theory. We also propose an alternative method for estimating the survival function of costs, based on the redistribute-to-the-right algorithm, that was originally used for explaining the Kaplan–Meier estimator. We show that this second proposed estimator is equivalent to a simple weighted survival estimator of costs. Finally, we develop a more efficient survival estimator of costs, using the same redistribute-to-the-right principle. This estimator is naturally monotone, more efficient than some existing survival estimators, and has a quite small bias in many realistic settings. We conduct numerical studies to examine the finite sample property of the survival estimators for costs, and show that our new estimator has small mean squared errors when the sample size is not too large. We apply both existing and new estimators to a data example from a randomized cardiovascular clinical trial. PMID:24403869
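
    For context, the sketch below implements the existing simple inverse-probability-weighted mean-cost estimator that the abstract says the first proposed estimator is equivalent to: complete cases are reweighted by a Kaplan-Meier estimate of the censoring distribution. The cohort, cost model, and horizon are hypothetical, and the redistribute-to-the-right estimators themselves are not reproduced here.

      import numpy as np

      rng = np.random.default_rng(5)

      # Hypothetical cohort followed over a fixed horizon L; cost accrues with time alive.
      n, L = 2000, 5.0
      survival = rng.exponential(3.0, n)
      event_time = np.minimum(survival, L)               # follow-up ends at death or horizon
      censor = rng.uniform(0.0, 8.0, n)                  # independent censoring
      followup = np.minimum(event_time, censor)
      complete = event_time <= censor                    # full cost observed for these subjects
      total_cost = 10_000 * event_time + rng.normal(0, 2_000, n)

      # Kaplan-Meier estimate of K(t) = P(censoring time >= t): censoring is the "event"
      order = np.argsort(followup, kind="stable")
      t_ord = followup[order]
      cens_event = (~complete)[order]
      at_risk = n - np.arange(n)
      surv_after = np.cumprod(np.where(cens_event, 1.0 - 1.0 / at_risk, 1.0))
      K = np.concatenate(([1.0], surv_after))[np.searchsorted(t_ord, followup, side="left")]

      # Simple inverse-probability-weighted estimator: complete cases reweighted by 1/K
      ipw_mean = np.sum(complete * total_cost / K) / n
      naive_mean = total_cost[complete].mean()           # complete-case average, biased low
      true_mean = np.mean(10_000 * event_time)

      print(f"true mean cost over horizon : {true_mean:10,.0f}")
      print(f"complete-case (naive) mean  : {naive_mean:10,.0f}")
      print(f"IPW weighted mean           : {ipw_mean:10,.0f}")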

  16. Cognitive Deficits and Positively Biased Self-Perceptions in Children with ADHD

    ERIC Educational Resources Information Center

    McQuade, Julia D.; Tomb, Meghan; Hoza, Betsy; Waschbusch, Daniel A.; Hurt, Elizabeth A.; Vaughn, Aaron J.

    2011-01-01

    This study examined the relation between cognitive deficits and positive bias in a sample of 272 children with and without Attention Deficit Hyperactivity Disorder (ADHD; 7-12 years old). Results indicated that children with ADHD with and without biased self-perceptions exhibit differences in specific cognitive deficits (executive processes,…

  17. Analysis of Nonresponse Bias in Research for Business Education

    ERIC Educational Resources Information Center

    Bartlett, James E., II; Bartlett, Michelle E.; Reio, Thomas G., Jr.

    2008-01-01

    This research examined the issue of nonresponse bias and how it was reported in nonexperimental quantitative research published in the "Delta Pi Epsilon Journal" between 1995 and 2004. Through content analysis, 85 articles consisting of 91 separate samples were examined. In 72.5% of the cases, possible nonresponse bias was not examined in the…

  18. Investigating the Stability of Four Methods for Estimating Item Bias.

    ERIC Educational Resources Information Center

    Perlman, Carole L.; And Others

    The reliability of item bias estimates was studied for four methods: (1) the transformed delta method; (2) Shepard's modified delta method; (3) Rasch's one-parameter residual analysis; and (4) the Mantel-Haenszel procedure. Bias statistics were computed for each sample using all methods. Data were from administration of multiple-choice items from…

  19. A multitasking finite state architecture for computer control of an electric powertrain

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Burba, J.C.

    1984-01-01

    Finite state techniques provide a common design language between the control engineer and the computer engineer for event driven computer control systems. They simplify communication and provide a highly maintainable control system understandable by both. This paper describes the development of a control system for an electric vehicle powertrain utilizing finite state concepts. The basics of finite state automata are provided as a framework to discuss a unique multitasking software architecture developed for this application. The architecture employs conventional time-sliced techniques with task scheduling controlled by a finite state machine representation of the control strategy of the powertrain. The complexities of excitation variable sampling in this environment are also considered.
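
    As a language-agnostic illustration of the concept (not the original powertrain code, and with invented states and events), a finite state machine driven by sampled events can be written as a small transition table plus a per-state task list that a time-sliced executive would run on each tick:

      from enum import Enum, auto

      class State(Enum):
          IDLE = auto()
          PRECHARGE = auto()
          DRIVE = auto()
          REGEN_BRAKE = auto()
          FAULT = auto()

      # Transition table: (current state, event) -> next state.  Events would come
      # from sampled inputs (pedal position, contactor status, fault flags, ...).
      TRANSITIONS = {
          (State.IDLE, "key_on"): State.PRECHARGE,
          (State.PRECHARGE, "bus_ready"): State.DRIVE,
          (State.DRIVE, "brake_pressed"): State.REGEN_BRAKE,
          (State.REGEN_BRAKE, "brake_released"): State.DRIVE,
          (State.DRIVE, "key_off"): State.IDLE,
      }

      # Tasks enabled in each state; a time-sliced executive runs these every tick.
      TASKS = {
          State.IDLE: ["monitor_inputs"],
          State.PRECHARGE: ["monitor_inputs", "ramp_bus_voltage"],
          State.DRIVE: ["monitor_inputs", "torque_control", "sample_sensors"],
          State.REGEN_BRAKE: ["monitor_inputs", "regen_control", "sample_sensors"],
          State.FAULT: ["shutdown_outputs"],
      }

      def step(state, event):
          """Advance the machine; a fault event overrides the transition table."""
          if event == "fault_detected":
              return State.FAULT
          return TRANSITIONS.get((state, event), state)   # unknown events are ignored

      state = State.IDLE
      for event in ["key_on", "bus_ready", "brake_pressed", "brake_released", "key_off"]:
          state = step(state, event)
          print(f"{event:>15s} -> {state.name:12s} tasks: {TASKS[state]}")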

  20. Sampling designs matching species biology produce accurate and affordable abundance indices

    PubMed Central

    Farley, Sean; Russell, Gareth J.; Butler, Matthew J.; Selinger, Jeff

    2013-01-01

    Wildlife biologists often use grid-based designs to sample animals and generate abundance estimates. Although sampling in grids is theoretically sound, in application, the method can be logistically difficult and expensive when sampling elusive species inhabiting extensive areas. These factors make it challenging to sample animals and meet the statistical assumption of all individuals having an equal probability of capture. Violating this assumption biases results. Does an alternative exist? Perhaps sampling only where resources attract animals (i.e., targeted sampling) would provide accurate abundance estimates more efficiently and affordably. However, biases from this approach would also arise if individuals have an unequal probability of capture, especially if some failed to visit the sampling area. Since most biological programs are resource limited, and acquiring abundance data drives many conservation and management applications, it becomes imperative to identify economical and informative sampling designs. Therefore, we evaluated abundance estimates generated from grid and targeted sampling designs using simulations based on global positioning system (GPS) data from 42 Alaskan brown bears (Ursus arctos). Migratory salmon drew brown bears from the wider landscape, concentrating them at anadromous streams. This provided a scenario for testing the targeted approach. Grid and targeted sampling varied by number of traps, trap placement (random, systematic, or by expert opinion), and whether traps were stationary or moved between capture sessions. We began by identifying when to sample, and whether bears had equal probability of capture. We compared abundance estimates against seven criteria: bias, precision, accuracy, effort, encounter rates, and probabilities of capture and recapture. One grid (49 km² cells) and one targeted configuration provided the most accurate results. Both placed traps by expert opinion and moved traps between capture sessions, which raised capture probabilities. The grid design was least biased (−10.5%), but imprecise (CV 21.2%), and used most effort (16,100 trap-nights). The targeted configuration was more biased (−17.3%), but most precise (CV 12.3%), with least effort (7,000 trap-nights). Targeted sampling generated encounter rates four times higher, and capture and recapture probabilities 11% and 60% higher than grid sampling, in a sampling frame 88% smaller. Bears had unequal probability of capture with both sampling designs, partly because some bears never had traps available to sample them. Hence, grid and targeted sampling generated abundance indices, not estimates. Overall, targeted sampling provided the most accurate and affordable design to index abundance. Targeted sampling may offer an alternative method to index the abundance of other species inhabiting expansive and inaccessible landscapes elsewhere, provided they are attracted to resource concentrations. PMID:24392290

  1. Effect of pulsed laser irradiation on the structural and the magnetic properties of NiMn/Co exchange bias system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mohanan, Senthilnathan; Diebolder, Rolf; Hibst, Raimund

    2008-04-01

    We report on the influence of pulsed laser irradiation on the structural and magnetic properties of NiMn/Co thin films. Rocking curve measurements showed a significant improvement of the (111) texture of NiMn after laser irradiation, which was accompanied by grain growth. We have studied the ordering transition in as-prepared and irradiated (laser fluence of 0.15 J/cm²) samples during subsequent annealing. The onset of the fcc to fct phase transformation occurs at 325 °C irrespective of laser irradiation. Exchange bias fields for the laser irradiated samples are higher than those of the as-prepared samples. The observed increase in the exchange bias field for laser irradiated samples has been attributed to the increased grain size and the improved (111) texture of the NiMn layer after laser irradiation.

  2. Short-term memory for responses: the "choose-small" effect.

    PubMed Central

    Fetterman, J G; MacEwen, D

    1989-01-01

    Pigeons' short-term memory for fixed-ratio requirements was assessed using a delayed symbolic matching-to-sample procedure. Different choices were reinforced after fixed-ratio 10 and fixed-ratio 40 requirements, and delays of 0, 5, or 20 s were sometimes placed between sample ratios and choice. All birds made disproportionate numbers of responses to the small-ratio choice alternative when delays were interposed between ratios and choice, and this bias increased as a function of delay. Preference for the small fixed-ratio alternative was also observed on "no-sample" trials, during which the choice alternatives were presented without a prior sample ratio. This "choose-small" bias is analogous to results obtained by Spetch and Wilkie (1983) with event duration as the discriminative stimulus. The choose-small bias was attenuated when the houselight was turned on during delays, but overall accuracy was not influenced systematically by the houselight manipulation. PMID:2584917

  3. Evaluation of quality-control data collected by the U.S. Geological Survey for routine water-quality activities at the Idaho National Laboratory and vicinity, southeastern Idaho, 2002-08

    USGS Publications Warehouse

    Rattray, Gordon W.

    2014-01-01

    Quality-control (QC) samples were collected from 2002 through 2008 by the U.S. Geological Survey, in cooperation with the U.S. Department of Energy, to ensure data robustness by documenting the variability and bias of water-quality data collected at surface-water and groundwater sites at and near the Idaho National Laboratory. QC samples consisted of 139 replicates and 22 blanks (approximately 11 percent of the number of environmental samples collected). Measurements from replicates were used to estimate variability (from field and laboratory procedures and sample heterogeneity), as reproducibility and reliability, of water-quality measurements of radiochemical, inorganic, and organic constituents. Measurements from blanks were used to estimate the potential contamination bias of selected radiochemical and inorganic constituents in water-quality samples, with an emphasis on identifying any cross contamination of samples collected with portable sampling equipment. The reproducibility of water-quality measurements was estimated with calculations of normalized absolute difference for radiochemical constituents and relative standard deviation (RSD) for inorganic and organic constituents. The reliability of water-quality measurements was estimated with pooled RSDs for all constituents. Reproducibility was acceptable for all constituents except dissolved aluminum and total organic carbon. Pooled RSDs were equal to or less than 14 percent for all constituents except for total organic carbon, which had pooled RSDs of 70 percent for the low concentration range and 4.4 percent for the high concentration range. Source-solution and equipment blanks were measured for concentrations of tritium, strontium-90, cesium-137, sodium, chloride, sulfate, and dissolved chromium. Field blanks were measured for the concentration of iodide. No detectable concentrations were measured from the blanks except for strontium-90 in one source solution and one equipment blank collected in September and October 2004, respectively. The detectable concentrations of strontium-90 in the blanks probably were from a small source of strontium-90 contamination or large measurement variability, or both. Order statistics and the binomial probability distribution were used to estimate the magnitude and extent of any potential contamination bias of tritium, strontium-90, cesium-137, sodium, chloride, sulfate, dissolved chromium, and iodide in water-quality samples. These statistical methods indicated that, with (1) 87 percent confidence, contamination bias of cesium-137 and sodium in 60 percent of water-quality samples was less than the minimum detectable concentration or reporting level; (2) 92‒94 percent confidence, contamination bias of tritium, strontium-90, chloride, sulfate, and dissolved chromium in 70 percent of water-quality samples was less than the minimum detectable concentration or reporting level; and (3) 75 percent confidence, contamination bias of iodide in 50 percent of water-quality samples was less than the reporting level for iodide. These results support the conclusion that contamination bias of water-quality samples from sample processing, storage, shipping, and analysis was insignificant and that cross-contamination of perched groundwater samples collected with bailers during 2002–08 was insignificant.

  4. Did hydrographic sampling capture global and regional deep ocean heat content trends accurately between 1990-2010?

    NASA Astrophysics Data System (ADS)

    Garry, Freya; McDonagh, Elaine; Blaker, Adam; Roberts, Chris; Desbruyères, Damien; King, Brian

    2017-04-01

    Estimates of heat content change in the deep oceans (below 2000 m) over the last thirty years are obtained from temperature measurements made by hydrographic survey ships. Cruises occupy the same tracks across an ocean basin approximately every 5+ years. Measurements may not be sufficiently frequent in time or space to allow accurate evaluation of total ocean heat content (OHC) and its rate of change. It is widely thought that additional deep ocean sampling will also aid understanding of the mechanisms for OHC change on annual to decadal timescales, including how OHC varies regionally under natural and anthropogenically forced climate change. Here a 0.25° ocean model is used to investigate the magnitude of uncertainties and biases that exist in estimates of deep ocean temperature change from hydrographic sections due to their infrequent timing and sparse spatial distribution during 1990-2010. Biases in the observational data may be due to lack of spatial coverage (not enough sections covering the basin), lack of data between occupations (typically 5-10 years apart) and due to occupations not closely spanning the time period of interest. Between 1990 and 2010, the modelled biases globally are comparatively small in the abyssal ocean below 3500 m, although regionally certain biases in heat flux into the 4000-6000 m layer can be up to 0.05 W m-2. Biases in the heat flux into the deep 2000-4000 m layer due to either temporal or spatial sampling uncertainties are typically much larger and can be over 0.1 W m-2 across an ocean. Overall, 82% of the warming trend below 2000 m is captured by observational-style sampling in the model. However, at 2500 m (too deep for additional temperature information to be inferred from upper ocean Argo) less than two thirds of the magnitude of the global warming trend is obtained, and regionally large biases exist in the Atlantic, Southern and Indian Oceans, highlighting the need for widespread improved deep ocean temperature sampling. In addition to bias due to infrequent sampling, moving the timings of occupations by a few months generates relatively large uncertainty due to intra-annual variability in deep ocean model temperature, further strengthening the case for high temporal frequency observations in the deep ocean (as could be achieved using deep ocean autonomous float technologies). Biases due to different uncertainties can have opposing signs and differ in relative importance both regionally and with depth, revealing the importance of reducing all uncertainties (both spatial and temporal) simultaneously in future deep ocean observing design.
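
    As a toy illustration of the temporal-sampling effect described above (not the study's 0.25° model analysis), a synthetic deep-ocean temperature series can be subsampled at multi-year intervals and the recovered linear trend compared with the trend from the full record; all numbers are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(1990, 2011)                      # annual "truth", 1990-2010
truth = 0.002 * (years - 1990) + 0.003 * rng.standard_normal(years.size)

def linear_trend(t, x):
    return np.polyfit(t, x, 1)[0]                  # slope in degC per year

full = linear_trend(years, truth)
sparse_idx = np.arange(0, years.size, 7)           # roughly 7-yearly "occupations"
sparse = linear_trend(years[sparse_idx], truth[sparse_idx])

print(f"full-record trend : {full:.4f} degC/yr")
print(f"sparse sampling   : {sparse:.4f} degC/yr "
      f"({100 * sparse / full:.0f}% of the full trend)")
```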

  5. A study examining the bias of albumin and albumin/creatinine ratio measurements in urine.

    PubMed

    Jacobson, Beryl E; Seccombe, David W; Katayev, Alex; Levin, Adeera

    2015-10-01

    The objective of the study was to examine the bias of albumin and albumin/creatinine (ACR) measurements in urine. Pools of normal human urine were augmented with purified human serum albumin to generate a series of 12 samples covering the clinical range of interest for the measurement of ACR. Albumin and creatinine concentrations in these samples were analyzed three times on each of 3 days by 24 accredited laboratories in Canada and the USA. Reference values (RV) for albumin measurements were assigned by a liquid chromatography-tandem mass spectrometry (LC-MS/MS) comparative method and gravimetrically. Ten random urine samples (check samples) were analyzed as singlets, and albumin and ACR values were reported according to the routine practices of each laboratory. Augmented urine pools were shown to be commutable. Gravimetrically assigned target values were corrected for the presence of endogenous albumin using the LC-MS/MS comparative method. There was excellent agreement between the RVs as assigned by these two methods. All laboratory medians demonstrated a negative bias for the measurement of albumin in urine over the concentration range examined. The magnitude of this bias tended to decrease with increasing albumin concentrations. At baseline, only 10% of the patient ACR values met a performance limit of RV ± 15%. This increased to 84% and 86% following post-analytical correction for albumin and creatinine calibration bias, respectively. International organizations should take a leading role in the standardization of albumin measurements in urine. In the interim, accuracy-based urine quality control samples may be used by clinical laboratories for monitoring the accuracy of their urinary albumin measurements.
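
    A minimal sketch of the kind of bias calculation described above: per-level percent bias of a laboratory median against the assigned reference value, followed by a single multiplicative post-analytical correction applied to a routine result. The concentrations and the correction scheme are hypothetical, not the study's data or protocol.

```python
import statistics

# Hypothetical laboratory medians and assigned reference values (mg/L)
reference = [5.0, 10.0, 30.0, 100.0]
measured  = [4.2,  8.9, 27.6,  95.0]

# Per-level percent bias of the laboratory relative to the reference values
for rv, m in zip(reference, measured):
    print(f"RV {rv:6.1f} mg/L: bias {100 * (m - rv) / rv:+.1f}%")

# A single multiplicative recovery factor, then a post-analytical correction
recovery = statistics.median(m / rv for m, rv in zip(measured, reference))
patient_albumin = 22.0                     # hypothetical routine patient result
print(f"corrected patient value: {patient_albumin / recovery:.1f} mg/L")
```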

  6. CoCrMo cellular structures made by Electron Beam Melting studied by local tomography and finite element modelling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Petit, Clémence; Maire, Eric, E-mail: eric.maire@insa-lyon.fr; Meille, Sylvain

    The work focuses on the structural and mechanical characterization of Co-Cr-Mo cellular samples with a cubic pore structure made by Electron Beam Melting (EBM). X-ray tomography was used to characterize the architecture of the sample. High-resolution images were also obtained by local tomography, in which the specimen is placed close to the X-ray source. These images made it possible to observe defects due to the fabrication process: small pores in the solid phase and partially melted particles attached to the surface. Then, in situ compression tests were performed in the tomograph. The images of the deformed sample show a progressive buckling of the vertical struts leading to final fracture. Deformation initiated where defects were present in the struts, i.e., in regions of reduced local thickness. The finite element modelling confirmed the high stress concentrations at these weak points leading to the fracture of the sample. - Highlights: • CoCrMo samples fabricated by the Electron Beam Melting (EBM) process are considered. • X-ray Computed Tomography is used to observe the structure of the sample. • The mechanical properties are tested by an in situ test in the tomograph. • A finite element model is developed to model the mechanical behaviour.

  7. Estimating time-dependent ROC curves using data under prevalent sampling.

    PubMed

    Li, Shanshan

    2017-04-15

    Prevalent sampling is frequently a convenient and economical sampling technique for the collection of time-to-event data and thus is commonly used in studies of the natural history of a disease. However, it is biased by design because it tends to recruit individuals with longer survival times. This paper considers estimation of time-dependent receiver operating characteristic curves when data are collected under prevalent sampling. To correct the sampling bias, we develop both nonparametric and semiparametric estimators using extended risk sets and the inverse probability weighting techniques. The proposed estimators are consistent and converge to Gaussian processes, while substantial bias may arise if standard estimators for right-censored data are used. To illustrate our method, we analyze data from an ovarian cancer study and estimate receiver operating characteristic curves that assess the accuracy of the composite markers in distinguishing subjects who died within 3-5 years from subjects who remained alive. Copyright © 2016 John Wiley & Sons, Ltd.
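
    The paper's estimators (extended risk sets with inverse probability weighting for time-dependent ROC curves) are more involved than can be shown here, but the core idea of reweighting prevalent, length-biased observations can be sketched: under pure length-biased sampling the observed density is proportional to t·f(t), so weighting each observation by 1/t recovers unbiased summaries. The toy below is a generic illustration, not the authors' estimator.

```python
import numpy as np

rng = np.random.default_rng(42)
true_times = rng.exponential(scale=2.0, size=200_000)    # incident population

# Length-biased (prevalent) sampling: selection probability proportional to t
keep = rng.random(true_times.size) < true_times / true_times.max()
prevalent = true_times[keep]

naive = prevalent.mean()                                 # biased upward
weights = 1.0 / prevalent                                # inverse-size weights
corrected = np.sum(weights * prevalent) / np.sum(weights)

print(f"true mean {true_times.mean():.2f}, naive {naive:.2f}, "
      f"weighted {corrected:.2f}")
```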

  8. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Scolnic, D.; Kessler, R., E-mail: dscolnic@kicp.uchicago.edu, E-mail: kessler@kicp.uchicago.edu

    Simulations of Type Ia supernovae (SNe Ia) surveys are a critical tool for correcting biases in the analysis of SNe Ia to infer cosmological parameters. Large-scale Monte Carlo simulations include a thorough treatment of observation history, measurement noise, intrinsic scatter models, and selection effects. In this Letter, we improve simulations with a robust technique to evaluate the underlying populations of SN Ia color and stretch that correlate with luminosity. In typical analyses, the standardized SN Ia brightness is determined from linear “Tripp” relations between the light curve color and luminosity and between stretch and luminosity. However, this solution produces Hubble residual biases because intrinsic scatter and measurement noise result in measured color and stretch values that do not follow the Tripp relation. We find a 10σ bias (up to 0.3 mag) in Hubble residuals versus color and a 5σ bias (up to 0.2 mag) in Hubble residuals versus stretch in a joint sample of 920 spectroscopically confirmed SN Ia from PS1, SNLS, SDSS, and several low-z surveys. After we determine the underlying color and stretch distributions, we use simulations to predict and correct the biases in the data. We show that removing these biases has a small impact on the low-z sample, but reduces the intrinsic scatter σ_int from 0.101 to 0.083 in the combined PS1, SNLS, and SDSS sample. Past estimates of the underlying populations were too broad, leading to a small bias in the equation of state of dark energy w of Δw = 0.005.
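
    For reference, the linear “Tripp” standardization referred to above is conventionally written as shown below (in SALT2-style notation, with apparent magnitude m_B, absolute magnitude M, stretch x_1, color c, and nuisance coefficients α and β); the notation is the community convention, not quoted from the Letter.

```latex
\mu = m_B - M + \alpha\, x_1 - \beta\, c
```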

  9. Temporal performance of amorphous selenium mammography detectors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhao Bo; Zhao Wei

    2005-01-01

    We investigated temporal performance of amorphous selenium (a-Se) detectors specifically designed for mammographic imaging. Our goal is to quantify the inherent lag and ghosting of a-Se photoconductor as a function of imaging conditions. Two small area electroded a-Se samples, one positively and the other negatively biased on the entrance side of x rays, were used in the experiments. The study of lag and ghosting was performed by delivering a number of raw exposures as experienced in screening mammography to the samples at different electric field strength E_Se while measuring the current through the a-Se sample. Ghosting at different operational conditions was quantified as the percentage x-ray sensitivity (x-ray generated photocurrent measured from the sample) reduction compared to before irradiation. Lag was determined by measuring the residual current of a-Se at a given time after the end of each x-ray exposure. Both lag and ghosting were measured as a function of E_Se and cumulative exposure. The values of E_Se used in our experiments ranged from 1 to 20 V/μm. It was found that ghosting increases with exposure and decreases with E_Se for both samples because of the dominant effect of recombination between trapped electrons and x-ray generated holes. Lag on the other hand has different dependence on E_Se and cumulative exposure. At E_Se ≤ 10 V/μm, the first frame lag for both samples changed slowly with cumulative exposure, with a range of 0.2%-1.7% for the positively biased sample and 0.5%-8% for the negatively biased sample. Overall the positively biased sample has better temporal performance than the negatively biased sample due to the lower density of trapped electrons. The impact of time interval between exposures on the temporal performance was also investigated. Recovery of ghosting with longer time interval was observed, which was attributed to the neutralization of trapped electrons by injected holes through dark current.

  10. Assessing Intellectual Ability with a Minimum of Cultural Bias for Two Samples of Metis and Indian Children.

    ERIC Educational Resources Information Center

    West, Lloyd Wilbert

    An investigation was designed to ascertain the effects of cultural background on selected intelligence tests and to identify instruments which validly measure intellectual ability with a minimum of cultural bias. A battery of tests, selected for factor analytic study, was administered and replicated at four grade levels to a sample of Metis and…

  11. Sample Size Bias in Judgments of Perceptual Averages

    ERIC Educational Resources Information Center

    Price, Paul C.; Kimura, Nicole M.; Smith, Andrew R.; Marshall, Lindsay D.

    2014-01-01

    Previous research has shown that people exhibit a sample size bias when judging the average of a set of stimuli on a single dimension. The more stimuli there are in the set, the greater people judge the average to be. This effect has been demonstrated reliably for judgments of the average likelihood that groups of people will experience negative,…

  12. Dark Energy Survey Year 1 results: cross-correlation redshifts - methods and systematics characterization

    NASA Astrophysics Data System (ADS)

    Gatti, M.; Vielzeuf, P.; Davis, C.; Cawthon, R.; Rau, M. M.; DeRose, J.; De Vicente, J.; Alarcon, A.; Rozo, E.; Gaztanaga, E.; Hoyle, B.; Miquel, R.; Bernstein, G. M.; Bonnett, C.; Carnero Rosell, A.; Castander, F. J.; Chang, C.; da Costa, L. N.; Gruen, D.; Gschwend, J.; Hartley, W. G.; Lin, H.; MacCrann, N.; Maia, M. A. G.; Ogando, R. L. C.; Roodman, A.; Sevilla-Noarbe, I.; Troxel, M. A.; Wechsler, R. H.; Asorey, J.; Davis, T. M.; Glazebrook, K.; Hinton, S. R.; Lewis, G.; Lidman, C.; Macaulay, E.; Möller, A.; O'Neill, C. R.; Sommer, N. E.; Uddin, S. A.; Yuan, F.; Zhang, B.; Abbott, T. M. C.; Allam, S.; Annis, J.; Bechtol, K.; Brooks, D.; Burke, D. L.; Carollo, D.; Carrasco Kind, M.; Carretero, J.; Cunha, C. E.; D'Andrea, C. B.; DePoy, D. L.; Desai, S.; Eifler, T. F.; Evrard, A. E.; Flaugher, B.; Fosalba, P.; Frieman, J.; García-Bellido, J.; Gerdes, D. W.; Goldstein, D. A.; Gruendl, R. A.; Gutierrez, G.; Honscheid, K.; Hoormann, J. K.; Jain, B.; James, D. J.; Jarvis, M.; Jeltema, T.; Johnson, M. W. G.; Johnson, M. D.; Krause, E.; Kuehn, K.; Kuhlmann, S.; Kuropatkin, N.; Li, T. S.; Lima, M.; Marshall, J. L.; Melchior, P.; Menanteau, F.; Nichol, R. C.; Nord, B.; Plazas, A. A.; Reil, K.; Rykoff, E. S.; Sako, M.; Sanchez, E.; Scarpine, V.; Schubnell, M.; Sheldon, E.; Smith, M.; Smith, R. C.; Soares-Santos, M.; Sobreira, F.; Suchyta, E.; Swanson, M. E. C.; Tarle, G.; Thomas, D.; Tucker, B. E.; Tucker, D. L.; Vikram, V.; Walker, A. R.; Weller, J.; Wester, W.; Wolf, R. C.

    2018-06-01

    We use numerical simulations to characterize the performance of a clustering-based method to calibrate photometric redshift biases. In particular, we cross-correlate the weak lensing source galaxies from the Dark Energy Survey Year 1 sample with redMaGiC galaxies (luminous red galaxies with secure photometric redshifts) to estimate the redshift distribution of the former sample. The recovered redshift distributions are used to calibrate the photometric redshift bias of standard photo-z methods applied to the same source galaxy sample. We apply the method to two photo-z codes run in our simulated data: Bayesian Photometric Redshift and Directional Neighbourhood Fitting. We characterize the systematic uncertainties of our calibration procedure, and find that these systematic uncertainties dominate our error budget. The dominant systematics are due to our assumption of unevolving bias and clustering across each redshift bin, and to differences between the shapes of the redshift distributions derived by clustering versus photo-zs. The systematic uncertainty in the mean redshift bias of the source galaxy sample is Δz ≲ 0.02, though the precise value depends on the redshift bin under consideration. We discuss possible ways to mitigate the impact of our dominant systematics in future analyses.

  13. Quality-Assurance Data for Routine Water Analyses by the U.S. Geological Survey Laboratory in Troy, New York - July 2001 Through June 2003

    USGS Publications Warehouse

    Lincoln, Tricia A.; Horan-Ross, Debra A.; McHale, Michael R.; Lawrence, Gregory B.

    2009-01-01

    The laboratory for analysis of low-ionic-strength water at the U.S. Geological Survey (USGS) Water Science Center in Troy, N.Y., analyzes samples collected by USGS projects throughout the Northeast. The laboratory's quality-assurance program is based on internal and interlaboratory quality-assurance samples and quality-control procedures that were developed to ensure proper sample collection, processing, and analysis. The quality-assurance and quality-control data were stored in the laboratory's Lab Master data-management system, which provides efficient review, compilation, and plotting of data. This report presents and discusses results of quality-assurance and quality control samples analyzed from July 2001 through June 2003. Results for the quality-control samples for 19 analytical procedures were evaluated for bias and precision. Control charts indicate that data for six of the analytical procedures were occasionally biased for either high-concentration or low-concentration samples but were within control limits; these procedures were: acid-neutralizing capacity, chloride, magnesium, nitrate (ion chromatography), potassium, and sodium. The calcium procedure was biased throughout the analysis period for the high-concentration sample, but was within control limits. The total monomeric aluminum and fluoride procedures were biased throughout the analysis period for the low-concentration sample, but were within control limits. The total aluminum, pH, specific conductance, and sulfate procedures were biased for the high-concentration and low-concentration samples, but were within control limits. Results from the filter-blank and analytical-blank analyses indicate that the procedures for 16 of 18 analytes were within control limits, although the concentrations for blanks were occasionally outside the control limits. The data-quality objective was not met for the dissolved organic carbon or specific conductance procedures. Sampling and analysis precision are evaluated herein in terms of the coefficient of variation obtained for triplicate samples in the procedures for 18 of the 21 analytes. At least 90 percent of the samples met data-quality objectives for all procedures except total monomeric aluminum (83 percent of samples met objectives), total aluminum (76 percent of samples met objectives), ammonium (73 percent of samples met objectives), dissolved organic carbon (86 percent of samples met objectives), and nitrate (81 percent of samples met objectives). The data-quality objective was not met for the nitrite procedure. Results of the USGS interlaboratory Standard Reference Sample (SRS) Project indicated satisfactory or above data quality over the time period, with most performance ratings for each sample in the good-to-excellent range. The N-sample (nutrient constituents) analysis had one unsatisfactory rating for the ammonium procedure in one study. The T-sample (trace constituents) analysis had one unsatisfactory rating for the magnesium procedure and one marginal rating for the potassium procedure in one study and one unsatisfactory rating for the sodium procedure in another. Results of Environment Canada's National Water Research Institute (NWRI) program indicated that at least 90 percent of the samples met data-quality objectives for 10 of the 14 analytes; the exceptions were acid-neutralizing capacity, ammonium, dissolved organic carbon, and sodium. 
Data-quality objectives were not met in 37 percent of samples analyzed for acid-neutralizing capacity, 28 percent of samples analyzed for dissolved organic carbon, and 30 percent of samples analyzed for sodium. Results indicate a positive bias for the ammonium procedure in one study and a negative bias in another. Results from blind reference-sample analyses indicated that data-quality objectives were met by at least 90 percent of the samples analyzed for calcium, chloride, magnesium, pH, potassium, and sodium. Data-quality objectives were met by 78 percent of
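
    As a concrete sketch of the triplicate-precision check described above, the coefficient of variation for each triplicate can be computed and compared against a data-quality objective; the example values and the 10 percent objective are hypothetical, not the laboratory's actual criteria.

```python
from statistics import mean, stdev

def coefficient_of_variation(triplicate):
    """CV in percent for one set of triplicate analyses."""
    return 100 * stdev(triplicate) / mean(triplicate)

# Hypothetical triplicate results (mg/L) and a hypothetical 10% objective
triplicates = {"chloride": (1.02, 1.05, 0.99), "ammonium": (0.021, 0.028, 0.019)}
for analyte, values in triplicates.items():
    cv = coefficient_of_variation(values)
    print(f"{analyte}: CV = {cv:.1f}% -> {'meets' if cv <= 10 else 'fails'} objective")
```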

  14. Anomalous behavior of 1/f noise in graphene near the charge neutrality point

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Takeshita, Shunpei; Tanaka, Takahiro; Arakawa, Tomonori

    2016-03-07

    We investigate the noise in single-layer graphene devices from equilibrium to far-from-equilibrium conditions and find that the 1/f noise shows an anomalous dependence on the source-drain bias voltage (V_SD). While Hooge's relation does not hold around the charge neutrality point, we find that it is recovered in the very low V_SD region. We propose that depinning of the electron-hole puddles is induced at finite V_SD, which may explain this anomalous noise behavior.
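
    Hooge's empirical relation invoked above is commonly written as follows (S_I is the current noise power spectral density, N the number of carriers, f the frequency, and α_H the Hooge parameter); this is the textbook form, not a formula quoted from the paper.

```latex
\frac{S_I(f)}{I^{2}} = \frac{\alpha_H}{N f}
```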

  15. Thermal conductance of and heat generation in tire-pavement interface and effect on aircraft braking

    NASA Technical Reports Server (NTRS)

    Miller, C. D.

    1976-01-01

    A finite-difference analysis was performed on temperature records obtained from a free-rolling automotive tire and from the pavement surface. A high thermal contact conductance between tire and asphalt was found on a statistical basis. The average slip due to squirming between tire and asphalt was about 1.5 mm. The consequent friction heat was estimated at 64 percent of the total power absorbed by the bias-ply, belted tire. Extrapolation of the results to an aircraft tire indicates potential braking improvement from even a moderate increase in the heat-absorbing capacity of the runway surface.
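
    As a generic illustration of the kind of finite-difference temperature analysis mentioned above (not the report's tire-pavement model), a one-dimensional explicit conduction scheme looks like the sketch below; the diffusivity, grid, and boundary values are made-up placeholders.

```python
import numpy as np

alpha = 1.0e-6        # thermal diffusivity, m^2/s (placeholder value)
dx, dt = 1.0e-3, 0.2  # grid spacing (m) and time step (s)
assert alpha * dt / dx**2 <= 0.5, "explicit FTCS stability limit"

T = np.full(50, 25.0)          # initial temperature profile, degC
T[0] = 80.0                    # hot contact surface (e.g., tire footprint)

for _ in range(1000):          # march forward in time
    T[1:-1] += alpha * dt / dx**2 * (T[2:] - 2 * T[1:-1] + T[:-2])
    T[-1] = T[-2]              # insulated far boundary

print(f"temperature 5 mm below the surface after 200 s: {T[5]:.1f} degC")
```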

  16. Fitting distributions to microbial contamination data collected with an unequal probability sampling design.

    PubMed

    Williams, M S; Ebel, E D; Cao, Y

    2013-01-01

    The fitting of statistical distributions to microbial sampling data is a common task in quantitative microbiology and risk assessment applications. An underlying assumption of most fitting techniques is that data are collected with simple random sampling, which is often not the case. This study develops a weighted maximum likelihood estimation framework that is appropriate for microbiological samples collected with unequal probabilities of selection. Two examples, based on the collection of food samples during processing, are provided to demonstrate the method and highlight the magnitude of biases in the maximum likelihood estimator when data are inappropriately treated as a simple random sample. Failure to properly weight samples to account for how data are collected can introduce substantial biases into inferences drawn from the data. The proposed methodology will reduce or eliminate an important source of bias in inferences drawn from the analysis of microbial data. This will also make comparisons between studies and the combination of results from different studies more reliable, which is important for risk assessment applications. © 2012 No claim to US Government works.
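
    A minimal sketch of the weighted (pseudo-)likelihood idea: each observation's log-likelihood contribution is weighted by the inverse of its selection probability. The lognormal model, the selection mechanism, and the data below are hypothetical placeholders rather than the study's framework.

```python
import numpy as np
from scipy import optimize, stats

rng = np.random.default_rng(3)

# Hypothetical contamination concentrations and unequal selection probabilities
x = rng.lognormal(mean=1.0, sigma=0.8, size=300)
pi = np.clip(x / x.max(), 0.05, 1.0)       # larger values more likely sampled
sampled = rng.random(x.size) < pi
y, w = x[sampled], 1.0 / pi[sampled]       # observed data and design weights

def neg_weighted_loglik(theta):
    """Weighted lognormal log-likelihood: fit the normal model to log(y)."""
    mu, log_sigma = theta
    return -np.sum(w * stats.norm.logpdf(np.log(y), mu, np.exp(log_sigma)))

fit = optimize.minimize(neg_weighted_loglik, x0=[0.0, 0.0])
print("weighted MLE (mu, sigma):", fit.x[0], np.exp(fit.x[1]))
```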

  17. Implicit Racial/Ethnic Bias Among Health Care Professionals and Its Influence on Health Care Outcomes: A Systematic Review.

    PubMed

    Hall, William J; Chapman, Mimi V; Lee, Kent M; Merino, Yesenia M; Thomas, Tainayah W; Payne, B Keith; Eng, Eugenia; Day, Steven H; Coyne-Beasley, Tamera

    2015-12-01

    In the United States, people of color face disparities in access to health care, the quality of care received, and health outcomes. The attitudes and behaviors of health care providers have been identified as one of many factors that contribute to health disparities. Implicit attitudes are thoughts and feelings that often exist outside of conscious awareness, and thus are difficult to consciously acknowledge and control. These attitudes are often automatically activated and can influence human behavior without conscious volition. We investigated the extent to which implicit racial/ethnic bias exists among health care professionals and examined the relationships between health care professionals' implicit attitudes about racial/ethnic groups and health care outcomes. To identify relevant studies, we searched 10 computerized bibliographic databases and used a reference harvesting technique. We assessed eligibility using double independent screening based on a priori inclusion criteria. We included studies if they sampled existing health care providers or those in training to become health care providers, measured and reported results on implicit racial/ethnic bias, and were written in English. We included a total of 15 studies for review and then subjected them to double independent data extraction. Information extracted included the citation, purpose of the study, use of theory, study design, study site and location, sampling strategy, response rate, sample size and characteristics, measurement of relevant variables, analyses performed, and results and findings. We summarized study design characteristics, and categorized and then synthesized substantive findings. Almost all studies used cross-sectional designs, convenience sampling, US participants, and the Implicit Association Test to assess implicit bias. Low to moderate levels of implicit racial/ethnic bias were found among health care professionals in all but 1 study. These implicit bias scores are similar to those in the general population. Levels of implicit bias against Black, Hispanic/Latino/Latina, and dark-skinned people were relatively similar across these groups. Although some associations between implicit bias and health care outcomes were nonsignificant, results also showed that implicit bias was significantly related to patient-provider interactions, treatment decisions, treatment adherence, and patient health outcomes. Implicit attitudes were more often significantly related to patient-provider interactions and health outcomes than treatment processes. Most health care providers appear to have implicit bias in terms of positive attitudes toward Whites and negative attitudes toward people of color. Future studies need to employ more rigorous methods to examine the relationships between implicit bias and health care outcomes. Interventions targeting implicit attitudes among health care professionals are needed because implicit bias may contribute to health disparities for people of color.

  18. Implicit Racial/Ethnic Bias Among Health Care Professionals and Its Influence on Health Care Outcomes: A Systematic Review

    PubMed Central

    Hall, William J.; Lee, Kent M.; Merino, Yesenia M.; Thomas, Tainayah W.; Payne, B. Keith; Eng, Eugenia; Day, Steven H.; Coyne-Beasley, Tamera

    2015-01-01

    Background. In the United States, people of color face disparities in access to health care, the quality of care received, and health outcomes. The attitudes and behaviors of health care providers have been identified as one of many factors that contribute to health disparities. Implicit attitudes are thoughts and feelings that often exist outside of conscious awareness, and thus are difficult to consciously acknowledge and control. These attitudes are often automatically activated and can influence human behavior without conscious volition. Objectives. We investigated the extent to which implicit racial/ethnic bias exists among health care professionals and examined the relationships between health care professionals’ implicit attitudes about racial/ethnic groups and health care outcomes. Search Methods. To identify relevant studies, we searched 10 computerized bibliographic databases and used a reference harvesting technique. Selection Criteria. We assessed eligibility using double independent screening based on a priori inclusion criteria. We included studies if they sampled existing health care providers or those in training to become health care providers, measured and reported results on implicit racial/ethnic bias, and were written in English. Data Collection and Analysis. We included a total of 15 studies for review and then subjected them to double independent data extraction. Information extracted included the citation, purpose of the study, use of theory, study design, study site and location, sampling strategy, response rate, sample size and characteristics, measurement of relevant variables, analyses performed, and results and findings. We summarized study design characteristics, and categorized and then synthesized substantive findings. Main Results. Almost all studies used cross-sectional designs, convenience sampling, US participants, and the Implicit Association Test to assess implicit bias. Low to moderate levels of implicit racial/ethnic bias were found among health care professionals in all but 1 study. These implicit bias scores are similar to those in the general population. Levels of implicit bias against Black, Hispanic/Latino/Latina, and dark-skinned people were relatively similar across these groups. Although some associations between implicit bias and health care outcomes were nonsignificant, results also showed that implicit bias was significantly related to patient–provider interactions, treatment decisions, treatment adherence, and patient health outcomes. Implicit attitudes were more often significantly related to patient–provider interactions and health outcomes than treatment processes. Conclusions. Most health care providers appear to have implicit bias in terms of positive attitudes toward Whites and negative attitudes toward people of color. Future studies need to employ more rigorous methods to examine the relationships between implicit bias and health care outcomes. Interventions targeting implicit attitudes among health care professionals are needed because implicit bias may contribute to health disparities for people of color. PMID:26469668

  19. A Critical Assessment of Bias in Survey Studies Using Location-Based Sampling to Recruit Patrons in Bars

    PubMed Central

    Morrison, Christopher; Lee, Juliet P.; Gruenewald, Paul J.; Marzell, Miesha

    2015-01-01

    Location-based sampling is a method for obtaining samples of people within ecological contexts relevant to specific public health outcomes. Random selection increases generalizability; however, in some circumstances (such as surveying bar patrons), recruitment conditions increase the risk of sample bias. We attempted to recruit representative samples of bars and patrons in six California cities, but low response rates precluded meaningful analysis. A systematic review of 24 similar studies revealed that none addressed the key shortcomings of our study. We recommend steps to improve studies that use location-based sampling: (i) purposively sample places of interest, (ii) utilize recruitment strategies appropriate to the environment, and (iii) provide full information on response rates at all levels of sampling. PMID:26574657

  20. Measuring Coverage in MNCH: Design, Implementation, and Interpretation Challenges Associated with Tracking Vaccination Coverage Using Household Surveys

    PubMed Central

    Cutts, Felicity T.; Izurieta, Hector S.; Rhoda, Dale A.

    2013-01-01

    Vaccination coverage is an important public health indicator that is measured using administrative reports and/or surveys. The measurement of vaccination coverage in low- and middle-income countries using surveys is susceptible to numerous challenges. These challenges include selection bias and information bias, which cannot be solved by increasing the sample size, and the precision of the coverage estimate, which is determined by the survey sample size and sampling method. Selection bias can result from an inaccurate sampling frame or inappropriate field procedures and, since populations likely to be missed in a vaccination coverage survey are also likely to be missed by vaccination teams, most often inflates coverage estimates. Importantly, the large multi-purpose household surveys that are often used to measure vaccination coverage have invested substantial effort to reduce selection bias. Information bias occurs when a child's vaccination status is misclassified due to mistakes on his or her vaccination record, in data transcription, in the way survey questions are presented, or in the guardian's recall of vaccination for children without a written record. There has been substantial reliance on the guardian's recall in recent surveys, and, worryingly, information bias may become more likely in the future as immunization schedules become more complex and variable. Finally, some surveys assess immunity directly using serological assays. Sero-surveys are important for assessing public health risk, but currently are unable to validate coverage estimates directly. To improve vaccination coverage estimates based on surveys, we recommend that recording tools and practices should be improved and that surveys should incorporate best practices for design, implementation, and analysis. PMID:23667334

  1. Stability and bias of classification rates in biological applications of discriminant analysis

    USGS Publications Warehouse

    Williams, B.K.; Titus, K.; Hines, J.E.

    1990-01-01

    We assessed the sampling stability of classification rates in discriminant analysis by using a factorial design with factors for multivariate dimensionality, dispersion structure, configuration of group means, and sample size. A total of 32,400 discriminant analyses were conducted, based on data from simulated populations with appropriate underlying statistical distributions. Simulation results indicated strong bias in correct classification rates when group sample sizes were small and when overlap among groups was high. We also found that stability of the correct classification rates was influenced by these factors, indicating that the number of samples required for a given level of precision increases with the amount of overlap among groups. In a review of 60 published studies, we found that 57% of the articles presented results on classification rates, though few of them mentioned potential biases in their results. Wildlife researchers should choose the total number of samples per group to be at least 2 times the number of variables to be measured when overlap among groups is low. Substantially more samples are required as the overlap among groups increases.
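
    The small-sample optimism in correct classification rates can be reproduced with a short generic simulation (not the paper's 32,400-run factorial design): a linear discriminant is fit to a small sample from two overlapping multivariate normal groups, and its resubstitution rate is compared with its rate on a large independent test sample. Group separation and sizes are arbitrary choices.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(7)
p, n_per_group, separation = 5, 15, 1.0   # variables, group size, mean separation

def sample(n):
    x0 = rng.normal(0.0, 1.0, (n, p))
    x1 = rng.normal(separation / np.sqrt(p), 1.0, (n, p))
    return np.vstack([x0, x1]), np.repeat([0, 1], n)

apparent, holdout = [], []
for _ in range(500):
    X, y = sample(n_per_group)
    lda = LinearDiscriminantAnalysis().fit(X, y)
    Xt, yt = sample(5000)
    apparent.append(lda.score(X, y))      # resubstitution (optimistic)
    holdout.append(lda.score(Xt, yt))     # independent test sample

print(f"apparent rate {np.mean(apparent):.3f} vs holdout rate {np.mean(holdout):.3f}")
```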

  2. Fitting N-mixture models to count data with unmodeled heterogeneity: Bias, diagnostics, and alternative approaches

    USGS Publications Warehouse

    Duarte, Adam; Adams, Michael J.; Peterson, James T.

    2018-01-01

    Monitoring animal populations is central to wildlife and fisheries management, and the use of N-mixture models toward these efforts has markedly increased in recent years. Nevertheless, relatively little work has evaluated estimator performance when basic assumptions are violated. Moreover, diagnostics to identify when bias in parameter estimates from N-mixture models is likely is largely unexplored. We simulated count data sets using 837 combinations of detection probability, number of sample units, number of survey occasions, and type and extent of heterogeneity in abundance or detectability. We fit Poisson N-mixture models to these data, quantified the bias associated with each combination, and evaluated if the parametric bootstrap goodness-of-fit (GOF) test can be used to indicate bias in parameter estimates. We also explored if assumption violations can be diagnosed prior to fitting N-mixture models. In doing so, we propose a new model diagnostic, which we term the quasi-coefficient of variation (QCV). N-mixture models performed well when assumptions were met and detection probabilities were moderate (i.e., ≥0.3), and the performance of the estimator improved with increasing survey occasions and sample units. However, the magnitude of bias in estimated mean abundance with even slight amounts of unmodeled heterogeneity was substantial. The parametric bootstrap GOF test did not perform well as a diagnostic for bias in parameter estimates when detectability and sample sizes were low. The results indicate the QCV is useful to diagnose potential bias and that potential bias associated with unidirectional trends in abundance or detectability can be diagnosed using Poisson regression. This study represents the most thorough assessment to date of assumption violations and diagnostics when fitting N-mixture models using the most commonly implemented error distribution. Unbiased estimates of population state variables are needed to properly inform management decision making. Therefore, we also discuss alternative approaches to yield unbiased estimates of population state variables using similar data types, and we stress that there is no substitute for an effective sample design that is grounded upon well-defined management objectives.
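
    To make the heterogeneity-induced bias concrete, a compact simulation in the spirit of (but far simpler than) the factorial design described above can be run: counts are generated with site-level variation in detection probability that the Poisson N-mixture likelihood then ignores. All settings, including the truncation of the latent abundance at K = 100, are illustrative assumptions.

```python
import numpy as np
from scipy import optimize, stats

rng = np.random.default_rng(1)
R, T = 200, 4                     # sites, survey occasions
lam_true, p_true = 5.0, 0.4

# Site-level heterogeneity in detection that the model will NOT account for
p_site = np.clip(p_true + rng.normal(0, 0.15, R), 0.05, 0.95)
N = rng.poisson(lam_true, R)
y = rng.binomial(N[:, None], p_site[:, None], (R, T))

def negloglik(theta, y, K=100):
    """Poisson N-mixture likelihood, marginalizing latent abundance 0..K."""
    lam, p = np.exp(theta[0]), 1 / (1 + np.exp(-theta[1]))
    k = np.arange(K + 1)
    pois = stats.poisson.pmf(k, lam)
    ll = 0.0
    for counts in y:
        binom = stats.binom.pmf(counts[:, None], k[None, :], p).prod(axis=0)
        ll += np.log((binom * pois).sum())
    return -ll

fit = optimize.minimize(negloglik, x0=[np.log(3), 0.0], args=(y,))
print(f"true lambda = {lam_true}, estimated = {np.exp(fit.x[0]):.2f}")
```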

  3. Visual search attentional bias modification reduced social phobia in adolescents.

    PubMed

    De Voogd, E L; Wiers, R W; Prins, P J M; Salemink, E

    2014-06-01

    An attentional bias for negative information plays an important role in the development and maintenance of (social) anxiety and depression, which are highly prevalent in adolescence. Attention Bias Modification (ABM) might be an interesting tool in the prevention of emotional disorders. The current study investigated whether visual search ABM might affect attentional bias and emotional functioning in adolescents. A visual search task was used as a training paradigm; participants (n = 16 adolescents, aged 13-16) had to repeatedly identify the only smiling face in a 4 × 4 matrix of negative emotional faces, while participants in the control condition (n = 16) were randomly allocated to one of three placebo training versions. An assessment version of the task was developed to directly test whether attentional bias changed due to the training. Self-reported anxiety and depressive symptoms and self-esteem were measured pre- and post-training. After two sessions of training, the ABM group showed a significant decrease in attentional bias for negative information and self-reported social phobia, while the control group did not. There were no effects of training on depressive mood or self-esteem. No correlation between attentional bias and social phobia was found, which raises questions about the validity of the attentional bias assessment task. Also, the small sample size precludes strong conclusions. Visual search ABM might be beneficial in changing attentional bias and social phobia in adolescents, but further research with larger sample sizes and longer follow-up is needed. Copyright © 2013 Elsevier Ltd. All rights reserved.

  4. Brief time course of trait anxiety-related attentional bias to fear-conditioned stimuli: Evidence from the dual-RSVP task.

    PubMed

    Booth, Robert W

    2017-03-01

    Attentional bias to threat is a much-studied feature of anxiety; it is typically assessed using response time (RT) tasks such as the dot probe. Findings regarding the time course of attentional bias have been inconsistent, possibly because RT tasks are sensitive to processes downstream of attention. Attentional bias was assessed using an accuracy-based task in which participants detected a single digit in two simultaneous rapid serial visual presentation (RSVP) streams of letters. Before the target, two coloured shapes were presented simultaneously, one in each RSVP stream; one shape had previously been associated with threat through Pavlovian fear conditioning. Attentional bias was indicated whenever participants identified targets in the threat's RSVP stream more accurately than targets in the other RSVP stream. In 87 unselected undergraduates, trait anxiety only predicted attentional bias when the target was presented immediately following the shapes, i.e., 160 ms later; by 320 ms the bias had disappeared. This suggests attentional bias in anxiety can be extremely brief and transitory. This initial study utilised an analogue sample, and was unable to physiologically verify the efficacy of the conditioning. The next steps will be to verify these results in a sample of diagnosed anxious patients, and to use alternative threat stimuli. The results of studies using response time to assess the time course of attentional bias may partially reflect later processes such as decision making and response preparation. This may limit the efficacy of therapies aiming to retrain attentional biases using response time tasks. Copyright © 2016 Elsevier Ltd. All rights reserved.

  5. A "Scientific Diversity" Intervention to Reduce Gender Bias in a Sample of Life Scientists.

    PubMed

    Moss-Racusin, Corinne A; van der Toorn, Jojanneke; Dovidio, John F; Brescoll, Victoria L; Graham, Mark J; Handelsman, Jo

    2016-01-01

    Mounting experimental evidence suggests that subtle gender biases favoring men contribute to the underrepresentation of women in science, technology, engineering, and mathematics (STEM), including many subfields of the life sciences. However, there are relatively few evaluations of diversity interventions designed to reduce gender biases within the STEM community. Because gender biases distort the meritocratic evaluation and advancement of students, interventions targeting instructors' biases are particularly needed. We evaluated one such intervention, a workshop called "Scientific Diversity" that was consistent with an established framework guiding the development of diversity interventions designed to reduce biases and was administered to a sample of life science instructors (N = 126) at several sessions of the National Academies Summer Institute for Undergraduate Education held nationwide. Evidence emerged indicating the efficacy of the "Scientific Diversity" workshop, such that participants were more aware of gender bias, expressed less gender bias, and were more willing to engage in actions to reduce gender bias 2 weeks after participating in the intervention compared with 2 weeks before the intervention. Implications for diversity interventions aimed at reducing gender bias and broadening the participation of women in the life sciences are discussed. © 2016 C. A. Moss-Racusin et al. CBE—Life Sciences Education © 2016 The American Society for Cell Biology. This article is distributed by The American Society for Cell Biology under license from the author(s). It is available to the public under an Attribution–Noncommercial–Share Alike 3.0 Unported Creative Commons License (http://creativecommons.org/licenses/by-nc-sa/3.0).

  6. Finite-time and finite-size scalings in the evaluation of large-deviation functions: Numerical approach in continuous time.

    PubMed

    Guevara Hidalgo, Esteban; Nemoto, Takahiro; Lecomte, Vivien

    2017-06-01

    Rare trajectories of stochastic systems are important to understand because of their potential impact. However, their properties are by definition difficult to sample directly. Population dynamics provides a numerical tool allowing their study, by means of simulating a large number of copies of the system, which are subjected to selection rules that favor the rare trajectories of interest. Such algorithms are plagued by finite simulation time and finite population size, effects that can render their use delicate. In this paper, we present a numerical approach which uses the finite-time and finite-size scalings of estimators of the large deviation functions associated with the distribution of rare trajectories. The method we propose allows one to extract the infinite-time and infinite-size limit of these estimators, which, as shown for the contact process, provides a significant improvement of the large deviation function estimators compared to the standard one.
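
    The extrapolation idea can be illustrated generically: assume an estimator of a large deviation function behaves as ψ(t, N) ≈ ψ_∞ + a/t + b/N at large simulation time t and clone population size N, and fit that form by least squares to extract ψ_∞. The scaling form and the synthetic data below are illustrative assumptions, not the paper's derived scalings.

```python
import numpy as np

rng = np.random.default_rng(5)
psi_inf, a, b = -0.35, 1.4, 20.0                       # hypothetical "truth"

t = np.array([50, 100, 200, 400, 800], dtype=float)    # simulation times
N = np.array([100, 200, 400, 800, 1600], dtype=float)  # clone population sizes
T, Nn = np.meshgrid(t, N)
psi = psi_inf + a / T + b / Nn + 0.001 * rng.standard_normal(T.shape)

# Least-squares fit of psi = psi_inf + a/t + b/N over all (t, N) pairs
A = np.column_stack([np.ones(T.size), 1 / T.ravel(), 1 / Nn.ravel()])
coef, *_ = np.linalg.lstsq(A, psi.ravel(), rcond=None)
print(f"extrapolated infinite-time/size value: {coef[0]:.3f} (truth {psi_inf})")
```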

  7. A two-phase sampling survey for nonresponse and its paradata to correct nonresponse bias in a health surveillance survey.

    PubMed

    Santin, G; Bénézet, L; Geoffroy-Perez, B; Bouyer, J; Guéguen, A

    2017-02-01

    The decline in participation rates in surveys, including epidemiological surveillance surveys, has become a real concern since it may increase nonresponse bias. The aim of this study is to estimate the contribution of a complementary survey among a subsample of nonrespondents, and the additional contribution of paradata, in correcting for nonresponse bias in an occupational health surveillance survey. In 2010, 10,000 workers were randomly selected and sent a postal questionnaire. Sociodemographic data were available for the whole sample. After data collection of the questionnaires, a complementary survey among a random subsample of 500 nonrespondents was performed using a questionnaire administered by an interviewer. Paradata were collected for the complete subsample of the complementary survey. Nonresponse bias in the initial sample and in the combined samples was assessed using variables from administrative databases available for the whole sample, not subject to differential measurement errors. Corrected prevalences were estimated by a reweighting technique, first using the initial survey alone and then the initial and complementary surveys combined, under several assumptions regarding the missing data process. Results were compared by computing relative errors. The response rates of the initial and complementary surveys were 23.6% and 62.6%, respectively. For the initial and the combined surveys, the relative errors decreased after correction for nonresponse on sociodemographic variables. For the combined surveys without paradata, relative errors decreased compared with the initial survey. The contribution of the paradata was weak. When a complex descriptive survey has a low response rate, a short complementary survey among nonrespondents with a protocol that aims to maximize response rates is useful. The contribution of sociodemographic variables in correcting for nonresponse bias is important, whereas the additional contribution of paradata in correcting for nonresponse bias is questionable. Copyright © 2016 Elsevier Masson SAS. All rights reserved.
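
    The reweighting step described above can be sketched generically: response propensities are estimated from variables available for the whole sample, and respondents are weighted by the inverse of their estimated propensity. The covariates, outcome, and logistic model below are hypothetical and are not the study's estimator.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(11)
n = 10_000
age = rng.integers(20, 60, n)
manual_worker = rng.random(n) < 0.4                  # sociodemographic covariate

# Response depends on covariates; the outcome also depends on them -> bias
p_respond = 1 / (1 + np.exp(-(-2.2 + 0.03 * age - 0.5 * manual_worker)))
responded = rng.random(n) < p_respond
outcome = rng.random(n) < 0.10 + 0.15 * manual_worker   # e.g., exposure prevalence

X = np.column_stack([age, manual_worker])
propensity = LogisticRegression().fit(X, responded).predict_proba(X)[:, 1]

naive = outcome[responded].mean()
weighted = np.average(outcome[responded], weights=1 / propensity[responded])
print(f"true {outcome.mean():.3f}  naive {naive:.3f}  reweighted {weighted:.3f}")
```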

  8. Wrong, but useful: regional species distribution models may not be improved by range-wide data under biased sampling.

    PubMed

    El-Gabbas, Ahmed; Dormann, Carsten F

    2018-02-01

    Species distribution modeling (SDM) is an essential method in ecology and conservation. SDMs are often calibrated within one country's borders, typically along a limited environmental gradient with biased and incomplete data, making the quality of these models questionable. In this study, we evaluated how adequate national presence-only data are for calibrating regional SDMs. We trained SDMs for Egyptian bat species at two different scales: only within Egypt and at a species-specific global extent. We used two modeling algorithms: Maxent and elastic net, both under the point-process modeling framework. For each modeling algorithm, we measured the congruence of the predictions of global and regional models for Egypt, assuming that the lower the congruence, the lower the appropriateness of the Egyptian dataset to describe the species' niche. We inspected the effect of incorporating predictions from global models as an additional predictor ("prior") in regional models, and quantified the improvement in terms of AUC and the congruence between regional models run with and without priors. Moreover, we analyzed predictive performance improvements after correction for sampling bias at both scales. On average, predictions from global and regional models in Egypt only weakly concur. Collectively, the use of priors did not lead to much improvement: similar AUC and high congruence between regional models calibrated with and without priors. Correction for sampling bias led to higher model performance, regardless of the prior used, making the benefit of priors less pronounced. Under biased and incomplete sampling, the use of global bat data did not improve regional model performance. Without enough bias-free regional data, we cannot objectively identify the actual improvement of regional models after incorporating information from the global niche. However, we still believe in great potential for global model predictions to guide future surveys and improve regional sampling in data-poor regions.

  9. AFRL Nanotechnology Initiative: Hybrid Nanomaterials in Photonic Crystal Cavities for Multi-Spectral Infrared Detector Arrays

    DTIC Science & Technology

    2010-03-31

    the determination of bias-dependent EQD activation energies by Arrhenius plots. Fig. 4 shows the EQD activation energies as a function of bias ... consistent with thermal activation and field-assisted tunneling through the triangular potential barrier provided at higher bias voltages. In ... contrast, three bias-dependent regions of the EQD activation energy can be identified for the doped samples, as shown in Fig. 4. In Region I (< 0.4 V

  10. Associations among Negative Parenting, Attention Bias to Anger, and Social Anxiety among Youth

    PubMed Central

    Gulley, Lauren; Oppenheimer, Caroline; Hankin, Benjamin

    2014-01-01

    Theories of affective learning suggest that early experiences contribute to emotional disorders by influencing the development of processing biases for negative emotional stimuli. Although studies show that physically abused children preferentially attend to angry faces, it is unclear whether youth exposed to more typical aspects of negative parenting would exhibit the same type of bias. The current studies extend previous research by linking observed negative parenting styles (e.g. authoritarian) and behaviors (e.g. criticism and negative affect) to attention bias for angry faces in both a psychiatrically enriched (ages 11–17 years; N = 60) and a general community (ages 9–15 years; N = 75) sample of youth. In addition, the association between observed negative parenting (e.g. authoritarian style and negative affect) and youth social anxiety was mediated by attention bias for angry faces in the general community sample. Overall, findings provide preliminary support for theories of affective learning and risk for psychopathology among youth. PMID:23815705

  11. Associations among negative parenting, attention bias to anger, and social anxiety among youth.

    PubMed

    Gulley, Lauren D; Oppenheimer, Caroline W; Hankin, Benjamin L

    2014-02-01

    Theories of affective learning suggest that early experiences contribute to emotional disorders by influencing the development of processing biases for negative emotional stimuli. Although studies have shown that physically abused children preferentially attend to angry faces, it is unclear whether youth exposed to more typical aspects of negative parenting exhibit the same type of bias. The current studies extend previous research by linking observed negative parenting styles (e.g., authoritarian) and behaviors (e.g., criticism and negative affect) to attention bias for angry faces in both a psychiatrically enriched (ages 11-17 years; N = 60) and a general community (ages 9-15 years; N = 75) sample of youth. In addition, the association between observed negative parenting (e.g., authoritarian style and negative affect) and youth social anxiety was mediated by attention bias for angry faces in the general community sample. Overall, findings provide preliminary support for theories of affective learning and risk for psychopathology among youth.

  12. Quality-Assurance Data for Routine Water Analyses by the U.S. Geological Survey Laboratory in Troy, New York--July 1999 through June 2001

    USGS Publications Warehouse

    Lincoln, Tricia A.; Horan-Ross, Debra A.; McHale, Michael R.; Lawrence, Gregory B.

    2006-01-01

    The laboratory for analysis of low-ionic-strength water at the U.S. Geological Survey (USGS) Water Science Center in Troy, N.Y., analyzes samples collected by USGS projects throughout the Northeast. The laboratory's quality-assurance program is based on internal and interlaboratory quality-assurance samples and quality-control procedures that were developed to ensure proper sample collection, processing, and analysis. The quality-assurance and quality-control data were stored in the laboratory's LabMaster data-management system, which provides efficient review, compilation, and plotting of data. This report presents and discusses results of quality-assurance and quality-control samples analyzed from July 1999 through June 2001. Results for the quality-control samples for 18 analytical procedures were evaluated for bias and precision. Control charts indicate that data for eight of the analytical procedures were occasionally biased for either high-concentration or low-concentration samples but were within control limits; these procedures were: acid-neutralizing capacity, total monomeric aluminum, total aluminum, calcium, chloride and nitrate (ion chromatography and colorimetric methods), and sulfate. The total aluminum and dissolved organic carbon procedures were biased throughout the analysis period for the high-concentration sample, but were within control limits. The calcium and specific conductance procedures were biased throughout the analysis period for the low-concentration sample, but were within control limits. The magnesium procedure was biased for the high-concentration and low-concentration samples, but was within control limits. Results from the filter-blank and analytical-blank analyses indicate that the procedures for 14 of 15 analytes were within control limits, although the concentrations for blanks were occasionally outside the control limits. The data-quality objective was not met for dissolved organic carbon. Sampling and analysis precision are evaluated herein in terms of the coefficient of variation obtained for triplicate samples in the procedures for 17 of the 18 analytes. At least 90 percent of the samples met data-quality objectives for all analytes except ammonium (81 percent of samples met objectives), chloride (75 percent of samples met objectives), and sodium (86 percent of samples met objectives). Results of the USGS interlaboratory Standard Reference Sample (SRS) Project indicated good data quality over the time period, with most ratings for each sample in the good-to-excellent range. The P-sample (low-ionic-strength constituents) analysis had one satisfactory rating for the specific conductance procedure in one study. The T-sample (trace constituents) analysis had one satisfactory rating for the aluminum procedure in one study and one unsatisfactory rating for the sodium procedure in another. The remainder of the samples had good or excellent ratings for each study. Results of Environment Canada's National Water Research Institute (NWRI) program indicated that at least 89 percent of the samples met data-quality objectives for 10 of the 14 analytes; the exceptions were ammonium, total aluminum, dissolved organic carbon, and sodium. Results indicate a positive bias for the ammonium procedure in all studies. Data-quality objectives were not met in 50 percent of samples analyzed for total aluminum, 38 percent of samples analyzed for dissolved organic carbon, and 27 percent of samples analyzed for sodium.
Results from blind reference-sample analyses indicated that data-quality objectives were met by at least 91 percent of the samples analyzed for calcium, chloride, fluoride, magnesium, pH, potassium, and sulfate. Data-quality objectives were met by 75 percent of the samples analyzed for sodium and 58 percent of the samples analyzed for specific conductance.

  13. Ethnic Group Bias in Intelligence Test Items.

    ERIC Educational Resources Information Center

    Scheuneman, Janice

    In previous studies of ethnic group bias in intelligence test items, the question of bias has been confounded with ability differences between the ethnic group samples compared. The present study is based on a conditional probability model in which an unbiased item is defined as one where the probability of a correct response to an item is the…

  14. Size-biased distributions in the generalized beta distribution family, with applications to forestry

    Treesearch

    Mark J. Ducey; Jeffrey H. Gove

    2015-01-01

    Size-biased distributions arise in many forestry applications, as well as other environmental, econometric, and biomedical sampling problems. We examine the size-biased versions of the generalized beta of the first kind, generalized beta of the second kind and generalized gamma distributions. These distributions include, as special cases, the Dagum (Burr Type III),...
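
    For context, the size-biased density of order α built from a base density f(x) with finite α-th moment is conventionally defined as shown below (α = 1 gives ordinary length-biased sampling); this is the standard textbook definition rather than a formula taken from the paper.

```latex
f^{*}_{\alpha}(x) \;=\; \frac{x^{\alpha} f(x)}{\mathbb{E}\!\left[X^{\alpha}\right]}
\;=\; \frac{x^{\alpha} f(x)}{\int_{0}^{\infty} t^{\alpha} f(t)\,dt}, \qquad x > 0.
```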

  15. The Bias in Favor of Venture Capital Finance in U.S. Entrepreneurial Education: At the Expense of Trade Credit

    ERIC Educational Resources Information Center

    Clement, Thomas; LeMire, Steven; Silvernagel, Craig

    2015-01-01

    The authors examine whether U.S. college-level entrepreneurship education demonstrates a bias favoring venture capital (VC) financing while marginalizing trade credit financing, and the resulting impact on entrepreneurship students. A sample of U.S. business textbooks and survey data from entrepreneurship students reveals a significant bias toward…

  16. Why is "S" a Biased Estimate of [sigma]?

    ERIC Educational Resources Information Center

    Sanqui, Jose Almer T.; Arnholt, Alan T.

    2011-01-01

    This article describes a simulation activity that can be used to help students see that the estimator "S" is a biased estimator of [sigma]. The activity can be implemented using a statistical package such as R or Minitab, or a Web applet. In the activity, the students investigate and compare the bias of "S" when sampling from different…
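    A minimal simulation in the spirit of the activity described can also be written in Python; the sketch below (ours, not the authors') draws many small normal samples and compares the average of S with the true sigma, and the average of S squared with sigma squared.

```python
# Sketch of the classroom simulation: S (computed with ddof=1) underestimates
# sigma on average, even though S^2 is an unbiased estimator of sigma^2.
import numpy as np

rng = np.random.default_rng(1)
sigma, n, reps = 2.0, 5, 100_000      # true sigma, small sample size, repetitions

s = np.array([np.std(rng.normal(0.0, sigma, n), ddof=1) for _ in range(reps)])

print("true sigma :", sigma)
print("mean of S  :", round(s.mean(), 3))        # noticeably below sigma for small n
print("mean of S^2:", round((s**2).mean(), 3))   # close to sigma^2 = 4
```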

  17. Active earth pressure model tests versus finite element analysis

    NASA Astrophysics Data System (ADS)

    Pietrzak, Magdalena

    2017-06-01

    The purpose of the paper is to compare failure mechanisms observed in small-scale model tests on a granular sample in the active state with those simulated by the finite element method (FEM) using Plaxis 2D software. Small-scale model tests were performed on a rectangular granular sample retained by a rigid wall. Deformation of the sample resulted from simple wall translation in the direction "from the soil" (the active earth pressure state). A simple Coulomb-Mohr model for the soil can be helpful in interpreting experimental findings for granular materials. It was found that the general alignment of the strain localization pattern (failure mechanism) may be a macro-scale feature dominated by the test boundary conditions rather than by the nature of the granular sample.

  18. Arbitrary-order corrections for finite-time drift and diffusion coefficients

    NASA Astrophysics Data System (ADS)

    Anteneodo, C.; Riera, R.

    2009-09-01

    We address a standard class of diffusion processes with linear drift and quadratic diffusion coefficients. These contributions to the dynamic equations can be drawn directly from data time series. However, real data are constrained to finite sampling rates, and therefore it is crucial to establish a suitable mathematical description of the required finite-time corrections. Based on Itô-Taylor expansions, we present the exact corrections to the finite-time drift and diffusion coefficients. These results allow one to reconstruct the real hidden coefficients from the empirical estimates. We also derive higher-order finite-time expressions for the third and fourth conditional moments that furnish extra theoretical checks for this class of diffusion models. The analytical predictions are compared with the numerical outcomes of representative artificial time series.
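    The practical point, that coefficients estimated from coarsely sampled data differ from the true ones in a way that can be corrected analytically, can be illustrated with a toy case. The sketch below (ours, for an Ornstein-Uhlenbeck process with linear drift and constant diffusion, not the authors' general quadratic-diffusion setting) shows the naive finite-time drift slope moving away from the true value as the sampling interval grows, and compares it with the exact finite-time expectation (exp(-gamma*dt) - 1)/dt for that process.

```python
# Sketch: bias of the naively estimated drift slope at finite sampling interval dt,
# for an Ornstein-Uhlenbeck process dX = -gamma*X dt + sqrt(2 D) dW.
import numpy as np

rng = np.random.default_rng(0)
gamma, D = 1.0, 0.5
dt_fine, n_steps = 1e-3, 1_000_000

x = np.zeros(n_steps)
noise = rng.normal(0.0, np.sqrt(2 * D * dt_fine), n_steps - 1)
for i in range(n_steps - 1):                       # fine Euler-Maruyama integration
    x[i + 1] = x[i] - gamma * x[i] * dt_fine + noise[i]

for stride in (1, 100, 1000):                      # subsample at coarser intervals
    dt = stride * dt_fine
    xs = x[::stride]
    dx = np.diff(xs)
    slope = np.sum(xs[:-1] * dx) / (np.sum(xs[:-1] ** 2) * dt)   # naive drift slope
    theory = (np.exp(-gamma * dt) - 1.0) / dt                    # exact finite-time value
    print(f"dt={dt:.3f}  estimated={slope:+.3f}  theory={theory:+.3f}  true={-gamma:+.3f}")
```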

  19. A new third order finite volume weighted essentially non-oscillatory scheme on tetrahedral meshes

    NASA Astrophysics Data System (ADS)

    Zhu, Jun; Qiu, Jianxian

    2017-11-01

    In this paper a third-order finite volume weighted essentially non-oscillatory scheme is designed for solving hyperbolic conservation laws on tetrahedral meshes. Compared with other finite volume WENO schemes designed for tetrahedral meshes, the crucial advantages of the new WENO scheme are its simplicity and compactness, with only six unequal-size spatial stencils used to reconstruct polynomials of unequal degree in the WENO-type spatial procedures, and an easy choice of positive linear weights that does not depend on the topology of the meshes. The key innovation of the scheme is to use a quadratic polynomial defined on a big central spatial stencil to obtain a third-order numerical approximation at any point inside the target tetrahedral cell in smooth regions, and to switch to at least one of five linear polynomials defined on small biased/central spatial stencils to sustain sharp shock transitions while keeping the essentially non-oscillatory property. By performing these new procedures in the spatial reconstruction and adopting a third-order TVD Runge-Kutta time discretization method for solving the resulting ordinary differential equations (ODEs), the new scheme's memory occupancy is decreased and its computing efficiency is increased, making it suitable for large-scale engineering applications on tetrahedral meshes. Some numerical results are provided to illustrate the good performance of the scheme.
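    The time integrator named in the abstract is standard and easy to reproduce. The sketch below (ours) implements the three-stage third-order TVD (SSP) Runge-Kutta step and drives it with a simple one-dimensional upwind residual standing in for the tetrahedral WENO reconstruction, which is not reproduced here.

```python
# Sketch: third-order TVD (SSP) Runge-Kutta time stepping for du/dt = L(u),
# with a first-order upwind residual as a stand-in spatial operator.
import numpy as np

def tvd_rk3_step(u, dt, L):
    u1 = u + dt * L(u)
    u2 = 0.75 * u + 0.25 * (u1 + dt * L(u1))
    return u / 3.0 + 2.0 / 3.0 * (u2 + dt * L(u2))

a, nx = 1.0, 200                       # advection speed, number of cells
dx = 1.0 / nx
x = (np.arange(nx) + 0.5) * dx
u = np.exp(-100.0 * (x - 0.5) ** 2)    # smooth initial pulse

def residual(u):                       # periodic first-order upwind for u_t + a u_x = 0
    return -a * (u - np.roll(u, 1)) / dx

dt = 0.4 * dx / a
for _ in range(200):
    u = tvd_rk3_step(u, dt, residual)
print("max u after 200 steps:", round(u.max(), 3))
```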

  20. Occupation times and ergodicity breaking in biased continuous time random walks

    NASA Astrophysics Data System (ADS)

    Bel, Golan; Barkai, Eli

    2005-12-01

    Continuous time random walk (CTRW) models are widely used to model diffusion in condensed matter. There are two classes of such models, distinguished by the convergence or divergence of the mean waiting time. Systems with finite average sojourn time are ergodic and thus Boltzmann-Gibbs statistics can be applied. We investigate the statistical properties of CTRW models with infinite average sojourn time; in particular, the occupation time probability density function is obtained. It is shown that in the non-ergodic phase the distribution of the occupation time of the particle on a given lattice point exhibits a bimodal U or trimodal W shape, related to the arcsine law. The key points are as follows. (a) In a CTRW with finite or infinite mean waiting time, the distribution of the number of visits to a lattice point is determined by the probability that a member of an ensemble of particles in equilibrium occupies the lattice point. (b) The asymmetry parameter of the probability distribution function of occupation times is related to the Boltzmann probability and to the partition function. (c) The ensemble average is given by Boltzmann-Gibbs statistics for either finite or infinite mean sojourn time, when detailed balance conditions hold. (d) A non-ergodic generalization of the Boltzmann-Gibbs statistical mechanics for systems with infinite mean sojourn time is found.
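    A two-site caricature of this behaviour is easy to simulate. In the sketch below (ours, not the authors' calculation), sojourn times are drawn from a heavy-tailed Pareto law with exponent alpha = 0.5 so that the mean waiting time diverges, and the walker chooses the first site with probability p_A at each renewal; the occupation-time fraction then piles up near 0 and 1 instead of concentrating at p_A, the Lamperti/arcsine-type behaviour described above.

```python
# Sketch: occupation-time fraction of one site in a biased two-state CTRW with
# heavy-tailed (infinite-mean) waiting times; the histogram comes out U-shaped.
import numpy as np

rng = np.random.default_rng(3)
alpha, p_A, T, n_particles = 0.5, 0.6, 1e4, 2000

fractions = np.empty(n_particles)
for k in range(n_particles):
    t = time_on_A = 0.0
    while t < T:
        on_A = rng.random() < p_A                        # biased choice of next site
        wait = (1.0 - rng.random()) ** (-1.0 / alpha)    # Pareto(alpha) sojourn time
        stay = min(wait, T - t)                          # truncate the last sojourn at T
        if on_A:
            time_on_A += stay
        t += stay
    fractions[k] = time_on_A / T

hist, _ = np.histogram(fractions, bins=10, range=(0, 1), density=True)
print(np.round(hist, 2))    # typically largest in the first and last bins (U shape)
```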

  1. Chemosensory Communication of Gender Information: Masculinity Bias in Body Odor Perception and Femininity Bias Introduced by Chemosignals During Social Perception.

    PubMed

    Mutic, Smiljana; Moellers, Eileen M; Wiesmann, Martin; Freiherr, Jessica

    2015-01-01

    Human body odor is a source of important social information. In this study, we explore whether the sex of an individual can be established based on smelling axillary odor and whether exposure to male and female odors biases chemosensory and social perception. In a double-blind, pseudo-randomized application, 31 healthy normosmic heterosexual male and female raters were exposed to male and female chemosignals (odor samples of 27 heterosexual donors collected during a cardio workout) and a no-odor sample. Recipients rated chemosensory samples on a masculinity-femininity scale and provided intensity, familiarity, and pleasantness ratings. Additionally, the modulation of social perception (gender-neutral faces and personality attributes) and affective introspection (mood) by male and female chemosignals was assessed. Male and female axillary odors were rated as rather masculine, regardless of the sex of the donor. In contrast to the masculinity bias in odor perception, a femininity bias appeared in social perception: detection of femininity in gender-neutral faces and personality attributes was facilitated under both male and female chemosignals. No chemosensory effect on the raters' mood was observed. The results are discussed with regard to the use of male and female chemosignals in affective and social communication.

  2. Chemosensory Communication of Gender Information: Masculinity Bias in Body Odor Perception and Femininity Bias Introduced by Chemosignals During Social Perception

    PubMed Central

    Mutic, Smiljana; Moellers, Eileen M.; Wiesmann, Martin; Freiherr, Jessica

    2016-01-01

    Human body odor is a source of important social information. In this study, we explore whether the sex of an individual can be established based on smelling axillary odor and whether exposure to male and female odors biases chemosensory and social perception. In a double-blind, pseudo-randomized application, 31 healthy normosmic heterosexual male and female raters were exposed to male and female chemosignals (odor samples of 27 heterosexual donors collected during a cardio workout) and a no-odor sample. Recipients rated chemosensory samples on a masculinity-femininity scale and provided intensity, familiarity, and pleasantness ratings. Additionally, the modulation of social perception (gender-neutral faces and personality attributes) and affective introspection (mood) by male and female chemosignals was assessed. Male and female axillary odors were rated as rather masculine, regardless of the sex of the donor. In contrast to the masculinity bias in odor perception, a femininity bias appeared in social perception: detection of femininity in gender-neutral faces and personality attributes was facilitated under both male and female chemosignals. No chemosensory effect on the raters' mood was observed. The results are discussed with regard to the use of male and female chemosignals in affective and social communication. PMID:26834656

  3. Biases in affective forecasting and recall in individuals with depression and anxiety symptoms.

    PubMed

    Wenze, Susan J; Gunthert, Kathleen C; German, Ramaris E

    2012-07-01

    The authors used experience sampling to investigate biases in affective forecasting and recall in individuals with varying levels of depression and anxiety symptoms. Participants who were higher in depression symptoms demonstrated stronger (more pessimistic) negative mood prediction biases, marginally stronger negative mood recall biases, and weaker (less optimistic) positive mood prediction and recall biases. Participants who were higher in anxiety symptoms demonstrated stronger negative mood prediction biases, but positive mood prediction biases that were on par with those who were lower in anxiety. Anxiety symptoms were not associated with mood recall biases. Neither depression symptoms nor anxiety symptoms were associated with bias in event prediction. Their findings fit well with the tripartite model of depression and anxiety. Results are also consistent with the conceptualization of anxiety as a "forward-looking" disorder, and with theories that emphasize the importance of pessimism and general negative information processing in depressive functioning.

  4. Performance of maximum likelihood mixture models to estimate nursery habitat contributions to fish stocks: a case study on sea bream Sparus aurata

    PubMed Central

    Darnaude, Audrey M.

    2016-01-01

    Background Mixture models (MM) can be used to describe mixed stocks considering three sets of parameters: the total number of contributing sources, their chemical baseline signatures, and their mixing proportions. When all nursery sources have been previously identified and sampled for juvenile fish to produce baseline nursery-signatures, mixing proportions are the only unknown set of parameters to be estimated from the mixed-stock data. Otherwise, the number of sources, as well as some or all nursery-signatures, may also need to be estimated from the mixed-stock data. Our goal was to assess bias and uncertainty in these MM parameters when estimated using unconditional maximum likelihood approaches (ML-MM), under several incomplete sampling and nursery-signature separation scenarios. Methods We used a comprehensive dataset containing otolith elemental signatures of 301 juvenile Sparus aurata, sampled in three contrasting years (2008, 2010, 2011), from four distinct nursery habitats (Mediterranean lagoons). Artificial nursery-source and mixed-stock datasets were produced considering five different sampling scenarios, in which 0–4 lagoons were excluded from the nursery-source dataset, and six nursery-signature separation scenarios that simulated data separated by 0.5, 1.5, 2.5, 3.5, 4.5, and 5.5 standard deviations among nursery-signature centroids. Bias (BI) and uncertainty (SE) were computed to assess reliability for each of the three sets of MM parameters. Results Both bias and uncertainty in mixing proportion estimates were low (BI ≤ 0.14, SE ≤ 0.06) when all nursery-sources were sampled but exhibited large variability among cohorts and increased with the number of non-sampled sources up to BI = 0.24 and SE = 0.11. Bias and variability in baseline signature estimates also increased with the number of non-sampled sources, but these estimates tended to be less biased, and more uncertain, than the mixing proportion estimates across all sampling scenarios (BI < 0.13, SE < 0.29). Increasing separation among nursery signatures improved the reliability of mixing proportion estimates, but led to non-linear responses in baseline signature parameters. Low uncertainty but a consistent underestimation bias affected the estimated number of nursery sources across all incomplete sampling scenarios. Discussion ML-MM produced reliable estimates of mixing proportions and nursery-signatures under an important range of incomplete sampling and nursery-signature separation scenarios. This method failed, however, to estimate the true number of nursery sources, reflecting a pervasive issue affecting mixture models within and beyond the ML framework. Large differences in bias and uncertainty found among cohorts were linked to differences in separation of chemical signatures among nursery habitats. Simulation approaches, such as those presented here, could be useful to evaluate sensitivity of MM results to separation and variability in nursery-signatures for other species, habitats or cohorts. PMID:27761305
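    When the baseline signatures are treated as known, maximum-likelihood estimation of the mixing proportions reduces to re-estimating the mixture weights only. The sketch below (ours, on synthetic two-element "signatures" for three hypothetical sources, not the study's otolith data) does this with a short EM iteration.

```python
# Sketch: ML estimation of mixing proportions with fixed Gaussian baseline
# signatures, via EM updates of the weights only. All data are synthetic.
import numpy as np
from scipy.stats import multivariate_normal

rng = np.random.default_rng(7)
means = np.array([[0.0, 0.0], [3.0, 0.5], [1.0, 3.0]])   # hypothetical baseline centroids
cov = 0.8 * np.eye(2)
true_pi = np.array([0.5, 0.3, 0.2])

labels = rng.choice(3, size=400, p=true_pi)               # simulated mixed stock
X = np.array([rng.multivariate_normal(means[k], cov) for k in labels])

dens = np.column_stack([multivariate_normal(means[k], cov).pdf(X) for k in range(3)])
pi = np.full(3, 1.0 / 3.0)
for _ in range(200):
    resp = dens * pi
    resp /= resp.sum(axis=1, keepdims=True)               # E-step: responsibilities
    pi = resp.mean(axis=0)                                # M-step: mixing proportions

print("true proportions:", true_pi)
print("ML estimates:    ", np.round(pi, 3))
```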

  5. Finite-sample corrected generalized estimating equation of population average treatment effects in stepped wedge cluster randomized trials.

    PubMed

    Scott, JoAnna M; deCamp, Allan; Juraska, Michal; Fay, Michael P; Gilbert, Peter B

    2017-04-01

    Stepped wedge designs are increasingly commonplace and advantageous for cluster randomized trials when it is both unethical to assign placebo and logistically difficult to allocate an intervention simultaneously to many clusters. We study marginal mean models fit with generalized estimating equations for assessing treatment effectiveness in stepped wedge cluster randomized trials. This approach has advantages over the more commonly used mixed models in that (1) the population-average parameters have an important interpretation for public health applications and (2) they avoid untestable assumptions on latent variable distributions and parametric assumptions about error distributions, thereby providing more robust evidence on treatment effects. However, cluster randomized trials typically have a small number of clusters, rendering the standard generalized estimating equation sandwich variance estimator biased and highly variable and hence yielding incorrect inferences. We study the usual asymptotic generalized estimating equation inferences (i.e., using sandwich variance estimators and asymptotic normality) and four small-sample corrections to generalized estimating equations for stepped wedge cluster randomized trials and, as a comparison, for parallel cluster randomized trials. We show by simulation that the small-sample corrections provide improvement, with one correction appearing to provide at least nominal coverage even with only 10 clusters per group. These results demonstrate the viability of the marginal mean approach for both stepped wedge and parallel cluster randomized trials. We also study the comparative performance of the corrected methods for stepped wedge and parallel designs, and describe how the methods can accommodate interval censoring of individual failure times and incorporate semiparametric efficient estimators.
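    The core computation, a cluster-robust sandwich variance for a marginal mean model, is compact. The sketch below (ours, with an identity link, independence working correlation, and a crude m/(m - p) inflation factor, not any of the four corrections studied in the paper) shows why small-sample adjustments matter when there are few clusters.

```python
# Sketch: GEE/OLS estimate of a cluster-level treatment effect with the usual
# sandwich variance and a simple degrees-of-freedom small-sample correction.
import numpy as np

rng = np.random.default_rng(11)
m, n_per = 10, 20                                  # few clusters, many subjects each
treat_cluster = np.arange(m) % 2                   # half the clusters treated (toy design)
treat = np.repeat(treat_cluster, n_per)
y = 0.5 * treat + np.repeat(rng.normal(0, 1, m), n_per) + rng.normal(0, 1, m * n_per)
X = np.column_stack([np.ones(m * n_per), treat])
cluster = np.repeat(np.arange(m), n_per)

beta = np.linalg.solve(X.T @ X, X.T @ y)
resid = y - X @ beta
bread = np.linalg.inv(X.T @ X)
meat = np.zeros((2, 2))
for c in range(m):
    g = X[cluster == c].T @ resid[cluster == c]
    meat += np.outer(g, g)
V_sandwich = bread @ meat @ bread                  # tends to be biased low for small m
V_corrected = V_sandwich * m / (m - X.shape[1])    # crude finite-sample inflation

print("treatment effect:", round(beta[1], 3))
print("naive SE        :", round(np.sqrt(V_sandwich[1, 1]), 3))
print("corrected SE    :", round(np.sqrt(V_corrected[1, 1]), 3))
```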

  6. How to Make Nothing Out of Something: Analyses of the Impact of Study Sampling and Statistical Interpretation in Misleading Meta-Analytic Conclusions

    PubMed Central

    Cunningham, Michael R.; Baumeister, Roy F.

    2016-01-01

    The limited resource model states that self-control is governed by a relatively finite set of inner resources on which people draw when exerting willpower. Once self-control resources have been used up or depleted, they are less available for other self-control tasks, leading to a decrement in subsequent self-control success. The depletion effect has been studied for over 20 years, tested or extended in more than 600 studies, and supported in an independent meta-analysis (Hagger et al., 2010). Meta-analyses are supposed to reduce bias in literature reviews. Carter et al.’s (2015) meta-analysis, by contrast, included a series of questionable decisions involving sampling, methods, and data analysis. We provide quantitative analyses of key sampling issues: exclusion of many of the best depletion studies based on idiosyncratic criteria and the emphasis on mini meta-analyses with low statistical power as opposed to the overall depletion effect. We discuss two key methodological issues: failure to code for research quality, and the quantitative impact of weak studies by novice researchers. We discuss two key data analysis issues: questionable interpretation of the results of trim and fill and Funnel Plot Asymmetry test procedures, and the use and misinterpretation of the untested Precision Effect Test and Precision Effect Estimate with Standard Error (PEESE) procedures. Despite these serious problems, the Carter et al. (2015) meta-analysis results actually indicate that there is a real depletion effect – contrary to their title. PMID:27826272

  7. Polarized neutron reflectivity study of a thermally treated MnIr/CoFe exchange bias system.

    PubMed

    Awaji, Naoki; Miyajima, Toyoo; Doi, Shuuichi; Nomura, Kenji

    2010-12-01

    It has recently been found that the exchange bias of a MnIr/CoFe system can be increased significantly by adding a thermal treatment to the bilayer. To reveal the origin of the higher exchange bias, we performed polarized neutron reflectivity measurements at the JRR-3 neutron source. The magnetization vector near the MnIr/CoFe interface for thermally treated samples differed from that for samples without the treatment. We propose a model in which the pinned spin area at the interface is extended due to the increased roughness and atomic interdiffusion that result from the thermal treatment.

  8. Estimating Sampling Selection Bias in Human Genetics: A Phenomenological Approach

    PubMed Central

    Risso, Davide; Taglioli, Luca; De Iasio, Sergio; Gueresi, Paola; Alfani, Guido; Nelli, Sergio; Rossi, Paolo; Paoli, Giorgio; Tofanelli, Sergio

    2015-01-01

    This research is the first empirical attempt to calculate the various components of the hidden bias associated with the sampling strategies routinely-used in human genetics, with special reference to surname-based strategies. We reconstructed surname distributions of 26 Italian communities with different demographic features across the last six centuries (years 1447–2001). The degree of overlapping between "reference founding core" distributions and the distributions obtained from sampling the present day communities by probabilistic and selective methods was quantified under different conditions and models. When taking into account only one individual per surname (low kinship model), the average discrepancy was 59.5%, with a peak of 84% by random sampling. When multiple individuals per surname were considered (high kinship model), the discrepancy decreased by 8–30% at the cost of a larger variance. Criteria aimed at maximizing locally-spread patrilineages and long-term residency appeared to be affected by recent gene flows much more than expected. Selection of the more frequent family names following low kinship criteria proved to be a suitable approach only for historically stable communities. In any other case true random sampling, despite its high variance, did not return more biased estimates than other selective methods. Our results indicate that the sampling of individuals bearing historically documented surnames (founders' method) should be applied, especially when studying the male-specific genome, to prevent an over-stratification of ancient and recent genetic components that heavily biases inferences and statistics. PMID:26452043

  9. Estimating Sampling Selection Bias in Human Genetics: A Phenomenological Approach.

    PubMed

    Risso, Davide; Taglioli, Luca; De Iasio, Sergio; Gueresi, Paola; Alfani, Guido; Nelli, Sergio; Rossi, Paolo; Paoli, Giorgio; Tofanelli, Sergio

    2015-01-01

    This research is the first empirical attempt to calculate the various components of the hidden bias associated with the sampling strategies routinely-used in human genetics, with special reference to surname-based strategies. We reconstructed surname distributions of 26 Italian communities with different demographic features across the last six centuries (years 1447-2001). The degree of overlapping between "reference founding core" distributions and the distributions obtained from sampling the present day communities by probabilistic and selective methods was quantified under different conditions and models. When taking into account only one individual per surname (low kinship model), the average discrepancy was 59.5%, with a peak of 84% by random sampling. When multiple individuals per surname were considered (high kinship model), the discrepancy decreased by 8-30% at the cost of a larger variance. Criteria aimed at maximizing locally-spread patrilineages and long-term residency appeared to be affected by recent gene flows much more than expected. Selection of the more frequent family names following low kinship criteria proved to be a suitable approach only for historically stable communities. In any other case true random sampling, despite its high variance, did not return more biased estimates than other selective methods. Our results indicate that the sampling of individuals bearing historically documented surnames (founders' method) should be applied, especially when studying the male-specific genome, to prevent an over-stratification of ancient and recent genetic components that heavily biases inferences and statistics.

  10. Underestimating the effects of spatial heterogeneity due to individual movement and spatial scale: infectious disease as an example

    USGS Publications Warehouse

    Cross, Paul C.; Caillaud, Damien; Heisey, Dennis M.

    2013-01-01

    Many ecological and epidemiological studies occur in systems with mobile individuals and heterogeneous landscapes. Using a simulation model, we show that the accuracy of inferring an underlying biological process from observational data depends on movement and the spatial scale of the analysis. As an example, we focused on estimating the relationship between host density and pathogen transmission. Observational data can result in highly biased inference about the underlying process when individuals move among sampling areas. Even without sampling error, the effect of host density on disease transmission is underestimated by approximately 50% when one in ten hosts moves among sampling areas per lifetime. Aggregating data across larger regions causes minimal bias when host movement is low, and results in less biased inference when movement rates are high. However, increasing data aggregation reduces the observed spatial variation, which would lead to the misperception that a spatially targeted control effort may not be very effective. In addition, averaging over the local heterogeneity will result in underestimating the importance of spatial covariates. Minimizing the bias due to movement is not just about choosing the best spatial scale for analysis, but also about reducing the error associated with using the sampling location as a proxy for an individual’s spatial history. This error associated with the exposure covariate can be reduced by choosing sampling regions with less movement, including longitudinal information on individuals’ movements, or reducing the window of exposure by using repeated sampling or younger individuals.

  11. Diagnostic test accuracy and prevalence inferences based on joint and sequential testing with finite population sampling.

    PubMed

    Su, Chun-Lung; Gardner, Ian A; Johnson, Wesley O

    2004-07-30

    The two-test two-population model, originally formulated by Hui and Walter, for estimation of test accuracy and prevalence assumes conditionally independent tests, constant accuracy across populations, and binomial sampling. The binomial assumption is incorrect if all individuals in a population (e.g., a child-care centre, a village in Africa, or a cattle herd) are sampled, or if the sample size is large relative to the population size. In this paper, we develop statistical methods for evaluating diagnostic test accuracy and prevalence estimation based on finite sample data in the absence of a gold standard. Moreover, two tests are often applied simultaneously for the purpose of obtaining a 'joint' testing strategy that has either higher overall sensitivity or specificity than either of the two tests considered singly. Sequential versions of such strategies are often applied in order to reduce the cost of testing. We thus discuss joint (simultaneous and sequential) testing strategies and inference for them. Using the developed methods, we analyse two real data sets and one simulated data set, and we compare 'hypergeometric' and 'binomial-based' inferences. Our findings indicate that the posterior standard deviations for prevalence (but not sensitivity and specificity) based on finite population sampling tend to be smaller than their counterparts for infinite population sampling. Finally, we make recommendations about how small the sample size should be relative to the population size to warrant use of the binomial model for prevalence estimation. Copyright 2004 John Wiley & Sons, Ltd.
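    The finite-population point can be seen in a stripped-down case. The sketch below (ours, assuming a perfect test rather than the paper's no-gold-standard setting) compares a binomial (infinite-population) posterior for prevalence with a hypergeometric posterior when two-thirds of a herd of 120 is sampled; the latter is visibly tighter.

```python
# Sketch: binomial vs finite-population (hypergeometric) posterior for prevalence,
# assuming a perfect diagnostic test and uniform priors. Numbers are invented.
import numpy as np
from scipy.stats import beta, hypergeom

N, n, y = 120, 80, 24                    # herd size, animals tested, positives found

post_binom = beta(1 + y, 1 + n - y)      # infinite-population posterior for prevalence

K = np.arange(N + 1)                     # possible numbers of positives in the herd
like = hypergeom(N, K, n).pmf(y)         # P(y | K) under sampling without replacement
post_K = like / like.sum()               # uniform prior over K
prev = K / N
mean_h = np.sum(prev * post_K)
sd_h = np.sqrt(np.sum((prev - mean_h) ** 2 * post_K))

print("binomial posterior SD      :", round(post_binom.std(), 4))
print("hypergeometric posterior SD:", round(sd_h, 4))
```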

  12. Vibronic Boson Sampling: Generalized Gaussian Boson Sampling for Molecular Vibronic Spectra at Finite Temperature.

    PubMed

    Huh, Joonsuk; Yung, Man-Hong

    2017-08-07

    Molecular vibronic spectroscopy, where the transitions involve non-trivial Bosonic correlations due to the Duschinsky rotation, is strongly believed to be in a complexity class similar to that of Boson Sampling. At finite temperature, the problem is represented as a Boson Sampling experiment with correlated Gaussian input states. This molecular problem with temperature effects is intimately related to the various versions of Boson Sampling, which share a similar computational complexity. Here we provide a full description of this relation in the context of Gaussian Boson Sampling. We find a hierarchical structure, which illustrates the relationship among various Boson Sampling schemes. Specifically, we show that every instance of Gaussian Boson Sampling with an initial correlation can be simulated by an instance of Gaussian Boson Sampling without initial correlation, with only a polynomial overhead. Since every Gaussian state is associated with a thermal state, our result implies that every sampling problem in molecular vibronic transitions, at any temperature, can be simulated by Gaussian Boson Sampling associated with a product of vacuum modes. We refer to such a generalized Gaussian Boson Sampling, motivated by the molecular sampling problem, as Vibronic Boson Sampling.

  13. A Bayesian state-space formulation of dynamic occupancy models

    USGS Publications Warehouse

    Royle, J. Andrew; Kery, M.

    2007-01-01

    Species occurrence and its dynamic components, extinction and colonization probabilities, are focal quantities in biogeography and metapopulation biology, and for species conservation assessments. It has been increasingly appreciated that these parameters must be estimated separately from detection probability to avoid the biases induced by nondetection error. Hence, there is now considerable theoretical and practical interest in dynamic occupancy models that contain explicit representations of metapopulation dynamics such as extinction, colonization, and turnover as well as growth rates. We describe a hierarchical parameterization of these models that is analogous to the state-space formulation of models in time series, where the model is represented by two components, one for the partially observable occupancy process and another for the observations conditional on that process. This parameterization naturally allows estimation of all parameters of the conventional approach to occupancy models, but in addition, yields great flexibility and extensibility, e.g., to modeling heterogeneity or latent structure in model parameters. We also highlight the important distinction between population and finite sample inference; the latter yields much more precise estimates for the particular sample at hand. Finite sample estimates can easily be obtained using the state-space representation of the model but are difficult to obtain under the conventional approach of likelihood-based estimation. We use R and Win BUGS to apply the model to two examples. In a standard analysis for the European Crossbill in a large Swiss monitoring program, we fit a model with year-specific parameters. Estimates of the dynamic parameters varied greatly among years, highlighting the irruptive population dynamics of that species. In the second example, we analyze route occupancy of Cerulean Warblers in the North American Breeding Bird Survey (BBS) using a model allowing for site-specific heterogeneity in model parameters. The results indicate relatively low turnover and a stable distribution of Cerulean Warblers which is in contrast to analyses of counts of individuals from the same survey that indicate important declines. This discrepancy illustrates the inertia in occupancy relative to actual abundance. Furthermore, the model reveals a declining patch survival probability, and increasing turnover, toward the edge of the range of the species, which is consistent with metapopulation perspectives on the genesis of range edges. Given detection/non-detection data, dynamic occupancy models as described here have considerable potential for the study of distributions and range dynamics.

  14. A comparison of two sampling designs for fish assemblage assessment in a large river

    USGS Publications Warehouse

    Kiraly, Ian A.; Coghlan, Stephen M.; Zydlewski, Joseph D.; Hayes, Daniel

    2014-01-01

    We compared the efficiency of stratified random and fixed-station sampling designs to characterize fish assemblages in anticipation of dam removal on the Penobscot River, the largest river in Maine. We used boat electrofishing methods in both sampling designs. Multiple 500-m transects were selected randomly and electrofished in each of nine strata within the stratified random sampling design. Within the fixed-station design, up to 11 transects (1,000 m) were electrofished, all of which had been sampled previously. In total, 88 km of shoreline were electrofished during summer and fall in 2010 and 2011, and 45,874 individuals of 34 fish species were captured. Species-accumulation and dissimilarity curve analyses indicated that all sampling effort, other than fall 2011 under the fixed-station design, provided repeatable estimates of total species richness and proportional abundances. Overall, our sampling designs were similar in precision and efficiency for sampling fish assemblages. The fixed-station design was negatively biased for estimating the abundance of species such as Common Shiner Luxilus cornutus and Fallfish Semotilus corporalis and was positively biased for estimating biomass for species such as White Sucker Catostomus commersonii and Atlantic Salmon Salmo salar. However, we found no significant differences between the designs for proportional catch and biomass per unit effort, except in fall 2011. The difference observed in fall 2011 was due to limitations on the number and location of fixed sites that could be sampled, rather than an inherent bias within the design. Given the results from sampling in the Penobscot River, application of the stratified random design is preferable to the fixed-station design due to less potential for bias caused by varying sampling effort, such as what occurred in the fall 2011 fixed-station sample or due to purposeful site selection.

  15. Monitoring the aftermath of Flint drinking water contamination crisis: Another case of sampling bias?

    PubMed

    Goovaerts, Pierre

    2017-07-15

    The delay in reporting high levels of lead in Flint drinking water, following the city's switch to the Flint River as its water supply, was partially caused by the biased selection of sampling sites away from the lead pipe network. Since Flint returned to its pre-crisis source of drinking water, the State has been monitoring water lead levels (WLL) at selected "sentinel" sites. In a first phase that lasted two months, 739 residences were sampled, most of them bi-weekly, to determine the general health of the distribution system and to track temporal changes in lead levels. During the same period, water samples were also collected through a voluntary program whereby concerned citizens received free testing kits and conducted sampling on their own. State officials relied on the former data to demonstrate the steady improvement in water quality. A recent analysis of data collected by voluntary sampling revealed, however, an opposite trend with lead levels increasing over time. This paper looks at potential sampling bias to explain such differences. Although houses with higher WLL were more likely to be sampled repeatedly, voluntary sampling turned out to reproduce fairly well the main characteristics (i.e. presence of lead service lines (LSL), construction year) of Flint housing stock. State-controlled sampling was less representative; e.g., sentinel sites with LSL were mostly built between 1935 and 1950 in lower poverty areas, which might hamper our ability to disentangle the effects of LSL and premise plumbing (lead fixtures and pipes present within old houses) on WLL. Also, there was no sentinel site with LSL in two of the most impoverished wards, including where the percentage of children with elevated blood lead levels tripled following the switch in water supply. Correcting for sampling bias narrowed the gap between sampling programs, yet overall temporal trends are still opposite. Copyright © 2017 Elsevier B.V. All rights reserved.

  16. Estimating Ω from Galaxy Redshifts: Linear Flow Distortions and Nonlinear Clustering

    NASA Astrophysics Data System (ADS)

    Bromley, B. C.; Warren, M. S.; Zurek, W. H.

    1997-02-01

    We propose a method to determine the cosmic mass density Ω from redshift-space distortions induced by large-scale flows in the presence of nonlinear clustering. Nonlinear structures in redshift space, such as fingers of God, can contaminate distortions from linear flows on scales as large as several times the small-scale pairwise velocity dispersion σ_v. Following Peacock & Dodds, we work in the Fourier domain and propose a model to describe the anisotropy in the redshift-space power spectrum; tests with high-resolution numerical data demonstrate that the model is robust for both mass and biased galaxy halos on translinear scales and above. On the basis of this model, we propose an estimator of the linear growth parameter β = Ω^0.6/b, where b measures bias, derived from sampling functions that are tuned to eliminate distortions from nonlinear clustering. The measure is tested on the numerical data and found to recover the true value of β to within ~10%. An analysis of IRAS 1.2 Jy galaxies yields β = 0.8 (+0.4, −0.3) at a scale of 1000 km s^-1, which is close to optimal given the shot noise and finite size of the survey. This measurement is consistent with dynamical estimates of β derived from both real-space and redshift-space information. The importance of the method presented here is that nonlinear clustering effects are removed to enable linear correlation anisotropy measurements on scales approaching the translinear regime. We discuss implications for analyses of forthcoming optical redshift surveys in which the dispersion is more than a factor of 2 greater than in the IRAS data.
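    As a purely numerical aside (values assumed for illustration, not a re-analysis of the survey), the estimator can be inverted for Ω once a bias factor b is adopted:

```python
# Toy inversion of beta = Omega**0.6 / b for an assumed bias factor b.
beta_est, b = 0.8, 1.0
omega = (beta_est * b) ** (1.0 / 0.6)
print(f"beta = {beta_est}, b = {b}  ->  Omega ~ {omega:.2f}")   # about 0.69
```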

  17. Parametric study of statistical bias in laser Doppler velocimetry

    NASA Technical Reports Server (NTRS)

    Gould, Richard D.; Stevenson, Warren H.; Thompson, H. Doyle

    1989-01-01

    Analytical studies have often assumed that LDV velocity bias depends on turbulence intensity in conjunction with one or more characteristic time scales, such as the time between validated signals, the time between data samples, and the integral turbulence time-scale. These parameters are presently varied independently, in an effort to quantify the biasing effect. Neither of the post facto correction methods employed is entirely accurate. The mean velocity bias error is found to be nearly independent of data validation rate.
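    The underlying effect, that measurement events arrive at a rate proportional to the instantaneous velocity, and the classic inverse-velocity (residence-time type) correction attributed to McLaughlin and Tiederman, can be mimicked numerically. The sketch below (ours, an idealized one-component flow, not the experiment described in the paper) shows the arithmetic mean biased high and the weighted mean close to the true value.

```python
# Sketch: velocity bias from sampling with probability proportional to |u|,
# and the inverse-velocity weighted correction of the sample mean.
import numpy as np

rng = np.random.default_rng(5)
u_mean, intensity = 10.0, 0.3                       # mean velocity, turbulence intensity
u = rng.normal(u_mean, intensity * u_mean, 200_000)
u = u[u > 0]                                        # keep forward-moving realizations

p = u / u.sum()                                     # arrival probability proportional to u
samples = rng.choice(u, size=20_000, p=p)

naive = samples.mean()                              # velocity-biased (too high)
corrected = 1.0 / np.mean(1.0 / samples)            # inverse-velocity weighted mean

print(f"true mean {u.mean():.2f}   naive {naive:.2f}   corrected {corrected:.2f}")
```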

  18. Comparing interval estimates for small sample ordinal CFA models

    PubMed Central

    Natesan, Prathiba

    2015-01-01

    Robust maximum likelihood (RML) and asymptotically generalized least squares (AGLS) methods have been recommended for fitting ordinal structural equation models. Studies show that some of these methods underestimate standard errors. However, these studies have not investigated the coverage and bias of interval estimates. An estimate with a reasonable standard error could still be severely biased. This can only be known by systematically investigating the interval estimates. The present study compares Bayesian, RML, and AGLS interval estimates of factor correlations in ordinal confirmatory factor analysis (CFA) models for small-sample data. Six sample sizes, 3 factor correlations, and 2 factor score distributions (multivariate normal and multivariate mildly skewed) were studied. Two Bayesian prior specifications, informative and relatively less informative, were studied. Undercoverage of confidence intervals and underestimation of standard errors were common in non-Bayesian methods. Underestimated standard errors may lead to inflated Type-I error rates. Non-Bayesian intervals were more often positively biased than negatively biased; that is, most intervals that did not contain the true value were greater than the true value. Some non-Bayesian methods had non-converging and inadmissible solutions for small samples and non-normal data. Bayesian empirical standard error estimates for informative and relatively less informative priors were closer to the average standard errors of the estimates. The coverage of Bayesian credibility intervals was closer to what was expected, with overcoverage in a few cases. Although some Bayesian credibility intervals were wider, they reflected the nature of statistical uncertainty that comes with the data (e.g., small sample). Bayesian point estimates were also more accurate than non-Bayesian estimates. The results illustrate the importance of analyzing coverage and bias of interval estimates, and how ignoring interval estimates can be misleading. Therefore, editors and policymakers should continue to emphasize the inclusion of interval estimates in research. PMID:26579002

  19. Comparing interval estimates for small sample ordinal CFA models.

    PubMed

    Natesan, Prathiba

    2015-01-01

    Robust maximum likelihood (RML) and asymptotically generalized least squares (AGLS) methods have been recommended for fitting ordinal structural equation models. Studies show that some of these methods underestimate standard errors. However, these studies have not investigated the coverage and bias of interval estimates. An estimate with a reasonable standard error could still be severely biased. This can only be known by systematically investigating the interval estimates. The present study compares Bayesian, RML, and AGLS interval estimates of factor correlations in ordinal confirmatory factor analysis (CFA) models for small-sample data. Six sample sizes, 3 factor correlations, and 2 factor score distributions (multivariate normal and multivariate mildly skewed) were studied. Two Bayesian prior specifications, informative and relatively less informative, were studied. Undercoverage of confidence intervals and underestimation of standard errors were common in non-Bayesian methods. Underestimated standard errors may lead to inflated Type-I error rates. Non-Bayesian intervals were more often positively biased than negatively biased; that is, most intervals that did not contain the true value were greater than the true value. Some non-Bayesian methods had non-converging and inadmissible solutions for small samples and non-normal data. Bayesian empirical standard error estimates for informative and relatively less informative priors were closer to the average standard errors of the estimates. The coverage of Bayesian credibility intervals was closer to what was expected, with overcoverage in a few cases. Although some Bayesian credibility intervals were wider, they reflected the nature of statistical uncertainty that comes with the data (e.g., small sample). Bayesian point estimates were also more accurate than non-Bayesian estimates. The results illustrate the importance of analyzing coverage and bias of interval estimates, and how ignoring interval estimates can be misleading. Therefore, editors and policymakers should continue to emphasize the inclusion of interval estimates in research.

  20. The drift-diffusion interpretation of the electron current within the organic semiconductor characterized by the bulk single energy trap level

    NASA Astrophysics Data System (ADS)

    Cvikl, B.

    2010-01-01

    The closed solution for the internal electric field and the total charge density derived in the drift-diffusion approximation for the model of a single-layer organic semiconductor structure characterized by a bulk shallow single trap-charge energy level is presented. The solutions for two examples of electric field boundary conditions are tested on room-temperature current density-voltage data of the electron-conducting aluminum/tris(8-hydroxyquinoline) aluminum/calcium structure [W. Brütting et al., Synth. Met. 122, 99 (2001)], for which j_exp ∝ V_a^3.4 within the bias interval 0.4 V ≤ V_a ≤ 7 V. In each case investigated, the apparent electron mobility determined at a given bias is distributed within a given, finite interval of values. The bias dependence of the logarithm of their lower limit, i.e., their minimum values, is found in each case to be, to a good approximation, proportional to the square root of the applied electric field. On account of the bias dependence incorporated in the minimum value of the apparent electron mobility, the spatial distribution of the organic bulk electric field, as well as the total charge density, turns out to be bias independent. The first case investigated is based on the boundary condition of zero electric field at the electron-injection interface. It is shown that for minimum-valued apparent mobilities, a strong but finite accumulation of electrons close to the anode is obtained, which characterizes the inverted space charge limited current (SCLC) effect. The second example refers to the internal electric field allowing for self-adjustment of its boundary values. The total electron charge density is then found typically to be of U shape, which may, depending on the parameters, peak at both or at either Alq3 boundary. It is this example in which the proper SCLC effect is consequently predicted. In each of the above two cases, the calculations predict minimum values of the apparent electron mobility that substantially exceed the corresponding published measurements. For this reason the effect of the drift term alone is additionally investigated. On the basis of the published empirical electron mobilities, and with the diffusion term neglected, it is shown that the steady-state electron current density within the Al/Alq3 (97 nm)/Ca single-layer organic structure may well be pictured within the drift-only interpretation of the charge carriers in the Alq3 organic layer characterized by the single (shallow) trap energy level. In order to arrive at this result, it is necessary that the nonzero electric field, calculated to exist at the electron-injecting Alq3/Ca boundary, be appropriately accounted for in the computation.

  1. External quality-assurance results for the National Atmospheric Deposition Program / National Trends Network and Mercury Deposition Network, 2004

    USGS Publications Warehouse

    Wetherbee, Gregory A.; Latysh, Natalie E.; Greene, Shannon M.

    2006-01-01

    The U.S. Geological Survey (USGS) used five programs to provide external quality-assurance monitoring for the National Atmospheric Deposition Program/National Trends Network (NADP/NTN) and two programs to provide external quality-assurance monitoring for the NADP/Mercury Deposition Network (NADP/MDN) during 2004. An intersite-comparison program was used to estimate accuracy and precision of field-measured pH and specific conductance. The variability and bias of NADP/NTN data attributed to field exposure, sample handling and shipping, and laboratory chemical analysis were estimated using the sample-handling evaluation (SHE), field-audit, and interlaboratory-comparison programs. Overall variability of NADP/NTN data was estimated using a collocated-sampler program. Variability and bias of NADP/MDN data attributed to field exposure, sample handling and shipping, and laboratory chemical analysis were estimated using a system-blank program and an interlaboratory-comparison program. In two intersite-comparison studies, approximately 89 percent of NADP/NTN site operators met the pH measurement accuracy goals, and 94.7 to 97.1 percent of NADP/NTN site operators met the accuracy goals for specific conductance. Field chemistry measurements were discontinued by NADP at the end of 2004. As a result, the USGS intersite-comparison program also was discontinued at the end of 2004. Variability and bias in NADP/NTN data due to sample handling and shipping were estimated from paired-sample concentration differences and specific conductance differences obtained for the SHE program. Median absolute errors (MAEs) equal to or less than 3 percent were indicated for all measured analytes except potassium and hydrogen ion. Positive bias was indicated for most of the measured analytes except for calcium, hydrogen ion, and specific conductance. Negative bias for hydrogen ion and specific conductance indicated loss of hydrogen ion and decreased specific conductance from contact of the sample with the collector bucket. Field-audit results for 2004 indicate dissolved analyte loss in more than one-half of NADP/NTN wet-deposition samples for all analytes except chloride. Concentrations of contaminants also were estimated from field-audit data. On the basis of 2004 field-audit results, at least 25 percent of the 2004 NADP/NTN concentrations for sodium, potassium, and chloride were lower than the maximum sodium, potassium, and chloride contamination likely to be found in 90 percent of the samples with 90-percent confidence. Variability and bias in NADP/NTN data attributed to chemical analysis by the NADP Central Analytical Laboratory (CAL) were comparable to the variability and bias estimated for other laboratories participating in the interlaboratory-comparison program for all analytes. Variability in NADP/NTN ammonium data evident in 2002-03 was reduced substantially during 2004. Sulfate, hydrogen-ion, and specific conductance data reported by CAL during 2004 were positively biased. A significant (α = 0.05) bias was identified for CAL sodium, potassium, ammonium, and nitrate data, but the absolute values of the median differences for these analytes were less than the method detection limits. No detections were reported for CAL analyses of deionized-water samples, indicating that contamination was not a problem for CAL. Control charts show that CAL data were within statistical control during at least 90 percent of 2004.
Most 2004 CAL interlaboratory-comparison results for synthetic wet-deposition solutions were within ±10 percent of the most probable values (MPVs) for solution concentrations except for chloride, nitrate, sulfate, and specific conductance results from one sample in November and one specific conductance result in December. Overall variability of NADP/NTN wet-deposition measurements was estimated during water year 2004 by the median absolute errors for weekly wet-deposition sample concentrations and precipitation measurements for tw

  2. Reducing bias in survival under non-random temporary emigration

    USGS Publications Warehouse

    Peñaloza, Claudia L.; Kendall, William L.; Langtimm, Catherine Ann

    2014-01-01

    Despite intensive monitoring, temporary emigration from the sampling area can induce bias severe enough for managers to discard life-history parameter estimates toward the terminus of the times series (terminal bias). Under random temporary emigration unbiased parameters can be estimated with CJS models. However, unmodeled Markovian temporary emigration causes bias in parameter estimates and an unobservable state is required to model this type of emigration. The robust design is most flexible when modeling temporary emigration, and partial solutions to mitigate bias have been identified, nonetheless there are conditions were terminal bias prevails. Long-lived species with high adult survival and highly variable non-random temporary emigration present terminal bias in survival estimates, despite being modeled with the robust design and suggested constraints. Because this bias is due to uncertainty about the fate of individuals that are undetected toward the end of the time series, solutions should involve using additional information on survival status or location of these individuals at that time. Using simulation, we evaluated the performance of models that jointly analyze robust design data and an additional source of ancillary data (predictive covariate on temporary emigration, telemetry, dead recovery, or auxiliary resightings) in reducing terminal bias in survival estimates. The auxiliary resighting and predictive covariate models reduced terminal bias the most. Additional telemetry data was effective at reducing terminal bias only when individuals were tracked for a minimum of two years. High adult survival of long-lived species made the joint model with recovery data ineffective at reducing terminal bias because of small-sample bias. The naïve constraint model (last and penultimate temporary emigration parameters made equal), was the least efficient, though still able to reduce terminal bias when compared to an unconstrained model. Joint analysis of several sources of data improved parameter estimates and reduced terminal bias. Efforts to incorporate or acquire such data should be considered by researchers and wildlife managers, especially in the years leading up to status assessments of species of interest. Simulation modeling is a very cost effective method to explore the potential impacts of using different sources of data to produce high quality demographic data to inform management.

  3. Effect of shell thickness on the exchange bias blocking temperature and coercivity in Co-CoO core-shell nanoparticles

    NASA Astrophysics Data System (ADS)

    Thomas, S.; Reethu, K.; Thanveer, T.; Myint, M. T. Z.; Al-Harthi, S. H.

    2017-08-01

    The exchange bias blocking temperature distribution of naturally oxidized Co-CoO core-shell nanoparticles exhibits two distinct signatures. These are associated with the existence of two magnetic entities which are responsible for the temperature dependence of an exchange bias field. One is from the CoO grains which undergo thermally activated magnetization reversal. The other is from the disordered spins at the Co-CoO interface which exhibits spin-glass-like behavior. We investigated the oxide shell thickness dependence of the exchange bias effect. For particles with a 3 nm thick CoO shell, the predominant contribution to the temperature dependence of exchange bias is the interfacial spin-glass layer. On increasing the shell thickness to 4 nm, the contribution from the spin-glass layer decreases, while upholding the antiferromagnetic grain contribution. For samples with a 4 nm CoO shell, the exchange bias training was minimal. On the other hand, 3 nm samples exhibited both the training effect and a peak in coercivity at an intermediate set temperature Ta. This is explained using a magnetic core-shell model including disordered spins at the interface.

  4. Nonresponse patterns in the Federal Waterfowl Hunter Questionnaire Survey

    USGS Publications Warehouse

    Pendleton, G.W.

    1992-01-01

    I analyzed data from the 1984 and 1986 Federal Waterfowl Hunter Questionnaire Survey (WHQS) to estimate the rate of return of name and address contact cards, to evaluate the efficiency of the Survey's stratification scheme, and to investigate potential sources of bias due to nonresponse at the contact card and questionnaire stages of the Survey. Median response at the contact card stage was 0.200 in 1984 and 0.208 in 1986, but was lower than 0.100 for many sample post offices. Large portions of the intended sample contributed little to the final estimates in the Survey. Differences in response characteristics between post office size strata were detected, but size strata were confounded with contact card return rates; differences among geographic zones within states were more pronounced. Large biases in harvest and hunter activity due to nonresponse were not found; however, consistent smaller magnitude biases were found. Bias in estimates of the proportion of active hunters was the most pronounced effect of nonresponse. All of the sources of bias detected would produce overestimates of harvest and activity. Redesigning the WHQS, including use of a complete list of waterfowl hunters and resampling nonrespondents, would be needed to reduce nonresponse bias.

  5. A two-stage Monte Carlo approach to the expression of uncertainty with finite sample sizes.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Crowder, Stephen Vernon; Moyer, Robert D.

    2005-05-01

    Proposed supplement I to the GUM outlines a 'propagation of distributions' approach to deriving the distribution of a measurand for any non-linear function and for any set of random inputs. The supplement's proposed Monte Carlo approach assumes that the distributions of the random inputs are known exactly. This implies that the sample sizes are effectively infinite. In this case, the mean of the measurand can be determined precisely using a large number of Monte Carlo simulations. In practice, however, the distributions of the inputs will rarely be known exactly, but must be estimated using possibly small samples. If these approximated distributions are treated as exact, the uncertainty in estimating the mean is not properly taken into account. In this paper, we propose a two-stage Monte Carlo procedure that explicitly takes into account the finite sample sizes used to estimate parameters of the input distributions. We will illustrate the approach with a case study involving the efficiency of a thermistor mount power sensor. The performance of the proposed approach will be compared to the standard GUM approach for finite samples using simple non-linear measurement equations. We will investigate performance in terms of coverage probabilities of derived confidence intervals.
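    The idea can be sketched generically. In the example below (ours, with an invented measurement equation y = x1 * x2 and invented sample statistics, not the report's thermistor-mount case), the outer stage draws plausible input-distribution parameters from their finite-sample distributions and the inner stage propagates them; the resulting interval is wider than the one-stage interval that treats the sample estimates as exact.

```python
# Sketch: one-stage vs two-stage Monte Carlo uncertainty propagation for y = x1 * x2,
# where each input is characterized only by a small sample (mean, SD, n).
import numpy as np

rng = np.random.default_rng(2)
n1, xbar1, s1 = 8, 5.0, 0.4        # invented summary statistics for input x1
n2, xbar2, s2 = 6, 2.0, 0.1        # invented summary statistics for input x2

def one_stage(n_draws=200_000):
    # Treat the sample mean and SD as exact distribution parameters.
    return rng.normal(xbar1, s1, n_draws) * rng.normal(xbar2, s2, n_draws)

def two_stage(n_outer=2000, n_inner=200):
    out = []
    for _ in range(n_outer):
        # Outer stage: draw parameters consistent with the finite samples
        # (t-based draw for the mean, scaled chi-square draw for the variance).
        mu1 = xbar1 + rng.standard_t(n1 - 1) * s1 / np.sqrt(n1)
        mu2 = xbar2 + rng.standard_t(n2 - 1) * s2 / np.sqrt(n2)
        v1 = (n1 - 1) * s1**2 / rng.chisquare(n1 - 1)
        v2 = (n2 - 1) * s2**2 / rng.chisquare(n2 - 1)
        # Inner stage: propagate through the measurement equation.
        out.append(rng.normal(mu1, np.sqrt(v1), n_inner) *
                   rng.normal(mu2, np.sqrt(v2), n_inner))
    return np.concatenate(out)

for name, y in (("one-stage", one_stage()), ("two-stage", two_stage())):
    lo, hi = np.percentile(y, [2.5, 97.5])
    print(f"{name}: mean {y.mean():.3f}, 95% interval ({lo:.3f}, {hi:.3f})")
```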

  6. Quantum And Relativistic Protocols For Secure Multi-Party Computation

    NASA Astrophysics Data System (ADS)

    Colbeck, Roger

    2009-11-01

    After a general introduction, the thesis is divided into four parts. In the first, we discuss the task of coin tossing, principally in order to highlight the effect different physical theories have on security in a straightforward manner, but, also, to introduce a new protocol for non-relativistic strong coin tossing. This protocol matches the security of the best protocol known to date while using a conceptually different approach to achieve the task. In the second part variable bias coin tossing is introduced. This is a variant of coin tossing in which one party secretly chooses one of two biased coins to toss. It is shown that this can be achieved with unconditional security for a specified range of biases, and with cheat-evident security for any bias. We also discuss two further protocols which are conjectured to be unconditionally secure for any bias. The third section looks at other two-party secure computations for which, prior to our work, protocols and no-go theorems were unknown. We introduce a general model for such computations, and show that, within this model, a wide range of functions are impossible to compute securely. We give explicit cheating attacks for such functions. In the final chapter we discuss the task of expanding a private random string, while dropping the usual assumption that the protocol's user trusts her devices. Instead we assume that all quantum devices are supplied by an arbitrarily malicious adversary. We give two protocols that we conjecture securely perform this task. The first allows a private random string to be expanded by a finite amount, while the second generates an arbitrarily large expansion of such a string.

  7. Traits, States, and Attentional Gates: Temperament and Threat Relevance as Predictors of Attentional Bias to Social Threat

    PubMed Central

    Helzer, Erik G.; Connor-Smith, Jennifer K.; Reed, Marjorie A.

    2009-01-01

    This study investigated the influence of situational and dispositional factors on attentional biases toward social threat, and the impact of these attentional biases on distress in a sample of adolescents. Results suggest greater biases for personally-relevant threat cues, as individuals reporting high social stress were vigilant to subliminal social threat cues, but not physical threat cues, and those reporting low social stress showed no attentional biases. Individual differences in fearful temperament and attentional control interacted to influence attentional biases, with fearful temperament predicting biases to supraliminal social threat only for individuals with poor attentional control. Multivariate analyses exploring relations between attentional biases for social threat and symptoms of anxiety and depression revealed that attentional biases alone were rarely related to symptoms. However, biases did interact with social stress, fearful temperament, and attentional control to predict distress. Results are discussed in terms of automatic and effortful cognitive mechanisms underlying threat cue processing. PMID:18791905

  8. Fermi-edge exciton-polaritons in doped semiconductor microcavities with finite hole mass

    NASA Astrophysics Data System (ADS)

    Pimenov, Dimitri; von Delft, Jan; Glazman, Leonid; Goldstein, Moshe

    2017-10-01

    The coupling between a 2D semiconductor quantum well and an optical cavity gives rise to combined light-matter excitations, the exciton-polaritons. These were usually measured when the conduction band is empty, making the single polariton physics a simple single-body problem. The situation is dramatically different in the presence of a finite conduction-band population, where the creation or annihilation of a single exciton involves a many-body shakeup of the Fermi sea. Recent experiments in this regime revealed a strong modification of the exciton-polariton spectrum. Previous theoretical studies concerned with nonzero Fermi energy mostly relied on the approximation of an immobile valence-band hole with infinite mass, which is appropriate for low-mobility samples only; for high-mobility samples, one needs to consider a mobile hole with large but finite mass. To bridge this gap, we present an analytical diagrammatic approach and tackle a model with short-ranged (screened) electron-hole interaction, studying it in two complementary regimes. We find that the finite hole mass has opposite effects on the exciton-polariton spectra in the two regimes: in the first, where the Fermi energy is much smaller than the exciton binding energy, excitonic features are enhanced by the finite mass. In the second regime, where the Fermi energy is much larger than the exciton binding energy, finite mass effects cut off the excitonic features in the polariton spectra, in qualitative agreement with recent experiments.

  9. Evaluation of quality-control data collected by the U.S. Geological Survey for routine water-quality activities at the Idaho National Laboratory, Idaho, 1996–2001

    USGS Publications Warehouse

    Rattray, Gordon W.

    2012-01-01

    The U.S. Geological Survey, in cooperation with the U.S. Department of Energy, collects surface water and groundwater samples at and near the Idaho National Laboratory as part of a routine, site-wide, water-quality monitoring program. Quality-control samples are collected as part of the program to ensure and document the quality of environmental data. From 1996 to 2001, quality-control samples consisting of 204 replicates and 27 blanks were collected at sampling sites. Paired measurements from replicates were used to calculate variability (as reproducibility and reliability) from sample collection and analysis of radiochemical, chemical, and organic constituents. Measurements from field and equipment blanks were used to estimate the potential contamination bias of constituents. The reproducibility of measurements of constituents was calculated from paired measurements as the normalized absolute difference (NAD) or the relative standard deviation (RSD). The NADs and RSDs, as well as paired measurements with censored or estimated concentrations for which NADs and RSDs were not calculated, were compared to specified criteria to determine if the paired measurements had acceptable reproducibility. If the percentage of paired measurements with acceptable reproducibility for a constituent was greater than or equal to 90 percent, then the reproducibility for that constituent was considered acceptable. The percentage of paired measurements with acceptable reproducibility was greater than or equal to 90 percent for all constituents except orthophosphate (89 percent), zinc (80 percent), hexavalent chromium (53 percent), and total organic carbon (TOC; 38 percent). The low reproducibility for orthophosphate and zinc was attributed to calculation of RSDs for replicates with low concentrations of these constituents. The low reproducibility for hexavalent chromium and TOC was attributed to the inability to preserve hexavalent chromium in water samples and high variability with the analytical method for TOC. The reliability of measurements of constituents was estimated from pooled RSDs that were calculated for discrete concentration ranges for each constituent. Pooled RSDs of 15 to 33 percent were calculated for low concentrations of gross-beta radioactivity, strontium-90, ammonia, nitrite, orthophosphate, nickel, selenium, zinc, tetrachloroethene, and toluene. Lower pooled RSDs of 0 to 12 percent were calculated for all other concentration ranges of these constituents, and for all other constituents, except for one concentration range for gross-beta radioactivity, chloride, and nitrate + nitrite; two concentration ranges for hexavalent chromium; and TOC. Pooled RSDs for the 50 to 60 picocuries per liter concentration range of gross-beta radioactivity (reported as cesium-137) and the 10 to 60 milligrams per liter (mg/L) concentration range of nitrate + nitrite (reported as nitrogen [N]) were 17 percent. Chloride had a pooled RSD of 14 percent for the 20 to less than 60 mg/L concentration range. High pooled RSDs of 40 and 51 percent were calculated for two concentration ranges for hexavalent chromium and of 60 percent for TOC. 
Measurements from (1) field blanks were used to estimate the potential bias associated with environmental samples from sample collection and analysis, (2) equipment blanks were used to estimate the potential bias from cross contamination of samples collected from wells where portable sampling equipment was used, and (3) a source-solution blank was used to verify that the deionized water source-solution was free of the constituents of interest. If more than one measurement was available, the bias was estimated using order statistics and the binomial probability distribution. The source-solution blank had a detectable concentration of hexavalent chromium of 2 micrograms per liter. If this bias was from a source other than the source solution, then about 84 percent of the 117 hexavalent chromium measurements from environmental samples could have a bias of 10 percent or more. Of the 14 field blanks that were collected, only chloride (0.2 milligrams per liter) and ammonia (0.03 milligrams per liter as nitrogen), in one blank each, had detectable concentrations. With an estimated confidence level of 95 percent, at least 80 percent of the 1,987 chloride concentrations measured from all environmental samples had a potential bias of less than 8 percent. The ammonia bias, which may have occurred at the analytical laboratory, could produce a potential bias of 5-100 percent in eight potentially affected ammonia measurements. Of the 11 equipment blanks that were collected, chloride was detected in 4 of these blanks, sodium in 3 blanks, and sulfate and hexavalent chromium were each detected in 1 blank. The concentration of hexavalent chromium in the equipment blank was the same concentration as in the source-solution blank collected on the same day, which indicates that the hexavalent chromium in the equipment blank is probably from a source other than the portable sampling equipment, such as the sample bottles or the source-solution water itself. The potential bias for chloride, sodium, and sulfate measurements was estimated for environmental samples that were collected using portable sampling equipment. For chloride, it was estimated with 93 percent confidence that at least 80 percent of the measurements had a bias of less than 18 percent. For sodium and sulfate, it was estimated with 91 percent confidence that at least 70 percent of the measurements had a bias of less than 12 and 5 percent, respectively.
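
    As a rough illustration of the replicate-based variability measures described above, the sketch below computes the relative standard deviation of individual replicate pairs and a pooled RSD for a concentration range. The pooling convention (root mean square of pair RSDs) is one common choice, and the chloride values are hypothetical rather than the report's data.

```python
import numpy as np

def pair_rsd(c1, c2):
    """Relative standard deviation (percent) of one replicate pair."""
    pair = np.array([c1, c2], dtype=float)
    return 100.0 * pair.std(ddof=1) / pair.mean()

def pooled_rsd(pairs):
    """Pool pair RSDs over a concentration range (root mean square of pair RSDs)."""
    rsds = np.array([pair_rsd(a, b) for a, b in pairs])
    return float(np.sqrt(np.mean(rsds ** 2)))

# Hypothetical chloride replicate pairs, in milligrams per liter.
pairs = [(21.0, 20.4), (35.2, 33.9), (58.1, 55.0)]
print([round(pair_rsd(a, b), 1) for a, b in pairs], round(pooled_rsd(pairs), 1))
```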

  10. A procedure for removing the effect of response bias errors from waterfowl hunter questionnaire responses

    USGS Publications Warehouse

    Atwood, E.L.

    1958-01-01

    Response bias errors are studied by comparing questionnaire responses from waterfowl hunters using four large public hunting areas with actual hunting data from these areas during two hunting seasons. To the extent that the data permit, the sources of the error in the responses were studied and the contribution of each type to the total error was measured. Response bias errors, including both prestige and memory bias, were found to be very large as compared to non-response and sampling errors. Good fits were obtained with the seasonal kill distribution of the actual hunting data and the negative binomial distribution and a good fit was obtained with the distribution of total season hunting activity and the semi-logarithmic curve. A comparison of the actual seasonal distributions with the questionnaire response distributions revealed that the prestige and memory bias errors are both positive. The comparisons also revealed the tendency for memory bias errors to occur at digit frequencies divisible by five and for prestige bias errors to occur at frequencies which are multiples of the legal daily bag limit. A graphical adjustment of the response distributions was carried out by developing a smooth curve from those frequency classes not included in the predictable biased frequency classes referred to above. Group averages were used in constructing the curve, as suggested by Ezekiel [1950]. The efficiency of the technique described for reducing response bias errors in hunter questionnaire responses on seasonal waterfowl kill is high in large samples. The graphical method is not as efficient in removing response bias errors in hunter questionnaire responses on seasonal hunting activity where an average of 60 percent was removed.

  11. Nonparametric change point estimation for survival distributions with a partially constant hazard rate.

    PubMed

    Brazzale, Alessandra R; Küchenhoff, Helmut; Krügel, Stefanie; Schiergens, Tobias S; Trentzsch, Heiko; Hartl, Wolfgang

    2018-04-05

    We present a new method for estimating a change point in the hazard function of a survival distribution assuming a constant hazard rate after the change point and a decreasing hazard rate before the change point. Our method is based on fitting a stump regression to p values for testing hazard rates in small time intervals. We present three real data examples describing survival patterns of severely ill patients, whose excess mortality rates are known to persist far beyond hospital discharge. For designing survival studies in these patients and for the definition of hospital performance metrics (e.g. mortality), it is essential to define adequate and objective end points. The reliable estimation of a change point will help researchers to identify such end points. By precisely knowing this change point, clinicians can distinguish between the acute phase with high hazard (time elapsed after admission and before the change point was reached), and the chronic phase (time elapsed after the change point) in which hazard is fairly constant. We show in an extensive simulation study that maximum likelihood estimation is not robust in this setting, and we evaluate our new estimation strategy including bootstrap confidence intervals and finite sample bias correction.
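
    The stump-regression idea can be sketched as follows: compute a p-value in each small time interval for the hypothesis that the local hazard equals the late, constant hazard, then fit a one-split step function (a "stump") to those p-values and take the split as the change-point estimate. Everything below (the simulated survival times, the interval width, the plug-in late hazard, and the Poisson test) is illustrative, not the authors' implementation, which also includes bootstrap confidence intervals and a finite-sample bias correction.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical survival times: decreasing hazard before day 60, constant after.
t_early = 30.0 * rng.weibull(0.6, size=600)
t_late = 60.0 + rng.exponential(200.0, size=400)
times = np.concatenate([t_early[t_early < 60.0], t_late])

# Step 1: a p-value per interval for "local hazard equals the late-phase hazard".
edges = np.arange(0.0, 360.0, 10.0)
late_rate = 1.0 / 200.0                 # plug-in estimate of the late hazard
mids, pvals = [], []
for lo, hi in zip(edges[:-1], edges[1:]):
    events = np.sum((times >= lo) & (times < hi))
    exposure = np.sum(np.clip(times, lo, hi) - lo)   # person-time spent in [lo, hi)
    if exposure > 0:
        # One-sided Poisson test: more events than expected under the late rate?
        pvals.append(stats.poisson.sf(events - 1, late_rate * exposure))
        mids.append(0.5 * (lo + hi))
mids, pvals = np.array(mids), np.array(pvals)

# Step 2: fit a stump (one-split step function) to the p-values; the split that
# best separates the small early p-values from the later ones is the estimate.
best_tau, best_sse = None, np.inf
for tau in mids[1:]:
    left, right = pvals[mids < tau], pvals[mids >= tau]
    sse = np.sum((left - left.mean()) ** 2) + np.sum((right - right.mean()) ** 2)
    if sse < best_sse:
        best_tau, best_sse = tau, sse
print("estimated change point (days):", best_tau)
```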

  12. Electrical manipulation of edge states in graphene and the effect on quantum Hall transport

    NASA Astrophysics Data System (ADS)

    Ostahie, B.; Niţă, M.; Aldea, A.

    2015-04-01

    We investigate the properties of Dirac electrons in a finite graphene sample under a perpendicular magnetic field that emerge when an in-plane electric bias is also applied. The numerical analysis of the Hofstadter spectrum and of the edge-type wave functions evidences the presence of shortcut edge states that appear under the influence of the electric field. The states are characterized by a specific spatial distribution, which only partially follows the perimeter, and exhibit ridges that connect opposite sides of the graphene plaquette. Two kinds of such states have been found in different regions of the spectrum, and their particular spatial localization is shown along with the diamagnetic moments that reveal their chirality. By simulating a four-lead Hall device, we investigate the transport properties and observe unconventional plateaus of the integer quantum Hall effect that are associated with the presence of the shortcut edge states. We show the contributions of the novel states to the conductance matrix that determine the new transport properties. The shortcut edge states resulting from the splitting of the n = 0 Landau level represent a special case, giving rise to nontrivial transverse and longitudinal resistance.

  13. Worry or craving? A selective review of evidence for food-related attention biases in obese individuals, eating-disorder patients, restrained eaters and healthy samples.

    PubMed

    Werthmann, Jessica; Jansen, Anita; Roefs, Anne

    2015-05-01

    Living in an 'obesogenic' environment poses a serious challenge for weight maintenance. However, many people are able to maintain a healthy weight, indicating that not everybody is equally susceptible to the temptations of this food environment. The way in which someone perceives and reacts to food cues, that is, cognitive processes, could underlie differences in susceptibility. An attention bias for food could be such a cognitive factor that contributes to overeating. However, an attention bias for food has also been implicated in restrained eating and eating-disorder symptomatology. The primary aim of the present review was to determine whether an attention bias for food is specifically related to obesity while also reviewing evidence for attention biases in eating-disorder patients, restrained eaters and healthy-weight individuals. Another aim was to systematically examine how selective attention for food relates (causally) to eating behaviour. Current empirical evidence on attention bias for food within obese samples, eating-disorder patients and, though to a lesser extent, restrained eaters is contradictory. However, present experimental studies provide relatively consistent evidence that an attention bias for food contributes to subsequent food intake. This review highlights the need to distinguish not only between different (temporal) attention bias components, but also to take different motivations (craving v. worry) and their impact on attentional processing into account. Overall, the current state of research suggests that biased attention could be one important cognitive mechanism by which the food environment tempts us into overeating.

  14. Double propensity-score adjustment: A solution to design bias or bias due to incomplete matching.

    PubMed

    Austin, Peter C

    2017-02-01

    Propensity-score matching is frequently used to reduce the effects of confounding when using observational data to estimate the effects of treatments. Matching allows one to estimate the average effect of treatment in the treated. Rosenbaum and Rubin coined the term "bias due to incomplete matching" to describe the bias that can occur when some treated subjects are excluded from the matched sample because no appropriate control subject was available. The presence of incomplete matching raises important questions around the generalizability of estimated treatment effects to the entire population of treated subjects. We describe an analytic solution to address the bias due to incomplete matching. Our method is based on using optimal or nearest neighbor matching, rather than caliper matching (which frequently results in the exclusion of some treated subjects). Within the sample matched on the propensity score, covariate adjustment using the propensity score is then employed to impute missing potential outcomes under lack of treatment for each treated subject. Using Monte Carlo simulations, we found that the proposed method resulted in estimates of treatment effect that were essentially unbiased. This method resulted in decreased bias compared to caliper matching alone and compared to either optimal matching or nearest neighbor matching alone. Caliper matching alone resulted in design bias or bias due to incomplete matching, while optimal matching or nearest neighbor matching alone resulted in bias due to residual confounding. The proposed method also tended to result in estimates with decreased mean squared error compared to when caliper matching was used.
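
    A compact sketch of the two ingredients described above, using simulated data rather than the paper's Monte Carlo design: propensity scores are estimated by logistic regression, every treated subject is matched to its nearest control (no caliper, so none are excluded), and a regression of the matched-control outcomes on the propensity score imputes the untreated potential outcome for each treated subject. The data-generating model and the use of scikit-learn are illustrative choices, not the paper's code.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression, LinearRegression
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(1)

# Hypothetical observational data with three confounders.
n = 2000
X = rng.normal(size=(n, 3))
p_treat = 1.0 / (1.0 + np.exp(-(0.5 * X[:, 0] - 0.4 * X[:, 1])))
Z = rng.binomial(1, p_treat)                                   # treatment indicator
Y = 2.0 * Z + X @ np.array([1.0, 0.5, -0.3]) + rng.normal(size=n)

# Step 1: estimate the propensity score.
ps = LogisticRegression().fit(X, Z).predict_proba(X)[:, 1]

# Step 2: 1:1 nearest-neighbour matching on the propensity score
# (no caliper, so no treated subject is excluded).
treated, control = np.where(Z == 1)[0], np.where(Z == 0)[0]
nn = NearestNeighbors(n_neighbors=1).fit(ps[control].reshape(-1, 1))
_, idx = nn.kneighbors(ps[treated].reshape(-1, 1))
matched_control = control[idx.ravel()]

# Step 3: within the matched sample, regress control outcomes on the propensity
# score and impute the untreated potential outcome for each treated subject
# (the "double adjustment").
reg = LinearRegression().fit(ps[matched_control].reshape(-1, 1), Y[matched_control])
y0_hat = reg.predict(ps[treated].reshape(-1, 1))

att = np.mean(Y[treated] - y0_hat)
print("estimated ATT:", round(att, 2))   # the simulated true effect is 2.0
```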

  15. Double propensity-score adjustment: A solution to design bias or bias due to incomplete matching

    PubMed Central

    2016-01-01

    Propensity-score matching is frequently used to reduce the effects of confounding when using observational data to estimate the effects of treatments. Matching allows one to estimate the average effect of treatment in the treated. Rosenbaum and Rubin coined the term “bias due to incomplete matching” to describe the bias that can occur when some treated subjects are excluded from the matched sample because no appropriate control subject was available. The presence of incomplete matching raises important questions around the generalizability of estimated treatment effects to the entire population of treated subjects. We describe an analytic solution to address the bias due to incomplete matching. Our method is based on using optimal or nearest neighbor matching, rather than caliper matching (which frequently results in the exclusion of some treated subjects). Within the sample matched on the propensity score, covariate adjustment using the propensity score is then employed to impute missing potential outcomes under lack of treatment for each treated subject. Using Monte Carlo simulations, we found that the proposed method resulted in estimates of treatment effect that were essentially unbiased. This method resulted in decreased bias compared to caliper matching alone and compared to either optimal matching or nearest neighbor matching alone. Caliper matching alone resulted in design bias or bias due to incomplete matching, while optimal matching or nearest neighbor matching alone resulted in bias due to residual confounding. The proposed method also tended to result in estimates with decreased mean squared error compared to when caliper matching was used. PMID:25038071

  16. Evaluating cost-efficiency and accuracy of hunter harvest survey designs

    USGS Publications Warehouse

    Lukacs, P.M.; Gude, J.A.; Russell, R.E.; Ackerman, B.B.

    2011-01-01

    Effective management of harvested wildlife often requires accurate estimates of the number of animals harvested annually by hunters. A variety of techniques exist to obtain harvest data, such as hunter surveys, check stations, mandatory reporting requirements, and voluntary reporting of harvest. Agencies responsible for managing harvested wildlife such as deer (Odocoileus spp.), elk (Cervus elaphus), and pronghorn (Antilocapra americana) are challenged with balancing the cost of data collection versus the value of the information obtained. We compared precision, bias, and relative cost of several common strategies, including hunter self-reporting and random sampling, for estimating hunter harvest using a realistic set of simulations. Self-reporting with a follow-up survey of hunters who did not report produces the best estimate of harvest in terms of precision and bias, but it is also, by far, the most expensive technique. Self-reporting with no follow-up survey risks very large bias in harvest estimates, and the cost increases with increased response rate. Probability-based sampling provides a substantial cost savings, though accuracy can be affected by nonresponse bias. We recommend stratified random sampling with a calibration estimator used to reweight the sample based on the proportions of hunters responding in each covariate category as the best option for balancing cost and accuracy. © 2011 The Wildlife Society.
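
    The recommended design, probability sampling with a calibration estimator that reweights respondents to known frame proportions, can be sketched as below. The licence-type covariate, response probabilities, and harvest distributions are invented for illustration and are not taken from the paper's simulations.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(3)

# Hypothetical frame of licence holders with one covariate used for calibration.
frame = pd.DataFrame({"licence": rng.choice(["resident", "nonresident"], size=10000, p=[0.8, 0.2])})
frame["harvest"] = np.where(frame["licence"] == "resident",
                            rng.poisson(1.5, len(frame)),
                            rng.poisson(3.0, len(frame)))

# Random sample of hunters, with differential (nonignorable-looking) response.
sample = frame.sample(n=1500, random_state=0)
resp_prob = np.where(sample["licence"] == "resident", 0.45, 0.70)
respondents = sample[rng.random(len(sample)) < resp_prob]

# Calibration weights: frame counts divided by respondent counts per category,
# so the weighted respondent mix matches the known frame proportions.
frame_counts = frame["licence"].value_counts()
resp_counts = respondents["licence"].value_counts()
weights = respondents["licence"].map(frame_counts / resp_counts)

naive_total = respondents["harvest"].mean() * len(frame)     # ignores nonresponse bias
calibrated_total = (weights * respondents["harvest"]).sum()
print(int(naive_total), int(calibrated_total), int(frame["harvest"].sum()))
```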

  17. Potential sources of analytical bias and error in selected trace element data-quality analyses

    USGS Publications Warehouse

    Paul, Angela P.; Garbarino, John R.; Olsen, Lisa D.; Rosen, Michael R.; Mebane, Christopher A.; Struzeski, Tedmund M.

    2016-09-28

    Potential sources of analytical bias and error associated with laboratory analyses for selected trace elements where concentrations were greater in filtered samples than in paired unfiltered samples were evaluated by U.S. Geological Survey (USGS) Water Quality Specialists in collaboration with the USGS National Water Quality Laboratory (NWQL) and the Branch of Quality Systems (BQS). Causes for trace-element concentrations in filtered samples to exceed those in associated unfiltered samples have been attributed to variability in analytical measurements, analytical bias, sample contamination either in the field or laboratory, and (or) sample-matrix chemistry. These issues have not only been attributed to data generated by the USGS NWQL but have also been observed in data generated by other laboratories. This study continues the evaluation of potential analytical bias and error resulting from matrix chemistry and instrument variability by evaluating the performance of seven selected trace elements in paired filtered and unfiltered surface-water and groundwater samples collected from 23 sampling sites of varying chemistries from six States, matrix spike recoveries, and standard reference materials. Filtered and unfiltered samples have been routinely analyzed on separate inductively coupled plasma-mass spectrometry instruments. Unfiltered samples are treated with hydrochloric acid (HCl) during an in-bottle digestion procedure; filtered samples are not routinely treated with HCl as part of the laboratory analytical procedure. To evaluate the influence of HCl on different sample matrices, an aliquot of the filtered samples was treated with HCl. The addition of HCl did little to differentiate the analytical results between filtered samples treated with HCl from those samples left untreated; however, there was a small, but noticeable, decrease in the number of instances where a particular trace-element concentration was greater in a filtered sample than in the associated unfiltered sample for all trace elements except selenium. Accounting for the small dilution effect (2 percent) from the addition of HCl, as required for the in-bottle digestion procedure for unfiltered samples, may be one step toward decreasing the number of instances where trace-element concentrations are greater in filtered samples than in paired unfiltered samples. The laboratory analyses of arsenic, cadmium, lead, and zinc did not appear to be influenced by instrument biases. These trace elements showed similar results on both instruments used to analyze filtered and unfiltered samples. The results for aluminum and molybdenum tended to be higher on the instrument designated to analyze unfiltered samples; the results for selenium tended to be lower. The matrices used to prepare calibration standards were different for the two instruments. The instrument designated for the analysis of unfiltered samples was calibrated using standards prepared in a nitric:hydrochloric acid (HNO3:HCl) matrix. The instrument designated for the analysis of filtered samples was calibrated using standards prepared in a matrix acidified only with HNO3. Matrix chemistry may have influenced the responses of aluminum, molybdenum, and selenium on the two instruments.
The best analytical practice is to calibrate instruments using calibration standards prepared in matrices that reasonably match those of the samples being analyzed. Filtered and unfiltered samples were spiked over a range of trace-element concentrations from less than 1 to 58 times ambient concentrations. The greater the magnitude of the trace-element spike concentration relative to the ambient concentration, the greater the likelihood spike recoveries will be within data control guidelines (80–120 percent). Greater variability in spike recoveries occurred when trace elements were spiked at concentrations less than 10 times the ambient concentration. Spike recoveries that were considerably lower than 90 percent often were associated with spiked concentrations substantially lower than what was present in the ambient sample. Because the main purpose of spiking natural water samples with known quantities of a particular analyte is to assess possible matrix effects on analytical results, the results of this study stress the importance of spiking samples at concentrations that are reasonably close to what is expected but sufficiently high to exceed analytical variability. Generally, differences in spike recovery results between paired filtered and unfiltered samples were minimal when samples were analyzed on the same instrument. Analytical results for trace-element concentrations in ambient filtered and unfiltered samples greater than 10 and 40 μg/L, respectively, were within the data-quality objective for precision of ±25 percent. Ambient trace-element concentrations in filtered samples greater than the long-term method detection limits but less than 10 μg/L failed to meet the data-quality objective for precision for at least one trace element in about 54 percent of the samples. Similarly, trace-element concentrations in unfiltered samples greater than the long-term method detection limits but less than 40 μg/L failed to meet this data-quality objective for at least one trace-element analysis in about 58 percent of the samples. Although aluminum and zinc were particularly problematic, limited re-analyses of filtered and unfiltered samples appeared to improve otherwise failed analytical precision. The evaluation of analytical bias using standard reference materials indicates a slight low bias in the results for arsenic, cadmium, selenium, and zinc. Aluminum and molybdenum show signs of high bias. There was no observed bias, as determined using the standard reference materials, during the analysis of lead.
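
    For reference, percent spike recovery in this kind of assessment is computed from the spiked-sample result, the ambient (unspiked) result, and the known amount added. A minimal sketch with hypothetical numbers:

```python
def spike_recovery(spiked_conc, ambient_conc, spike_added):
    """Percent recovery of a known spike added to a natural-water sample."""
    return 100.0 * (spiked_conc - ambient_conc) / spike_added

# Hypothetical values in micrograms per liter (not from the USGS study).
print(round(spike_recovery(spiked_conc=12.4, ambient_conc=2.1, spike_added=10.0), 1))
# A result inside 80-120 percent would fall within the data control guidelines above.
```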

  18. Online Reinforcement Learning Using a Probability Density Estimation.

    PubMed

    Agostini, Alejandro; Celaya, Enric

    2017-01-01

    Function approximation in online, incremental, reinforcement learning needs to deal with two fundamental problems: biased sampling and nonstationarity. In this kind of task, biased sampling occurs because samples are obtained from specific trajectories dictated by the dynamics of the environment and are usually concentrated in particular convergence regions, which in the long term tend to dominate the approximation in the less sampled regions. The nonstationarity comes from the recursive nature of the estimations typical of temporal difference methods. This nonstationarity has a local profile, varying not only along the learning process but also along different regions of the state space. We propose to deal with these problems using an estimation of the probability density of samples represented with a Gaussian mixture model. To deal with the nonstationarity problem, we use the common approach of introducing a forgetting factor in the updating formula. However, instead of using the same forgetting factor for the whole domain, we make it dependent on the local density of samples, which we use to estimate the nonstationarity of the function at any given input point. To address the biased sampling problem, the forgetting factor applied to each mixture component is modulated according to the new information provided in the updating, rather than forgetting depending only on time, thus avoiding undesired distortions of the approximation in less sampled regions.
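
    A toy 1-D sketch loosely inspired by the idea of modulating the forgetting factor by the new information a component receives (it is not the authors' GMM-based algorithm): each fixed Gaussian component keeps a value estimate, and the forgetting applied on an update is scaled by that component's responsibility for the incoming sample, so rarely sampled regions are barely touched by heavily sampled ones. All names and constants are illustrative.

```python
import numpy as np

rng = np.random.default_rng(7)

# Fixed 1-D Gaussian components with one value estimate each.
centers = np.linspace(0.0, 1.0, 5)
sigma = 0.15
values = np.zeros_like(centers)
base_forget = 0.05

def responsibilities(x):
    w = np.exp(-0.5 * ((x - centers) / sigma) ** 2)
    return w / w.sum()

for _ in range(20000):
    x = rng.beta(5, 2)                 # biased sampling: most samples fall near x = 1
    target = np.sin(2 * np.pi * x)     # stand-in for a TD-style target
    r = responsibilities(x)
    lam = base_forget * r              # information-modulated forgetting factor
    values = (1.0 - lam) * values + lam * target

print(np.round(values, 2), np.round(np.sin(2 * np.pi * centers), 2))
```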

  19. Medical School Factors Associated with Changes in Implicit and Explicit Bias Against Gay and Lesbian People among 3492 Graduating Medical Students.

    PubMed

    Phelan, Sean M; Burke, Sara E; Hardeman, Rachel R; White, Richard O; Przedworski, Julia; Dovidio, John F; Perry, Sylvia P; Plankey, Michael; Cunningham, Brooke A; Finstad, Deborah; Yeazel, Mark W; van Ryn, Michelle

    2017-11-01

    Implicit and explicit bias among providers can influence the quality of healthcare. Efforts to address sexual orientation bias in new physicians are hampered by a lack of knowledge of school factors that influence bias among students. To determine whether medical school curriculum, role modeling, diversity climate, and contact with sexual minorities predict bias among graduating students against gay and lesbian people. Prospective cohort study. A sample of 4732 first-year medical students was recruited from a stratified random sample of 49 US medical schools in the fall of 2010 (81% response; 55% of eligible), of which 94.5% (4473) identified as heterosexual. Seventy-eight percent of baseline respondents (3492) completed a follow-up survey in their final semester (spring 2014). Medical school predictors included formal curriculum, role modeling, diversity climate, and contact with sexual minorities. Outcomes were year 4 implicit and explicit bias against gay men and lesbian women, adjusted for bias at year 1. In multivariate models, lower explicit bias against gay men and lesbian women was associated with more favorable contact with LGBT faculty, residents, students, and patients, and perceived skill and preparedness for providing care to LGBT patients. Greater explicit bias against lesbian women was associated with discrimination reported by sexual minority students (b = 1.43 [0.16, 2.71]; p = 0.03). Lower implicit sexual orientation bias was associated with more frequent contact with LGBT faculty, residents, students, and patients (b = -0.04 [-0.07, -0.01]; p = 0.008). Greater implicit bias was associated with more faculty role modeling of discriminatory behavior (b = 0.34 [0.11, 0.57]; p = 0.004). Medical schools may reduce bias against sexual minority patients by reducing negative role modeling, improving the diversity climate, and improving student preparedness to care for this population.

  20. Limited sampling hampers “big data” estimation of species richness in a tropical biodiversity hotspot

    PubMed Central

    Engemann, Kristine; Enquist, Brian J; Sandel, Brody; Boyle, Brad; Jørgensen, Peter M; Morueta-Holme, Naia; Peet, Robert K; Violle, Cyrille; Svenning, Jens-Christian

    2015-01-01

    Macro-scale species richness studies often use museum specimens as their main source of information. However, such datasets are often strongly biased due to variation in sampling effort in space and time. These biases may strongly affect diversity estimates and may, thereby, obstruct solid inference on the underlying diversity drivers, as well as mislead conservation prioritization. In recent years, this has resulted in an increased focus on developing methods to correct for sampling bias. In this study, we use sample-size-correcting methods to examine patterns of tropical plant diversity in Ecuador, one of the most species-rich and climatically heterogeneous biodiversity hotspots. Species richness estimates were calculated based on 205,735 georeferenced specimens of 15,788 species using the Margalef diversity index, the Chao estimator, the second-order Jackknife and Bootstrapping resampling methods, and Hill numbers and rarefaction. Species richness was heavily correlated with sampling effort; only rarefaction was able to remove this effect, and we therefore recommend this method for estimation of species richness with “big data” collections. PMID:25692000
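
    Two of the estimators named above are easy to state in code. The sketch below computes the bias-corrected Chao1 estimate and an analytical rarefaction point from a species-abundance vector; the counts are made up for illustration and are not the Ecuador dataset.

```python
import numpy as np
from scipy.special import comb

def chao1(abundances):
    """Bias-corrected Chao1 richness estimate from species abundance counts."""
    a = np.asarray(abundances)
    s_obs = np.sum(a > 0)
    f1, f2 = np.sum(a == 1), np.sum(a == 2)     # singletons and doubletons
    return s_obs + f1 * (f1 - 1) / (2.0 * (f2 + 1))

def rarefied_richness(abundances, m):
    """Expected species count in a random subsample of m individuals."""
    a = np.asarray(abundances)
    n = a.sum()
    p_absent = comb(n - a, m) / comb(n, m)      # probability a species is missed
    return float(np.sum(1.0 - p_absent))

# Hypothetical specimen counts per species (heavily uneven, like herbarium data).
counts = [25, 10, 7, 4, 2, 2, 1, 1, 1, 1]
print(round(chao1(counts), 1), round(rarefied_richness(counts, 20), 1))
```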

  1. Limited sampling hampers "big data" estimation of species richness in a tropical biodiversity hotspot.

    PubMed

    Engemann, Kristine; Enquist, Brian J; Sandel, Brody; Boyle, Brad; Jørgensen, Peter M; Morueta-Holme, Naia; Peet, Robert K; Violle, Cyrille; Svenning, Jens-Christian

    2015-02-01

    Macro-scale species richness studies often use museum specimens as their main source of information. However, such datasets are often strongly biased due to variation in sampling effort in space and time. These biases may strongly affect diversity estimates and may, thereby, obstruct solid inference on the underlying diversity drivers, as well as mislead conservation prioritization. In recent years, this has resulted in an increased focus on developing methods to correct for sampling bias. In this study, we use sample-size-correcting methods to examine patterns of tropical plant diversity in Ecuador, one of the most species-rich and climatically heterogeneous biodiversity hotspots. Species richness estimates were calculated based on 205,735 georeferenced specimens of 15,788 species using the Margalef diversity index, the Chao estimator, the second-order Jackknife and Bootstrapping resampling methods, and Hill numbers and rarefaction. Species richness was heavily correlated with sampling effort; only rarefaction was able to remove this effect, and we therefore recommend this method for estimation of species richness with "big data" collections.

  2. Normalization Approaches for Removing Systematic Biases Associated with Mass Spectrometry and Label-Free Proteomics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Callister, Stephen J.; Barry, Richard C.; Adkins, Joshua N.

    2006-02-01

    Central tendency, linear regression, locally weighted regression, and quantile techniques were investigated for normalization of peptide abundance measurements obtained from high-throughput liquid chromatography-Fourier transform ion cyclotron resonance mass spectrometry (LC-FTICR MS). Arbitrary abundances of peptides were obtained from three sample sets, including a standard protein sample, two Deinococcus radiodurans samples taken from different growth phases, and two mouse striatum samples from control and methamphetamine-stressed mice (strain C57BL/6). The selected normalization techniques were evaluated in both the absence and presence of biological variability by estimating extraneous variability prior to and following normalization. Prior to normalization, replicate runs from each sample set were observed to be statistically different, while following normalization replicate runs were no longer statistically different. Although all techniques reduced systematic bias, assigned ranks among the techniques revealed significant trends. For most LC-FTICR MS analyses, linear regression normalization ranked either first or second among the four techniques, suggesting that this technique was more generally suitable for reducing systematic biases.
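
    As a small illustration of one of the techniques compared (linear-regression normalization), the sketch below fits a straight line to the log-abundance differences between two runs as a function of average intensity and subtracts the fit. The simulated abundances stand in for LC-FTICR MS peptide measurements and are not the study's data.

```python
import numpy as np

rng = np.random.default_rng(5)

# Simulated log2 peptide abundances: a reference run and a second run carrying
# an intensity-dependent systematic bias plus random noise.
ref = rng.normal(20.0, 2.0, size=500)
run_b = 0.9 * ref + 2.5 + rng.normal(0.0, 0.3, size=500)

# Linear-regression normalization in MA coordinates: regress the difference
# M = run_b - ref on the average intensity A and remove the fitted trend.
A = 0.5 * (run_b + ref)
M = run_b - ref
slope, intercept = np.polyfit(A, M, deg=1)
run_b_norm = run_b - (slope * A + intercept)

# Mean systematic difference before and after normalization.
print(round(np.mean(M), 2), round(np.mean(run_b_norm - ref), 2))
```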

  3. Quantization and anomalous structures in the conductance of Si/SiGe quantum point contacts

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pock, J. F. von; Salloch, D.; Qiao, G.

    2016-04-07

    Quantum point contacts (QPCs) are fabricated on modulation-doped Si/SiGe heterostructures and ballistic transport is studied at low temperatures. We observe quantized conductance with subband separations up to 4 meV and anomalies in the first conductance plateau at 4e²/h. At a temperature of T = 22 mK in the linear transport regime, a weak anomalous kink structure arises close to 0.5(4e²/h), which develops into a distinct plateau-like structure as temperature is raised up to T = 4 K. Under magnetic field parallel to the wire up to B = 14 T, the anomaly evolves into the Zeeman spin-split level at 0.5(4e²/h), resembling the '0.7 anomaly' in GaAs/AlGaAs QPCs. Additionally, a zero-bias anomaly (ZBA) is observed in nonlinear transport spectroscopy. At T = 22 mK, a parallel magnetic field splits the ZBA peak into two peaks. At B = 0, elevated temperatures lead to similar splitting, which differs from the behavior of ZBAs in GaAs/AlGaAs QPCs. Under finite dc bias, the differential resistance exhibits additional plateaus approximately at 0.8(4e²/h) and 0.2(4e²/h), known as the '0.85 anomaly' and '0.25 anomaly' in GaAs/AlGaAs QPCs. Unlike the first regular plateau at 4e²/h, the 0.2(4e²/h) plateau is insensitive to dc bias voltage up to at least V_DS = 80 mV, to in-plane magnetic fields up to B = 15 T, and to elevated temperatures up to T = 25 K. We interpret this effect as due to pinching off one of the reservoirs close to the QPC. We do not see any indication of lifting of the valley degeneracy in our samples.

  4. Quality-assurance design applied to an assessment of agricultural pesticides in ground water from carbonate bedrock aquifers in the Great Valley of eastern Pennsylvania

    USGS Publications Warehouse

    Breen, Kevin J.

    2000-01-01

    Assessments to determine whether agricultural pesticides are present in ground water are performed by the Commonwealth of Pennsylvania under the aquifer monitoring provisions of the State Pesticides and Ground Water Strategy. Pennsylvania's Department of Agriculture conducts the monitoring and collects samples; the Department of Environmental Protection (PaDEP) Laboratory analyzes the samples to measure pesticide concentration. To evaluate the quality of the measurements of pesticide concentration for a groundwater assessment, a quality-assurance design was developed and applied to a selected assessment area in Pennsylvania. This report describes the quality-assurance design, describes how and where the design was applied, describes procedures used to collect and analyze samples and to evaluate the results, and summarizes the quality-assurance results along with the assessment results. The design was applied in an agricultural area of the Delaware River Basin in Berks, Lebanon, Lehigh, and Northampton Counties to evaluate the bias and variability in laboratory results for pesticides. The design, with random spatial and temporal components, included four data-quality objectives for bias and variability. The spatial design was primary and represented an area comprising 30 sampling cells. A quality-assurance sampling frequency of 20 percent of cells was selected to ensure a sample number of five or more for analysis. Quality-control samples included blanks, spikes, and replicates of laboratory water and spikes, replicates, and 2-lab splits of groundwater. Two analytical laboratories, the PaDEP Laboratory and a U.S. Geological Survey Laboratory, were part of the design. Bias and variability were evaluated by use of data collected from October 1997 through January 1998 for alachlor, atrazine, cyanazine, metolachlor, simazine, pendimethalin, metribuzin, and chlorpyrifos. Results of analyses of field blanks indicate that collection, processing, transport, and laboratory analysis procedures did not contaminate the samples; there were no false-positive results. Pesticides were detected in water when pesticides were spiked into (added to) samples. There were no false negatives for the eight pesticides in all spiked samples. Negative bias was characteristic of analytical results for the eight pesticides, and bias was generally in excess of 10 percent from the 'true' or expected concentration (34 of 39 analyses, or 87 percent of the ground-water results) for pesticide concentrations ranging from 0.31 to 0.51 µg/L (micrograms per liter). The magnitude of the negative bias for the eight pesticides, with the exception of cyanazine, would result in reported concentrations commonly 75-80 percent of the expected concentration in the water sample. The bias for cyanazine was negative and within 10 percent of the expected concentration. A comparison of spiked pesticide-concentration recoveries in laboratory water and ground water indicated no effect of the ground-water matrix, and matrix interference was not a source of the negative bias. Results for the laboratory-water spikes submitted in triplicate showed large variability for recoveries of atrazine, cyanazine, and pendimethalin. The relative standard deviation (RSD) was used as a measure of method variability over the course of the study for laboratory waters at a concentration of 0.4 µg/L. An RSD of about 11 percent (or about ±0.05 µg/L) characterizes the method results for alachlor, chlorpyrifos, metolachlor, metribuzin, and simazine.
Atrazine and pendimethalin have RSD values of about 17 and 23 percent, respectively. Cyanazine showed the largest RSD at nearly 51 percent. The pesticides with low variability in laboratory-water spikes also had low variability in ground water. The assessment results showed that atrazine was the most commonly detected pesticide in ground water in the assessment area. Atrazine was detected in water from 22 of the 28 wells sampled, and recovery results for atrazine were some of the worst (largest negative bias). Concentrations of the eight pesticides in ground water from wells were generally less than 0.3 µg/L. Only six individual measurements of the concentrations in water from six of the wells were at or above 0.3 µg/L, five for atrazine and one for metolachlor. There were eight additional detections of metolachlor and simazine at concentrations less than 0.1 µg/L. No well water contained more than one pesticide at concentrations at or above 0.3 µg/L. Evidence exists, however, for a pattern of co-occurrence of metolachlor and simazine at low concentrations with higher concentrations of atrazine. Large variability in replicate samples and negative bias for pesticide recovery from spiked samples indicate the need to use data for pesticide recovery in the interpretation of measured pesticide concentrations in ground water. Data from samples spiked with known amounts of pesticides were a critical component of a quality-assurance design for the monitoring component of the Pesticides and Ground Water Strategy. Trigger concentrations, the concentrations that require action under the Pesticides and Ground Water Strategy, should be considered maximums for action. This consideration is needed because of the magnitude of negative bias.

  5. Framing From Experience: Cognitive Processes and Predictions of Risky Choice.

    PubMed

    Gonzalez, Cleotilde; Mehlhorn, Katja

    2016-07-01

    A framing bias shows risk aversion in problems framed as "gains" and risk seeking in problems framed as "losses," even when these are objectively equivalent and probabilities and outcome values are explicitly provided. We test this framing bias in situations where decision makers rely on their own experience, sampling the problem's options (safe and risky) and seeing the outcomes before making a choice. In Experiment 1, we replicate the framing bias in description-based decisions and find risk indifference in gains and losses in experience-based decisions. Predictions of an Instance-Based Learning model suggest that objective probabilities as well as the number of samples taken are factors that contribute to the lack of a framing effect. We test these two factors in Experiment 2 and find no framing effect when few samples are taken; when large samples are taken, the framing effect appears regardless of the objective probability values. Implications of behavioral results and cognitive modeling are discussed. Copyright © 2015 Cognitive Science Society, Inc.

  6. High-Order Finite-Difference Schemes for Numerical Simulation of Hypersonic Boundary-Layer Transition

    NASA Astrophysics Data System (ADS)

    Zhong, Xiaolin

    1998-08-01

    Direct numerical simulation (DNS) has become a powerful tool in studying fundamental phenomena of laminar-turbulent transition of high-speed boundary layers. Previous DNS studies of supersonic and hypersonic boundary layer transition have been limited to perfect-gas flow over flat-plate boundary layers without shock waves. For hypersonic boundary layers over realistic blunt bodies, DNS studies of transition need to consider the effects of bow shocks, entropy layers, surface curvature, and finite-rate chemistry. It is necessary that numerical methods for such studies are robust and high-order accurate, both in resolving wide ranges of flow time and length scales and in resolving the interaction between the bow shocks and flow disturbance waves. This paper presents a new high-order shock-fitting finite-difference method for the DNS of the stability and transition of hypersonic boundary layers over blunt bodies with strong bow shocks and with (or without) thermo-chemical nonequilibrium. The proposed method includes a set of new upwind high-order finite-difference schemes which are stable and less dissipative than a straightforward upwind scheme using an upwind-biased grid stencil, a high-order shock-fitting formulation, and third-order semi-implicit Runge-Kutta schemes for temporal discretization of stiff reacting flow equations. The accuracy and stability of the new schemes are validated by numerical experiments of the linear wave equation and nonlinear Navier-Stokes equations. The algorithm is then applied to the DNS of the receptivity of hypersonic boundary layers over a parabolic leading edge to freestream acoustic disturbances.
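
    The flavour of an upwind-biased finite-difference stencil (not the paper's shock-fitting scheme, just a generic member of the family it mentions) can be shown with the standard third-order biased approximation to a first derivative for a positive wave speed; the check below confirms the expected order of accuracy on a periodic test function.

```python
import numpy as np

def ddx_upwind3(u, h):
    """Third-order upwind-biased first derivative (positive wave speed), using
    the stencil (u[i-2] - 6*u[i-1] + 3*u[i] + 2*u[i+1]) / (6*h) on periodic data."""
    return (np.roll(u, 2) - 6 * np.roll(u, 1) + 3 * u + 2 * np.roll(u, -1)) / (6 * h)

# Check the order of accuracy on a periodic test function.
for n in (64, 128, 256):
    x = np.linspace(0, 2 * np.pi, n, endpoint=False)
    h = x[1] - x[0]
    err = np.max(np.abs(ddx_upwind3(np.sin(x), h) - np.cos(x)))
    print(n, err)   # the error should drop by about 8x per grid doubling (3rd order)
```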

  7. Nature's engineering: Giant magnetic exchange bias > 1T in a natural mineral

    NASA Astrophysics Data System (ADS)

    McEnroe, S. A.; Carter-Stiglitz, B.; Harrison, R. J.; Robinson, P.; McCammon, C.

    2006-12-01

    Magnetic exchange bias is a phenomenon whereby the hysteresis loop of a "soft" magnetic phase is shifted along the applied field axis by an amount due to exchange interaction with a "hard" magnetic phase. Exchange bias is the subject of intense experimental and theoretical investigation because of its widespread technological applications and recent advances in manipulating nanoscale materials. Understanding the physical origin of exchange bias has been hampered by the general uncertainty in the crystal and magnetic structure of the interface between hard and soft phases. Here we discuss a natural sample that has one of the largest exchange biases ever reported, nearly 1 Tesla (T) in a 1.5 T field, and is the first documented example of exchange bias of this magnitude in a natural mineral. We demonstrate that exchange bias in this system is due to the interaction between coherently intergrown magnetic phases, formed through a natural process of phase separation during slow cooling. These extreme properties are found in a sample of titanohematite (15-19 percent Ti substitution) from the 1 Gyr metamorphic rocks of the Modum district, south Norway. Low-temperature magnetic measurements demonstrate the nature of the giant exchange bias. Transmission electron microscopy and electron microprobe analyses, combined with Mossbauer measurements at room and low temperature, are used to identify the interacting phases. The titanohematite contains ilmenite lamellae which are mostly of sub-unit-cell size. Fe-rutile is also present as an intergrowth phase.

  8. Finite-Time and -Size Scalings in the Evaluation of Large Deviation Functions. Numerical Analysis in Continuous Time

    NASA Astrophysics Data System (ADS)

    Guevara Hidalgo, Esteban; Nemoto, Takahiro; Lecomte, Vivien

    Rare trajectories of stochastic systems are important to understand because of their potential impact. However, their properties are by definition difficult to sample directly. Population dynamics provide a numerical tool allowing their study, by means of simulating a large number of copies of the system, which are subjected to a selection rule that favors the rare trajectories of interest. However, such algorithms are plagued by finite simulation-time and finite population-size effects that can render their use delicate. Using the continuous-time cloning algorithm, we analyze the finite-time and finite-size scalings of estimators of the large deviation functions associated with the distribution of the rare trajectories. We use these scalings in order to propose a numerical approach which allows the infinite-time and infinite-size limit of these estimators to be extracted.
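
    A discrete-time caricature of the cloning idea (the paper works in continuous time) is sketched below for i.i.d. Bernoulli increments, where the scaled cumulant generating function is known exactly; running it for several population sizes makes the finite-population bias of the estimator visible. All names and parameter choices are illustrative, not the authors' algorithm.

```python
import numpy as np

rng = np.random.default_rng(11)

def cloning_scgf(s, n_clones, n_steps, p=0.5):
    """Discrete-time cloning estimate of psi(s) = log E[exp(s*X)] for
    i.i.d. Bernoulli(p) increments X (a caricature of the algorithm)."""
    log_psi = 0.0
    for _ in range(n_steps):
        x = rng.binomial(1, p, size=n_clones)     # one increment per clone
        w = np.exp(s * x)                         # exponential bias on trajectories
        log_psi += np.log(w.mean())               # running estimate of psi(s)
        # Selection step: resample clones in proportion to their weights.  For
        # i.i.d. increments the clones carry no state, so this is a formality;
        # with correlated dynamics the selected configurations would be copied.
        _ = rng.choice(n_clones, size=n_clones, p=w / w.sum())
    return log_psi / n_steps

s = 1.0
exact = np.log(0.5 * (1.0 + np.exp(s)))           # fair-coin value of psi(s)
for n_clones in (10, 100, 1000):
    print(n_clones, round(cloning_scgf(s, n_clones, 500), 4), round(exact, 4))
```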

  9. The morphological state space revisited: what do phylogenetic patterns in homoplasy tell us about the number of possible character states?

    PubMed Central

    Hoyal Cuthill, Jennifer F.

    2015-01-01

    Biological variety and major evolutionary transitions suggest that the space of possible morphologies may have varied among lineages and through time. However, most models of phylogenetic character evolution assume that the potential state space is finite. Here, I explore what the morphological state space might be like, by analysing trends in homoplasy (repeated derivation of the same character state). Analyses of ten published character matrices are compared against computer simulations with different state space models: infinite states, finite states, ordered states and an 'inertial' model, simulating phylogenetic constraints. Of these, only the infinite states model results in evolution without homoplasy, a prediction which is not generally met by real phylogenies. Many authors have interpreted the ubiquity of homoplasy as evidence that the number of evolutionary alternatives is finite. However, homoplasy is also predicted by phylogenetic constraints on the morphological distance that can be traversed between ancestor and descendent. Phylogenetic rarefaction (sub-sampling) shows that finite and inertial state spaces do produce contrasting trends in the distribution of homoplasy. Two clades show trends characteristic of phylogenetic inertia, with decreasing homoplasy (increasing consistency index) as we sub-sample more distantly related taxa. One clade shows increasing homoplasy, suggesting exhaustion of finite states. Different clades may, therefore, show different patterns of character evolution. However, when parsimony uninformative characters are excluded (which may occur without documentation in cladistic studies), it may no longer be possible to distinguish inertial and finite state spaces. Interestingly, inertial models predict that homoplasy should be clustered among comparatively close relatives (parallel evolution), whereas finite state models do not. If morphological evolution is often inertial in nature, then homoplasy (false homology) may primarily occur between close relatives, perhaps being replaced by functional analogy at higher taxonomic scales. PMID:26640650

  10. Assessing performance and validating finite element simulations using probabilistic knowledge

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dolin, Ronald M.; Rodriguez, E. A.

    Two probabilistic approaches for assessing performance are presented. The first approach assesses probability of failure by simultaneously modeling all likely events. The probability each event causes failure along with the event's likelihood of occurrence contribute to the overall probability of failure. The second assessment method is based on stochastic sampling using an influence diagram. Latin-hypercube sampling is used to stochastically assess events. The overall probability of failure is taken as the maximum probability of failure of all the events. The Likelihood of Occurrence simulation suggests failure does not occur while the Stochastic Sampling approach predicts failure. The Likelihood of Occurrence results are used to validate finite element predictions.
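
    A minimal sketch of the stochastic-sampling ingredient (Latin hypercube sampling of uncertain inputs and an empirical failure probability), using SciPy's qmc module and an invented load-versus-capacity limit state rather than the report's finite element events:

```python
import numpy as np
from scipy.stats import norm, qmc

# Latin hypercube sample of two uncertain inputs on the unit square.
sampler = qmc.LatinHypercube(d=2, seed=0)
u = sampler.random(n=10000)

# Transform to the assumed input marginals (illustrative values only).
load = norm(loc=90.0, scale=10.0).ppf(u[:, 0])
capacity = norm(loc=120.0, scale=8.0).ppf(u[:, 1])

# Failure is the event that load exceeds capacity.
p_fail = np.mean(load > capacity)
print(p_fail)
```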

  11. Design/Analysis of Metal/Composite Bonded Joints for Survivability at Cryogenic Temperatures

    NASA Technical Reports Server (NTRS)

    Bartoszyk, Andrew E.

    2004-01-01

    A major design and analysis challenge for the JWST ISM structure is the metal/composite bonded joints that will be required to survive down to an operational ultra-low temperature of 30K (-405 F). The initial and current baseline design for the plug-type joint consists of a titanium thin-walled fitting (1-3 mm thick) bonded to the interior surface of an M555/954-6 composite truss square tube with an axially stiff biased lay-up. Metallic fittings are required at various nodes of the truss structure to accommodate instrument and lift-point bolted interfaces. Analytical experience and design work done on metal/composite bonded joints at temperatures below liquid nitrogen are limited, and important analysis tools, material properties, and failure criteria for composites at cryogenic temperatures are virtually nonexistent. Increasing the challenge is the difficulty in testing for these required tools and parameters at 30K. A preliminary finite element analysis shows that failure due to CTE mismatch between the biased composite and titanium or aluminum is likely. Failure is less likely with Invar; however, an initial mass estimate of Invar fittings demonstrates that Invar is not an automatic alternative. In order to gain confidence in analyzing and designing the ISM joints, a comprehensive joint development testing program has been planned and is currently running. The test program is designed for the correlation of the analysis methodology, including tuning finite element model parameters, and developing a composite failure criterion for the effect of multi-axial composite stresses on the strength of a bonded joint at 30K. The testing program will also consider stress mitigation using compliant composite layers and potential strength degradation due to multiple thermal cycles. Not only will the finite element analysis be correlated to the test data, but the FEA will be used to guide the design of the test. The first phase of the test program has been completed and the preliminary analysis has been revisited based on the test data. In this work, we present an overview of the test plan, results to date, and resulting design improvements.

  12. Social science. Publication bias in the social sciences: unlocking the file drawer.

    PubMed

    Franco, Annie; Malhotra, Neil; Simonovits, Gabor

    2014-09-19

    We studied publication bias in the social sciences by analyzing a known population of conducted studies (221 in total) in which there is a full accounting of what is published and unpublished. We leveraged Time-sharing Experiments in the Social Sciences (TESS), a National Science Foundation-sponsored program in which researchers propose survey-based experiments to be run on representative samples of American adults. Because TESS proposals undergo rigorous peer review, the studies in the sample all exceed a substantial quality threshold. Strong results are 40 percentage points more likely to be published than are null results and 60 percentage points more likely to be written up. We provide direct evidence of publication bias and identify the stage of research production at which publication bias occurs: Authors do not write up and submit null findings. Copyright © 2014, American Association for the Advancement of Science.

  13. An experimental investigation of recruitment bias in eating pathology research.

    PubMed

    Moss, Erin L; von Ranson, Kristin M

    2006-04-01

    Previous, uncontrolled research has suggested a bias may exist in recruiting participants for eating disorder research. Recruitment biases may affect sample representativeness and generalizability of findings. This experiment investigated whether revealing that a study's topic was related to eating disorders created a self-selection bias. Young women at a university responded to advertisements containing contrasting information about the nature of a single study. We recruited one group by advertising the study under the title "Disordered Eating in Young Women" (n = 251) and another group using the title "Consumer Preferences" (n = 259). Results indicated similar levels of eating pathology in both groups, so the different recruitment techniques did not engender self-selection. However, the consumer preferences group scored higher in self-reported social desirability. The level of information conveyed in study advertising does not impact reporting of eating disturbances among nonclinical samples, although there is evidence social desirability might. © 2006 by Wiley Periodicals, Inc.

  14. Causal inference and the data-fusion problem

    PubMed Central

    Bareinboim, Elias; Pearl, Judea

    2016-01-01

    We review concepts, principles, and tools that unify current approaches to causal analysis and attend to new challenges presented by big data. In particular, we address the problem of data fusion—piecing together multiple datasets collected under heterogeneous conditions (i.e., different populations, regimes, and sampling methods) to obtain valid answers to queries of interest. The availability of multiple heterogeneous datasets presents new opportunities to big data analysts, because the knowledge that can be acquired from combined data would not be possible from any individual source alone. However, the biases that emerge in heterogeneous environments require new analytical tools. Some of these biases, including confounding, sampling selection, and cross-population biases, have been addressed in isolation, largely in restricted parametric models. We here present a general, nonparametric framework for handling these biases and, ultimately, a theoretical solution to the problem of data fusion in causal inference tasks. PMID:27382148

  15. A morphological basis for orientation tuning in primary visual cortex.

    PubMed

    Mooser, François; Bosking, William H; Fitzpatrick, David

    2004-08-01

    Feedforward connections are thought to be important in the generation of orientation-selective responses in visual cortex by establishing a bias in the sampling of information from regions of visual space that lie along a neuron's axis of preferred orientation. It remains unclear, however, which structural elements (dendrites or axons) are ultimately responsible for conveying this sampling bias. To explore this question, we have examined the spatial arrangement of feedforward axonal connections that link non-oriented neurons in layer 4 and orientation-selective neurons in layer 2/3 of visual cortex in the tree shrew. Target sites of labeled boutons in layer 2/3 resulting from focal injections of biocytin in layer 4 show an orientation-specific axial bias that is sufficient to confer orientation tuning to layer 2/3 neurons. We conclude that the anisotropic arrangement of axon terminals is the principal source of the orientation bias contributed by feedforward connections.

  16. The Role of Self-reports and Behavioral Measures of Interpretation Biases in Children with Varying Levels of Anxiety.

    PubMed

    Klein, Anke M; Flokstra, Emmelie; van Niekerk, Rianne; Klein, Steven; Rapee, Ronald M; Hudson, Jennifer L; Bögels, Susan M; Becker, Eni S; Rinck, Mike

    2018-04-21

    We investigated the role of self-reports and behavioral measures of interpretation biases and their content-specificity in children with varying levels of spider fear and/or social anxiety. In total, 141 selected children from a community sample completed an interpretation bias task with scenarios that were related to either spider threat or social threat. Specific interpretation biases were found; only spider-related interpretation bias and self-reported spider fear predicted unique variance in avoidance behavior on the Behavior Avoidance Task for spiders. Likewise, only social-threat related interpretation bias and self-reported social anxiety predicted anxiety during the Social Speech Task. These findings support the hypothesis that fearful children display cognitive biases that are specific to particular fear-relevant stimuli. Clinically, this insight might be used to improve treatments for anxious children by targeting content-specific interpretation biases related to individual disorders.

  17. Accuracy and biases in newlyweds' perceptions of each other: not mutually exclusive but mutually beneficial.

    PubMed

    Luo, Shanhong; Snider, Anthony G

    2009-11-01

    There has been a long-standing debate about whether having accurate self-perceptions or holding positive illusions of self is more adaptive. This debate has recently expanded to consider the role of accuracy and bias of partner perceptions in romantic relationships. In the present study, we hypothesized that because accuracy, positivity bias, and similarity bias are likely to serve distinct functions in relationships, they should all make independent contributions to the prediction of marital satisfaction. In a sample of 288 newlywed couples, we tested this hypothesis by simultaneously modeling the actor effects and partner effects of accuracy, positivity bias, and similarity bias in predicting husbands' and wives' satisfaction. Findings across several perceptual domains suggest that all three perceptual indices independently predicted the perceiver's satisfaction. Accuracy and similarity bias, but not positivity bias, made unique contributions to the target's satisfaction. No sex differences were found.

  18. A three-dimensional finite element model of near-field scanning microwave microscopy

    NASA Astrophysics Data System (ADS)

    Balusek, Curtis; Friedman, Barry; Luna, Darwin; Oetiker, Brian; Babajanyan, Arsen; Lee, Kiejin

    2012-10-01

    A three-dimensional finite element model of an experimental near-field scanning microwave microscope (NSMM) has been developed and compared to experiment on nonconducting samples. The microwave reflection coefficient S11 is calculated as a function of frequency with no adjustable parameters. There is qualitative agreement with experiment in that the resonant frequency can show a sizable increase with sample dielectric constant; a result that is not obtained with a two-dimensional model. The most realistic model shows a semi-quantitative agreement with experiment. The effect of different sample thicknesses and varying tip-sample distances is investigated numerically and shown to affect NSMM performance in a way consistent with experiment. Visualization of the electric field indicates that the field is primarily determined by the shape of the coupling hooks.

  19. Personality in general and clinical samples: Measurement invariance of the Multidimensional Personality Questionnaire.

    PubMed

    Eigenhuis, Annemarie; Kamphuis, Jan H; Noordhof, Arjen

    2017-09-01

    A growing body of research suggests that the same general dimensions can describe normal and pathological personality, but most of the supporting evidence is exploratory. We aim to determine in a confirmatory framework the extent to which responses on the Multidimensional Personality Questionnaire (MPQ) are identical across general and clinical samples. We tested the Dutch brief form of the MPQ (MPQ-BF-NL) for measurement invariance across a general population subsample (N = 365) and a clinical sample (N = 365), using Multiple Group Confirmatory Factor Analysis (MGCFA) and Multiple Group Exploratory Structural Equation Modeling (MGESEM). As an omnibus personality test, the MPQ-BF-NL revealed strict invariance, indicating absence of bias. Unidimensional per-scale tests for measurement invariance revealed that 10% of items appeared to contain bias across samples. Item bias only affected the scale interpretation of Achievement, with individuals from the clinical sample more readily admitting to putting high demands on themselves than individuals from the general sample, regardless of trait level. This formal test of equivalence provides strong evidence for the common structure of normal and pathological personality and lends further support to the clinical utility of the MPQ. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  20. Accuracy and differential bias in copy number measurement of CCL3L1 in association studies with three auto-immune disorders.

    PubMed

    Carpenter, Danielle; Walker, Susan; Prescott, Natalie; Schalkwijk, Joost; Armour, John Al

    2011-08-18

    Copy number variation (CNV) contributes to the variation observed between individuals and can influence human disease progression, but the accurate measurement of individual copy numbers is technically challenging. In the work presented here we describe a modification to a previously described paralogue ratio test (PRT) method for genotyping the CCL3L1/CCL4L1 copy variable region, which we use to ascertain CCL3L1/CCL4L1 copy number in 1581 European samples. As the products of CCL3L1 and CCL4L1 potentially play a role in autoimmunity we performed case control association studies with Crohn's disease, rheumatoid arthritis and psoriasis clinical cohorts. We evaluate the PRT methodology used, paying particular attention to accuracy and precision, and highlight the problems of differential bias in copy number measurements. Our PRT methods for measuring copy number were of sufficient precision to detect very slight but systematic differential bias between results from case and control DNA samples in one study. We find no evidence for an association between CCL3L1 copy number and Crohn's disease, rheumatoid arthritis or psoriasis. Differential bias of this small magnitude, but applied systematically across large numbers of samples, would create a serious risk of false positive associations in copy number, if measured using methods of lower precision, or methods relying on single uncorroborated measurements. In this study the small differential bias detected by PRT in one sample set was resolved by a simple pre-treatment by restriction enzyme digestion.

  1. Accuracy and differential bias in copy number measurement of CCL3L1 in association studies with three auto-immune disorders

    PubMed Central

    2011-01-01

    Background: Copy number variation (CNV) contributes to the variation observed between individuals and can influence human disease progression, but the accurate measurement of individual copy numbers is technically challenging. In the work presented here we describe a modification to a previously described paralogue ratio test (PRT) method for genotyping the CCL3L1/CCL4L1 copy variable region, which we use to ascertain CCL3L1/CCL4L1 copy number in 1581 European samples. As the products of CCL3L1 and CCL4L1 potentially play a role in autoimmunity we performed case control association studies with Crohn's disease, rheumatoid arthritis and psoriasis clinical cohorts. Results: We evaluate the PRT methodology used, paying particular attention to accuracy and precision, and highlight the problems of differential bias in copy number measurements. Our PRT methods for measuring copy number were of sufficient precision to detect very slight but systematic differential bias between results from case and control DNA samples in one study. We find no evidence for an association between CCL3L1 copy number and Crohn's disease, rheumatoid arthritis or psoriasis. Conclusions: Differential bias of this small magnitude, but applied systematically across large numbers of samples, would create a serious risk of false positive associations in copy number, if measured using methods of lower precision, or methods relying on single uncorroborated measurements. In this study the small differential bias detected by PRT in one sample set was resolved by a simple pre-treatment by restriction enzyme digestion. PMID:21851606
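
    To illustrate the study's central caution, that a small systematic measurement offset between case and control samples can masquerade as a copy number association, here is a minimal simulation sketch in Python. The copy number distribution, noise level, and offset value are illustrative assumptions, not quantities taken from the study.

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(0)

      def call_copy_number(true_cn, offset, sd=0.25):
          """Simulate a raw ratio measurement with noise plus a systematic
          batch offset, then round to an integer copy number call."""
          raw = true_cn + rng.normal(0.0, sd, size=true_cn.size) + offset
          return np.clip(np.rint(raw), 0, None)

      n = 1500
      # Identical true copy number distributions in cases and controls
      # (roughly centred on two copies, loosely mimicking CCL3L1 in Europeans).
      cn_probs = [0.05, 0.25, 0.40, 0.22, 0.08]
      cases_true = rng.choice(5, size=n, p=cn_probs)
      controls_true = rng.choice(5, size=n, p=cn_probs)

      for offset in (0.0, 0.15):   # 0.15 = small differential bias applied to cases only
          cases = call_copy_number(cases_true, offset)
          controls = call_copy_number(controls_true, 0.0)
          t, p = stats.ttest_ind(cases, controls)
          print(f"offset={offset:.2f}: mean CN cases={cases.mean():.3f}, "
                f"controls={controls.mean():.3f}, p={p:.2g}")

    With no offset the two groups are statistically indistinguishable; with even a 0.15-copy systematic shift, the rounded calls produce a spurious, highly significant "association", which is exactly the false-positive risk the abstract warns about.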

  2. Evaluation of respondent-driven sampling.

    PubMed

    McCreesh, Nicky; Frost, Simon D W; Seeley, Janet; Katongole, Joseph; Tarsh, Matilda N; Ndunguse, Richard; Jichi, Fatima; Lunel, Natasha L; Maher, Dermot; Johnston, Lisa G; Sonnenberg, Pam; Copas, Andrew J; Hayes, Richard J; White, Richard G

    2012-01-01

    Respondent-driven sampling is a novel variant of link-tracing sampling for estimating the characteristics of hard-to-reach groups, such as HIV prevalence in sex workers. Despite its use by leading health organizations, the performance of this method in realistic situations is still largely unknown. We evaluated respondent-driven sampling by comparing estimates from a respondent-driven sampling survey with total population data. Total population data on age, tribe, religion, socioeconomic status, sexual activity, and HIV status were available on a population of 2402 male household heads from an open cohort in rural Uganda. A respondent-driven sampling (RDS) survey was carried out in this population, using current methods of sampling (RDS sample) and statistical inference (RDS estimates). Analyses were carried out for the full RDS sample and then repeated for the first 250 recruits (small sample). We recruited 927 household heads. Full and small RDS samples were largely representative of the total population, but both samples underrepresented men who were younger, of higher socioeconomic status, and with unknown sexual activity and HIV status. Respondent-driven sampling statistical inference methods failed to reduce these biases. Only 31%-37% (depending on method and sample size) of RDS estimates were closer to the true population proportions than the RDS sample proportions. Only 50%-74% of respondent-driven sampling bootstrap 95% confidence intervals included the population proportion. Respondent-driven sampling produced a generally representative sample of this well-connected nonhidden population. However, current respondent-driven sampling inference methods failed to reduce bias when it occurred. Whether the data required to remove bias and measure precision can be collected in a respondent-driven sampling survey is unresolved. Respondent-driven sampling should be regarded as a (potentially superior) form of convenience sampling method, and caution is required when interpreting findings based on the sampling method.
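
    For readers unfamiliar with how respondent-driven sampling estimates differ from raw sample proportions, the sketch below implements a common inverse-degree-weighted (Volz-Heckathorn / RDS-II style) proportion estimator on made-up data. It is not the exact inference procedure or bootstrap used in the study above; the population, degrees, and sampling rule are all illustrative assumptions.

      import numpy as np

      def rds_ii_proportion(trait, degree):
          """Inverse-degree-weighted (RDS-II style) estimate of a trait proportion.

          trait  : 0/1 array, whether each recruit has the characteristic
          degree : reported personal network size of each recruit (must be > 0)
          """
          trait = np.asarray(trait, dtype=float)
          w = 1.0 / np.asarray(degree, dtype=float)   # down-weight high-degree recruits
          return np.sum(w * trait) / np.sum(w)

      # Hypothetical population: 30% have the trait; trait carriers have larger
      # personal networks, and recruitment reaches people roughly in proportion
      # to their network size (degree-biased sampling).
      rng = np.random.default_rng(1)
      N = 20000
      trait_pop = (rng.random(N) < 0.30).astype(int)
      degree_pop = np.where(trait_pop == 1, rng.poisson(12, N), rng.poisson(5, N)) + 1

      p_sample = degree_pop / degree_pop.sum()
      idx = rng.choice(N, size=400, replace=False, p=p_sample)
      trait, degree = trait_pop[idx], degree_pop[idx]

      print("true population proportion:", trait_pop.mean())
      print("raw sample proportion     :", trait.mean())
      print("RDS-II estimate           :", rds_ii_proportion(trait, degree))

    In this idealized setting the raw sample proportion is inflated by the degree-biased recruitment, while the inverse-degree weighting pulls the estimate back toward the true value; the study's point is that, in a real survey, such corrections did not reliably reduce the biases that actually occurred.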

  3. Bias modification training can alter approach bias and chocolate consumption.

    PubMed

    Schumacher, Sophie E; Kemps, Eva; Tiggemann, Marika

    2016-01-01

    Recent evidence has demonstrated that bias modification training has potential to reduce cognitive biases for attractive targets and affect health behaviours. The present study investigated whether cognitive bias modification training could be applied to reduce approach bias for chocolate and affect subsequent chocolate consumption. A sample of 120 women (18-27 years) were randomly assigned to an approach-chocolate condition or avoid-chocolate condition, in which they were trained to approach or avoid pictorial chocolate stimuli, respectively. Training had the predicted effect on approach bias, such that participants trained to approach chocolate demonstrated an increased approach bias to chocolate stimuli whereas participants trained to avoid such stimuli showed a reduced bias. Further, participants trained to avoid chocolate ate significantly less of a chocolate muffin in a subsequent taste test than participants trained to approach chocolate. Theoretically, results provide support for the dual process model's conceptualisation of consumption as being driven by implicit processes such as approach bias. In practice, approach bias modification may be a useful component of interventions designed to curb the consumption of unhealthy foods. Copyright © 2015 Elsevier Ltd. All rights reserved.

  4. Race-Related Cognitive Test Bias in the ACTIVE Study: A MIMIC Model Approach

    PubMed Central

    Aiken Morgan, Adrienne T.; Marsiske, Michael; Dzierzewski, Joseph; Jones, Richard N.; Whitfield, Keith E.; Johnson, Kathy E.; Cresci, Mary K.

    2010-01-01

    The present study investigated evidence for race-related test bias in cognitive measures used in the baseline assessment of the ACTIVE clinical trial. Test bias against African Americans has been documented in both cognitive aging and early lifespan studies. Despite significant mean performance differences, Multiple Indicators Multiple Causes (MIMIC) models suggested most differences were at the construct level. There was little evidence that specific measures put either group at particular advantage or disadvantage and little evidence of cognitive test bias in this sample. Small group differences in education, cognitive status, and health suggest positive selection may have attenuated possible biases. PMID:20845121

  5. Modeling and design of Galfenol unimorph energy harvesters

    NASA Astrophysics Data System (ADS)

    Deng, Zhangxian; Dapino, Marcelo J.

    2015-12-01

    This article investigates the modeling and design of vibration energy harvesters that utilize iron-gallium (Galfenol) as a magnetoelastic transducer. Galfenol unimorphs are of particular interest; however, advanced models and design tools are lacking for these devices. Experimental measurements are presented for various unimorph beam geometries. A maximum average power density of 24.4 mW cm⁻³ and peak power density of 63.6 mW cm⁻³ are observed. A modeling framework with fully coupled magnetoelastic dynamics, formulated as a 2D finite element model, and lumped-parameter electrical dynamics is presented and validated. A comprehensive parametric study considering pickup coil dimensions, beam thickness ratio, tip mass, bias magnet location, and remanent flux density (supplied by bias magnets) is developed for a 200 Hz, 9.8 m s⁻² amplitude harmonic base excitation. For the set of optimal parameters, the maximum average power density and peak power density computed by the model are 28.1 and 97.6 mW cm⁻³, respectively.

  6. Phonon assisted carrier motion on the Wannier-Stark ladder

    NASA Astrophysics Data System (ADS)

    Cheung, Alfred; Berciu, Mona

    2014-03-01

    It is well known that at zero temperature and in the absence of electron-phonon coupling, the presence of an electric field leads to localization of carriers residing in a single band of finite bandwidth. In this talk, we will present an implementation of the self-consistent Born approximation (SCBA) to study the effect of weak electron-phonon coupling on the motion of a carrier in a biased system. At moderate and strong electron-phonon coupling, we supplement the SCBA, describing the string of phonons left behind by the carrier, with the momentum average approximation to describe the phonon cloud that accompanies the resulting polaron. We find that coupling to the lattice delocalizes the carrier, as expected, although long-lived resonances resulting from the Wannier-Stark states of the polaron may appear in certain regions of the parameter space. We end with a discussion of how our method can be improved to model disorder, other types of electron-phonon coupling, and electron-hole pair dissociation in a biased system.

  7. MEMS fabrication and frequency sweep for suspending beam and plate electrode in electrostatic capacitor

    NASA Astrophysics Data System (ADS)

    Zhu, Jianxiong; Song, Weixing

    2018-01-01

    We report MEMS fabrication and a frequency sweep for a high-order-mode suspended beam and plate layer in an electrostatic micro-gap semiconductor capacitor. The suspended beam and plate were designed with a silicon oxide (SiO2) film fabricated using bulk silicon micromachining technology on both sides of a silicon substrate. The capacitors were driven by a direct-current (DC) bias and a swept-frequency alternating current (AC) at room temperature for electrical response testing. Finite element software was used to evaluate the deformation mode around the high-order response frequency. Comparing a single capacitor with a high-order response frequency of 0.42 MHz to a 1 × 2 parallel capacitor array, we found that the array had a broader high-order response range. We conclude that a DC bias voltage can be used to modulate the high-order response frequency of both the single capacitor and the 1 × 2 parallel array.

  8. Coarsening dynamics in condensing zero-range processes and size-biased birth death chains

    NASA Astrophysics Data System (ADS)

    Jatuviriyapornchai, Watthanan; Grosskinsky, Stefan

    2016-05-01

    Zero-range processes with decreasing jump rates are well known to exhibit a condensation transition under certain conditions on the jump rates, and the dynamics of this transition continues to be a subject of current research interest. Starting from homogeneous initial conditions, the time evolution of the condensed phase exhibits an interesting coarsening phenomenon of mass transport between cluster sites characterized by a power law. We revisit the approach in Godrèche (2003 J. Phys. A: Math. Gen. 36 6313) to derive effective single site dynamics which form a nonlinear birth death chain describing the coarsening behavior. We extend these results to a larger class of parameter values, and introduce a size-biased version of the single site process, which provides an effective tool to analyze the dynamics of the condensed phase without finite size effects and is the main novelty of this paper. Our results are based on a few heuristic assumptions and exact computations, and are corroborated by detailed simulation data.
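
    As a concrete picture of the condensation and coarsening being described, the following is a minimal continuous-time (Gillespie) simulation of a condensing zero-range process with decreasing jump rate g(n) = 1 + b/n for b > 2, tracking the largest cluster over time. The lattice size, density, and b value are illustrative, and this is the bare lattice model, not the effective size-biased single-site chain derived in the paper; it is an unoptimized reference implementation that takes a few seconds to run.

      import numpy as np

      rng = np.random.default_rng(2)

      L, density, b = 128, 1.0, 4.0          # sites, particles per site, rate parameter
      n = rng.multinomial(int(L * density), [1.0 / L] * L)   # homogeneous initial condition

      def g(k):
          """Decreasing zero-range jump rate; b > 2 gives condensation."""
          return np.where(k > 0, 1.0 + b / np.maximum(k, 1), 0.0)

      t, t_report = 0.0, 1.0
      while t < 2000.0:
          rates = g(n)
          total = rates.sum()
          t += rng.exponential(1.0 / total)            # Gillespie time increment
          src = rng.choice(L, p=rates / total)         # departure site (occupied by construction)
          dst = (src + rng.choice([-1, 1])) % L        # symmetric nearest-neighbour hop
          n[src] -= 1
          n[dst] += 1
          if t >= t_report:
              print(f"t = {t:8.1f}   largest cluster = {n.max():4d}")
              t_report *= 2                            # log-spaced reporting

    Starting from a roughly uniform configuration, the printed maximum occupation grows steadily as excess mass is transferred between cluster sites, which is the coarsening behavior the abstract characterizes by a power law.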

  9. One-step leapfrog ADI-FDTD method for simulating electromagnetic wave propagation in general dispersive media.

    PubMed

    Wang, Xiang-Hua; Yin, Wen-Yan; Chen, Zhi Zhang David

    2013-09-09

    The one-step leapfrog alternating-direction-implicit finite-difference time-domain (ADI-FDTD) method is reformulated for simulating general electrically dispersive media. It models material dispersive properties with equivalent polarization currents. These currents are solved with the auxiliary differential equation (ADE) method and then incorporated into the one-step leapfrog ADI-FDTD method. The final equations are presented in a form similar to that of the conventional FDTD method but with second-order perturbation. The adapted method is then applied to characterize (a) electromagnetic wave propagation in a rectangular waveguide loaded with a magnetized plasma slab, (b) the transmission coefficient of a plane wave normally incident on a monolayer graphene sheet biased by a magnetostatic field, and (c) surface plasmon polariton (SPP) propagation along a monolayer graphene sheet biased by an electrostatic field. The numerical results verify the stability, accuracy and computational efficiency of the proposed one-step leapfrog ADI-FDTD algorithm in comparison with analytical results and results obtained with other methods.
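
    To make the "equivalent polarization current solved with an ADE" idea concrete, here is a minimal 1D example in Python. It uses the conventional explicit Yee FDTD scheme, not the one-step leapfrog ADI formulation of the paper, with a Drude-model slab handled through a semi-implicit auxiliary-differential-equation update of the polarization current; the grid, plasma frequency, collision rate, and source are all illustrative assumptions.

      import numpy as np

      eps0, mu0 = 8.854e-12, 4e-7 * np.pi
      c0 = 1.0 / np.sqrt(eps0 * mu0)

      nx, nt = 800, 2000
      dx = 5e-6                          # cell size, m
      dt = 0.5 * dx / c0                 # Courant number 0.5

      # Drude slab (illustrative values): dJ/dt + gamma*J = eps0 * wp^2 * E
      wp = 2 * np.pi * 2.0e12            # plasma angular frequency, rad/s
      gamma = 2 * np.pi * 1.0e11         # collision rate, 1/s
      slab = np.zeros(nx, dtype=bool)
      slab[500:700] = True

      Ez = np.zeros(nx)
      Hy = np.zeros(nx - 1)
      Jz = np.zeros(nx)                  # equivalent polarization current

      src, f0 = 100, 2.5e12              # source location and frequency (above wp/2pi)

      for n in range(nt):
          Hy += dt / (mu0 * dx) * (Ez[1:] - Ez[:-1])

          curlH = np.zeros(nx)
          curlH[1:-1] = (Hy[1:] - Hy[:-1]) / dx
          Ez += dt / eps0 * (curlH - Jz)             # J enters E-update as a source term

          # ADE for the Drude polarization current, discretized semi-implicitly
          Jz[slab] = ((1 - 0.5 * gamma * dt) * Jz[slab]
                      + eps0 * wp**2 * dt * Ez[slab]) / (1 + 0.5 * gamma * dt)

          # soft source: Gaussian-modulated sine injected at one grid point
          Ez[src] += np.exp(-((n - 300) / 100.0) ** 2) * np.sin(2 * np.pi * f0 * n * dt)

      print("peak |Ez| beyond the slab:", np.abs(Ez[720:]).max())

    The ADE replaces any explicit frequency-domain material model inside the update loop; swapping in the one-step leapfrog ADI scheme would change the field updates but leave this polarization-current bookkeeping essentially intact.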

  10. Differential effects of weight bias experiences and internalization on exercise among women with overweight and obesity.

    PubMed

    Pearl, Rebecca L; Puhl, Rebecca M; Dovidio, John F

    2015-12-01

    This study investigated the effects of experiences with weight stigma and weight bias internalization on exercise. An online sample of 177 women with overweight and obesity (mean age = 35.48 years, mean BMI = 32.81) completed questionnaires assessing exercise behavior, self-efficacy, and motivation; experiences of weight stigmatization; weight bias internalization; and weight-stigmatizing attitudes toward others. Weight stigma experiences positively correlated with exercise behavior, but weight bias internalization was negatively associated with all exercise variables. Weight bias internalization was a partial mediator between weight stigma experiences and exercise behavior. The distinct effects of experiencing versus internalizing weight bias carry implications for clinical practice and public health. © The Author(s) 2014.
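
    A partial-mediation decomposition of the kind reported here can be computed from two ordinary regressions. The sketch below shows the basic Baron-Kenny-style calculation on made-up data; the variable names and effect sizes are hypothetical, and it omits the bootstrap confidence intervals typically reported for the indirect effect.

      import numpy as np

      rng = np.random.default_rng(3)
      n = 177

      # Hypothetical data: stigma experiences (X), internalization (M), exercise (Y)
      stigma = rng.normal(size=n)
      internalization = 0.5 * stigma + rng.normal(size=n)              # X -> M
      exercise = 0.4 * stigma - 0.6 * internalization + rng.normal(size=n)

      def ols(y, *xs):
          """Least-squares slope coefficients for y ~ intercept + xs (in order)."""
          X = np.column_stack([np.ones_like(y)] + list(xs))
          return np.linalg.lstsq(X, y, rcond=None)[0][1:]

      (c_total,) = ols(exercise, stigma)                    # total effect of X on Y
      (a,) = ols(internalization, stigma)                   # X -> M path
      b, c_direct = ols(exercise, internalization, stigma)  # M -> Y path and direct X -> Y

      print(f"total effect   c  = {c_total: .3f}")
      print(f"direct effect  c' = {c_direct: .3f}")
      print(f"indirect (a*b)    = {a * b: .3f}   (partial mediation if c' != 0)")

    In the linear case the total effect decomposes as c = c' + a*b, which is the sense in which internalization "partially mediates" the stigma-exercise relationship.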

  11. An 8-node tetrahedral finite element suitable for explicit transient dynamic simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Key, S.W.; Heinstein, M.W.; Stone, C.M.

    1997-12-31

    Considerable effort has been expended in perfecting the algorithmic properties of 8-node hexahedral finite elements. Today the element is well understood and performs exceptionally well when used in modeling three-dimensional explicit transient dynamic events. However, the automatic generation of all-hexahedral meshes remains an elusive achievement. The alternative, automatic generation of all-tetrahedral meshes, relies on the four-node tetrahedral finite element, a notoriously poor performer, while the 10-node quadratic tetrahedral finite element, though numerically a better performer, is computationally expensive. To use the all-tetrahedral mesh generation extant today, the authors have explored the creation of a quality 8-node tetrahedral finite element (a four-node tetrahedral finite element enriched with four midface nodal points). The derivation of the element's gradient operator, studies in obtaining a suitable mass lumping, and the element's performance in applications are presented. In particular, they examine the 8-node tetrahedral finite element's behavior in longitudinal plane wave propagation, in transverse cylindrical wave propagation, and in simulating Taylor bar impacts. The element only samples constant strain states and, therefore, has 12 hourglass modes. In this regard, it bears similarities to the 8-node, mean-quadrature hexahedral finite element. Given automatic all-tetrahedral meshing, the 8-node, constant-strain tetrahedral finite element is a suitable replacement for the 8-node hexahedral finite element and hand-built meshes.
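
    For reference, the gradient operator of the plain 4-node constant-strain tetrahedron (the element being enriched here) reduces to constant shape-function gradients obtainable from a single 4x4 inverse. The Python sketch below computes them and the resulting constant strain; it illustrates only the standard 4-node element, not the 8-node midface-enriched formulation derived in the report.

      import numpy as np

      def tet4_gradients(xyz):
          """Volume and constant shape-function gradients of a 4-node tetrahedron.

          xyz : (4, 3) array of vertex coordinates.
          Returns (volume, dN) with dN[i] = grad N_i, a (4, 3) array.
          """
          A = np.hstack([np.ones((4, 1)), xyz])   # rows [1, x_i, y_i, z_i]
          volume = np.linalg.det(A) / 6.0
          C = np.linalg.inv(A)                    # N_i = C[0,i] + C[1,i] x + C[2,i] y + C[3,i] z
          dN = C[1:4, :].T                        # constant gradients, one row per node
          return volume, dN

      # Unit reference tetrahedron
      xyz = np.array([[0.0, 0.0, 0.0],
                      [1.0, 0.0, 0.0],
                      [0.0, 1.0, 0.0],
                      [0.0, 0.0, 1.0]])
      V, dN = tet4_gradients(xyz)
      print("volume =", V)              # 1/6
      print("grad N_1 =", dN[0])        # (-1, -1, -1)

      # Strain from nodal displacements u (4 nodes x 3 dof): the element can only
      # represent a single constant strain state over its volume.
      u = 1e-3 * np.arange(12.0).reshape(4, 3)
      grad_u = dN.T @ u                 # 3x3 displacement gradient (up to transpose)
      strain = 0.5 * (grad_u + grad_u.T)
      print("constant strain tensor:\n", strain)

    The fact that these gradients are spatially constant is exactly why the element "only samples constant strain states"; the midface enrichment described in the report is what buys additional deformation modes (and the accompanying hourglass modes).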

  12. Attentional bias to threat in the general population is contingent on target competition, not on attentional control settings.

    PubMed

    Wirth, Benedikt Emanuel; Wentura, Dirk

    2018-04-01

    Dot-probe studies usually find an attentional bias towards threatening stimuli only in anxious participants. Here, we investigated under what conditions such a bias occurs in unselected samples. According to contingent-capture theory, an irrelevant cue only captures attention if it matches an attentional control setting. Therefore, we first tested the hypothesis that an attentional control setting tuned to threat must be activated in (non-anxious) individuals. In Experiment 1, we used a dot-probe task with a manipulation of attentional control settings ('threat' set vs. control set). Surprisingly, we found an (anxiety-independent) attentional bias to angry faces that was not moderated by attentional control settings. Since we presented two stimuli (i.e., a target and a distractor) on the target screen in Experiment 1 (a necessity to realise the test of contingent capture), but most dot-probe studies only employ a single target, we conducted Experiment 2 to test the hypothesis that attentional bias in the general population is contingent on target competition. Participants performed a dot-probe task, involving presentation of a stand-alone target or a target competing with a distractor. We found an (anxiety-independent) attentional bias towards angry faces in the latter but not the former condition. This suggests that attentional bias towards angry faces in unselected samples is not contingent on attentional control settings but on target competition.

  13. A proposed experimental diagnosing of specular Andreev reflection using the spin orbit interaction

    PubMed Central

    Yang, Yanling; Zhao, Bing; Zhang, Ziyu; Bai, Chunxu; Xu, Xiaoguang; Jiang, Yong

    2016-01-01

    Based on the Dirac-Bogoliubov-de Gennes equation, we theoretically investigate the chirality-resolved transport properties through a superconducting heterojunction in the presence of both the Rashba spin orbit interaction (RSOI) and the Dresselhaus spin orbit interaction (DSOI). Our results show that, if only the RSOI is present, the chirality-resolved Andreev tunneling conductance can be enhanced in the superconducting gap, whereas it is always suppressed in the case of the DSOI alone. In contrast to the similar dependence of the specular Andreev zero-bias tunneling conductance on the SOI, the retro-Andreev zero-bias tunneling conductance exhibits a distinct dependence on the RSOI and the DSOI. Moreover, the zero-bias tunneling conductances for the retro-Andreev reflection (RAR) and the specular Andreev reflection (SAR) also show a qualitative difference with respect to the barrier parameters. When both the RSOI and the DSOI are finite, a three-orders-of-magnitude enhancement of the specular Andreev tunneling conductance is revealed. Furthermore, by analyzing the balanced SOI case, we find that the RAR is favored by a parabolic dispersion, whereas a linear dispersion is highly desirable for the SAR. These results shed light on diagnosing the SAR in graphene subjected to both kinds of SOI. PMID:27388426

  14. Magnetization distribution and spin transport of graphene/h-BN/graphene nanoribbon-based magnetic tunnel junction

    NASA Astrophysics Data System (ADS)

    Zhang, Y.; Yan, X. H.; Guo, Y. D.; Xiao, Y.

    2017-09-01

    Motivated by recent electronic transport measurements of boron nitride-graphene hybrid atomic layers, we studied the magnetization distribution, transmission, and current-bias relation of graphene/h-BN/graphene (C/BN/C) nanoribbon-based magnetic tunnel junctions (MTJs) using density functional theory and non-equilibrium Green's function methods. Three types of MTJs, i.e. asymmetric, symmetric (S) and symmetric (SS), and two types of lead magnetization alignment, i.e. parallel (PC) and antiparallel (APC), are considered. The results show that the magnetization distribution is closely related to the interface structure. Especially for the asymmetric MTJ, the B/N atoms at the C/BN interface are spin-polarized and carry finite magnetic moments. More interestingly, it is found that the APC transmission of the asymmetric MTJ with the thinnest barrier dominates over the PC one. By analyzing the projected density of states, one finds that this unusually high APC transmission relative to PC is due to the coupling of the electronic states of the left and right ZGNRs. By integrating the transmission, we calculate the current-bias voltage relation and find that the APC current is larger than the PC current at small bias voltage, which produces a negative tunnel magnetoresistance. The results reported here will be useful and important for the design of C/BN/C-based MTJs.

  15. Measurement of magnetostatic mode excitation and relaxation in permalloy films using scanning Kerr imaging

    NASA Astrophysics Data System (ADS)

    Tamaru, S.; Bain, J. A.; van de Veerdonk, R. J. M.; Crawford, T. M.; Covington, M.; Kryder, M. H.

    2004-09-01

    This work presents experimental results of magnetostatic mode excitation using scanning Kerr microscopy under continuous sinusoidal excitation in the microwave frequency range. This technique was applied to 100 nm thick permalloy coupons excited in two different ways. In the first experiment, the uniform (Kittel) mode was excited at frequencies in the 2.24-8.00 GHz range. The resonant condition was effectively described with the conventional Kittel mode equation. The LLG damping parameter α increased significantly with decreasing bias field. It was confirmed that this increase was caused by multidomain structure and ripple domains formed under weak bias fields, as suggested by other studies. In the second experiment, propagating magnetostatic mode surface waves were excited. They showed an exponential amplitude decay and a linear phase variation with distance from the drive field source, consistent with a decaying plane wave. The Damon-Eshbach (DE) model was extended to include a finite energy damping and used to analyze the results. It was found that the wave number and the decay constant were reasonably well described by the extended DE model. In contrast to the first experiment, no significant variation of α with frequency or bias field was seen in this second experiment, where spatial inhomogeneities in the magnetization are less significant.
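
    The resonant condition referred to here, for an in-plane magnetized thin film, is the Kittel relation f = (γ/2π)·sqrt(B·(B + μ0·Ms)). The short sketch below evaluates it for nominal permalloy parameters (assumed values, not the film properties measured in the study, and neglecting anisotropy) to show that bias fields of a few to tens of millitesla land in the 2-8 GHz range used.

      import numpy as np

      gamma_over_2pi = 28.0e9   # gyromagnetic ratio, Hz per tesla (approximate electron value)
      mu0_Ms = 1.0              # mu0 * Ms for permalloy, tesla (nominal Ms ~ 800 kA/m)

      def kittel_in_plane(B_bias):
          """Uniform-mode (Kittel) FMR frequency of an in-plane magnetized thin film."""
          return gamma_over_2pi * np.sqrt(B_bias * (B_bias + mu0_Ms))

      for B_mT in (2, 5, 10, 20, 40, 70):
          f = kittel_in_plane(B_mT * 1e-3)
          print(f"bias field {B_mT:3d} mT  ->  f_res = {f / 1e9:5.2f} GHz")

    With these nominal numbers the resonance runs from roughly 1.3 GHz at 2 mT to about 7.7 GHz at 70 mT, consistent with the excitation band quoted in the abstract.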

  16. Ascertainment correction for Markov chain Monte Carlo segregation and linkage analysis of a quantitative trait.

    PubMed

    Ma, Jianzhong; Amos, Christopher I; Warwick Daw, E

    2007-09-01

    Although extended pedigrees are often sampled through probands with extreme levels of a quantitative trait, Markov chain Monte Carlo (MCMC) methods for segregation and linkage analysis have not been able to perform ascertainment corrections. Further, the extent to which ascertainment of pedigrees leads to biases in the estimation of segregation and linkage parameters has not been previously studied for MCMC procedures. In this paper, we studied these issues with a Bayesian MCMC approach for joint segregation and linkage analysis, as implemented in the package Loki. We first simulated pedigrees ascertained through individuals with extreme values of a quantitative trait in the spirit of the sequential sampling theory of Cannings and Thompson [Cannings and Thompson [1977] Clin. Genet. 12:208-212]. Using our simulated data, we detected no bias in estimates of the trait locus location. However, in addition to bias in allele frequencies, bias was also found in the estimation of the highest genotypic mean when the ascertainment threshold was higher than or close to its true value. When there were multiple trait loci, this bias destroyed the additivity of the effects of the trait loci, and caused biases in the estimation of all genotypic means when a purely additive model was used for analyzing the data. To account for pedigree ascertainment with sequential sampling, we developed a Bayesian ascertainment approach and implemented Metropolis-Hastings updates in the MCMC samplers used in Loki. Ascertainment correction greatly reduced biases in parameter estimates. Our method is designed for multiple, but a fixed number of, trait loci. Copyright (c) 2007 Wiley-Liss, Inc.
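
    The core of an ascertainment correction is conditioning the likelihood on the sampling rule. Stripped of the pedigree and MCMC machinery, the idea can be seen in a one-parameter sketch: individuals are observed only if their trait exceeds a threshold, the naive sample mean is biased, and maximizing a truncated-normal likelihood removes the bias. Everything below (threshold, variance, sample size) is an illustrative assumption and is not the Loki implementation.

      import numpy as np
      from scipy import stats, optimize

      rng = np.random.default_rng(4)
      mu_true, sigma, thresh = 0.0, 1.0, 1.0

      # Trait values observed only for individuals ascertained above the threshold
      x = rng.normal(mu_true, sigma, size=200_000)
      x = x[x > thresh][:500]

      def neg_loglik(mu):
          # Truncated-normal likelihood: divide by P(X > thresh | mu) to
          # condition on the ascertainment event.
          logp = stats.norm.logpdf(x, mu, sigma) - stats.norm.logsf(thresh, mu, sigma)
          return -np.sum(logp)

      fit = optimize.minimize_scalar(neg_loglik, bounds=(-3, 3), method="bounded")

      print("naive estimate (sample mean):", x.mean())   # biased well above mu_true
      print("ascertainment-corrected MLE :", fit.x)      # close to mu_true

    The Bayesian version in the paper plays the same game inside the MCMC sampler, with the ascertainment probability computed for whole sequentially sampled pedigrees rather than single individuals.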

  17. Investigations of potential bias in the estimation of lambda using Pradel's (1996) model for capture-recapture data

    USGS Publications Warehouse

    Hines, James E.; Nichols, James D.

    2002-01-01

    Pradel's (1996) temporal symmetry model permitting direct estimation and modelling of population growth rate, λi, provides a potentially useful tool for the study of population dynamics using marked animals. Because of its recent publication date, the approach has not seen much use, and there have been virtually no investigations directed at robustness of the resulting estimators. Here we consider several potential sources of bias, all motivated by specific uses of this estimation approach. We consider sampling situations in which the study area expands with time and present an analytic expression for the bias in λi. We next consider trap response in capture probabilities and heterogeneous capture probabilities and compute large-sample and simulation-based approximations of the resulting bias in λi. These approximations indicate that trap response is an especially important assumption violation that can produce substantial bias. Finally, we consider losses on capture and emphasize the importance of selecting the estimator for λi that is appropriate to the question being addressed. For studies based on only sighting and resighting data, Pradel's (1996) λi′ is the appropriate estimator.

  18. Stereotypical images and implicit weight bias in overweight/obese people

    PubMed Central

    Hinman, Nova G.; Burmeister, Jacob M.; Hoffmann, Debra A.; Ashrafioun, Lisham; Koball, Afton M.

    2013-01-01

    Purpose: In this brief report, an unanswered question in implicit weight bias research is addressed: Is weight bias stronger when obese and thin people are pictured engaging in stereotype consistent behaviors (e.g., obese—watching TV/eating junk food; thin—exercising/eating healthy) as opposed to the converse? Methods: Implicit Association Test (IAT) data were collected from two samples of overweight/obese adults participating in weight loss treatment. Both samples completed two IATs. In one IAT, obese and thin people were pictured engaging in stereotype consistent behaviors (e.g., obese—watching TV/eating junk food; thin—exercising/eating healthy). In the second IAT, obese and thin people were pictured engaging in stereotype inconsistent behaviors (e.g., obese—exercising/eating healthy; thin—watching TV/eating junk food). Results: Implicit weight bias was evident regardless of whether participants viewed stereotype consistent or inconsistent pictures. However, implicit bias was significantly stronger for stereotype consistent compared to stereotype inconsistent images. Conclusion: Implicit anti-fat attitudes may be connected to the way in which people with obesity are portrayed. PMID:24057679
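
    For readers unfamiliar with how implicit bias is quantified from IAT latencies, the sketch below computes a basic D-style score (block mean difference divided by the pooled standard deviation) on made-up reaction times. It omits the error-penalty and latency-trimming steps of the full published scoring algorithm, and the latencies and block labels are hypothetical.

      import numpy as np

      def iat_d_score(rt_incompatible, rt_compatible):
          """Basic IAT D score: positive values mean faster responding in the
          block whose category pairings match the measured association."""
          rt_i = np.asarray(rt_incompatible, dtype=float)
          rt_c = np.asarray(rt_compatible, dtype=float)
          pooled_sd = np.concatenate([rt_i, rt_c]).std(ddof=1)
          return (rt_i.mean() - rt_c.mean()) / pooled_sd

      # Hypothetical latencies in milliseconds for one participant
      rng = np.random.default_rng(5)
      compatible = rng.normal(700, 120, size=40)      # stereotype-consistent pairings
      incompatible = rng.normal(820, 140, size=40)    # stereotype-inconsistent pairings

      print("D =", round(iat_d_score(incompatible, compatible), 2))

    Comparing such D scores between the stereotype-consistent and stereotype-inconsistent picture versions is, in essence, the contrast the study reports.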

  19. Optical and Electrical Performance of MOS-Structure Silicon Solar Cells with Antireflective Transparent ITO and Plasmonic Indium Nanoparticles under Applied Bias Voltage.

    PubMed

    Ho, Wen-Jeng; Sue, Ruei-Siang; Lin, Jian-Cheng; Syu, Hong-Jang; Lin, Ching-Fuh

    2016-08-10

    This paper reports impressive improvements in the optical and electrical performance of metal-oxide-semiconductor (MOS)-structure silicon solar cells through the incorporation of plasmonic indium nanoparticles (In-NPs) and an indium-tin-oxide (ITO) electrode with periodic holes (perforations) under applied bias voltage. Samples were prepared using a plain ITO electrode or perforated ITO electrode with and without In-NPs. The samples were characterized according to optical reflectance, dark current-voltage, induced capacitance-voltage, external quantum efficiency, and photovoltaic current-voltage measurements. Our results indicate that the induced capacitance-voltage and photovoltaic current-voltage characteristics both depend on bias voltage, regardless of the type of ITO electrode. Under a bias voltage of 4.0 V, MOS cells with perforated ITO and plain ITO, respectively, presented conversion efficiencies of 17.53% and 15.80%. Under a bias voltage of 4.0 V, the inclusion of In-NPs increased the efficiency of cells with perforated ITO and plain ITO to 17.80% and 16.87%, respectively.

  20. Optical and Electrical Performance of MOS-Structure Silicon Solar Cells with Antireflective Transparent ITO and Plasmonic Indium Nanoparticles under Applied Bias Voltage

    PubMed Central

    Ho, Wen-Jeng; Sue, Ruei-Siang; Lin, Jian-Cheng; Syu, Hong-Jang; Lin, Ching-Fuh

    2016-01-01

    This paper reports impressive improvements in the optical and electrical performance of metal-oxide-semiconductor (MOS)-structure silicon solar cells through the incorporation of plasmonic indium nanoparticles (In-NPs) and an indium-tin-oxide (ITO) electrode with periodic holes (perforations) under applied bias voltage. Samples were prepared using a plain ITO electrode or perforated ITO electrode with and without In-NPs. The samples were characterized according to optical reflectance, dark current-voltage, induced capacitance-voltage, external quantum efficiency, and photovoltaic current-voltage measurements. Our results indicate that the induced capacitance-voltage and photovoltaic current-voltage characteristics both depend on bias voltage, regardless of the type of ITO electrode. Under a bias voltage of 4.0 V, MOS cells with perforated ITO and plain ITO, respectively, presented conversion efficiencies of 17.53% and 15.80%. Under a bias voltage of 4.0 V, the inclusion of In-NPs increased the efficiency of cells with perforated ITO and plain ITO to 17.80% and 16.87%, respectively. PMID:28773801
